
Wednesday, June 30, 2010

This morning I read an absolutely outstanding blog post on the Clearwell eDiscovery 2.0 Blog by Dean Gonsowski titled “Automated Review in Electronic Discovery Re-visited”. I don’t agree with Dean’s conclusion that “the automated review dog is still not ready to hunt”.

However, I think he did a tremendous job framing the issues and was spot on in his assessment that litigators are both risk averse and generally slow to adopt new technology approaches.

I could go through the seven points that Dean makes and discuss how today’s “categorization” or automated review technologies, such as Equivio>Relevance, CategorIx and new technology from Orcatec, to name just a few, can address each point in a legally defensible and cost-effective way. However, in the interest of time, I would rather point out that I am aware of numerous case studies in which automated review dramatically reduced the volume of documents that had to be reviewed, and therefore the overall cost of eDiscovery, and of other cases in which it demonstrated, in a statistically significant manner, that the manual review process had been a complete failure.
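To make that last claim concrete, here is a minimal sketch, entirely my own illustration with hypothetical numbers (neither the function nor the figures come from either post), of the kind of blind-sample recall estimate used to test statistically whether a review actually found the relevant documents it was supposed to find:

```python
import math

def estimate_recall(sample_hits, sample_size, z=1.96):
    """Estimate review recall from a random sample of documents known to be relevant.

    sample_hits -- sampled relevant documents the review actually coded as relevant
    sample_size -- number of known-relevant documents in the sample
    Returns the point estimate and a 95% normal-approximation confidence interval.
    """
    p = sample_hits / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical numbers: of 400 sampled documents known to be relevant,
# the manual review team flagged only 120 as relevant.
point, low, high = estimate_recall(sample_hits=120, sample_size=400)
print(f"estimated recall {point:.0%} (95% CI {low:.0%}-{high:.0%})")
```

A recall estimate that low, computed on a properly drawn sample, is the kind of statistically significant evidence of a failed manual review referred to above.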

There is no doubt that litigators are both risk averse and generally slow to adopt new technology approaches. However, I believe that the automated review dog is absolutely ready to hunt and will prove itself to be an indispensable tool in the field of eDiscovery.

Manual review is still the major expense in the eDiscovery process, and therefore any litigator who is not trying to find new ways to reduce those costs is doing his or her clients a disservice. Given that, I think the real debate should be whether the old legal dogs that won’t adopt new technology such as automated review, and are therefore no longer capable of hunting at the level of the new dogs, should be retired. I suspect it will end up being a natural process of selection.

The full text of Dean’s post is as follows:

Almost two years ago I wrote one of my first blog posts entitled “Review-less E-Discovery Review.” Despite the tongue twister of a title, the post posited that “there is a very real possibility that we’re on the cusp of computers taking over a significant e-discovery task for attorneys.” I’d like to take a look and see how much (if at all) my prognostications have materialized.

A cynic might think that this is the moment where E-Discovery 2.0 jumps the shark. But no, this isn’t one of those sitcom episodes where they flashback to previous shows as an easy way to recycle content. Instead, it seems useful to see how the legal market has evolved from a litigation workflow perspective, particularly with some vendors touting the benefits of review-less technologies like predictive coding.

In the original blog, I noted that there was a "scenario where a non-manual review methodology may make sense" (while importantly noting that "this approach is not without risk"). Since my last post there has been the successful adoption of Evidence Rule 502, which makes this methodology (at least conceptually) safer.

But again (imagine dreamy flashback mode), here were the guidelines I previously proffered:

Large data set. This may sound a bit obvious, but a non-manual approach is best suited for large, unwieldy data sets. The corpus doesn’t need to be in the terabytes, but the data set should be evaluated in terms of discovery processing costs and attorney review estimates.

Short Production Timelines. Once the above calculations are conducted, the next step is to determine if a human-based review could even conceivably be conducted in the given time frame. In many instances, an eyes-on review process just won’t be feasible since there won’t be enough bodies to throw at the problem.

Next Gen “PAR” Tools. In order to pull this “review-less” review process off, both safely and quickly, the responding party needs to have access to fast, robust processing, analysis and review (“PAR”) tools. Certainly, it’s possible to have this scenario work with an e-discovery service provider, if they have the capability.

Relatively Small Amount in Controversy. For the time being, this approach should not be considered for any “bet the company” litigation, nor anything with significant downside risk (governmental inquiries, punitive damages, class actions, 2nd requests, etc.). Yet, for many standard commercial lawsuits, corporate investigations, HR claims, etc. this review-less approach may be worth considering.

Ability to Use a Clawback Provision. Entering into a clawback provision with the opposition is mandatory in this methodology since the chances of an inadvertent production are statistically ever-present. Yet, until Evidence Rule 502 is resolved, there will always be a risk that the clawback won’t be enforceable against 3rd parties.

Non-governmental Production. Most information in governmental productions becomes part of the public record, meaning that a clawback isn’t going to be feasible. Here, trade secret information, personally identifiable data and the like would be disastrous if pushed out into the public domain.

The goal of this post is to see if this dog is any more ready to hunt than it was two years ago. The short answer (right now) appears to be: No.

We all know that litigators are both risk averse and generally slow to adopt new technology approaches. This is particularly true when there’s a perception that they won’t have insight into the technological black box behind automated coding/tagging decisions. Litigators are understandably sensitive about the ability to prove up the reasonableness of their search and review processes. This “reasonableness” requirement lines up both with the Victor Stanley requirements and FRE 502(b), which eliminates the chance of a waiver only “if the holder of the privilege or work product protection took reasonable precautions to prevent disclosure.”

Given this ongoing hesitancy, the question remains: shouldn’t we be seeing more movement in automated review than the glacial progress that’s been achieved to date, particularly with the known shortcomings of the eyes-on review process? Most are familiar with the 1985 STAIRS study by Blair and Maron, in which the percentage of relevant documents lawyers thought they had found using Boolean keyword searches was 75%, while the percentage they actually found was 20%.

But despite the known deficiencies of eyes-on review, it falls into the “go with the devil you know” mindset that often makes sense when dealing with judges and juries who aren’t likely to grok newer-fangled approaches.

In addition to these high-level, almost dogmatic challenges, there is one other tactical element I’d add to my previous list (of 6 factors).

7. All documents processed up-front (no rolling collection). I’ve heard some in-the-trenches e-discovery experts claim that they’ve never had a case that didn’t involve at least some level of incremental data collection. Whether this is an overstatement is immaterial. The fact is that a large number of e-discovery projects involve ESI that is collected (and then processed) in dribs and drabs. This is often a good thing, largely attributable to the incremental (start slowly) nature of a well-thought-out e-discovery project in which a smaller number of initial custodians is processed, then ECA is conducted, and only then is additional ESI added to the corpus. This common methodology causes some significant heartburn for a review-less methodology, since the ever-changing nature of the corpus makes it difficult or impossible for a sample to be truly extensible to what will eventually be the entire data set. For this reason, the review-less approach should be limited to matters where the entire corpus is collected and processed at once.
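The extensibility problem Dean describes is easy to see with a toy simulation. The sketch below is my own illustration, not part of Dean's post; the batch sizes and richness figures are invented purely to show how a sample drawn from an initial collection can misstate the final corpus once a dissimilar later batch is added:

```python
import random

random.seed(7)

def sample_prevalence(corpus, sample_size=500):
    """Estimate the fraction of relevant documents from a simple random sample."""
    sample = random.sample(corpus, sample_size)
    return sum(sample) / sample_size

# Batch 1: the initial custodians, roughly 20% relevant (1 = relevant, 0 = not).
batch_1 = [1] * 20_000 + [0] * 80_000
early_estimate = sample_prevalence(batch_1)

# Batch 2 arrives after ECA with very different richness (about 60% relevant),
# as can happen when later custodians sit closer to the dispute.
batch_2 = [1] * 30_000 + [0] * 20_000

full_corpus = batch_1 + batch_2
true_prevalence = sum(full_corpus) / len(full_corpus)

print(f"estimate from batch 1 sample : {early_estimate:.1%}")
print(f"true prevalence, full corpus : {true_prevalence:.1%}")
```

Any quality metric certified against the first batch would have to be re-validated every time the corpus grows, which is exactly the heartburn a rolling collection creates for a review-less workflow.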

In sum, the seven foregoing factors appear to still be largely valid and create an environment where an automated, review-less methodology will only make sense in a relatively rare set of circumstances. This may change in the future, but given the risk-averse DNA of most litigators, I can’t imagine this tipping point happening any time soon.


1 Comment:

At June 30, 2010 at 9:04 PM, Blogger Dean Gonsowski said...

Thanks for the kind words, Charles. As a point of clarification, I'm not suggesting that automated review technologies aren't defensible per se, or that there aren't case studies out there that show the value. My point is merely that we haven't hit the adoption tipping point yet, and may not do so for a while, since most attorneys don't want to be the first one in the pool regarding a new (even if arguably better) methodology. Best, Dean Gonsowski

 
