The eDiscovery Paradigm Shift

Wednesday, January 5, 2011

Linear Review is an Outdated Methodology

As we trudge through the first week of 2011, I am going through my list of Blog posts that I wanted to comment on, and the December 28, 2010 post on linear review by Venkat Rangan, Clearwell Systems CTO, seemed like a good place to start 2011.  The post, titled “Reinventing Review in Electronic Discovery,” discusses a topic that I am very familiar with and have been somewhat outspoken about in the past couple of years.  Review costs still comprise over 70% of the overall cost of eDiscovery, and therefore, as an industry, we need to find better ways to approach review and, more importantly, to reduce its costs.

Given my background in enterprise-class application development methodology and technology, I lived through the paradigm shift when that industry moved from the legacy waterfall methodology (i.e., linear) to rapid application development (RAD) and now to agile development.  The increases in productivity were dramatic.

Mr. Rangan bases his Blog post on an excellent paper, The Demise of Linear Review, by Bennett Borden of Williams Mullen.  Mr. Rangan states that the paper, citing factual data from various studies and drawing parallels to similar anachronisms of the past, makes excellent arguments for rethinking how legal review is performed in eDiscovery.

I hope that in 2011, the litigation market begins to understand and embrace both the practical and financial benefits of replacing linear review with newer and more effective review methodologies and technologies.

The full text of Mr. Rangan’s Blog post is as follows:

In a recent workshop that I attended, I had the privilege of sharing thoughts on the latest electronic discovery trends with other experts in the market. Especially interesting to me was discussing the provocatively titled paper, The Demise of Linear Review by Bennett Borden of Williams Mullen. The paper, citing factual data from various studies and drawing parallels to other anachronisms of the past, makes excellent arguments for rethinking how legal review is performed in e-discovery.

When linear review is mentioned, the first mental picture one conjures up is boredom. It has generally been associated with a mental state that is the result of repetitive and monotonous tasks with very little variation. To get a sense of how badly this can affect performance, one only needs to draw upon several studies of boredom in the workplace, especially in jobs such as the mechanical assembly lines of the 1920s and the telephone switchboard operators of the 1950s. In fact, the Pentagon-sponsored study, Implications for the design of jobs with variable requirements, from the Navy Personnel Research and Development Center, presents an excellent treatise on the contributors to workplace fatigue, stress, monotony, and distorted perception of time. This is best illustrated in their paper:

Mechanical assembly, inspection and monitoring, and continuous manual control are the principal kinds of tasks most frequently studied by researchers investigating the relationship between performance and presumed boredom. On the most repetitive tasks, degradation of performance has typically been found within 30 minutes (Fox & Embry, 1975; Saito, Kishida, Endo, & Saito, 1972). The early studies of the British Industrial Fatigue Board (Wyatt & Fraser, 1929) concluded that the worker’s experience of boredom could be identified by a characteristic output curve on mechanical assembly jobs. The magnitude of boredom was inversely related to output and was usually marked by a sharp decrement in the middle of a work period.
How does this apply to linear review? Well, a linear review is most often performed using a review application or tool, simulating a person reading and classifying a pile of documents. The reviewer is asked to read the document and apply a review code based on their judgment. While it appears easy, it can be one of the most stressful, boring, and thankless jobs for a well-educated, well-trained knowledge worker. Even with technology and software advances, a reviewer is required to read documents in relatively constrained workflows. Just scrolling through pages and pages of a document, comprehending its meaning and intent in the context of the production request, can make it stressful. To add to this, reviewers are often measured for productivity based on the number of documents or pages they review per day or per hour. In cases where a large number of reviewers is involved, there are very direct comparisons of review rates. Finally, the review effort is judged for quality without consideration for the very elements that affect quality. Imagine a workplace task where every action taken by a knowledge worker is monitored and evaluated to the minutest detail.

Given this, it is no wonder that study after study has found that a straight plough-through linear review produces less than desirable results. A useful way to measure the effectiveness of a review exercise is to submit the same collection of documents to multiple reviewers and assess their level of agreement in classifying the documents into specific categories. One such study, Document Categorization in Legal Electronic Discovery: Computer Classification vs. Manual Review, finds that the level of agreement among human reviewers was only in the 70% range, even when agreement is limited to positive determinations. As noted in the study, previous TREC inter-assessor agreement notes, as well as other studies on this subject (Barnett et al., 2009), also show a similar and consistent result. Especially noteworthy from TREC is the fact that only 9 out of 40 topics studied had an agreement level higher than 70%, while, remarkably, four topics had no agreement at all. Some of the disagreement is due to the fact that most documents fall on varying levels of responsiveness that cannot easily be judged with a binary yes/no decision (i.e., the “where do you draw the relevance line” problem). However, a significant source of variability is simply attributed to the boredom and fatigue that come with the repetitiveness of the task.
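
To make that agreement measurement concrete, here is a minimal sketch in Python (my own illustration, not taken from any of the studies cited) that computes raw percentage agreement and chance-corrected agreement (Cohen's kappa) for two hypothetical reviewers coding the same ten documents for responsiveness; the reviewer codes are invented.

    # Minimal sketch: inter-reviewer agreement on binary responsiveness calls.
    # The reviewer codes below are hypothetical, purely for illustration.

    def percent_agreement(codes_a, codes_b):
        """Fraction of documents on which two reviewers made the same call."""
        matches = sum(1 for a, b in zip(codes_a, codes_b) if a == b)
        return matches / len(codes_a)

    def cohens_kappa(codes_a, codes_b):
        """Agreement corrected for chance (Cohen's kappa) for 0/1 codes."""
        n = len(codes_a)
        p_observed = percent_agreement(codes_a, codes_b)
        p_a_yes = sum(codes_a) / n
        p_b_yes = sum(codes_b) / n
        p_chance = p_a_yes * p_b_yes + (1 - p_a_yes) * (1 - p_b_yes)
        return (p_observed - p_chance) / (1 - p_chance)

    # 1 = responsive, 0 = not responsive; same ten documents for both reviewers
    reviewer_1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
    reviewer_2 = [1, 0, 0, 1, 0, 1, 1, 1, 0, 0]

    print(percent_agreement(reviewer_1, reviewer_2))  # 0.7, roughly the level the study reports
    print(cohens_kappa(reviewer_1, reviewer_2))       # 0.4, much weaker once chance is removed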

A further observation on reviewer effectiveness is available from the TREC 2009 Overview Report, which studied the appeals and adjudication process of that year’s Interactive Task. This study offers an excellent opportunity to assess the effectiveness of the initial review and the subsequent appeals and adjudication process. As noted in the study, the Interactive Task involves initial run submissions from participating teams, which are sampled and reviewed by human assessors. Upon receiving their initial assessments, participating teams are allowed to appeal those judgments. Given the teams’ incentive to improve upon the initial results, they are motivated to construct an appeal for as many documents as they can, with each appeal containing a justification for re-classification. As noted in the study, the success rates of appeals were very high, with 84% to 97% of initial assessments being reversed. Such reversals were across the board and directly proportional to the number of appeals, suggesting that even the assessments that were not appealed could be suspect. Another point in evidence is that the appeals process requires a convincing justification from the appealing team, in the form of a snippet of the document, a document summary, or a portion of the document highlighted for adjudication. This in itself biases the review and makes it easier for the topic assessor to get a clearer sense of the document when adjudicating the appeal. This fact is also borne out by the aforementioned Computer Classification vs. Manual Review study, where the senior litigator with knowledge of the matter was able to offer the best adjudications.

Given that linear review is flawed, what are the remedies? As noted in Bennett’s paper, intelligent use of newer technologies, along with a review workflow that leverages them, can offer the kind of gains demonstrated in other industries. Let’s examine a few of them.

Response Variation

Response variation is a strategy for coping with boredom by attempting to build variety into the task itself. In mechanical assembly lines, response variation is added through innovative floor and task layouts, such as the Cellular Layout. On some tasks, response variation may involve only simple alternation behaviors, such as reversing the order in which subtasks are performed; on others, the variety may take more subtle forms, reflected in an inconsistency of response times. In the context of linear review, it can help to organize your review batches so that your review teams alternate classifying documents for responsiveness, privilege, confidentiality, and so on. Another interesting approach would be to mix the review documents but suggest that each be reviewed for a specific target classification.
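
As one concrete (and purely hypothetical) illustration of that batching idea, the short Python sketch below rotates the review target across successive batches and reviewers; the batch size, reviewer names, and target list are assumptions for the example, not features of any particular review platform.

    # Minimal sketch: rotate the review target across batches so that no reviewer
    # codes the same issue all day. Names, targets, and batch size are hypothetical.
    from itertools import cycle

    REVIEW_TARGETS = ["responsiveness", "privilege", "confidentiality"]

    def assign_batches(doc_ids, reviewers, batch_size=50):
        """Yield (reviewer, target, batch) tuples, rotating the target per batch."""
        targets = cycle(REVIEW_TARGETS)
        reviewer_cycle = cycle(reviewers)
        for start in range(0, len(doc_ids), batch_size):
            batch = doc_ids[start:start + batch_size]
            yield next(reviewer_cycle), next(targets), batch

    docs = [f"DOC-{i:05d}" for i in range(1, 301)]
    for reviewer, target, batch in assign_batches(docs, ["reviewer_a", "reviewer_b"]):
        print(f"{reviewer}: review {len(batch)} documents for {target}")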

Free-Form Exploration

Combining aspects of early case assessment and linear review is one form of exploration that is known to offer both a satisfying experience and effective results. While performing linear review, the ability to suspend the document being reviewed and jump to other similar documents and topics gives the reviewer a cognitive stimulus that improves knowledge acquisition. Doing so offers an opportunity for the reviewer to learn facts of the case that would normally be difficult to obtain and to approach the knowledge level of a senior litigator on the case. After all, we depend on knowledge of the matter to guide reviewers, so attempts to increase their knowledge of the case can only be helpful. Also, during free-form exploration, a reviewer may stumble on an otherwise difficult-to-obtain case fact, and the sheer joy of finding something valuable is rewarding in itself.
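
One simple way to support that kind of jump is a “more like this” lookup over the collection. The sketch below is a generic illustration using scikit-learn’s TF-IDF vectorizer and cosine similarity, not a description of any vendor’s tool; the document IDs and texts are invented.

    # Minimal sketch of a "find similar documents" jump for a reviewer.
    # Document texts are invented; a real collection would be indexed once up front.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = {
        "DOC-001": "Quarterly revenue forecast and sales pipeline discussion",
        "DOC-002": "Lunch plans and office holiday party logistics",
        "DOC-003": "Revised revenue forecast after the Q3 sales review",
        "DOC-004": "Draft supply agreement with indemnification terms",
    }

    ids = list(documents)
    matrix = TfidfVectorizer(stop_words="english").fit_transform(documents.values())

    def similar_documents(current_id, top_n=3):
        """Rank the other documents by cosine similarity to the one on screen."""
        scores = cosine_similarity(matrix[ids.index(current_id)], matrix).ravel()
        ranked = sorted(zip(ids, scores), key=lambda pair: pair[1], reverse=True)
        return [(doc_id, round(float(score), 2)) for doc_id, score in ranked
                if doc_id != current_id][:top_n]

    print(similar_documents("DOC-001"))  # DOC-003 (shared forecast/sales language) ranks first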

Expanding the Work Product

Besides simply judging the review disposition of a document, the generation of higher-value output such as document summaries, critical snippets, and document metadata that contribute to the assessment can both reduce the boredom of the current reviewer and contribute valuable insights to other reviewers. As noted earlier, being able to assist the review with such aids can be immensely helpful in your review process.
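
As a rough illustration of what a “critical snippet” could look like in practice, the Python sketch below pulls a short window of text around the first keyword hit in a document; the keyword list, window size, and sample email text are all made up for the example.

    # Minimal sketch: capture a short snippet around the first keyword hit so the
    # next reviewer or adjudicator can see why the document was coded as it was.
    # The keywords, window size, and sample email text are arbitrary illustrations.
    import re

    def critical_snippet(text, keywords, window=60):
        """Return ~window characters of context on each side of the first hit."""
        for keyword in keywords:
            match = re.search(re.escape(keyword), text, re.IGNORECASE)
            if match:
                start = max(match.start() - window, 0)
                end = min(match.end() + window, len(text))
                return "..." + text[start:end].strip() + "..."
        return None

    email_body = ("Per our call, please revise the revenue forecast before the board "
                  "meeting and do not circulate the draft outside the finance team.")
    print(critical_snippet(email_body, ["revenue forecast", "indemnification"]))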

Review Technologies

Of course, fundamentally changing linear review with specific technologies that radically change the review workflow is an approach worth considering. While offering such aids, it must be remembered that human judgment is still needed, and the process must work to increase both the reviewers’ knowledge and their ability to apply that judgment. We will examine these technologies in an upcoming post.



Tuesday, January 4, 2011

Big Week for Wisconsin

Having been born and raised in Wisconsin, I can say this has been a big week for all of us “Cheeseheads”.  The Wisconsin Badgers played in the Rose Bowl; the Green Bay Packers beat the Chicago Bears and made it into the NFL playoffs; and, most relevant to my Blog, the eDiscovery amendments to Wisconsin’s rules of civil procedure became effective.  The amendments, affecting Wis. Stat. §§ 802.10, 804.01, 804.08, 804.09, 804.12, and 805.07, address for the first time the discovery of electronically stored information (“ESI”).  Among other things, the amendments address the parties’ obligation to meet and confer, the format of production, and a safe harbor from sanctions when ESI is lost as the result of the routine, good-faith operation of an electronic system.

I would bet a Wisconsin quarter that the lawyers of Wisconsin are more excited about the Badgers and the Packers than they are about the changes to the rules of civil procedure.
