



Volume: None, Issue: None, Pages: None, 2015

e-Discovery Team at TREC 2015 Total Recall Track

Author keywords

active machine learning; AI enhanced review; CAL; CAR; Computer assisted review; continuous active learning; e discovery; electronic discovery; Hybrid Multimodal; legal search; predictive coding; predictive coding 3.0; relevant irrelevant training ratios; TAR; Technology assisted review

Indexed keywords

COMPUTER AIDED INSTRUCTION; INFORMATION RETRIEVAL; LEARNING SYSTEMS; MACHINE LEARNING;

EID: 85180014605     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited: 1

References (21)
  • 2
    • 85180009968
    • The e-Discovery Team’s hybrid multimodal approach is similar to the method promoted by the Total Recall Track administrators, Maura Grossman and Gordon Cormack, in that they both use continuous active learning (CAL) in legal search as part of a technology-assisted review (TAR). It is, however, fundamentally different from Grossman and Cormack’s current methods in two ways. First, our approach relies upon and encourages participation of skilled reviewers in the search process, the hybrid approach, whereas the Grossman and Cormack approach seeks to eliminate the role of the skilled user, namely trained attorneys. The rationale for their automation goal is the unsubstantiated claim that the adversarial context of legal search makes attorneys untrustworthy. They claim that inherent user bias means fully automated approaches are the only reliable methods of legal search. Grossman & Cormack, Autonomy and Reliability of Continuous Active Learning for Technology-Assisted Review, CoRR abs/1504.06868 at pg. 1 (2015) (“In eDiscovery, the review is typically conducted in an adversarial context, which may offer the reviewer limited incentive to conduct the best possible search.”) Obviously the Team disputes this assumption and conclusion. We do not endorse the view that attorneys are inherently biased and untrustworthy. In Ralph Losey’s experience as a practicing attorney since 1980, such bias is the rare exception, not the norm, and should not be the basis of a legal search strategy. The better solution to this minor issue of trustworthiness is educational: train more attorneys in search and in professional ethics. Since our core assumptions on process and attorney honesty are fundamentally different, so too are our methods and goals. Our aim is augmentation of skilled attorneys to perform legal search, not automation, not replacement. Second, our Team uses a variety of search methods, a multimodal approach, whereas the Grossman and Cormack approach relies solely upon the use of high-ranking documents to train a classifier. This is consistent with their aim to fully automate and eliminate attorneys from the legal search process, again based on the premise of attorney bias, which we dispute. In their words: “For the reasons stated above, it may be desirable to limit discretionary choices in the selection of search tools, tuning parameters, and search strategy.” Id. We disagree and seek to empower attorneys with a variety of search tools, including the one search method they endorse, reliance on high-ranking documents. (A schematic sketch of this difference in training-document selection follows this entry.) Also see the discussion and citations in Endnote 19.
    • (2015)
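A schematic, hypothetical sketch of the selection difference discussed in the entry above: in each continuous active learning round, a monomodal protocol reviews only the top-ranked unreviewed documents, while a hybrid multimodal protocol also mixes in documents surfaced by other search modes (here represented by hypothetical attorney-chosen keyword hits). The corpus, labels, function names, and batch sizes are invented for illustration; this is not the actual code of either team.

# A minimal continuous active learning (CAL) loop contrasting "high-ranking only"
# training-document selection with a hybrid multimodal mix.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

def select_batch(model, X, unreviewed_ids, keyword_hit_ids=None, batch=4):
    # Score every unreviewed document with the current classifier.
    scores = model.predict_proba(X[unreviewed_ids])[:, 1]
    ranked = [unreviewed_ids[i] for i in np.argsort(-scores)]
    if not keyword_hit_ids:                       # monomodal: top-ranked only
        return ranked[:batch]
    # Multimodal: reserve part of the batch for documents found by other modes.
    manual = [d for d in keyword_hit_ids if d in unreviewed_ids][: batch // 2]
    auto = [d for d in ranked if d not in manual][: batch - len(manual)]
    return manual + auto

# Toy corpus and seed judgments (1 = relevant), purely illustrative.
docs = ["fraud in the energy trades", "lunch menu for friday",
        "trading desk risk memo", "fantasy football picks",
        "energy price manipulation email", "holiday party invite"]
labels = {0: 1, 1: 0}
X = TfidfVectorizer().fit_transform(docs)

for round_ in range(2):                           # two CAL iterations
    model = LogisticRegression().fit(X[list(labels)], [labels[i] for i in labels])
    unreviewed = [i for i in range(len(docs)) if i not in labels]
    batch = select_batch(model, X, unreviewed, keyword_hit_ids=[2], batch=2)
    for d in batch:                               # simulated reviewer judgments
        labels[d] = 1 if "energy" in docs[d] or "trading" in docs[d] else 0
    print(f"round {round_}: reviewed documents {batch}")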
  • 3
    • 85180005716
    • In these respects the e-Discovery Team follows the teachings of Gary Marchionini, Dean of the School of Information and Library Sciences of U.N.C. at Chapel Hill, who explained in Information Seeking in Electronic Environments (Cambridge 1995) that information seeking expertise is a critical skill for successful search. Professor Marchionini argues, and we agree, that: “One goal of human-computer interaction research is to apply computing power to amplify and augment these human abilities.” We also follow the teachings of UCLA Professor Marcia J. Bates who has advocated for a multimodal approach to search since 1989. Bates, Marcia J., The Design of Browsing and Berrypicking Techniques for the Online Search Interface, Online Review 13 (October 1989): 407-424. As Professor Bates explained in 2011 in Quora: “An important thing we learned early on is that successful searching requires what I called “berrypicking.” … Berrypicking involves 1) searching many different places/sources, 2) using different search techniques in different places, and 3) changing your search goal as you go along and learn things along the way. This may seem fairly obvious when stated this way, but, in fact, many searchers erroneously think they will find everything they want in just one place, and second, many information systems have been designed to permit only one kind of searching, and inhibit the searcher from using the more effective berrypicking technique.”
    • (1989), pp. 407-424
  • 5
    • 85180010940
    • The Total Recall Track fully automated method follows the Track Administrators’ preferred methodology of fully automated monomodal search (high-ranking documents only) and their recently announced goal to eliminate attorney review in favor of full automation. Grossman & Cormack, Autonomy and Reliability of Continuous Active Learning for Technology-Assisted Review, supra at pg. 1 (2015): “Our goal is to fully automate these choices, so that the only input required from the reviewer is, at the outset, a short query, topic description, or single relevant document, followed by an assessment of relevance for each document, as it is retrieved.” They call the method “Autonomous TAR.” Id. at pg. 6. (A minimal sketch of such a single-seed, document-at-a-time loop follows this entry.) The protocols of the fully automated division of the Total Recall Track were apparently designed in part by Cormack and Grossman to test this premise, and the results they attained as participants in this division, along with those of all the other fully automated participants from universities around the world, are very impressive. Still, the e-Discovery Team, which did not participate in the 2015 automated division, notes that many of the protocols in this experiment are based on fictions and conditions not found in the real world of legal search, where the Team’s methods were developed. The differences include, but are not limited to: the existence of an omnipotent SME that instantly provides perfectly correct judgmental feedback as to the relevance of all documents selected by the automated processes as probably relevant; simple, single-facet issues; relatively simple datasets stripped of most native metadata; and, perhaps most importantly, issues requiring little or no legal analysis or background legal knowledge. Note: in post hoc runs on Kroll Ontrack’s EDR system, the e-Discovery Team used the same high-ranking-only Autonomous TAR training method and obtained the same results as all of the other fully automated division participants.
    • (2015)
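The following is a minimal sketch, under stated assumptions, of the fully automated loop quoted in the entry above: the only inputs are a short seed query (treated as a pseudo-relevant document) and a yes/no relevance call on each document as it is retrieved. The stopping rule, function names, and toy data are invented for illustration and are not the authors' published implementation.

# Hypothetical "single seed, one document at a time" retrieval loop.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

def autonomous_loop(seed_query, docs, assess, max_consecutive_misses=3):
    corpus = [seed_query] + docs                  # the seed query joins the corpus
    X = TfidfVectorizer().fit_transform(corpus)
    labels = {0: 1}                               # the seed is treated as relevant
    misses, found = 0, []
    while misses < max_consecutive_misses and len(labels) < len(corpus):
        y = [labels[i] for i in sorted(labels)]
        if len(set(y)) > 1:                       # rank by classifier once both classes exist
            model = LogisticRegression().fit(X[sorted(labels)], y)
            scores = model.predict_proba(X)[:, 1]
        else:                                     # otherwise rank by similarity to the seed
            scores = (X @ X[0].T).toarray().ravel()
        candidates = [i for i in np.argsort(-scores) if i not in labels]
        doc_id = candidates[0]                    # retrieve the single top-ranked document
        rel = assess(docs[doc_id - 1])            # reviewer's yes/no relevance judgment
        labels[doc_id] = int(rel)
        misses = 0 if rel else misses + 1
        if rel:
            found.append(doc_id - 1)
    return found

docs = ["energy price manipulation email", "lunch menu", "trading risk memo",
        "fantasy football", "fraud in the energy trades"]
print(autonomous_loop("energy trading fraud", docs,
                      assess=lambda d: "energy" in d or "trading" in d))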
  • 6
    • 85180012187
    • “Contract review attorney,” or simply “contract attorney,” is a term now in common parlance in the legal profession to refer to licensed attorneys who do document review on a project-by-project basis. Their pay under a project contract is usually by the hour and is at a far lower rate than attorneys in a law firm, typically only $50 to $75 per hour. Their only responsibility is to review documents under the direct supervision of law firm attorneys who have much higher billing rates.
  • 7
    • 85180009416
    • Predictive Coding is defined by The Grossman-Cormack Glossary of Technology-Assisted Review, 2013 Fed. Cts. L. Rev. 7 (January 2013) (Grossman-Cormack Glossary) as: “An industry-specific term generally used to describe a Technology Assisted Review process involving the use of a Machine Learning Algorithm to distinguish Relevant from Non-Relevant Documents, based on Subject Matter Expert(s) Coding of a Training Set of Documents.” A Technology Assisted Review process is defined as: “A process for Prioritizing or Coding a Collection of electronic Documents using a computerized system that harnesses human judgments of one or more Subject Matter Expert(s) on a smaller set of Documents and then extrapolates those judgments to the remaining Document Collection. … TAR processes generally incorporate Statistical Models and/or Sampling techniques to guide the process and to measure overall system effectiveness.” Also see: Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review, Richmond Journal of Law and Technology, Vol. XVII, Issue 3, Article 11 (2011).
    • (2011) Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review, Richmond Journal of Law and Technology, vol. XVII, Issue 3, Article 11
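Since the glossary definition above notes that TAR processes "generally incorporate Statistical Models and/or Sampling techniques ... to measure overall system effectiveness," here is a minimal, hypothetical sketch of that sampling step: draw a random sample from the collection, have an SME judge it, and estimate recall, precision, and F1 by comparing the system's coding against those judgments. All counts and the relevance rule are fabricated purely to make the arithmetic concrete.

# Estimating review effectiveness from a random sample (illustrative only).
import random

random.seed(1)
collection = [{"id": i,
               "system_says_relevant": i % 7 == 0,     # fake system coding
               "truly_relevant": i % 5 == 0}            # fake SME ground truth
              for i in range(50_000)]

sample = random.sample(collection, 1_500)                # SME reviews only the sample
tp = sum(d["system_says_relevant"] and d["truly_relevant"] for d in sample)
fp = sum(d["system_says_relevant"] and not d["truly_relevant"] for d in sample)
fn = sum(not d["system_says_relevant"] and d["truly_relevant"] for d in sample)

recall = tp / (tp + fn) if tp + fn else 0.0
precision = tp / (tp + fp) if tp + fp else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
print(f"estimated recall={recall:.1%}  precision={precision:.1%}  F1={f1:.1%}")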
  • 9
    • 85180013742
    • Grossman & Cormack, Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR’14, July 6–11, 2014; Grossman & Cormack, Comments on “The Implications of Rule 26(g) on the Use of Technology-Assisted Review”, 7 Federal Courts Law Review 286 (2014); Herbert Roitblat, series of five OrcaTec blog posts (1, 2, 3, 4, 5), May–August 2014; Herbert Roitblat, Daubert, Rule 26(g) and the eDiscovery Turkey, OrcaTec blog, August 11, 2014; Hickman & Schieneman, The Implications of Rule 26(g) on the Use of Technology-Assisted Review, 7 FED. CTS. L. REV. 239 (2013); Losey, R., Predictive Coding 3.0, part one (e-Discovery Team, 10/11/15).
    • (2014) Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR’14
  • 12
    • 84953775233
    • Grossman & Cormack, Autonomy and Reliability of Continuous Active Learning for Technology-Assisted Review, CoRR abs/1504.06868 (2015); Multi-Faceted Recall of Continuous Active Learning for Technology-Assisted Review, SIGIR’15, August 09-13, 2015, Santiago, Chile.
    • (2015) Multi-Faceted Recall of Continuous Active Learning for Technology-Assisted Review, SIGIR’15
    • Grossman; Cormack
  • 14
    • 85180006349
    • Shakespeare, W., Henry VI, Pt II, Act 4, Scene 2, 71-78 (“The first thing we do, let's kill all the lawyers.”). This famous anti-lawyer line was spoken by “Dick the butcher,” a traitor hoping to start a revolution and prop up his friend as an autocratic ruler.
    • Shakespeare, W.
  • 15
    • 85180011533
    • Losey, R., Predictive Coding 3.0, part one (e-Discovery Team, 2015); see the subsection therein, Predictive Coding 1.0 and the First Patents, discussing common prejudice against lawyers by academics and IT that drove the ill-advised imposition of secret control sets in the first versions of predictive coding software. The new drive by Cormack and Grossman to fully automate legal search and eliminate SMEs and attorney search expertise from the process seems based, at least in part, on the same false premises. Also see Losey, R., Mancia v. Mayflower Begins a Pilgrimage to the New World of Cooperation, 10 Sedona Conf. J. 377 (2009 Supp.); Losey, R., Lawyers Behaving Badly, 60 Mercer L. Rev. 983 (Spring 2009).
    • Losey, R.
  • 16
    • 85180006562
    • See Zero Error Numerics for a partial list of quality control and quality assurance methods endorsed by the e-Discovery Team, found at ZeroErrorNumerics.com (ZEN Document Review). Also see: Concept Drift and Consistency: Two Keys to Document Review Quality, e-Discovery Team (Jan. 20, 2016).
    • (2016) Concept Drift and Consistency: Two Keys to Document Review Quality, e-Discovery Team
  • 17
    • 85180008533
    • The cost of traditional linear document review is often far higher than $1.00 per file in practice. In 2007 the U.S. Department of Justice spent $9.09 per document for review in the Fannie Mae case, even though it used contract lawyers for the review work. In re Fannie Mae Securities Litig., 552 F.3d 814, 817 (D.C. Cir. 2009) ($6,000,000/660,000 emails). At about the same time Verizon paid $6.09 per document for a massive second review project that enjoyed large economies of scale and, again, utilized contract review lawyers. Roitblat, Kershaw, and Oot, Document categorization in legal electronic discovery: computer classification vs. manual review. Journal of the American Society for Information Science and Technology, 61(1):70–80, 2010 ($14,000,000 to review 2.3 million documents in four months).
    • (2010) Journal of the American Society for Information Science and Technology, vol. 61, Issue 1, pp. 70-80
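As a quick arithmetic check of the per-document figures quoted above (a minimal sketch; the dollar amounts and document counts are those cited in the entry):

# Per-document review cost implied by the cited totals.
doj_fannie_mae = 6_000_000 / 660_000              # ≈ $9.09 per email
verizon_second_review = 14_000_000 / 2_300_000    # ≈ $6.09 per document
print(f"${doj_fannie_mae:.2f} per email, ${verizon_second_review:.2f} per document")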
  • 18
    • 0033733783
    • E. M. Voorhees, Variations in relevance judgments and the measurement of retrieval effectiveness, Information Processing & Management, 36(5):697-716, 2000 (on pooling); Oard, Baron, Hedin, Lewis, Tomlinson, Evaluation of Information Retrieval for E-Discovery, Artificial Intelligence and Law, Vol. 18, Issue 4, December 2010, pp. 347-386.
    • (2000) Information Processing & Management, vol. 36, Issue 5, pp. 697-716
    • Voorhees, E. M.
  • 19
    • 85180011073
    • Autonomy and Reliability, supra at pgs. 2-3 (“This paper offers a historical review of research efforts to achieve high recall ...” The paper also estimates the Blair and Maron recall score of 20% and lists the top scores (without attribution) in most TREC years); Hedin, Tomlinson, Baron, and Oard, Overview of the TREC 2009 Legal Track (TREC 2009); Cormack, Grossman, Hedin, and Oard, Overview of the TREC 2010 Legal Track (TREC 2010); Grossman, Cormack, Hedin, and Oard, Overview of the TREC 2011 Legal Track (TREC 2011); Evaluation of Information Retrieval for E-Discovery, supra at pgs. 24-27. The top TREC results cited for the six years of the Legal Track are in the 60% to 70% F1 range, with a couple of results in the low 80% F1 range. The Recommind participation in the last TREC Legal Track in 2011, and their subsequent prohibited marketing advertisements claiming to “win,” which led to their lifetime ban from TREC, only attained a recall of 62.3% in one topic (403). Overview of the TREC 2011 Legal Track (TREC 2011), supra. Contrast all of the prior TREC results with the e-Discovery Team results in 18 topics in the 80% to 100% F1 range, with numerous topics in the mid to high 90% F1 range. Of course, these different TREC events had varying experiments and test conditions, so direct comparisons between TREC studies are never valid, but general comparisons are instructive and frequently made in the cited literature.
  • 20
    • 85180013219
    • See the report on the Electronic Discovery Institute (EDI) Oracle legal search experiments, which involved the largest number of legal search participants to date and in which a member of the e-Discovery Team attained high scores. Bay, M., EDI-Oracle Study: Humans Are Still Essential in E-Discovery: Phase I of the study shows that older lawyers still have e-discovery chops and you don’t want to turn EDD over to robots (11/20/13, LTN). Monica Bay, the Editor of Law Technology News, summarizes the conclusion of EDI from the study: “Conclusion: Software is only as good as its operators. Human contribution is the most significant element.” Patrick Oot, co-founder of the Electronic Discovery Institute, presented the findings of Phase II of the Oracle Predictive Coding Survey at ILTACON Day 3, as reported in The Relativity Blog, 9/2/15: “[W]hen it comes to what some vendors call Continuous Active Learning, Oot indicated the debate was somewhat of a red herring, adding, ‘Continuous Active Learning is just a buzzword.’” Oot summed up his thoughts by stressing the human component of technology-assisted review. Noting that the best performing technology in the Oracle study was the one used by a senior attorney, Oot said, “A good artist with a good brush is best.” Unfortunately, the final results of the EDI Oracle study have not yet been published and, as participants in that study, we are currently constrained from any detailed reporting.
    • Bay, M.


* This information was analyzed and extracted by KISTI from Elsevier’s SCOPUS database.