Bill Dimm will be speaking with John Tredennick and Tom Gricks on the TAR Talk podcast about his recent article TAR, Proportionality, and Bad Algorithms (1-NN). The podcast will be on Tuesday, November 20, 2018. You can register here or download it later on iTunes or Google Play.
This iteration of the challenge was performed during the Digging into TAR session at the 2018 Northeast eDiscovery & IG Retreat. The structure was similar to round 3, but the audience was bigger. As before, the goal was to see whether the audience could construct a keyword search query that performed better than technology-assisted review.
There are two sensible ways to compare performance. Either see which approach reaches a fixed level of recall with the least review effort, or see which approach reaches the highest level of recall with a fixed amount of review effort. Any approach that compares results having both different recall and different review effort cannot give a definitive conclusion about which result is best without making arbitrary assumptions about the trade-off between recall and effort (this is why performance measures, such as the F1 score, that mix recall and precision together are not sensible for ediscovery).
For the challenge we fixed the amount of review effort and measured the recall achieved, because that was an easier process to carry out under the circumstances. Specifically, we took the top 3,000 documents matching the search query, reviewed them (this was instantaneous because the whole population was reviewed in advance), and measured the recall achieved. That was compared to the recall for a TAR 3.0 process where 200 cluster centers were reviewed for training and then the top-scoring 2,800 documents were reviewed. If the system was allowed to continue learning while the top-scoring documents were reviewed, the result was called “TAR 3.0 CAL.” If learning was terminated after review of the 200 cluster centers, the result was called “TAR 3.0 SAL.” The process was repeated with 6,000 documents instead of 3,000 so you can see how much recall improves if you double the review effort.
Individuals in the audience submitted queries through a web form using smartphones or laptops, and I executed some of the queries (due to limited time) in front of the audience. They could learn useful keywords from the documents matching the queries, tweak their queries, and resubmit them. Unlike a real ediscovery project, they had very limited time and no familiarity with the documents. The audience could choose to work on any of three topics: biology, medical industry, or law. In the results below, the queries are labeled with the submitters’ initials (some people gave only a first name, so there is only one initial) followed by a number if they submitted more than one query. Two queries were omitted because they had less than 1% recall (the participants apparently misunderstood the task). The queries that were evaluated in front of the audience were E-1, U, AC-1, and JM-1. The discussion of the results follows the tables, graphs, and queries.
|Biology||Top 3,000||Top 6,000|
|TAR 3.0 SAL||72.5%||91.0%|
|TAR 3.0 CAL||75.5%||93.0%|
|Medical Industry||Top 3,000||Top 6,000|
|TAR 3.0 SAL||67.3%||83.7%|
|TAR 3.0 CAL||80.7%||88.5%|
|Law||Top 3,000||Top 6,000|
|TAR 3.0 SAL||63.5%||82.3%|
|TAR 3.0 CAL||77.8%||87.8%|
E-1) biology OR microbiology OR chemical OR pharmacodynamic OR pharmacokinetic
E-2) biology OR microbiology OR pharmacodynamic OR cellular OR enzyme OR activation OR nucleus OR protein OR interaction OR genomic OR dna OR hematological OR sequence
E-3) biology OR microbiology OR pharmacodynamic OR cellular OR enzyme OR activation OR nucleus OR protein OR interaction OR genomic OR dna OR hematological OR sequence OR pharmacokinetic OR processes OR lysis
E-4) biology OR microbiology OR pharmacodynamic OR cellular OR enzyme OR activation OR nucleus OR protein OR interaction OR genomic OR dna OR hematological OR sequence OR pharmacokinetic OR processes OR lysis OR study
E-5) biology OR microbiology OR pharmacodynamic OR cellular OR enzyme OR activation OR nucleus OR protein OR interaction OR genomic OR dna OR hematological OR sequence OR pharmacokinetic OR processes OR lysis OR study OR table
E-6) biology OR microbiology OR pharmacodynamic OR cellular OR enzyme OR activation OR nucleus OR protein OR interaction OR genomic OR dna OR hematological OR sequence OR pharmacokinetic OR processes OR lysis OR study OR table OR research
U) Transplant OR organ OR cancer OR hypothesis
AC-2) legal OR attorney OR (defendant AND plaintiff) OR precedent OR verdict OR deliberate OR motion OR dismissed OR granted
JM-1) Law OR legal OR attorney OR lawyer OR litigation OR liability OR lawsuit OR judge
JM-2) Law OR legal OR attorney OR lawyer OR litigation OR liability OR lawsuit OR judge OR defendant OR plaintiff OR court OR plaintiffs OR attorneys OR lawyers OR defense
K-1) Law OR lawyer OR attorney OR advice OR litigation OR court OR investigation OR subpoena
K-2) Law OR lawyer OR attorney OR advice OR litigation OR court OR investigation OR subpoena OR justice
C) (law OR legal OR criminal OR civil OR litigation) AND NOT (politics OR proposed OR pending)
R) Court OR courtroom OR judge OR judicial OR judiciary OR law OR lawyer OR legal OR plaintiff OR plaintiffs OR defendant OR defendants OR subpoena OR sued OR suing OR sue OR lawsuit OR injunction OR justice
None of the keyword searches achieved higher recall than TAR when the amount of review effort was equal. All six of the biology queries were submitted by one person. The first query was evaluated in front of the audience, and his first revision did help, but subsequent (blind) revisions tended to hurt more than they helped. For biology, review of 3,000 documents with TAR gave better recall than review of 6,000 documents with any of the queries. Only a single query was submitted for the medical industry, and it underperformed TAR substantially. Five people submitted a total of eight queries for the law category, and the audience had the best results on that topic, which isn’t surprising since an audience full of lawyers and litigation support people would be expected to be especially good at identifying keywords related to the law. Even so, the best queries achieved lower recall with review of 6,000 documents than TAR 3.0 CAL achieved with review of only 3,000 documents, though a few of the queries did achieve higher recall than TAR 3.0 SAL's top-3,000 result when twice as much document review (6,000 documents) was performed with the search query.
This iteration of the challenge, held at the Education Hub at ILTACON 2018, was structured somewhat differently from round 1 and round 2 to give the audience a better chance of beating TAR. Instead of submitting search queries on paper, participants submitted them through a web form using their phones, which allowed them to repeatedly tweak their queries and resubmit them. I executed the queries in front of the participants, so they could see the exact recall achieved almost instantaneously (all documents had been marked as relevant or non-relevant by a human reviewer in advance), and they could use the performance of their own queries and other participants’ queries to guide improvements. This actually gave the participants an advantage over a real e-discovery project, where performance measurements would normally require human evaluation of a random sample from the search output, making several performance-guided iterations of a query very expensive in terms of review labor. The audience got those performance evaluations for free even though the goal was to compare recall achieved for equal amounts of document review effort. On the other hand, the audience still had the disadvantages of limited time and no familiarity with the documents.
As before, recall was evaluated for the top 3,000 and top 6,000 documents, which was enough to achieve high recall with TAR (even with the training documents included, so total review effort for TAR and the search queries was the same). Audience members were free to work on any of the three topics used in previous rounds of the challenge: law, medical industry, or biology. Unfortunately, the audience was much smaller than in previous rounds, and nobody chose to submit a query for the biology topic.
Previously, the TAR results were achieved by using the TAR 3.0 workflow to train with 200 cluster centers, sorting the documents by the resulting relevance scores, and reviewing top-scoring documents until the desired amount of review effort was expended, without allowing predictions to be updated during that review (e.g., review of 200 training docs plus 2,800 top-scoring docs to get the “Top 3,000” result). I’ll call this TAR 3.0 SAL (SAL = Simple Active Learning, meaning the system is not allowed to learn during the review of top-scoring documents). In practice you wouldn’t do that. If you were reviewing top-scoring documents, you would allow the system to continue learning (CAL). You would use SAL only if you were producing top-scoring documents without reviewing them, since allowing learning to continue during the review reduces the amount of review needed to achieve a desired level of recall. I used TAR 3.0 SAL in previous iterations because I wanted to simulate the full review in front of the audience in a few seconds, and TAR 3.0 CAL would have been slower. This time I did the TAR calculations in advance, so I'm presenting both the SAL and CAL results and you can see how much difference the additional learning from CAL makes.
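To make the SAL/CAL distinction concrete, here is a minimal sketch of the two review loops in Python. This is illustrative pseudocode, not Clustify's implementation; the classifier interface, batch size, and names are assumptions.

```python
# Minimal sketch (not Clustify's implementation) contrasting SAL and CAL review
# loops. Assumes a scikit-learn-style classifier, numpy feature matrices, and a
# review budget counted in documents; names and batch size are illustrative.
import numpy as np

def sal_review(clf, train_X, train_y, pool_X, budget):
    """Simple Active Learning: train once, then review top-scoring documents
    without updating the model during that review."""
    clf.fit(train_X, train_y)
    ranked = np.argsort(clf.decision_function(pool_X))[::-1]  # highest score first
    return ranked[:budget]                                     # docs a human reviews

def cal_review(clf, train_X, train_y, pool_X, budget, review_doc, batch=100):
    """Continuous Active Learning: keep retraining on the reviewer's judgments
    while working down the top-scoring documents."""
    X, y = list(train_X), list(train_y)
    reviewed = []
    while len(reviewed) < budget:
        clf.fit(np.asarray(X), np.asarray(y))
        ranked = np.argsort(clf.decision_function(pool_X))[::-1]
        seen = set(reviewed)
        next_batch = [i for i in ranked if i not in seen][:batch]
        for i in next_batch:
            label = review_doc(i)            # human relevance judgment
            reviewed.append(i)
            X.append(pool_X[i]); y.append(label)
    return reviewed
```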
One other difference compared to previous versions of the challenge is how I’ve labeled the queries below. This time, the number indicates which participant submitted the query and the letter indicates which of his/her queries is being analyzed (if the person submitted more than one), rather than indicating a tweak of the query that I added to try to improve the result. In other words, all variations were tweaks done by the audience instead of by me. Discussion of the results follows the tables, graphs, and queries below.
|Medical Industry||Top 3,000||Top 6,000|
|TAR 3.0 SAL||67.3%||83.7%|
|TAR 3.0 CAL||80.7%||88.5%|
|Law||Top 3,000||Top 6,000|
|TAR 3.0 SAL||63.5%||82.3%|
|TAR 3.0 CAL||77.8%||87.8%|
1a) Hospital AND New AND therapies
1b) Hospital AND New AND (physicians OR doctors)
2) Copyright AND mickey AND mouse
3a) Schedule OR Amendments OR Trial OR Jury OR Judge OR Circuit OR Courtroom OR Judgement
3b) Amendments OR Trial OR Jury OR Judge OR Circuit OR Courtroom OR Judgement OR trial OR law OR Patent OR legal
3c) Amendments OR Trial OR Jury OR Judge OR Circuit OR Courtroom OR Judgement OR trial OR law OR Patent OR legal OR Plaintiff OR Defendant
4) Privacy OR (Personally AND Identifiable AND Information) OR PII OR (Protected AND Speech)
TAR won across the board, as in previous iterations of the challenge. Only one person submitted queries for the medical industry topic. His/her revised query did a better job of finding relevant documents, but still returned fewer than 3,000 documents and fared far worse than TAR — the query was just not broad enough to achieve high recall. Three people submitted queries on the law topic. One of those people revised the query a few times and got decent results (shown in green), but still fell far short of the TAR result, with review of 6,000 documents from the best query finding fewer relevant documents than review of half as many documents with TAR 3.0 SAL (TAR 3.0 CAL did even better). It is unfortunate that the audience was so small, since a larger audience might have done better by learning from each other’s submissions. Hopefully I’ll be able to do this with a bigger audience in the future.
Should proportionality arguments allow producing parties to get away with poor productions simply because they wasted a lot of effort due to an extremely bad algorithm? This article examines one such bad algorithm that has been used in major review platforms, and shows that it could be made vastly more effective with a very minor tweak. Are lawyers who use platforms lacking the tweak committing malpractice by doing so?
Last year I was moderating a panel on TAR (predictive coding) and I asked the audience what recall level they normally aim for when using TAR. An attendee responded that it was a bad question because proportionality only required a reasonable effort. Much of the audience expressed agreement. This should concern everyone. If quality of result (e.g., achieving a certain level of recall) is the goal, the requesting party really has no business asking how the result was achieved–any effort wasted by choosing a bad algorithm is borne by the producing party. On the other hand, if the target is expenditure of a certain amount of effort, doesn’t the requesting party have the right to know and object if the producing party has chosen a methodology that is extremely inefficient?
The algorithm I’ll be picking on today is a classifier called 1-nearest neighbor, or 1-NN. You may be using it without ever having heard that name, so pay attention to my description of it and see if it sounds familiar. To predict whether a document is relevant, 1-NN finds the single most similar training document and predicts the relevance of the unreviewed document to be the same. If a relevance score is desired instead of a yes/no relevance prediction, the relevance score can be taken to be the similarity value if the most similar training document is relevant, and it can be taken to be the negative of the similarity value if the most similar training document is non-relevant. Here is a precision-recall curve for the 1-NN algorithm used in a TAR 1.0 workflow trained with randomly-selected documents:
The precision falls off a cliff above 60% recall. This is not due to inadequate training–the cliff shown above will not go away no matter how much training data you add. To understand the implications, realize that if you sort the documents by relevance score and review from the top down until you reach the desired level of recall, 1/P at that recall tells you the average number of documents you’ll review for each relevant document you find. At 60% recall, precision is 67%, so you’ll review 1.5 documents (1/0.67 = 1.5) for each relevant document you find. There is some effort wasted in reviewing those 0.5 non-relevant documents for each relevant document you find, but it’s not too bad. If you keep reviewing documents until you reach 70% recall, things get much worse. Precision drops to about 8%, so you’ll encounter so many non-relevant documents after you get past 60% recall that you’ll end up reviewing 12.5 documents for each relevant document you find. You would surely be tempted to argue that proportionality says you should be able to stop at 60% recall because the small gain in result quality from going from 60% recall to 70% recall would cost nearly ten times as much review effort. But does it really have to be so hard to get to 70% recall?
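To make the scoring rule concrete, here is a minimal sketch of 1-NN relevance scoring as described above, using cosine similarity over TF-IDF vectors. The vectorization choice and the function names are mine, for illustration; the scoring rule itself is the one described in the previous paragraph.

```python
# Minimal sketch of 1-NN relevance scoring: score = similarity to the single
# most similar training document, negated if that document is non-relevant.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def one_nn_scores(train_texts, train_relevant, unreviewed_texts):
    vec = TfidfVectorizer()
    train_X = vec.fit_transform(train_texts)
    test_X = vec.transform(unreviewed_texts)
    sims = cosine_similarity(test_X, train_X)        # similarity to every training doc
    nearest = sims.argmax(axis=1)                    # index of most similar training doc
    best_sim = sims[np.arange(sims.shape[0]), nearest]
    sign = np.where(np.asarray(train_relevant)[nearest], 1.0, -1.0)
    return sign * best_sim                           # sort descending to rank for review
```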
It’s very easy to come up with an algorithm that can reach higher recall without so much review effort once you understand why the performance cliff occurs. When you sort the documents by relevance score with 1-NN, the documents where the most similar training document is relevant will be at the top of the list. The performance cliff occurs when you start digging into the documents where the most similar training document is non-relevant. The 1-NN classifier does a terrible job of determining which of those documents has the best chance of being relevant because it ignores valuable information that is available. Consider two documents, X and Y, that both have a non-relevant training document as the most similar training document, but document X has a relevant training document as the second most similar training document and document Y has a non-relevant training document as the second most similar. We would expect X to have a better chance of being relevant than Y, all else being equal, but 1-NN cannot distinguish between the two because it pays no attention to the second most similar training document. Here is the result for 2-NN, which takes the two most similar training documents into account:
Notice that 2-NN easily reaches 70% recall (1/P is 1.6 instead of 12.5), but it does have a performance cliff of its own at a higher level of recall because it fails to make use of information about the third most similar training document. If we utilize information about the 40 most similar training documents we get much better performance as shown by the solid lines here:
It was the presence of non-relevant training documents that tripped up the 1-NN algorithm: a non-relevant nearest neighbor effectively hides evidence (similar training documents that are relevant) that a document might be relevant. So you might think the performance cliff could be avoided by omitting non-relevant documents from the training. The result of doing that is shown with dashed lines in the figure above. Omitting non-relevant training documents does help 1-NN at high recall, though it is still far worse than 40-NN with the non-relevant training documents included (omitting the non-relevant training documents actually harms 40-NN, as shown by the red dashed line). A workflow that focuses on reviewing documents that are likely to be relevant, such as TAR 2.0, rather than training with random documents, will be less affected by 1-NN’s shortcomings, but why would you ever suffer the poor performance of 1-NN when 40-NN requires such a minimal modification of the algorithm?
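The "minimal modification" amounts to letting the k most similar training documents contribute instead of just one. Here is a sketch of a similarity-weighted k-NN score, reusing the similarity matrix from the 1-NN sketch above; this weighting is one simple choice for illustration, not necessarily the exact formula behind the plots.

```python
# Sketch of a k-NN relevance score: similarity-weighted vote of the k most
# similar training documents (k=1 reduces to 1-NN).
import numpy as np

def knn_scores(sims, train_relevant, k=40):
    # sims: (n_unreviewed, n_train) similarity matrix, as in the 1-NN sketch
    labels = np.where(np.asarray(train_relevant), 1.0, -1.0)
    top_k = np.argsort(sims, axis=1)[:, -k:]             # k most similar per doc
    rows = np.arange(sims.shape[0])[:, None]
    top_sims = sims[rows, top_k]
    return (top_sims * labels[top_k]).sum(axis=1) / k    # signed, similarity-weighted
```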
You might wonder whether the performance cliff shown above is just an anomaly. Here are precision-recall curves for several additional categorization tasks with 1-NN on the left and 40-NN on the right.
Sometimes the 1-NN performance cliff occurs at high enough recall to allow a decent production, but sometimes it keeps you from finding even half of the relevant documents. Should a court accept less than 50% recall when the most trivial tweak to the algorithm could have achieved much higher recall with roughly the same amount of document review?
Of course, there are many factors beyond the quality of the classifier, such as the choice of TAR 1.0 (SPL and SAL), TAR 2.0 (CAL), or TAR 3.0 workflows, that impact the efficiency of the process. The research by Grossman and Cormack that courts have relied upon to justify the use of TAR because it reaches recall that is comparable to or better than an exhaustive human review is based on CAL (TAR 2.0) with good classifiers, whereas some popular software uses TAR 1.0 (less efficient if documents will be reviewed before production) and poor classifiers such as 1-NN. If the producing party vows to reach high recall and bears the cost of choosing bad software and/or processes to achieve that, there isn’t much for the requesting party to complain about (though the producing party could have a bone to pick with an attorney or service provider who recommended an inefficient approach). On the other hand, if the producing party argues that low recall should be tolerated because decent recall would require too much effort, it seems that asking whether the algorithms used are unnecessarily inefficient would be appropriate.
During my presentation at the South Central eDiscovery & IG Retreat I challenged the audience to create keyword searches that would work better than technology-assisted review (predictive coding). This is similar to the experiment done a few months earlier. See this article for more details. The audience again worked in groups to construct keyword searches for two topics. One topic, articles on law, was the same as last time. The other topic, the medical industry, was new (it replaced biology).
Performance was evaluated by comparing the recall achieved for equal amounts of document review effort (the population was fully categorized in advance, so measurements are exact, not estimates). Recall for the top 3000 keyword search matches was compared to recall from reviewing 202 training documents (2 seed documents plus 200 cluster centers using the TAR 3.0 method) and 2798 documents having the highest relevance scores from TAR. Similarly, recall from the top 6000 keyword search matches was compared to recall from review of 6000 documents with TAR. Recall from all documents matching a search query was also measured to find the maximum recall that could be achieved with the query.
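Since the population was fully categorized in advance, recall at a fixed review budget is just a count over the top of a ranked list. A minimal sketch of that measurement (the function and variable names are mine, for illustration):

```python
# Sketch: exact recall after reviewing the top `budget` documents of a ranking,
# given ground-truth relevance labels for the whole population.
def recall_at_budget(ranked_doc_ids, relevant_ids, budget):
    reviewed = ranked_doc_ids[:budget]
    found = sum(1 for d in reviewed if d in relevant_ids)
    return found / len(relevant_ids)

# e.g., compare a query's top 3000 matches to TAR's 202 training docs plus 2798
# top-scoring docs by calling recall_at_budget on each ranking with budget=3000.
```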
The search queries are shown after the performance tables and graphs. When there is an “a” and “b” version of the query, the “a” version was the audience’s query as-is, and the “b” query was tweaked by me to remove restrictions that were limiting the number of relevant documents that could be found. The results are discussed at the end of the article.
|Medical Industry||Total Matches||Top 3,000||Top 6,000||All|
|Law||Total Matches||Top 3,000||Top 6,000||All|
1a) medical AND (industry OR business) AND NOT (scientific OR research)
1b) medical AND (industry OR business)
2) (revenue OR finance OR market OR brand OR sales) AND (hospital OR health OR medical OR clinical)
3a) (medical OR hospital OR doctor) AND (HIPPA OR insurance)
3b) medical OR hospital OR doctor OR HIPPA OR insurance
4a) (earnings OR profits OR management OR executive OR recall OR (board AND directors) OR healthcare OR medical OR health OR hospital OR physician OR nurse OR marketing OR pharma OR report OR GlaxoSmithKline OR (united AND health) OR AstraZeneca OR Gilead OR Sanofi OR financial OR malpractice OR (annual AND report) OR provider OR HMO OR PPO OR telemedicine) AND NOT (study OR research OR academic)
4b) earnings OR profits OR management OR executive OR recall OR (board AND directors) OR healthcare OR medical OR health OR hospital OR physician OR nurse OR marketing OR pharma OR report OR GlaxoSmithKline OR (united AND health) OR AstraZeneca OR Gilead OR Sanofi OR financial OR malpractice OR (annual AND report) OR provider OR HMO OR PPO OR telemedicine
5) FRCP OR Fed OR litigation OR appeal OR immigration OR ordinance OR legal OR law OR enact OR code OR statute OR subsection OR regulation OR rules OR precedent OR (applicable AND law) OR ruling
6) judge OR (supreme AND court) OR court OR legislation OR legal OR lawyer OR judicial OR law OR attorney
As before, TAR won across the board, but there were some surprises this time.
For the medical industry topic, review of 3000 documents with TAR achieved higher recall than any keyword search achieved with review of 6000 documents, very similar to results from a few months ago. When all documents matching the medical industry search queries were analyzed, two queries did achieve high recall (3b and 4b, which are queries I tweaked to achieve higher recall), but they did so by retrieving a substantial percentage of the 100,000 document population (16,756 and 58,510 documents respectively). TAR can reach any level of recall by simply taking enough documents from the sorted list—TAR doesn’t run out of matches like a keyword search does. TAR matches the 94.6% recall that query 4b achieved (requiring review of 58,510 documents) with review of only 15,500 documents.
Results for the law topic were more interesting. The two queries submitted for the law topic both performed better than any of the queries submitted for that topic a few months ago. Query 6 gave the best results, with TAR beating it by only a modest amount. If all 25,370 documents matching query 6 were reviewed, 95.7% recall would be achieved, which TAR could accomplish with review of 24,000 documents. It is worth noting that TAR 2.0 would be more efficient, especially at very high recall. TAR 3.0 gives the option to produce documents without review (not utilized for this exercise), plus computations are much faster due to there being vastly fewer training documents, which is handy for simulating a full review live in front of an audience in a few seconds.
During my presentation at the NorCal eDiscovery & IG Retreat I challenged the audience to create keyword searches that would work better than technology-assisted review (predictive coding) for two topics. Half of the room was tasked with finding articles about biology (science-oriented articles, excluding medical treatment) and the other half searched for articles about current law (excluding proposed laws or politics). I ran one of the searches against TAR in Clustify live during the presentation (Clustify’s “shadow tags” feature allows a full document review to be simulated in a few minutes using documents that were pre-categorized by human reviewers), but couldn’t do the rest due to time constraints. This article presents the results for all the queries submitted by the audience.
The audience had limited time to construct queries (working together in groups), they weren’t familiar with the data set, and they couldn’t do sampling to tune their queries, so I’m not claiming the exercise was comparable to an e-discovery project. Still, it was entertaining. The topics are fairly simple, so a large percentage of the relevant documents can be found with a short search using some broad terms. For example, a search for “biology” would find 37% of the biology documents. A search for “law” would find 71% of the law articles. The trick is to find the relevant documents without pulling in too many of the non-relevant ones.
To evaluate the results, I measured the recall (percentage of relevant documents found) from the top 3,000 and top 6,000 hits on the search query (3% and 6% of the population respectively). I’ve also included the recall achieved by looking at all docs that matched the search query, just to see what recall the search queries could achieve if you didn’t worry about pulling in a ton of non-relevant docs. For the TAR results I used TAR 3.0 trained with two seed documents (one relevant from a keyword search and one random non-relevant document) followed by 20 iterations of 10 top-scoring cluster centers, so a total of 202 training documents (no control set needed with TAR 3.0). To compare to the top 3,000 search query matches, the 202 training documents plus 2,798 top-scoring documents were used for TAR, so the total document review (including training) would be the same for TAR and the search query.
The search engine in Clustify is intended to help the user find a few seed documents to get active learning started, so it has some limitations. If the audience’s search query included phrases, they were converted to an AND search enclosed in parentheses. If the audience’s query included a wildcard, I converted it to a parenthesized OR search by looking at the matching words in the index and selecting only the ones that made sense (i.e., I made the queries better than they would have been with an actual wildcard). I noticed that there were a lot of irrelevant words that matched the wildcards. For example, “cell*” in a biology search would match cellphone, cellular, cellar, cellist, etc., but I excluded such words. I would highly recommend that people using keyword search check what their wildcards are actually matching–you may be pulling in a lot of irrelevant words. I removed a few words from the queries that weren’t in the index (so the words shown all actually had an impact). When there is an “a” and “b” version of the query, the “a” version was the audience’s query as-is, and the “b” query was tweaked by me to retrieve more documents.
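For anyone who wants to check what their own wildcards are matching, the idea is simply to expand the wildcard against the index's vocabulary and eyeball the result before running the query. A minimal sketch (the vocabulary list stands in for whatever term list your search tool can export):

```python
# Sketch: expand a trailing wildcard against an index vocabulary so you can see
# (and prune) what it would actually match, e.g. "cell*" -> cellphone, cellar, ...
import fnmatch

def expand_wildcard(pattern, vocabulary):
    return sorted(w for w in vocabulary if fnmatch.fnmatch(w, pattern))

vocab = ["cell", "cells", "cellphone", "cellular", "cellar", "cellist", "celled"]
print(expand_wildcard("cell*", vocab))
# Review the list and keep only the sensible terms in an OR query.
```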
The tables below show the results. The actual queries are displayed below the tables. Discussion of the results is at the end.
|Biology||Total Matches||Top 3,000||Top 6,000||All Matches|
|Law||Total Matches||Top 3,000||Top 6,000||All Matches|
1) organism OR microorganism OR species OR DNA
2) habitat OR ecology OR marine OR ecosystem OR biology OR cell OR organism OR species OR photosynthesis OR pollination OR gene OR genetic OR genome AND NOT (treatment OR generic OR prognosis OR placebo OR diagnosis OR FDA OR medical OR medicine OR medication OR medications OR medicines OR medicated OR medicinal OR physician)
3) biology OR plant OR (phyllis OR phylos OR phylogenetic OR phylogeny OR phyllo OR phylis OR phylloxera) OR animal OR (cell OR cells OR celled OR cellomics OR celltiter) OR (circulation OR circulatory) OR (neural OR neuron OR neurotransmitter OR neurotransmitters OR neurological OR neurons OR neurotoxic OR neurobiology OR neuromuscular OR neuroscience OR neurotransmission OR neuropathy OR neurologically OR neuroanatomy OR neuroimaging OR neuronal OR neurosciences OR neuroendocrine OR neurofeedback OR neuroscientist OR neuroscientists OR neurobiologist OR neurochemical OR neuromorphic OR neurohormones OR neuroscientific OR neurovascular OR neurohormonal OR neurotechnology OR neurobiologists OR neurogenetics OR neuropeptide OR neuroreceptors) OR enzyme OR blood OR nerve OR brain OR kidney OR (muscle OR muscles) OR dna OR rna OR species OR mitochondria
4a) statistically AND ((laboratory AND test) OR species OR (genetic AND marker) OR enzyme) AND NOT (diagnosis OR treatment OR prognosis)
4b) (species OR (genetic AND marker) OR enzyme) AND NOT (diagnosis OR treatment OR prognosis)
5a) federal AND (ruling OR judge OR justice OR (appellate OR appellant))
5b) ruling OR judge OR justice OR (appellate OR appellant)
6) amendments OR FRE OR whistleblower
7) ((law OR laws OR lawyer OR lawyers OR lawsuit OR lawsuits OR lawyering) OR (regulation OR regulations) OR (statute OR statutes) OR (standards)) AND NOT pending
TAR beat keyword search across the board for both tasks. The top 3,000 documents returned by TAR achieved higher recall than the top 6,000 documents for any keyword search. In other words, if documents will be reviewed before production, TAR achieves better results (higher recall) with half as much document review compared to any of the keyword searches. The top 6,000 documents returned by TAR achieved higher recall than all of the documents matching any individual keyword search, even when the keyword search returned 27,000 documents.
DESI (Discovery of Electronically Stored Information) is a one-day workshop within ICAIL (International Conference on Artificial Intelligence and Law), which is held every other year. The conference was held in London last month. Rumor has it that the next ICAIL will be in North America, perhaps Montreal.
I’m not going to go into the DESI talks based on papers and slides that are posted on the DESI VII website since you can read that content directly. The workshop opened with a keynote by Maura Grossman and Gordon Cormack where they talked about the history of TREC tracks that are relevant to e-discovery (Spam, Legal, and Total Recall), the limitation on the recall that can be achieved due to ambiguous relevance (reviewer disagreement) for some documents, and the need for high recall when it comes to identifying privileged documents or documents where privacy must be protected. When looking for privileged documents it is important to note that many tools don’t make use of metadata. Documents that are missed may be technically relevant but not really important — you should look at a sample to see whether they are important.
Between presentations based on submitted papers there was a lunch where people separated into four groups to discuss specific topics. The first group focused on e-discovery users. Visualizations were deemed “nice to look at” but not always useful — does the visualization help you to answer a question faster? Another group talked about how to improve e-discovery, including attorney aversion to algorithms and whether a substantial number of documents could be missed by CAL after the gain curve had plateaued. Another group discussed dreams about future technologies, like better case assessment and redacting video. The fourth group talked about GDPR and speculated that the UK would obey GDPR.
DESI ended with a panel discussion about future directions for e-discovery. It was suggested that a government or consumer group should evaluate TAR systems. Apparently, NIST doesn’t want to do it because it is too political. One person pointed out that consumers aren’t really demanding it. It’s not just a matter of optimizing recall and precision — process (quality control and workflow) matters, which makes comparisons hard. It was claimed that defense attorneys were motivated to lobby against the federal rules encouraging the use of TAR because they don’t want incriminating things to be found. People working in archiving are more enthusiastic about TAR.
Following DESI (and other workshops conducted in parallel on the first day), ICAIL had three more days of paper presentations followed by another day of workshops. You can find the schedule here. I only attended the first day of non-DESI presentations. There are two papers from that day that I want to point out. The first is Effectiveness Results for Popular e-Discovery Algorithms by Yang, David Grossman, Frieder, and Yurchak. They compared performance of the CAL (relevance feedback) approach to TAR for several different classification algorithms, feature types, feature weightings, and with/without LSI. They used several different performance metrics, though they missed the one I think is most relevant for e-discovery (review effort required to achieve an acceptable level of recall). Still, it is interesting to see such an exhaustive comparison of algorithms used in TAR / predictive coding. They’ve made their code available here. The second paper is Scenario Analytics: Analyzing Jury Verdicts to Evaluate Legal Case Outcomes by Conrad and Al-Kofahi. The authors analyze a large database of jury verdicts in an effort to determine the feasibility of building a system to give strategic litigation advice (e.g., potential award size, trial duration, and suggested claims) based on a data-driven analysis of the case.
Measuring the recall achieved to within +/- 5% to demonstrate that a production is defensible can require reviewing a substantial number of random documents. For a case of modest size, the amount of review required to measure recall can be larger than the amount of review required to actually find the responsive documents with predictive coding. This article describes a new method requiring much less document review to demonstrate that adequate recall has been achieved. This is a brief overview of a more detailed paper I’ll be presenting at the DESI VII Workshop on June 12th (slides available here).
The proportion of a population having some property can be estimated to within +/- 5% by measuring the proportion on a random sample of 400 documents (you’ll also see the number 385 being used, but using 400 will make it easier to follow the examples). To measure recall we need to know what proportion of responsive documents are produced, so we need a sample of 400 random responsive documents. Since we don’t know which documents in the population are responsive, we have to select documents randomly and review them until 400 responsive ones are found. If prevalence is 10% (10% of the population is responsive), that means reviewing roughly 4,000 documents to find 400 that are relevant so that recall can be estimated. If prevalence is 1%, it means reviewing roughly 40,000 random documents to measure recall. This can be quite a burden.
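To see where those numbers come from: the +/- 5% figure is the usual binomial margin of error for a sample of 400, and the review burden is just the sample size divided by prevalence. A quick check (a sketch using the normal approximation):

```python
# Sketch: margin of error for a sample of 400 responsive docs, and the expected
# number of random docs to review to find them at a given prevalence.
import math

n = 400
margin = 1.96 * math.sqrt(0.25 / n)      # worst case p=0.5 -> ~0.049, i.e. +/- 5%
print(f"margin of error: +/-{margin:.1%}")

for prevalence in (0.10, 0.01):
    print(f"prevalence {prevalence:.0%}: review ~{n / prevalence:,.0f} random docs")
```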
Once recall is measured, a decision must be made about whether it is high enough. Suppose you decide that if at least 300 of the 400 random responsive documents were produced (75%) the production is acceptable. For any actual level of recall, the probability of accepting the production can be computed (see figure to right). The probability of accepting a production where the actual recall is less than 70% will be very low, and the probability of rejecting a production where the actual recall is greater than 80% will also be low — this comes from the fact that a sample of 400 responsive documents is sufficient to measure recall to within +/- 5%.
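The acceptance probability curve in the figure is just a binomial tail probability: the chance that at least 300 of 400 sampled responsive documents were produced, given the true recall. A sketch of that computation using scipy (the specific recall values below are only examples):

```python
# Sketch: probability of accepting the production (>= 300 of 400 sampled
# responsive docs were produced) as a function of the true recall.
from scipy.stats import binom

def accept_probability(true_recall, n=400, threshold=300):
    # P(X >= threshold) where X ~ Binomial(n, true_recall)
    return binom.sf(threshold - 1, n, true_recall)

for r in (0.65, 0.70, 0.75, 0.80, 0.85):
    print(f"recall {r:.0%}: P(accept) = {accept_probability(r):.3f}")
```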
The idea behind the new method is to achieve the same probability profile for accepting/rejecting a production using a multi-stage acceptance test. The multi-stage test gives the possibility of stopping the process and declaring the production accepted/rejected long before reviewing 400 random responsive documents. The procedure is shown in the flowchart to the right (click to enlarge). A decision may be reached after reviewing enough documents to find just 25 random documents that are responsive. If a decision isn’t made after reviewing 25 responsive documents, review continues until 50 responsive documents are found and another test is applied. At worst, documents will be reviewed until 400 responsive documents are found (the same as the traditional direct recall estimation method).
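The mechanics of the multi-stage test can be sketched as follows. The checkpoints at 25, 50, and 400 responsive documents follow the description above, but the intermediate accept/reject counts shown are placeholders I made up for illustration; the real boundaries (and additional checkpoints) come from the tables in the paper. Only the final 300-of-400 test is taken from the text.

```python
# Sketch of a multi-stage acceptance test. Intermediate boundaries are
# PLACEHOLDERS; the actual accept/reject counts at each checkpoint come from
# the paper, chosen to match the single-stage 300-of-400 probability profile.
import random

# checkpoint -> (accept if produced >= a, reject if produced <= r)
BOUNDARIES = {25: (24, 13),      # placeholder
              50: (45, 30),      # placeholder
              400: (300, 299)}   # final test: accept iff >= 300 of 400 produced

def multi_stage_test(was_produced, boundaries=BOUNDARIES):
    """was_produced() simulates finding one more random responsive doc and
    reports whether that doc was in the production."""
    produced = 0
    for n in range(1, max(boundaries) + 1):
        produced += was_produced()
        if n in boundaries:
            accept_at, reject_at = boundaries[n]
            if produced >= accept_at:
                return "accept", n
            if produced <= reject_at:
                return "reject", n
    return "reject", max(boundaries)

# With true recall 85%, the decision usually comes well before 400 documents.
print(multi_stage_test(lambda: random.random() < 0.85))
```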
The figure to the right shows six examples of the multi-stage acceptance test being applied when the actual recall is 85%. Since 85% is well above the 80% upper bound of the 75% +/- 5% range, we expect this production to virtually always be accepted. The figure shows that acceptance can occur long before reviewing a full 400 random responsive documents. The number of random responsive documents reviewed is shown on the vertical axis. Toward the bottom of the graph the sample is very small and the percentage of the sample that has been produced may deviate greatly from the right answer of 85%. As you go up the sample gets larger and the proportion of the sample that is produced is expected to get closer to 85%. When a green decision boundary is touched, causing the production to be accepted as having sufficiently high recall, the color of the remainder of the path is changed to yellow — the yellow part represents the document review that is avoided by using the multi-stage acceptance method (since the traditional direct recall measurement would involve going all the way to 400 responsive documents). As you can see, when the actual recall is 85% the number of random responsive documents that must be reviewed is often 50 or 100, not 400.
The figure to the right shows the average number of documents that must be reviewed using the multi-stage acceptance procedure from the earlier flowchart. The amount of review required can be much less than 400 random responsive documents. In fact, the further above/below the 75% target (called the “splitting recall” in the paper) the actual recall is, the less document review is required (on average) to come to a conclusion about whether the production’s recall is high enough. This creates an incentive for the producing party to aim for recall that is well above the minimum acceptable level since it will be rewarded with a reduced amount of document review to confirm the result is adequate.
It is important to note that the multi-stage procedure provides an accept/reject result, not a recall estimate. If you follow the procedure until an accept/reject boundary is hit and then use the proportion of the sample that was produced as a recall estimate, that estimate will be biased (the use of “unbiased” in the paper title refers to the sampling being done on the full population, not on a subset [such as the discard set] that would cause a bias due to inconsistency in review of different subsets).
You may want to use a splitting recall other than 75% for the accept/reject decision — the full paper provides tables of values necessary for doing that.
George Socha, Doug Austin, David Horrigan, Bill Dimm, and Bill Speros will give presentations in this webinar on the history and future of ediscovery moderated by Mary Mack on December 1, 2016. Bill Dimm will talk about the evolution of predictive coding technologies and our understanding of best practices, including recall estimation, the evil F1 score, research efforts, pre-culling, and the TAR 1.0, 2.0, and 3.0 workflows. CLICK HERE FOR RECORDING OF WEBINAR, SLIDES, AND LINKS TO RELATED RESOURCES.
The 2016 Northeast eDiscovery & IG Retreat was held at the Ocean Edge Resort & Golf Club. It was the third annual Ing3nious retreat held in Cape Cod. The retreat featured two simultaneous sessions throughout the day in a beautiful location. My notes below provide some highlights from the sessions I was able to attend. You can find additional photos here.
The retreat started with peer-to-peer round tables where each table was tasked with answering the question: Why does e-discovery suck (gripes, pet peeves, issues, etc.) and how can it be improved? Responses included:
- How to drive innovation? New technologies need to be intuitive and simple to get client adoption.
- Why are e-discovery tools only for e-discovery? Should be using predictive coding for records management.
- Need alignment between legal and IT. Need ongoing collaboration.
- Handling costs. Cost models and comparing service providers are complicated.
- Info governance plans for defensible destruction.
- Failure to plan and strategize e-discovery.
- Communication and strategy. It is important to get the right people together.
- Why not more cooperation at meet-and-confer? Attorneys that are not comfortable with technology are reluctant to talk about it. Asymmetric knowledge about e-discovery causes problems–people that don’t know what they are doing ask for crazy things.
Catching Up on the Implementation of the Amended Federal Rules
I couldn’t attend this one.
Predictive Coding and Other Document Review Technologies–Where Are We Now?
It is important to validate the process as you go along, for any technology. It is important to understand the client’s documents. Pandora is more like TAR 2.0 than TAR 1.0, because it starts giving recommendations based on your feedback right away. The 2012 Rand Study found this e-discovery cost breakdown: 73% document review, 8% collection, and 19% processing. A question from the audience about pre-culling with keyword search before applying predictive coding spurred some debate. Although it wasn’t mentioned during the panel, I’ll point out William Webber’s analysis of the Biomet case, which shows pre-culling discarded roughly 40% of the relevant documents before predictive coding was applied. There are many different ways of charging for predictive coding: amount of data, number of users, hose (total data flowing through) or bucket (max amount of data allowed at one time). Another barrier to use of predictive coding is lack of senior attorney time (e.g., to review documents for training). Factors that will aid in overcoming barriers: improving technologies, Sherpas to guide lawyers through the process, court rulings, influence from general counsel. Need to admit that predictive coding doesn’t work for everything, e.g., calendar entries. New technologies include anonymization tools and technology to reduce the size of collections. Existing technologies that are useful: entity extraction, email threading, facial recognition, and audio to text. Predictive coding is used in maybe less than 1% of cases, but email threading is used in 99%.
It’s All Greek To Me: Multi-Language Discovery Best Practices
Native speakers are important. An understanding of relevant industry terminology is important, too. The ALTA fluency test is poor–the test is written in English and then translated to other languages, so it’s not great for testing ability to comprehend text that originated in another language. Hot documents may be translated for presentation. This is done with a secure platform that prohibits the translator from downloading the documents. Privacy laws make it best to review in-country if possible. There are only 5 really good legal translation companies–check with large firms to see who they use. Throughput can be an issue. Most can do 20,000 words in 3 days. What if you need to do 200,000 in 3 days? Companies do share translators, but there’s no reason for good translators to work for low-tier companies–good translators are in high demand. QC foreign review to identify bad reviewers (need proficient managers). May need to use machine translation (MT) if there are millions of documents. QC the MT result and make sure it is actually useful–in 85% of cases it is not good enough. For CJK (Chinese, Japanese, Korean), MT is terrible. The translation industry is $40 billion. Google invested a lot in MT but it didn’t help much. One technology that is useful is translation memory, where repeated chunks of text are translated just once. People performing review in Japanese must understand the subtlety of the American legal system.
Top Trends in Discovery for 2016
I couldn’t attend this one
Measure Twice, Discover Once
Why measure in e-discovery? So you can explain what happened and why, for defensibility. Also important for cost management. The board of directors may want reports. When asked for more custodians you can show the cost and expected number of relevant documents that will be added by analyzing the number of keyword search hits. Everything gets an ID number for tracking and analysis (USB drives, batches of documents, etc.). Types of metrics ordered from most helpful to most harmful: useful, no metric, not useful, and misleading. A simple metric used often in document review is documents per hour per reviewer. What about document complexity, content complexity, number and type of issue codes, review complexity, risk tolerance instructions, number of “defect opportunities,” and number coded correctly? Many 6-sigma ideas from manufacturing are not applicable due to the subjectivity that is present in document review.
Information Governance and Data Privacy: A World of Risk
I couldn’t attend this one
The Importance of a Litigation Hold Policy
I couldn’t attend this one
Alone Together: Where Have All The Model TAR Protocols Gone?
If you are disclosing details, there are two types: inputs (search terms used to train, shared review of training docs) and outputs (target recall or disclosure of recall). Don’t agree to a specific level of recall before looking at the data–if prevalence is low it may be hard. Plaintiff might argue for TAR as a way to overcome cost objections from the defendant. There is concern about lack of sophistication from judges–there is “stunning” variation in expertise among federal judges. An attorney involved with the Rio Tinto case recommends against agreeing on seed sets because it is painful and focuses on the wrong thing. Sometimes there isn’t time to put eyes on all documents that will be produced. Does the TAR protocol need to address dupes, near-dupes, email threading, etc.?
Information Governance: Who Owns the Information, the Risk and the Responsibility?
I couldn’t attend this one
Bringing eDiscovery In-House — Savings and Advantages
I was on this panel so I didn’t take notes