Best Legal Blog Contest 2018

From a field of hundreds of potential nominees, the Clustify Blog received enough nominations to be selected to compete in The Expert Institute’s Best Legal Blog Contest in the Legal Tech category.

Now that the blogs have been nominated and placed into categories, it is up to readers to select the very best.  Each blog will compete for rank within its category, with the three blogs receiving the most votes in each category being crowned overall winners.  A reader can vote for as many blogs as he/she wants in each category, but can vote for a specific blog only once (this is enforced by requiring authentication with Google, LinkedIn, or Twitter).  Voting closes at 12:00 AM on December 17th, at which point the votes will be tallied and the winners announced.  You can find the Clustify Blog voting page here.

TAR vs. Keyword Search Challenge, Round 4

This iteration of the challenge was performed during the Digging into TAR session at the 2018 Northeast eDiscovery & IG Retreat.  The structure was similar to round 3, but the audience was bigger.  As before, the goal was to see whether the audience could construct a keyword search query that performed better than technology-assisted review.

There are two sensible ways to compare performance: either see which approach reaches a fixed level of recall with the least review effort, or see which approach reaches the highest level of recall with a fixed amount of review effort.  Any approach that compares results having different recall and different review effort cannot give a definitive conclusion about which result is best without making arbitrary assumptions about the trade-off between recall and effort (this is why performance measures that mix recall and precision together, such as the F1 score, are not sensible for ediscovery).

For the challenge we fixed the amount of review effort and measured the recall achieved, because that was an easier process to carry out under the circumstances.  Specifically, we took the top 3,000 documents matching the search query, reviewed them (this was instantaneous because the whole population was reviewed in advance), and measured the recall achieved.  That was compared to the recall for a TAR 3.0 process where 200 cluster centers were reviewed for training and then the top-scoring 2,800 documents were reviewed.  If the system was allowed to continue learning while the top-scoring documents were reviewed, the result was called “TAR 3.0 CAL.”  If learning was terminated after review of the 200 cluster centers, the result was called “TAR 3.0 SAL.”  The process was repeated with 6,000 documents instead of 3,000 so you can see how much recall improves if you double the review effort.
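
Because the whole population was labeled in advance, "recall at a fixed review budget" is just a count of relevant documents among the top-ranked results.  Here is a minimal sketch of that measurement in Python (the document IDs and rankings are toy data made up for illustration; this is not the code used for the challenge):

```python
def recall_at_k(ranked_doc_ids, relevant_ids, k):
    """Fraction of all relevant documents found in the top k of a ranked list."""
    found = sum(1 for doc_id in ranked_doc_ids[:k] if doc_id in relevant_ids)
    return found / len(relevant_ids)

# Toy data (made-up IDs); the real challenge used a 100,000-document
# population with every document labeled relevant or non-relevant in advance.
relevant_ids = {"d2", "d5", "d7", "d9"}
search_ranking = ["d1", "d2", "d3", "d5", "d8", "d7", "d4", "d9", "d6"]
tar_ranking = ["d2", "d5", "d7", "d9", "d1", "d3", "d4", "d6", "d8"]

for budget in (4, 8):  # stand-ins for the 3,000 and 6,000 document review budgets
    print(budget,
          round(recall_at_k(search_ranking, relevant_ids, budget), 2),
          round(recall_at_k(tar_ranking, relevant_ids, budget), 2))
```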

Individuals in the audience submitted queries through a web form using smartphones or laptops, and I executed some of the queries (due to limited time) in front of the audience.  They could learn useful keywords from the documents matching the queries, tweak their queries, and resubmit them.  Unlike a real ediscovery project, they had very limited time and no familiarity with the documents.  The audience could choose to work on any of three topics: biology, medical industry, or law.  In the results below, the queries are labeled with the submitters’ initials (some people gave only a first name, so there is only one initial), followed by a number if they submitted more than one query.  Two queries were omitted because they had less than 1% recall (the participants apparently misunderstood the task).  The queries that were evaluated in front of the audience were E-1, U, AC-1, and JM-1.  The discussion of the results follows the tables, graphs, and queries.

Biology Recall
Query Top 3,000 Top 6,000
E-1 32.0% 49.9%
E-2 51.7% 60.4%
E-3 48.4% 57.6%
E-4 45.8% 60.7%
E-5 43.3% 54.0%
E-6 42.7% 57.2%
TAR 3.0 SAL 72.5% 91.0%
TAR 3.0 CAL 75.5% 93.0%
Medical Recall
Query Top 3,000 Top 6,000
U 17.1% 27.9%
TAR 3.0 SAL 67.3% 83.7%
TAR 3.0 CAL 80.7% 88.5%
Law Recall
Query Top 3,000 Top 6,000
AC-1 16.4% 33.2%
AC-2 40.7% 54.4%
JM-1 49.4% 69.3%
JM-2 55.9% 76.4%
K-1 43.5% 60.6%
K-2 43.0% 62.6%
C 32.9% 47.2%
R 55.6% 76.6%
TAR 3.0 SAL 63.5% 82.3%
TAR 3.0 CAL 77.8% 87.8%

[Figure: TAR vs. keyword search recall for the biology topic]

[Figure: TAR vs. keyword search recall for the medical industry topic]

[Figure: TAR vs. keyword search recall for the law topic]

E-1) biology OR microbiology OR chemical OR pharmacodynamic OR pharmacokinetic
E-2) biology OR microbiology OR pharmacodynamic OR cellular OR enzyme OR activation OR nucleus OR protein OR interaction OR genomic OR dna OR hematological OR sequence
E-3) biology OR microbiology OR pharmacodynamic OR cellular OR enzyme OR activation OR nucleus OR protein OR interaction OR genomic OR dna OR hematological OR sequence OR pharmacokinetic OR processes OR lysis
E-4) biology OR microbiology OR pharmacodynamic OR cellular OR enzyme OR activation OR nucleus OR protein OR interaction OR genomic OR dna OR hematological OR sequence OR pharmacokinetic OR processes OR lysis OR study
E-5) biology OR microbiology OR pharmacodynamic OR cellular OR enzyme OR activation OR nucleus OR protein OR interaction OR genomic OR dna OR hematological OR sequence OR pharmacokinetic OR processes OR lysis OR study OR table
E-6) biology OR microbiology OR pharmacodynamic OR cellular OR enzyme OR activation OR nucleus OR protein OR interaction OR genomic OR dna OR hematological OR sequence OR pharmacokinetic OR processes OR lysis OR study OR table OR research
U) Transplant OR organ OR cancer OR hypothesis
AC-1) law
AC-2) legal OR attorney OR (defendant AND plaintiff) OR precedent OR verdict OR deliberate OR motion OR dismissed OR granted
JM-1) Law OR legal OR attorney OR lawyer OR litigation OR liability OR lawsuit OR judge
JM-2) Law OR legal OR attorney OR lawyer OR litigation OR liability OR lawsuit OR judge OR defendant OR plaintiff OR court OR plaintiffs OR attorneys OR lawyers OR defense
K-1) Law OR lawyer OR attorney OR advice OR litigation OR court OR investigation OR subpoena
K-2) Law OR lawyer OR attorney OR advice OR litigation OR court OR investigation OR subpoena OR justice
C) (law OR legal OR criminal OR civil OR litigation) AND NOT (politics OR proposed OR pending)
R) Court OR courtroom OR judge OR judicial OR judiciary OR law OR lawyer OR legal OR plaintiff OR plaintiffs OR defendant OR defendants OR subpoena OR sued OR suing OR sue OR lawsuit OR injunction OR justice

None of the keyword searches achieved higher recall than TAR when the amount of review effort was equal.  All six of the biology queries were submitted by one person.  The first query was evaluated in front of the audience, and his first revision to the query did help, but subsequent (blind) revisions of the query tended to hurt more than they helped.  For biology, review of 3,000 documents with TAR gave better recall than review of 6,000 documents with any of the queries.  There was only a single query submitted for the medical industry, and it underperformed TAR substantially.  Five people submitted a total of eight queries for the law category, and the audience had the best results for that topic, which isn’t surprising since an audience full of lawyers and litigation support people would be expected to be especially good at identifying keywords related to the law.  Even the best queries had lower recall with review of 6,000 documents than TAR 3.0 CAL achieved with review of only 3,000 documents, but a few of the queries, when reviewed to a depth of 6,000 documents, did achieve higher recall than TAR 3.0 SAL achieved with half as much review.

Highlights from the Northeast eDiscovery & IG Retreat 2018

The 2018 Northeast eDiscovery and Information Governance Retreat was held at the Salamander Resort & Spa in Middleburg, Virginia.  It was a full day of talks with a parallel set of talks on Cybersecurity, Privacy, and Data Protection in the adjacent room. Attendees could attend talks from either track. Below are my notes (certainly not exhaustive) from the eDiscovery and IG sessions. My full set of photos is available here.

Strategies For Data Minimization Of Legacy Data
Backup and archiving should be viewed as separate functions.  When it comes to spoliation (FRCP Rule 37), the reasonableness of the company’s data retention plan is key.  Over-preservation is expensive.  There are not many cases on Rule 37 relating to backup tapes.  People are changing their behavior due to the changes in the FRCP, especially in heavily regulated industries such as healthcare and financial services.  Studies find that typically 70% of data has no business value and is not subject to legal hold or retention requirements for compliance.  When using machine learning, you can focus on finding either what to keep or what to get rid of.  It is often best to start with unsupervised machine learning.  Be mindful of destructive malware.  To mitigate security risks, it is important to know where your data (including backup tapes) is.  If a backup tape goes missing, do you need to notify customers (privacy)?  To get started, create a matrix showing what you need to keep, keeping in mind legal holds and privacy (GDPR).  Old backup tapes are subject to GDPR.  Does the right to be forgotten apply to backup tapes?  There is currently no answer.  It would be hard to selectively delete data from the tapes, so maybe have a process that deletes during the restore.  There can be conflicts between U.S. ediscovery and GDPR, so you must decide which is the bigger risk.

Preparing A Coordinated Response To Government Inquiries And Investigations
You might find out that you are being investigated when the FBI or another investigator approaches one of your employees; get an attorney.  Reach out to the investigator, take it seriously, and ask for a timeline.  You may receive a broad subpoena because the investigator wants to ensure they get everything important, but you can often get them to narrow it.  Be sure to retain outside counsel immediately.  In one case a CEO negotiated search terms with a prosecutor without discussing custodians, so they had to search all employees.  The prosecutor can’t handle a huge volume of data, so it should be possible to negotiate a reasonable production.  In addition to satisfying the subpoena, you need to simultaneously investigate whether there is an ongoing problem that needs to be addressed.  Is your IT group able to forensically preserve and produce the documents?  You don’t want to mess up a production in front of a regulator, so get expertise in place early.  Data privacy can be an issue.  When dealing with operations in Europe, it is helpful to get employee consent in advance; nobody wants to consent during an investigation.  Beware of data residing in disparate systems in different languages.  Google Translate is not very good, e.g., you have to be careful about slang.  Employees may try to cover their tracks.  In one case an employee was using “chocolate” as an encoded way to refer to a payment.  In another case an employee took a hammer to a desktop computer, though the hard drive was still recoverable.  Look for gaps in email or anomalous email volume.  Note that employees may use WhatsApp or Signal to communicate.  The DOJ expects you to be systematic (e.g., use analytics) about compliance.  See what data is available, even if it wasn’t subpoenaed, since it may help your side (email usually doesn’t).

Digging Into TAR
I moderated this panel, so I didn’t take notes. We challenged the audience to create a keyword search that would work better than technology-assisted review. Results are posted here.

Implementing Information Governance – Nightmare On Corporate America Street?
You need to weigh the value of the data against the risk of keeping it.  What is your business model?  That will dictate information governance.  Domino’s was described as a technology company that happens to distribute hot bread.  Unstructured data has the biggest footprint and the most rapid growth.  Did you follow your policies?  Your insurance company may be very picky about that when looking for a reason not to pay out.  They may pay out and then sue you over the loss.  Fear is a good motivator.  Threats from the OCC or FDIC over internal data management can motivate change.  You can quantify risk because the cost of having a data breach is now known.  Info governance is utilization awareness, not just data management.  Know where your data is.  What about the employee that creates an unauthorized AWS account?  This is the “shadow ecosystem” or “shadow IT.”  One company discovered they had 50,000 collaborative SharePoint sites they didn’t know about.  For info governance standards see The Sedona Conference and EDRM.

Technology Solution Update From Corporate, Law Firm And Service Provider Perspective
Artificial intelligence (AI) should not merely analyze; it should present a result in a way that is actionable.  It might tell you how much two people talk, their sentiment, and whether there are any spikes in communication volume.  AI can be used by law firms for budgeting by analyzing prior matters.  There are concerns about privacy with AI.  Many clients are moving to the cloud.  Many are using private clouds for collaboration, not necessarily for utilizing large computing power.  Office 365 is of interest to many companies.  There was extensive discussion about the ediscovery analytics capabilities being added from the Equivio acquisition, and a demo by Marcel Katz of Microsoft.  The predictive coding (TAR) capability uses simple active learning (SAL) rather than continuous active learning (CAL).  It is 20 times slower in the cloud than running Equivio on premises.  There is currently no review tool in Office 365, so you have to export the predictions out and do the review elsewhere.  Mobile devices create additional challenges for ediscovery.  The time when a text message is sent may not match the time when it is received if the receiving device is off when the message is sent.  Technology needs to be able to handle emojis.  There are many different apps with many different data storage formats.

The ‘Team Of Teams’ Approach To Enterprise Security And Threat Management
Fast response is critical when you are attacked.  Response must be automated because a human response is not fast enough.  It can take 200 days to detect an adversary on the network, so assume someone is already inside.  What are the critical assets, and what threats should you look for?  What value does the data have to the attacker?  What is the impact on the business?  What is the impact on the people?  Know what is normal for your systems.  Is a large data transfer at 2:00am normal?  Simulate a phishing attack and see if your employees fall for it.  In one case a CEO was known to be in China for a deal, so someone impersonating the CEO emailed the CFO to send $50 million for the deal.  The money was never recovered.  Have processes in place, like requiring a signature for amounts greater than $10,000.  If a company is doing a lot of acquisitions, it can be hard to know what is on their network.  How should small companies get started?  Change passwords, hire an external auditor, and make use of open source tools.

From Data To GRC Insight
Governance, risk management, and compliance (GRC) needs to become centralized and standardized.  Practicing incident response as a team results in better responses when real incidents happen.  Growing data means growing risk.  Beware of storage of social security numbers and credit card numbers.  Use encryption and limit access based on role.  Detect emailing of spreadsheets full of data.  Know what the cost of HIPAA violations is and assign the risk of non-compliance to an individual.  Learn about the NIST Cybersecurity Framework.  Avoid fines and reputational risk, and improve the organization.  Transfer the risk by having data hosted by a company that provides security.  Cloud and mobile can have big security issues.  The company can’t see traffic on mobile devices to monitor for phishing.


TAR vs. Keyword Search Challenge, Round 3

This iteration of the challenge, held at the Education Hub at ILTACON 2018, was structured somewhat differently from round 1 and round 2 to give the audience a better chance of beating TAR.  Instead of submitting search queries on paper, participants submitted them through a web form using their phones, which allowed them to repeatedly tweak their queries and resubmit them.  I executed the queries in front of the participants, so they could see the exact recall achieved (since all documents were marked as relevant or non-relevant by a human reviewer in advance) almost instantaneously, and they could use the performance of their own queries and those of other participants to guide improvements.  This actually gave the participants an advantage over a real e-discovery project, where performance measurements would normally require human evaluation of a random sample from the search output, making several iterations of a query guided by performance evaluations very expensive in terms of review labor.  The audience got those performance evaluations for free, even though the goal was to compare recall achieved for equal amounts of document review effort.  On the other hand, the audience still had the disadvantages of limited time and no familiarity with the documents.

As before, recall was evaluated for the top 3,000 and top 6,000 documents, which was enough to achieve high recall with TAR (even with the training documents included, so total review effort for TAR and the search queries was the same).  Audience members were free to work on any of the three topics that were used in previous versions of the challenge: law, medical industry, or biology.  Unfortunately, the audience was much smaller than in previous versions of the challenge, and nobody chose to submit a query for the biology topic.

Previously, the TAR results were achieved by using the TAR 3.0 workflow to train with 200 cluster centers, sorting the documents based on the resulting relevance scores, and reviewing top-scoring documents until the desired amount of review effort was expended, without allowing predictions to be updated during that review (e.g., review of 200 training docs plus 2,800 top-scoring docs to get the “Top 3,000” result).  I’ll call this TAR 3.0 SAL (SAL = Simple Active Learning, meaning the system is not allowed to learn during the review of top-scoring documents).  In practice you wouldn’t do that.  If you were reviewing top-scoring documents, you would allow the system to continue learning (CAL).  You would use SAL only if you were producing top-scoring documents without reviewing them, since allowing learning to continue during the review would reduce the amount of review needed to achieve a desired level of recall.  I used TAR 3.0 SAL in previous iterations because I wanted to simulate the full review in front of the audience in a few seconds, and TAR 3.0 CAL would have been slower.  This time, I did the TAR calculations in advance and present both the SAL and CAL results, so you can see how much difference the additional learning from CAL made.
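
For readers who want the SAL/CAL distinction spelled out, here is a rough Python sketch of the two review loops (the train and score functions are hypothetical stand-ins for a classifier and its human-judged training data; this is the shape of the workflows, not Clustify’s implementation):

```python
def simple_active_learning(train, score, training_docs, pool, budget):
    """SAL: train once on the training set (e.g., 200 cluster centers),
    then review the top-scoring documents with no further learning."""
    model = train(training_docs)
    ranked = sorted(pool, key=lambda doc: score(model, doc), reverse=True)
    return training_docs + ranked[: budget - len(training_docs)]

def continuous_active_learning(train, score, training_docs, pool, budget, batch=100):
    """CAL: retrain after each batch of reviewed top-scoring documents
    (with their human relevance judgments) is added to the training set."""
    reviewed = list(training_docs)
    remaining = list(pool)
    while len(reviewed) < budget and remaining:
        model = train(reviewed)
        remaining.sort(key=lambda doc: score(model, doc), reverse=True)
        take = min(batch, budget - len(reviewed))
        reviewed.extend(remaining[:take])
        del remaining[:take]
    return reviewed
```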

One other difference compared to previous versions of the challenge is how I’ve labeled the queries below.  This time, the number indicates which participant submitted the query, and the letter indicates which of his/her queries is being analyzed (if the person submitted more than one), rather than indicating a tweak of the query that I added to try to improve the result.  In other words, all variations were tweaks done by the audience instead of by me.  Discussion of the results follows the tables, graphs, and queries below.

Medical Industry Recall
Query Top 3,000 Top 6,000
1a 3.0%
1b 17.4%
TAR 3.0 SAL 67.3% 83.7%
TAR 3.0 CAL 80.7% 88.5%


Law Recall
Query Top 3,000 Top 6,000
2 1.0%
3a 36.1% 42.3%
3b 45.3% 60.1%
3c 47.2% 62.6%
4 11.6% 13.8%
TAR 3.0 SAL 63.5% 82.3%
TAR 3.0 CAL 77.8% 87.8%

[Figure: TAR vs. keyword search recall for the medical industry topic]

[Figure: TAR vs. keyword search recall for the law topic]


1a)  Hospital AND New AND therapies
1b)  Hospital AND New AND (physicians OR doctors)
2)   Copyright AND mickey AND mouse
3a)  Schedule OR Amendments OR Trial OR Jury OR Judge OR Circuit OR Courtroom OR Judgement
3b)  Amendments OR Trial OR Jury OR Judge OR Circuit OR Courtroom OR Judgement OR trial OR law OR Patent OR legal
3c)  Amendments OR Trial OR Jury OR Judge OR Circuit OR Courtroom OR Judgement OR trial OR law OR Patent OR legal OR Plaintiff OR Defendant
4)  Privacy OR (Personally AND Identifiable AND Information) OR PII OR (Protected AND Speech)

TAR won across the board, as in previous iterations of the challenge.  Only one person submitted queries for the medical industry topic.  His/her revised query did a better job of finding relevant documents, but still returned fewer than 3,000 documents and fared far worse than TAR — the query was just not broad enough to achieve high recall.  Three people submitted queries on the law topic.  One of those people revised the query a few times and got decent results (shown in green), but still fell far short of the TAR result, with review of 6,000 documents from the best query finding fewer relevant documents than review of half as many documents with TAR 3.0 SAL (TAR 3.0 CAL did even better).  It is unfortunate that the audience was so small, since a larger audience might have done better by learning from each other’s submissions.  Hopefully I’ll be able to do this with a bigger audience in the future.

Photos from ILTACON 2018

ILTACON 2018 was held at the Gaylord National Resort & Convention Center in National Harbor, Maryland.  I wasn’t able to attend the sessions (so I don’t have any notes to share) because I was manning the Clustify booth in the exhibit hall, but I did take a lot of photos which you can view here.  The theme for the reception this year was video games, in case you are wondering about the oddly dressed people in some of the photos.

TAR, Proportionality, and Bad Algorithms (1-NN)

Should proportionality arguments allow producing parties to get away with poor productions simply because they wasted a lot of effort due to an extremely bad algorithm?  This article examines one such bad algorithm that has been used in major review platforms, and shows that it could be made vastly more effective with a very minor tweak.  Are lawyers who use platforms lacking the tweak committing malpractice by doing so?

Last year I was moderating a panel on TAR (predictive coding) and I asked the audience what recall level they normally aim for when using TAR.  An attendee responded that it was a bad question because proportionality only required a reasonable effort.  Much of the audience expressed agreement.  This should concern everyone.  If quality of result (e.g., achieving a certain level of recall) is the goal, the requesting party really has no business asking how the result was achieved; any effort wasted by choosing a bad algorithm is borne by the producing party.  On the other hand, if the target is expenditure of a certain amount of effort, doesn’t the requesting party have the right to know, and to object if the producing party has chosen a methodology that is extremely inefficient?

The algorithm I’ll be picking on today is a classifier called 1-nearest neighbor, or 1-NN.  You may be using it without ever having heard that name, so pay attention to my description of it and see if it sounds familiar.  To predict whether a document is relevant, 1-NN finds the single most similar training document and predicts the relevance of the unreviewed document to be the same.  If a relevance score is desired instead of a yes/no relevance prediction, the relevance score can be taken to be the similarity value if the most similar training document is relevant, and it can be taken to be the negative of the similarity value if the most similar training document is non-relevant.  Here is a precision-recall curve for the 1-NN algorithm used in a TAR 1.0 workflow trained with randomly-selected documents:

[Figure: Precision-recall curve for 1-NN in a TAR 1.0 workflow with random training]

The precision falls off a cliff above 60% recall.  This is not due to inadequate training; the cliff shown above will not go away no matter how much training data you add.  To understand the implications, realize that if you sort the documents by relevance score and review from the top down until you reach the desired level of recall, 1/P at that recall tells you the average number of documents you’ll review for each relevant document you find.  At 60% recall, precision is 67%, so you’ll review 1.5 documents (1/0.67 = 1.5) for each relevant document you find.  There is some effort wasted in reviewing those 0.5 non-relevant documents for each relevant document you find, but it’s not too bad.  If you keep reviewing documents until you reach 70% recall, things get much worse.  Precision drops to about 8%, so you’ll encounter so many non-relevant documents after you get past 60% recall that you’ll end up reviewing 12.5 documents for each relevant document you find.  You would surely be tempted to argue that proportionality says you should be able to stop at 60% recall, because the small gain in result quality from going from 60% recall to 70% recall would cost nearly ten times as much review effort.  But does it really have to be so hard to get to 70% recall?

It’s very easy to come up with an algorithm that can reach higher recall without so much review effort once you understand why the performance cliff occurs.  When you sort the documents by relevance score with 1-NN, the documents where the most similar training document is relevant will be at the top of the list.  The performance cliff occurs when you start digging into the documents where the most similar training document is non-relevant.  The 1-NN classifier does a terrible job of determining which of those documents has the best chance of being relevant because it ignores valuable information that is available.  Consider two documents, X and Y, that both have a non-relevant training document as the most similar training document, but document X has a relevant training document as the second most similar training document and document Y has a non-relevant training document as the second most similar.  We would expect X to have a better chance of being relevant than Y, all else being equal, but 1-NN cannot distinguish between the two because it pays no attention to the second most similar training document.  Here is the result for 2-NN, which takes the two most similar training documents into account:

[Figure: Precision-recall curve for 2-NN]

Notice that 2-NN easily reaches 70% recall (1/P is 1.6 instead of 12.5), but it does have a performance cliff of its own at a higher level of recall because it fails to make use of information about the third most similar training document.  If we utilize information about the 40 most similar training documents we get much better performance as shown by the solid lines here:

[Figure: Precision-recall curves for 40-NN (solid lines) and for training with non-relevant documents omitted (dashed lines)]

It was the presence of non-relevant training documents that tripped up the 1-NN algorithm: a non-relevant nearest neighbor effectively hides evidence (similar training documents that are relevant) that a document might be relevant.  You might therefore think the performance cliff could be avoided by omitting non-relevant documents from the training.  The result of doing that is shown with dashed lines in the figure above.  Omitting non-relevant training documents does help 1-NN at high recall, though it is still far worse than 40-NN with the non-relevant training documents included (omitting the non-relevant training documents actually harms 40-NN, as shown by the red dashed line).  A workflow that focuses on reviewing documents that are likely to be relevant, such as TAR 2.0, rather than training with random documents, will be less impacted by 1-NN’s shortcomings, but why would you ever suffer the poor performance of 1-NN when 40-NN requires such a minimal modification of the algorithm?
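
To make the scoring rule concrete, here is a small Python sketch of a k-NN relevance scorer along the lines described above (cosine similarity over document vectors is an assumption, and the way multiple neighbors are combined for k > 1 is one reasonable generalization, not necessarily the exact 40-NN formula behind the graphs):

```python
import math

def cosine(a, b):
    """Cosine similarity between two document vectors (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def knn_relevance_score(doc_vec, training, k):
    """training: list of (vector, is_relevant) pairs.
    For k=1 this reproduces the rule in the text: +similarity if the nearest
    training document is relevant, -similarity if it is not.  For k>1 it sums
    signed similarities over the k nearest training documents."""
    neighbors = sorted(training,
                       key=lambda item: cosine(doc_vec, item[0]),
                       reverse=True)[:k]
    return sum(cosine(doc_vec, vec) if relevant else -cosine(doc_vec, vec)
               for vec, relevant in neighbors)
```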

You might wonder whether the performance cliff shown above is just an anomaly.  Here are precision-recall curves for several additional categorization tasks with 1-NN on the left and 40-NN on the right.

[Figure: Precision-recall curves for several additional categorization tasks, 1-NN (left) vs. 40-NN (right)]

Sometimes the 1-NN performance cliff occurs at high enough recall to allow a decent production, but sometimes it keeps you from finding even half of the relevant documents.  Should a court accept less than 50% recall when the most trivial tweak to the algorithm could have achieved much higher recall with roughly the same amount of document review?

Of course, there are many factors beyond the quality of the classifier, such as the choice of TAR 1.0 (SPL and SAL), TAR 2.0 (CAL), or TAR 3.0 workflows, that impact the efficiency of the process.  The research by Grossman and Cormack that courts have relied upon to justify the use of TAR because it reaches recall that is comparable to or better than an exhaustive human review is based on CAL (TAR 2.0) with good classifiers, whereas some popular software uses TAR 1.0 (less efficient if documents will be reviewed before production) and poor classifiers such as 1-NN.  If the producing party vows to reach high recall and bears the cost of choosing bad software and/or processes to achieve that, there isn’t much for the requesting party to complain about  (though the producing party could have a bone to pick with an attorney or service provider who recommended an inefficient approach). On the other hand, if the producing party argues that low recall should be tolerated because decent recall would require too much effort, it seems that asking whether the algorithms used are unnecessarily inefficient would be appropriate.

TAR vs. Keyword Search Challenge, Round 2

During my presentation at the South Central eDiscovery & IG Retreat I challenged the audience to create keyword searches that would work better than technology-assisted review (predictive coding).  This is similar to the experiment done a few months earlier.  See this article for more details.  The audience again worked in groups to construct keyword searches for two topics.  One topic, articles on law, was the same as last time.  The other topic, the medical industry, was new (it replaced biology).

Performance was evaluated by comparing the recall achieved for equal amounts of document review effort (the population was fully categorized in advance, so measurements are exact, not estimates).  Recall for the top 3000 keyword search matches was compared to recall from reviewing 202 training documents (2 seed documents plus 200 cluster centers using the TAR 3.0 method) and 2798 documents having the highest relevance scores from TAR.  Similarly, recall from the top 6000 keyword search matches was compared to recall from review of 6000 documents with TAR.  Recall from all documents matching a search query was also measured to find the maximum recall that could be achieved with the query.

The search queries are shown after the performance tables and graphs.  When there is an “a” and “b” version of the query, the “a” version was the audience’s query as-is, and the “b” query was tweaked by me to remove restrictions that were limiting the number of relevant documents that could be found.  The results are discussed at the end of the article.

Medical Industry Recall
Query Total Matches Top 3,000 Top 6,000 All
1a 1,618 14.4% 14.4%
1b 3,882 32.4% 40.6% 40.6%
2 7,684 30.3% 42.2% 46.6%
3a 1,714 22.4% 22.4%
3b 16,756 32.7% 44.6% 71.1%
4a 33,925 15.3% 20.3% 35.2%
4b 58,510 27.9% 40.6% 94.5%
TAR 67.3% 83.7%


Law Recall
Query Total Matches Top 3,000 Top 6,000 All
5 36,245 38.8% 56.4% 92.3%
6 25,370 51.9% 72.4% 95.7%
TAR 63.5% 82.3%

[Figure: TAR vs. keyword search recall for the medical industry topic]

[Figure: TAR vs. keyword search recall for the law topic]


1a) medical AND (industry OR business) AND NOT (scientific OR research)
1b) medical AND (industry OR business)
2) (revenue OR finance OR market OR brand OR sales) AND (hospital OR health OR medical OR clinical)
3a) (medical OR hospital OR doctor) AND (HIPPA OR insurance)
3b) medical OR hospital OR doctor OR HIPPA OR insurance
4a) (earnings OR profits OR management OR executive OR recall OR (board AND directors) OR healthcare OR medical OR health OR hospital OR physician OR nurse OR marketing OR pharma OR report OR GlaxoSmithKline OR (united AND health) OR AstraZeneca OR Gilead OR Sanofi OR financial OR malpractice OR (annual AND report) OR provider OR HMO OR PPO OR telemedicine) AND NOT (study OR research OR academic)
4b) earnings OR profits OR management OR executive OR recall OR (board AND directors) OR healthcare OR medical OR health OR hospital OR physician OR nurse OR marketing OR pharma OR report OR GlaxoSmithKline OR (united AND health) OR AstraZeneca OR Gilead OR Sanofi OR financial OR malpractice OR (annual AND report) OR provider OR HMO OR PPO OR telemedicine
5) FRCP OR Fed OR litigation OR appeal OR immigration OR ordinance OR legal OR law OR enact OR code OR statute OR subsection OR regulation OR rules OR precedent OR (applicable AND law) OR ruling
6) judge OR (supreme AND court) OR court OR legislation OR legal OR lawyer OR judicial OR law OR attorney

As before, TAR won across the board, but there were some surprises this time.

For the medical industry topic, review of 3,000 documents with TAR achieved higher recall than any keyword search achieved with review of 6,000 documents, very similar to the results from a few months ago.  When all documents matching the medical industry search queries were analyzed, two queries did achieve high recall (3b and 4b, which are queries I tweaked to achieve higher recall), but they did so by retrieving a substantial percentage of the 100,000-document population (16,756 and 58,510 documents respectively).  TAR can reach any level of recall by simply taking enough documents from the sorted list; TAR doesn’t run out of matches like a keyword search does.  TAR matches the 94.5% recall that query 4b achieved (requiring review of 58,510 documents) with review of only 15,500 documents.
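
The "review of only 15,500 documents" figure is simply the depth in the TAR-ranked list at which the running count of relevant documents reaches the target recall.  A minimal sketch of that calculation (hypothetical names; it assumes every document's relevance is known, as it was for this fully labeled test population):

```python
import math

def review_depth_for_recall(ranked_doc_ids, relevant_ids, target_recall):
    """Return how many documents must be reviewed, in ranked order, before
    the target fraction of all relevant documents has been found."""
    needed = math.ceil(target_recall * len(relevant_ids))
    found = 0
    for depth, doc_id in enumerate(ranked_doc_ids, start=1):
        if doc_id in relevant_ids:
            found += 1
            if found >= needed:
                return depth
    return None  # the ranking never reaches the target recall

# Hypothetical usage: review_depth_for_recall(tar_ranking, relevant_ids, 0.945)
```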

Results for the law topic were more interesting.  The two queries submitted for the law topic both performed better than any of the queries submitted for that topic a few months ago.  Query 6 gave the best results, with TAR beating it by only a modest amount.  If all 25,370 documents matching query 6 were reviewed, 95.7% recall would be achieved, which TAR could accomplish with review of 24,000 documents.  It is worth noting that TAR 2.0 would be more efficient, especially at very high recall.  TAR 3.0 gives the option to produce documents without review (not utilized for this exercise), plus computations are much faster due to there being vastly fewer training documents, which is handy for simulating a full review live in front of an audience in a few seconds.

Highlights from the South Central eDiscovery & IG Retreat 2018

The 2018 South Central eDiscovery and Information Governance Retreat was held at Lakeway Resort and Spa, outside of Austin.  It was a full day of talks with a parallel set of talks on Cybersecurity, Privacy, and Data Protection in the adjacent room.  Attendees could attend talks from either track. Below are my notes (certainly not exhaustive) from the eDiscovery and IG sessions. My full set of photos is available here.

Blowing The Whistle
eDiscovery can be used as a weapon to drive up costs for an adversary.  Make requests broad and make the other side reveal what they actually have.  Ask for “all communications” rather than “all Office 365 emails” or you may miss something (for example, they may use Slack).  The collection may be 1% responsive.  How can it be culled defensibly?  Ask for broad search terms, get hit rates, and then adjust.  The hit rates don’t tell how many documents were actually relevant, so use sampling.  When searching for patents, search for “123 patent” instead of just “123” to avoid false positives (patent references often use just the last 3 digits).  This rarely happens, but you might get the producing party to disclose top matches for the queries and examine them to give feedback on desired adjustments.  You should have a standard specification for the production format you want, and you should get it to the producing party as soon as possible, or you might get 20,000 emails produced in one large PDF that you’ll have to waste time dissecting, and metadata may be lost.  If keyword search is used during collection, be aware that Office 365 currently doesn’t OCR non-searchable content, so it will be missed.  Demand that the producing party OCR before applying any search terms.  In one production there were a lot of “gibberish” emails returned because the search engine was matching “ING” to all words ending in “ing” rather than requiring the full word to match.  If ediscovery disputes make it to the judge, it’s usually not a good thing since the judge may not be very technical.

Digging Into TAR
I moderated this panel, so I didn’t take notes.  We challenged the audience to create a keyword search that would work better than TAR.  Results are posted here.

Beyond eDiscovery – Creating Context By Connecting Disparate Data
Beyond the custodian, who else had access to this file?  Who should have access, and who shouldn’t?  Forensics can determine who accessed or printed a confidential file.  The Windows registry tracks how users access files.  When you print, an image is stored.  Figure out what else you can do with the tech you have.  For example, use Sharepoint workflows to help with ediscovery.  Predictive coding can be used with structured data.  Favorite quote: “Anyone who says they can solve all of my problems with one tool is a big fat liar.”

Improving Review Efficiency By Maximizing The Use Of Clustering Technology
Clustering can lead to more consistent review by ensuring the same person reviews similar documents and reviews them together.  The requesting party can use clustering to get an overview of what they’ve received.  Image clustering identifies glyphs to determine document similarity, so it can detect things like a Nike logo, or it can be sensitive to the location on the page where the text occurs.  It is important to get the noise (e.g., email footers) out of the data before clustering.  Text messages and spreadsheets may cause problems.  Clustering can be used for ECA or keyword generation, where it is not making final determinations for a document.  It can reveal abbreviations scientists are using for technical terms.  It can also be used to identify clusters that can be excluded from review (not relevant).  It can be used to prioritize review, with more promising clusters reviewed first.  Should you tell the other side you are using clustering to come up with keywords?  No, you are just inviting controversy.

Technology Solution Update From Corporate, Law Firm And Service Provider Perspective
Migration to Office 365 and other cloud offerings can cause problems.  Data can be dumped into the cloud without tracking where it went.  Figuring out how to collect from the cloud can be difficult.  Microsoft is always changing Office 365, making it difficult to stay on top of the changes.  Favorite quote: “I’m always running to keep up.  I should be skinnier, but I’m not.”  Office 365 is supposed to have OCR soon.  What if the cloud platform gets hacked?  There can be throttling issues when collecting from OneDrive by downloading everything (not using Microsoft’s tool).  Rollout of cloud services should be slow to make sure everyone knows what should be put in the cloud and what shouldn’t, and to ensure that you keep track of where everything is.  Be careful about emailing passwords since they may be recorded; use ephemeral communications instead of email for that.  Personal devices cause problems because custodians don’t like having their devices collected.  Policy is critical, but it is not a cure-all.  Policy must be surrounded by communication and re-certification to ensure it is followed.  Google mail is not a good solution for restricting data location since attachments are copied to the local disk when they are viewed.

Achieving GDPR Compliance For Unstructured Content
Some technology was built for GDPR while other tech was built for some other purpose, like ediscovery, and tweaked for GDPR, so be careful.  For example, you don’t want to have to collect the data before determining whether it contains PII.  The California privacy law taking effect in 2020 is similar to GDPR, so U.S. companies cannot ignore the issue.  Backup tapes should be deleted after 90 days.  They are for emergencies, not retention.  Older backups often don’t work (e.g., referenced network addresses are no longer valid).

Escalating Cyber Risk From The IT Department To The Boardroom
One very effective way to change a company’s culture with respect to security is to break people up into white vs. black teams and hold war games where one team attacks and the other tries to come up with the best way to defend against it.  You need to point out both the risk and how to fix it to get the board’s attention.  Show the board a graph with the expected value lost in a breach on the vertical axis and the cost to eliminate the risk on the horizontal axis; points lying above the 45-degree line are risks that should be eliminated (doing so saves money).  On average, a server breach costs 28% of operating costs.  Investors may eventually care if someone on the board has a security certification.  It is OK to question directors, but don’t call out their b.s.  The Board cares most about what the CEO and CFO are saying.  Ethical problems tend to happen when things are too siloed.

TAR vs. Keyword Search Challenge

During my presentation at the NorCal eDiscovery & IG Retreat I challenged the audience to create keyword searches that would work better than technology-assisted review (predictive coding) for two topics.  Half of the room was tasked with finding articles about biology (science-oriented articles, excluding medical treatment) and the other half searched for articles about current law (excluding proposed laws or politics).  I ran one of the searches against TAR in Clustify live during the presentation (Clustify’s “shadow tags” feature allows a full document review to be simulated in a few minutes using documents that were pre-categorized by human reviewers), but couldn’t do the rest due to time constraints.  This article presents the results for all the queries submitted by the audience.

The audience had limited time to construct queries (working together in groups), they weren’t familiar with the data set, and they couldn’t do sampling to tune their queries, so I’m not claiming the exercise was comparable to an e-discovery project.  Still, it was entertaining.  The topics are pretty simple, so a large percentage of the relevant documents can be found with a pretty simple search using some broad terms.  For example, a search for “biology” would find 37% of the biology documents.  A search for “law” would find 71% of the law articles.  The trick is to find the relevant documents without pulling in too many of the non-relevant ones.

To evaluate the results, I measured the recall (percentage of relevant documents found) from the top 3,000 and top 6,000 hits on the search query (3% and 6% of the population respectively).  I’ve also included the recall achieved by looking at all docs that matched the search query, just to see what recall the search queries could achieve if you didn’t worry about pulling in a ton of non-relevant docs.  For the TAR results I used TAR 3.0 trained with two seed documents (one relevant from a keyword search and one random non-relevant document) followed by 20 iterations of 10 top-scoring cluster centers, so a total of 202 training documents (no control set needed with TAR 3.0).  To compare to the top 3,000 search query matches, the 202 training documents plus 2,798 top-scoring documents were used for TAR, so the total document review (including training) would be the same for TAR and the search query.

The search engine in Clustify is intended to help the user find a few seed documents to get active learning started, so it has some limitations.  If the audience’s search query included phrases, they were converted to an AND search enclosed in parentheses.  If the audience’s query included a wildcard, I converted it to a parenthesized OR search by looking at the matching words in the index and selecting only the ones that made sense (i.e., I made the queries better than they would have been with an actual wildcard).  I noticed that there were a lot of irrelevant words that matched the wildcards.  For example, “cell*” in a biology search should match cellphone, cellular, cellar, cellist, etc., but I excluded such words.  I would highly recommend that people using keyword search check to see what their wildcards are actually matching; you may be pulling in a lot of irrelevant words.  I removed a few words from the queries that weren’t in the index (so the words shown all actually had an impact).  When there is an “a” and “b” version of the query, the “a” version was the audience’s query as-is, and the “b” query was tweaked by me to retrieve more documents.
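
Expanding a wildcard against the index vocabulary and then pruning the expansion by hand looks roughly like this (the vocabulary and the fnmatch-based matching are illustrative assumptions, not Clustify’s internals):

```python
import fnmatch

# A tiny stand-in for the index vocabulary; a real index has far more terms.
vocabulary = ["cell", "cells", "cellular", "cellphone", "cellar", "cellist", "celltiter"]

def expand_wildcard(pattern, vocab):
    """Return every indexed term matching a wildcard pattern like 'cell*'."""
    return sorted(term for term in vocab if fnmatch.fnmatch(term, pattern))

print(expand_wildcard("cell*", vocabulary))
# Inspect the list and keep only the terms that belong in the query,
# e.g. cell, cells, cellular, then OR them together in place of the wildcard.
```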

The tables below show the results.  The actual queries are displayed below the tables.  Discussion of the results is at the end.

Biology Recall
Query Total Matches Top 3,000 Top 6,000 All Matches
1 4,407 34.0% 47.2% 47.2%
2 13,799 37.3% 46.0% 80.9%
3 25,168 44.3% 60.9% 87.8%
4a 42 0.5% 0.5%
4b 2,283 20.9% 20.9%
TAR 72.1% 91.0%
Law Recall
Query Total Matches Top 3,000 Top 6,000 All Matches
5a 2,914 35.8% 35.8%
5b 9,035 37.2% 49.3% 60.6%
6 534 2.9% 2.9%
7 27,288 32.3% 47.1% 79.1%
TAR 62.3% 80.4%

[Figure: TAR vs. keyword search recall for the biology topic]

[Figure: TAR vs. keyword search recall for the law topic]

1) organism OR microorganism OR species OR DNA

2) habitat OR ecology OR marine OR ecosystem OR biology OR cell OR organism OR species OR photosynthesis OR pollination OR gene OR genetic OR genome AND NOT (treatment OR generic OR prognosis OR placebo OR diagnosis OR FDA OR medical OR medicine OR medication OR medications OR medicines OR medicated OR medicinal OR physician)

3) biology OR plant OR (phyllis OR phylos OR phylogenetic OR phylogeny OR phyllo OR phylis OR phylloxera) OR animal OR (cell OR cells OR celled OR cellomics OR celltiter) OR (circulation OR circulatory) OR (neural OR neuron OR neurotransmitter OR neurotransmitters OR neurological OR neurons OR neurotoxic OR neurobiology OR neuromuscular OR neuroscience OR neurotransmission OR neuropathy OR neurologically OR neuroanatomy OR neuroimaging OR neuronal OR neurosciences OR neuroendocrine OR neurofeedback OR neuroscientist OR neuroscientists OR neurobiologist OR neurochemical OR neuromorphic OR neurohormones OR neuroscientific OR neurovascular OR neurohormonal OR neurotechnology OR neurobiologists OR neurogenetics OR neuropeptide OR neuroreceptors) OR enzyme OR blood OR nerve OR brain OR kidney OR (muscle OR muscles) OR dna OR rna OR species OR mitochondria

4a) statistically AND ((laboratory AND test) OR species OR (genetic AND marker) OR enzyme) AND NOT (diagnosis OR treatment OR prognosis)

4b)  (species OR (genetic AND marker) OR enzyme) AND NOT (diagnosis OR treatment OR prognosis)

5a) federal AND (ruling OR judge OR justice OR (appellate OR appellant))

5b) ruling OR judge OR justice OR (appellate OR appellant)

6) amendments OR FRE OR whistleblower

7) ((law OR laws OR lawyer OR lawyers OR lawsuit OR lawsuits OR lawyering) OR (regulation OR regulations) OR (statute OR statutes) OR (standards)) AND NOT pending

TAR beat keyword search across the board for both tasks.  The top 3,000 documents returned by TAR achieved higher recall than the top 6,000 documents for any keyword search.  In other words, if documents will be reviewed before production, TAR achieves better results (higher recall) with half as much document review compared to any of the keyword searches.  The top 6,000 documents returned by TAR achieved higher recall than all of the documents matching any individual keyword search, even when the keyword search returned 27,000 documents.

Similar experiments were performed later, with broadly similar outcomes but some notable differences in the results.  You can read about them here: round 2 and round 3.