Tag Archives: predictive coding

Substantial Reduction in Review Effort Required to Demonstrate Adequate Recall

Measuring the recall achieved to within +/- 5% to demonstrate that a production is defensible can require reviewing a substantial number of random documents.  For a case of modest size, the amount of review required to measure recall can be larger than the amount of review required to actually find the responsive documents with predictive coding.  This article describes a new method requiring much less document review to demonstrate that adequate recall has been achieved.  This is a brief overview of a more detailed paper I’ll be presenting at the DESI VII Workshop on June 12th (slides available here).

The proportion of a population having some property can be estimated to within +/- 5% by measuring the proportion on a random sample of 400 documents (you’ll also see the number 385 being used, but using 400 will make it easier to follow the examples).  To measure recall we need to know what proportion of responsive documents are produced, so we need a sample of 400 random responsive documents.  Since we don’t know which documents in the population are responsive, we have to select documents randomly and review them until 400 responsive ones are found.  If prevalence is 10% (10% of the population is responsive), that means reviewing roughly 4,000 documents to find 400 that are responsive so that recall can be estimated.  If prevalence is 1%, it means reviewing roughly 40,000 random documents to measure recall.  This can be quite a burden.
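
To make the arithmetic concrete, here is a minimal Python sketch of both calculations.  The +/- 5% figure is the worst-case 95% confidence margin for an estimated proportion (largest at p = 0.5), and the review effort is just the sample size divided by prevalence:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # Worst-case 95% margin of error for a proportion estimated from n samples.
        return z * math.sqrt(p * (1 - p) / n)

    print(round(margin_of_error(400), 3))  # 0.049 -- roughly +/- 5%
    print(round(margin_of_error(385), 3))  # 0.05

    # Expected number of random documents to review to find 400 responsive ones:
    for prevalence in (0.10, 0.01):
        print(int(400 / prevalence))  # 4000, then 40000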

Once recall is measured, a decision must be made about whether it is high enough.  Suppose you decide that if at least 300 of the 400 random responsive documents were produced (75%) the production is acceptable.  For any actual level of recall, the probability of accepting the production can be computed (see figure to right).  The probability of accepting a production where the actual recall is less than 70% will be very low, and the probability of rejecting a production where the actual recall is greater than 80% will also be low — this comes from the fact that a sample of 400 responsive documents is sufficient to measure recall to within +/- 5%.
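
The acceptance probabilities come from the binomial distribution.  A short sketch using scipy for the 300-of-400 test described above:

    from scipy.stats import binom

    def prob_accept(true_recall, n=400, threshold=300):
        # P(produced documents in the sample >= threshold); sf(k) gives P(X > k).
        return binom.sf(threshold - 1, n, true_recall)

    for r in (0.65, 0.70, 0.75, 0.80, 0.85):
        print(f"recall={r:.2f}  P(accept)={prob_accept(r):.3f}")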

The idea behind the new method is to achieve the same probability profile for accepting/rejecting a production using a multi-stage acceptance test.  The multi-stage test gives the possibility of stopping the process and declaring the production accepted/rejected long before reviewing 400 random responsive documents.  The procedure is shown in the flowchart to the right (click to enlarge).  A decision may be reached after reviewing enough documents to find just 25 random documents that are responsive.  If a decision isn’t made after reviewing 25 responsive documents, review continues until 50 responsive documents are found and another test is applied.  At worst, documents will be reviewed until 400 responsive documents are found (the same as the traditional direct recall estimation method).
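
In code, the procedure amounts to a sequence of checkpoints, each with an accept boundary and a reject boundary.  The boundary counts (and the intermediate checkpoint sizes) below are hypothetical placeholders for illustration only; the real values come from the flowchart and the tables in the paper:

    # Checkpoints: (responsive docs sampled, accept if produced >=, reject if produced <=).
    # These boundary counts are hypothetical -- take the actual ones from the paper.
    CHECKPOINTS = [
        (25,  24,  13),
        (50,  45,  29),
        (100, 85,  64),
        (200, 163, 136),
        (400, 300, 299),  # final stage: accept at 300+, otherwise reject
    ]

    def multistage_test(produced_at_checkpoint):
        # produced_at_checkpoint[i]: produced docs among the first n_i responsive samples.
        for (n, accept_at, reject_at), produced in zip(CHECKPOINTS, produced_at_checkpoint):
            if produced >= accept_at:
                return ("accept", n)
            if produced <= reject_at:
                return ("reject", n)
        return ("reject", CHECKPOINTS[-1][0])  # unreachable given the final stage above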

The figure to the right shows six examples of the multi-stage acceptance test being applied when the actual recall is 85%.  Since 85% is well above the 80% upper bound of the 75% +/- 5% range, we expect this production to virtually always be accepted.  The figure shows that acceptance can occur long before reviewing a full 400 random responsive documents.  The number of random responsive documents reviewed is shown on the vertical axis.  Toward the bottom of the graph the sample is very small and the percentage of the sample that has been produced may deviate greatly from the right answer of 85%.  As you go up the sample gets larger and the proportion of the sample that is produced is expected to get closer to 85%.  When a green decision boundary is touched, causing the production to be accepted as having sufficiently high recall, the color of the remainder of the path is changed to yellow — the yellow part represents the document review that is avoided by using the multi-stage acceptance method (since the traditional direct recall measurement would involve going all the way to 400 responsive documents).  As you can see, when the actual recall is 85% the number of random responsive documents that must be reviewed is often 50 or 100, not 400.

The figure to the right shows the average number of documents that must be reviewed using the multi-stage acceptance procedure from the earlier flowchart.  The amount of review required can be much less than 400 random responsive documents.  In fact, the further above/below the 75% target (called the “splitting recall” in the paper) the actual recall is, the less document review is required (on average) to come to a conclusion about whether the production’s recall is high enough.  This creates an incentive for the producing party to aim for recall that is well above the minimum acceptable level since it will be rewarded with a reduced amount of document review to confirm the result is adequate.
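
A small simulation, reusing the sketch above, illustrates the effect: the further the actual recall is from the splitting recall, the earlier the test tends to stop.  The numbers are only indicative since the boundaries in the sketch are hypothetical:

    import random

    def average_effort(true_recall, trials=10_000):
        # Average number of responsive documents sampled before the test stops.
        total = 0
        for _ in range(trials):
            produced, prev_n, counts = 0, 0, []
            for n, _, _ in CHECKPOINTS:
                produced += sum(random.random() < true_recall for _ in range(n - prev_n))
                counts.append(produced)
                prev_n = n
            total += multistage_test(counts)[1]
        return total / trials

    print(average_effort(0.85))  # typically far below 400
    print(average_effort(0.75))  # near the splitting recall, close to 400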

It is important to note that the multi-stage procedure provides an accept/reject result, not a recall estimate.  If you follow the procedure until an accept/reject boundary is hit and then use the proportion of the sample that was produced as a recall estimate, that estimate will be biased (the use of “unbiased” in the paper title refers to the sampling being done on the full population, not on a subset [such as the discard set] that would cause a bias due to inconsistency in review of different subsets).

You may want to use a splitting recall other than 75% for the accept/reject decision — the full paper provides tables of values necessary for doing that.

Webinar: 10 Years Forward and Back: Automation in eDiscovery

George Socha, Doug Austin, David Horrigan, Bill Dimm, and Bill Speros will give presentations in this webinar on the history and future of ediscovery moderated by Mary Mack on December 1, 2016.  Bill Dimm will talk about the evolution of predictive coding technologies and our understanding of best practices, including recall estimation, the evil F1 score, research efforts, pre-culling, and the TAR 1.0, 2.0, and 3.0 workflows.  CLICK HERE FOR RECORDING OF WEBINAR, SLIDES, AND LINKS TO RELATED RESOURCES.

Highlights from the Northeast eDiscovery & IG Retreat 2016

The 2016 Northeast eDiscovery & IG Retreat was held at the Ocean Edge Resort & Golf Club.  It was the third annual Ing3nious retreat held in Cape Cod.  The retreat featured two simultaneous sessions throughout the day in a beautiful location.  My notes below provide some highlights from the sessions I was able to attend.  You can find additional photos here.

Peer-to-Peer Roundtables
The retreat started with peer-to-peer round tables where each table was tasked with answering the question: Why does e-discovery suck (gripes, pet peeves, issues, etc.) and how can it be improved?  Responses included:

  • How to drive innovation?  New technologies need to be intuitive and simple to get client adoption.
  • Why are e-discovery tools only for e-discovery?  Should be using predictive coding for records management.
  • Need alignment between legal and IT.  Need ongoing collaboration.
  • Handling costs.  Cost models and comparing service providers are complicated.
  • Info governance plans for defensible destruction.
  • Failure to plan and strategize e-discovery.
  • Communication and strategy.  It is important to get the right people together.
  • Why not more cooperation at meet-and-confer?  Attorneys that are not comfortable with technology are reluctant to talk about it.  Asymmetric knowledge about e-discovery causes problems–people that don’t know what they are doing ask for crazy things.

Catching Up on the Implementation of the Amended Federal Rules
I couldn’t attend this one.

Predictive Coding and Other Document Review Technologies–Where Are We Now?
It is important to validate the process as you go along, for any technology.  It is important to understand the client’s documents.  Pandora is more like TAR 2.0 than TAR 1.0, because it starts giving recommendations based on your feedback right away.  The 2012 Rand Study found this e-discovery cost breakdown: 73% document review, 8% collection, and 19% processing.  A question from the audience about pre-culling with keyword search before applying predictive coding spurred some debate.  Although it wasn’t mentioned during the panel, I’ll point out William Webber’s analysis of the Biomet case, which shows pre-culling discarded roughly 40% of the relevant documents before predictive coding was applied.  There are many different ways of charging for predictive coding: amount of data, number of users, hose (total data flowing through) or bucket (max amount of data allowed at one time).  Another barrier to use of predictive coding is lack of senior attorney time (e.g., to review documents for training).  Factors that will aid in overcoming barriers: improving technologies, Sherpas to guide lawyers through the process, court rulings, influence from general counsel.  Need to admit that predictive coding doesn’t work for everything, e.g., calendar entries.  New technologies include anonymization tools and technology to reduce the size of collections.  Existing technologies that are useful: entity extraction, email threading, facial recognition, and audio to text.  Predictive coding is used in maybe less than 1% of cases, but email threading is used in 99%.

It’s All Greek To Me: Multi-Language Discovery Best Practices
Native speakers are important.  An understanding of relevant industry terminology is important, too.  The ALTA fluency test is poor–the test is written in English and then translated to other languages, so it’s not great for testing ability to comprehend text that originated in another language.  Hot documents may be translated for presentation.  This is done with a secure platform that prohibits the translator from downloading the documents.  Privacy laws make it best to review in-country if possible.  There are only 5 really good legal translation companies–check with large firms to see who they use.  Throughput can be an issue.  Most can do 20,000 words in 3 days.  What if you need to do 200,000 in 3 days?  Companies do share translators, but there’s no reason for good translators to work for low-tier companies–good translators are in high demand.  QC foreign review to identify bad reviewers (need proficient managers).  May need to use machine translation (MT) if there are millions of documents.  QC the MT result and make sure it is actually useful–in 85% of cases it is not good enough.  For CJK (Chinese, Japanese, Korean), MT is terrible.  The translation industry is $40 billion.  Google invested a lot in MT but it didn’t help much.  One technology that is useful is translation memory, where repeated chunks of text are translated just once.  People performing review in Japanese must understand the subtlety of the American legal system.

Top Trends in Discovery for 2016
I couldn’t attend this one.

Measure Twice, Discover Once
Why measure in e-discovery?  So you can explain what happened and why, for defensibility.  Also important for cost management.  The board of directors may want reports.  When asked for more custodians you can show the cost and expected number of relevant documents that will be added by analyzing the number of keyword search hits.  Everything gets an ID number for tracking and analysis (USB drives, batches of documents, etc.).  Types of metrics ordered from most helpful to most harmful: useful, no metric, not useful, and misleading.  A simple metric used often in document review is documents per hour per reviewer.  What about document complexity, content complexity, number and type of issue codes, review complexity, risk tolerance instructions, number of “defect opportunities,” and number coded correctly?  Many 6-sigma ideas from manufacturing are not applicable due to the subjectivity that is present in document review.

Information Governance and Data Privacy: A World of Risk
I couldn’t attend this one.

The Importance of a Litigation Hold Policy
I couldn’t attend this one.

Alone Together: Where Have All The Model TAR Protocols Gone?
If you are disclosing details, there are two types: inputs (search terms used to train, shared review of training docs) and outputs (target recall or disclosure of recall).  Don’t agree to a specific level of recall before looking at the data–if prevalence is low it may be hard.  Plaintiff might argue for TAR as a way to overcome cost objections from the defendant.  There is concern about lack of sophistication from judges–there is “stunning” variation in expertise among federal judges.  An attorney involved with the Rio Tinto case recommends against agreeing on seed sets because it is painful and focuses on the wrong thing.  Sometimes there isn’t time to put eyes on all documents that will be produced.  Does the TAR protocol need to address dupes, near-dupes, email threading, etc.?

Information Governance: Who Owns the Information, the Risk and the Responsibility?
I couldn’t attend this one.

Bringing eDiscovery In-House — Savings and Advantages
I was on this panel, so I didn’t take notes.

Webinar: How Automation is Revolutionizing eDiscovery

Doug Austin, Bill Dimm, and Bill Speros will give presentations in this webinar moderated by Mary Mack on August 10, 2016.  In addition to broad topics on automation in e-discovery, expect a fair amount on technology-assisted review, including a description of TAR 1.0, 2.0, and 3.0, comparison to human review, and controversial thoughts on judicial acceptance.  CLICK HERE FOR RECORDED WEBINAR

Highlights from the Masters Conference in NYC 2016

The 2016 Masters Conference in NYC was a one-day e-discovery conference held at the New Yorker.  There were two simultaneous sessions throughout the day, so I couldn’t attend everything.  Here are my notes:

Faster, Better, Cheaper: How Automation is Revolutionizing eDiscovery
I was on this panel, so I didn’t take notes.

Five Forces Changing Corporate eDiscovery
68% of corporations are using some type of SaaS/cloud service.  Employees want to use things like Dropbox and Slack, but it is a challenge to deal with them in ediscovery–the legal department is often the roadblock to the cloud.  Consumer products don’t have compliance built-in.  Ask the vendor for corporate references to check on ediscovery issues.  72% of corporations have concerns about the security of distributing ediscovery data to law firms and vendors.  80% rarely or never audit the technical competence of law firms and vendors (the panel members were surprised by this).  Audits need to be refreshed from time to time.  Corporate data disposition is the next frontier due to changes in the Federal Rules and cybersecurity concerns.  Keeping old data will cause problems later if there is a lawsuit or the company is hacked. Need to make sure all copies are deleted.  96% of corporations use metrics and reporting on their legal departments.  Only 28% think they have enough insight into the discovery process of outside counsel (the panel members were surprised by this since they collaborate heavily with outside counsel).  What is tracked:

  • 65% Data Managed
  • 57% eDiscovery Spend
  • 52% eDiscovery Spend per GB
  • 48% Review Staffing
  • 48% Total Review Spend
  • 39% Technologies Used
  • 30% Review Efficiency

28% of the litigation budget is dedicated to ediscovery. 44% of litigation strategies are affected by ediscovery costs.  92% would use analytics more often if cost was not an issue.  The panelists did not like extra per-GB fees for analytics–they prefer an all-inclusive price (sidenote: If you assume the vendor is collecting money from you somehow in order to pay for development of analytics software, including analytics in the all-inclusive price makes the price higher than it would need to be if analytics were excluded, so your non-analytics cases are subsidizing the cases where analytics are used).

Benefits and Challenges in Creating an Information Governance (IG) Program
I couldn’t attend this one.

Connected Digital Discovery: Can We Get There?
There is an increasing push for BYOD, but 48% of BYOD employees disable security.  Digital investigation, unlike ediscovery, involves “silent holds” where documents are collected without employee awareness.  When investigating an executive, must also investigate or do a hold on the executive’s assistant.  The info security department has a different tool stack than ediscovery (e.g., network monitoring tools), so it can be useful to talk to them.

How to Handle Cross-Border Data Transfers in the Aftermath of the Schrems Case
I couldn’t attend this one.

TAR in litigation and government investigation: Possible Uses and Problems
Tracy Greer said the DOJ wants to know the TAR process used.  Surprisingly, it is often found to deviate from the vendor’s recommended best practices.  They also require disclosure of a random sample (less than 5,000 documents) from the documents that were predicted to be non-relevant (referred to as the “null set” in the talk, though I hate that name).  Short of finding a confession of a felony, they wouldn’t use the documents from the sample against the company–they use the sample to identify problems as early as possible (e.g., misunderstandings about what must be turned over) and really want people to feel that disclosing the sample is safe.  Documents from second requests are not subject to FOIA.  They are surprised that more people don’t seem to do email domain filtering.  Doing keyword search well (sampling and constructing good queries) is hard.  TAR is not always useful.  For example, when looking for price fixing of ebooks by Apple and publishers it is more useful to analyze volume of communications.  TAR is also not useful for analyzing database systems like Peoplesoft and payroll systems.  Recommendations:

  • Keyword search before TAR: No
  • Initial review by SME: Yes
  • Initial review by large team: No
  • De-dupe first: Yes
  • Consolidate threads: No

The “overturn rate” is the rate at which another reviewer disagrees with the relevance determination of the initial reviewer.  A high overturn rate could signal a problem.  The overturn rate is expected to decrease over time.  The DOJ expects the overturn rate to be reported, which puts the producing party on notice that they must monitor quality.  The DOJ doesn’t have a specific recall expectation–they ask that sampling be done and may accept a smaller recall if it makes sense.  Judge Hedges speculated that TAR will be challenged someday and it will be expensive.
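
For illustration, the overturn rate is simple to compute from a double-reviewed sample; a minimal sketch with hypothetical data:

    def overturn_rate(first_pass, qc_pass):
        # Fraction of double-reviewed documents where the QC reviewer
        # disagreed with the initial relevance call.
        disagreements = sum(a != b for a, b in zip(first_pass, qc_pass))
        return disagreements / len(first_pass)

    # 1 = relevant, 0 = not relevant, one entry per double-reviewed document
    print(overturn_rate([1, 1, 0, 0, 1], [1, 0, 0, 0, 1]))  # 0.2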

The Internet of Things (IoT) Creates a Thousand Points of (Evidentiary) Light.  Can You See It?
I couldn’t attend this one.

The Social Media (R)Evolution: How Social Media Content Impacts e-Discovery Risks and Costs
Social media is another avenue of attack by hackers.  They can hijack an account and use it to send harmful links to contacts.  Hackers like to attack law firms doing M&A due to the information they have.  Once hacked, reliability of all data is now in question–it may have been altered.  Don’t allow employees to install software or apps.  Making threats on social media, even in jest, can bring the FBI to your doorstep in hours, and they won’t just talk to you–they’ll talk to your boss and others.

From Case Management to Case Intelligence: Surfacing Legal Business Intelligence
I couldn’t attend this one.

Early Returns from the Federal Rules of Civil Procedure Changes
New rule 26(b)(1) removes “reasonably calculated to lead to the discovery of admissible evidence.”  Information must be relevant to be discoverable.  Should no longer be citing Oppenheimer.  Courts are still quoting the removed language.  Courts have picked up on the “proportional to the needs of the case” change.  Judge Scheindlin said she was concerned there would be a lot of motion practice and a weakening of discovery with the new rules, but so far the courts aren’t changing much.  Changes were made to 37(e) because parties were over-preserving.  Sanctions were taken out, though there are penalties if there was an intent to deprive the other party of information.  Otherwise, the cure for loss of ESI may be no greater than necessary to cure prejudice.  Only applies to electronic information that should have been preserved, only applies if there was a failure to take reasonable steps, and only applies if the information cannot be restored/replaced via additional discovery.  What are “reasonable steps,” though?  Rule 1 requires cooperation, but that puts lawyers in an odd position because clients are interested in winning, not justice.  This is not a sanctions rule, but the court can send you back.  Judge Scheindlin said judges are paying attention to this.  Rule 4(m) reduces the number of days to serve a summons from 120 to 90.  16(b)(2) reduces days to issue a scheduling order after defendant is served from 120 to 90, or from 90 to 60 after defendant appears.  26(c)(1)(B) allows the judge to allocate expenses (cost shifting).  34(b)(2)(B) and 34(b)(2)(C) require greater specificity when objecting to production (no boilerplate) and the objection must state if responsive material was withheld due to the objection.  The 50 states are not all going along with the changes–they don’t like some parts.

Better eDiscovery: Leveraging Technology to its Fullest
When there are no holds in place, consider what you can get rid of.  Before discarding the discovery set, analyze it to see how many of the documents violated the retention policy–did those documents hurt your case?  TAR can help resolve the case faster.  Use TAR on incoming documents to see trends.  Could use TAR to help with finding privileged documents (though the panelist admitted not having tried it).  Use TAR to prioritize documents for review even if you plan to review everything.  Clustering helps with efficiency because all documents of a particular type can be assigned to the same lawyer.  Find gaps in the production early–the judge will be skeptical if you wait for months.  Can use clustering on custodian level to see topics involved.  Analyze email domains.

Vendor Selection: Is Cost the Only Consideration?
I couldn’t attend this one.

The conference ended with a reception at the top of the Marriott.  The conference also promoted a fundraiser for the victims of the shooting in Orlando.

Highlights from the Southeast eDiscovery & IG Retreat 2016

This retreat was the first one held by Ing3nious in the Southeast.  It was at the Chateau Elan Winery & Resort in Braselton, Georgia.  Like all of the e-discovery retreats organized by Chris LaCour, it featured informative panels in a beautiful setting.  My notes below offer a few highlights from the sessions I attended.  There were often two sessions occurring simultaneously, so I couldn’t attend everything.

Peer-to-Peer Roundtables
My table discussed challenges people were facing.  These included NSF files (Lotus Notes), weird native file formats, and 40-year-old documents that had to be scanned and OCRed.  Companies having a “retain everything” culture are problematic (e.g., 25,000 backup tapes).  One company had a policy of giving each employee a DVD containing all of their emails when they left the company.  When they got sued they had to hunt down those DVDs to retrieve emails they no longer had.  If a problem (information governance) is too big, nothing will be done at all.  In Canada there are virtually never sanctions, so there is always a fight about handing anything over.

Proactive Steps to Cut E-Discovery Costs
I couldn’t attend this one.

The Intersection of Legal and Technical Issues in Litigation Readiness Planning
It is important to establish who you should go to.  Many companies don’t have a plan (figure it out as you go), but it is a growing trend to have one due to data security and litigation risk.  Having an IT / legal liaison is becoming more common.  For litigation readiness, have providers selected in advance.  To get people on board with IG, emphasize cost (dollars) vs. benefit (risk).  Should have an IG policy about mobile devices, but they are still challenging.  Worry about data disposition by a third party provider when the case is over.  Educate people about company policies.

Examining Your Tools & Leveraging Them for Proactive Information Governance Strategy
I couldn’t attend this one.

Got Data? Analytics to the Rescue
Only 56% of in-house counsel use analytics, but 93% think it would be useful.  Use foreign language identification at start to know what you are dealing with.  Be careful about coded language (e.g., language about fantasy sports that really means something else) — don’t cull it!  Graph who is speaking to whom.  Who are emails being forwarded to?  Use clustering to find themes.  Use assisted redaction of PII, but humans should validate the result (this approach gives a 33% reduction in time).  Re-OCR after redaction to make sure it is really gone.  Alex Ponce de Leon from Google said they apply predictive coding immediately as early-case assessment and know the situation and critical documents before hiring outside counsel (many corporate attorneys in the audience turned green with envy).  Predictive coding is also useful when you are the requesting party.  Use email threading to identify related emails.  The requesting party may agree to receive just the last email in the thread.  Use analytics and sampling to show the judge the burden of adding custodians and the number of relevant documents expected — this is much better than just throwing around cost numbers.  Use analytics for QC and reviewer analysis.  Is someone reviewing too slow/fast (keep in mind that document type matters, e.g. spreadsheets) or marking too many docs as privileged?

The Power of Analytics: Strategies for Investigations and Beyond
Focus on the story (fact development), not just producing documents.  Context is very important for analyzing instant messages.  Keywords often don’t work for IMs due to misspellings.  Analytics can show patterns and help detect coded language.  Communicate about how emails are being handled — are you producing threads or everything, and are you logging threads or everything (producing and logging may be different).  Regarding transparency, are the seed set and workflow work product?  When working with the DOJ, showed them results for different bands of predictive coding results and they were satisfied with that.  Nobody likes the idea of doing a clawback agreement and skipping privilege review.

Freedom of Speech Isn’t Free…of Consequences
The 1st Amendment prohibits Congress from passing laws restricting speech, but that doesn’t keep companies from putting restrictions on employees.  With social media, cameras everywhere, and the ability of things to go viral (the grape lady was mentioned), companies are concerned about how their reputations could be damaged by employees’ actions, even outside the workplace.  A doctor and a Taco Bell executive were fired due to videos of them attacking Uber drivers.  Employers creating policies curbing employee behavior must be careful about Sec. 8 of the National Labor Relations Act, which prohibits employers from interfering with employees’ Sec. 7 rights to self-organize or join/form a labor organization.  Taken broadly, employers cannot prohibit employees from complaining about working conditions since that could be seen as a step toward organizing.  Employers have to be careful about social media policies or prohibiting employees from talking to the media because of this.  Even a statement in the employee handbook saying employees should be respectful could be problematic because requiring them to be respectful toward their boss could be a violation.  The BYOD policy should not prohibit accessing Facebook (even during work) because Facebook could be used to organize.  On the other hand, employers could face charges of negligent retention/hiring if they don’t police social media.

Generating a Competitive Advantage Through Information Governance: Lessons from the Field
I couldn’t attend this one.

Destruction Zone
The government is getting more sophisticated in its investigations — it is important to give them good productions and avoid losing important data.  Check to see if there is a legal hold before discarding old computer systems and when employees leave the company.  It is important to know who the experts are in the company and ensure communication across functions.  Information governance is about maximizing value of information while minimizing risks.  The government is starting to ask for text messages.  Things you might have to preserve in the future include text messages, social media, videos, and virtual reality.  It’s important to note the difference between preserving the text messages by conversation and by custodian (where things would have to be stitched back together to make any sense of the conversation).  Many companies don’t turn on recording of IMs, viewing them as conversational.

Managing E-Discovery as a Small Firm or Solo Practitioner
I couldn’t attend this one.

Overcoming the Objections to Utilizing TAR
I was on this panel, so I didn’t take notes.

Max Schrems, Edward Snowden and the Apple iPhone: Cross-Border Discovery and Information Management Times Are A-Changing
I couldn’t attend this one.

Highlights from the ACEDS 2016 E-Discovery Conference

The conference was held at the Grand Hyatt in New York City this year.  There were two full days of talks, often with several simultaneous sessions.  My notes below provide only a few highlights from the subset of the sessions that I was able to attend.

Future Forward Stewardship of Privacy and Security
David Shonka, Acting General Counsel for the FTC, discussed several privacy concerns, such as being photographed in public and having that photo end up online.  Court proceedings are made public–should you have to give up your privacy to prosecute a claim or defend against a frivolous claim?  BYOD and working from home on a personal computer present problems for possession, custody, and control of company data.  If there is a lawsuit, what if the person won’t hand over the device/computer?  What about the privacy rights of other people having data on that computer?  Data brokers have 3,000 data points on each household.  Privacy laws are very different in Europe.  Info governance is necessary for security–you must know what you have in order to protect it.

The Art & Science of Computer Forensics: Why Hillary Clinton’s Email & Tom Brady’s Cell Phone Matter
Email headers can be faked–who really sent/received the email?  Cracking an iPhone may fail Daubert depending on how it is done.  SQLite files created by apps may contain deleted info.  IT is not forensics, though some very large companies do have specialists on staff.  When trying to get accepted by the court as an expert, do they help explain reliable principles and methods?  If they made their own software, that could hurt.  They need to be understandable to other experts.  Certifications and relevant training and experience are helpful.  Have they testified before (state, federal)?  Could be bad if they’ve testified for the same client before–seen as biased.  Reports should avoid facts that don’t contribute to the conclusion.  Include screenshots and write clearly.  With BYOD, what happens when the employee leaves and wipes the phone?  Companies might consider limiting website access (no gmail).

The Secrets No One Tells You: Taking Control of Your Time, Projects, Meetings, and Other Workplace Time-Stealers
I couldn’t attend this one.

Ethics Rules for the Tech Attorney
I couldn’t attend this one.

Hiring & Retaining E-Discovery Leaders
I couldn’t attend this one.

Piecing the Puzzle Together: Understanding How Associations can Enhance Your Career
I couldn’t attend this one.

Tracking Terrorism in the Digital Age & Its Lessons for EDiscovery – A Technical Approach
I couldn’t attend this one.

E-Discovery Project Management: Ask Forgiveness, Not Permission
I couldn’t attend this one.

The Limits of Proportionality
I couldn’t attend this one.

What Your Data Governance Team Can Do For You
I couldn’t attend this one.

Financial Industry Roundtable
I couldn’t attend this one.

Using Analytics & Visualizations to Gain Better Insight into Your Data
I couldn’t attend this one.

Defending and Defeating TAR
Rule 5.3 of the Rules of Professional Conduct says a lawyer must supervise non-lawyers.  Judge doesn’t want to get involved in arguments over e-discovery–work it out yourselves.  After agreeing on an approach like TAR, it is difficult to change course if it turns out to be more expensive than anticipated.  Make sure you understand what can be accomplished.  Every case is different.  Text-rich documents are good for TAR.  Excel files may not work as well.  If a vendor claims success with TAR, ask what kind of case, how big it was, and how they trained the system.  Tolerance for transparency depends on who the other side is.  Exchanging seed sets is “almost common practice,” but you can make an argument against disclosing non-relevant documents.  One might be more reluctant to disclose non-relevant documents to a private party (compared to disclosing to the government, where they “won’t go anywhere”).  Recipient of seed documents doesn’t have any way to know if something important was missing from the seed set (see this article for more thoughts on seed set disclosure).  Regulators don’t like culling before TAR is applied.  In the Biomet case, culling was done before TAR and the court did not require the producing party to redo it (in spite of approximately 40% of the relevant documents being lost in the culling).

Training was often done by a subject matter expert in the past.  More and more, contract reviewers are being used.  How to handle foreign language documents?  Should translations be reviewed?  Should the translator work with the reviewer?  Consider excluding training documents having questionable relevance.  When choosing the relevance score threshold that will determine which documents will be reviewed, you can tell how much document review will be required to reach a certain level of recall, so proportionality can be addressed.  “Relevance rank” is a misnomer–it is measuring how similar (in a sense) the document is to relevant documents from the training set.
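
As a sketch of that proportionality estimate: given a random control set with known relevance labels and predicted scores, you can estimate the fraction of the collection that must be reviewed, highest scores first, to reach a target recall.  The function below is illustrative, not any particular vendor’s API:

    def review_depth_for_recall(scored_sample, target_recall=0.80):
        # scored_sample: (predicted_score, is_relevant) pairs from a random control set.
        # Returns the estimated fraction of documents that must be reviewed,
        # in descending score order, to reach the target recall.
        ranked = sorted(scored_sample, key=lambda pair: -pair[0])
        total_relevant = sum(rel for _, rel in ranked)
        found = 0
        for depth, (_, rel) in enumerate(ranked, start=1):
            found += rel
            if found >= target_recall * total_relevant:
                return depth / len(ranked)
        return 1.0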

Judge Peck has argued that Daubert doesn’t apply to TAR, whereas Judge Waxse has argued that it does apply (neither of them were present).  Judge Hedges thinks Waxse is right.  TAR is not well defined–definitions vary and some are very broad.  If some level of recall is reached, like 80%, the 20% that was missed could contain something critical.  It is important to ensure that metrics are measuring the right thing.  The lawyer overseeing e-discovery should QC the results and should know what the document population looks like.

Managing Your Project Manager’s Project Manager: Who’s On First?
I couldn’t attend this one.

E-Discovery & Compliance
I couldn’t attend this one.

Solving the Privilege Problem
I couldn’t attend this one.

The Anatomy of a Tweet
Interpreting content from social media is challenging.  Emojis could be important, though they are now passé (post a photo of yourself making the face instead).  You can usually only collect public info unless it is your client’s account.  Social media can be used to show that someone wasn’t working at a particular time.  A smartphone may contain additional info about social media use that is not available from the website (number of tweets can reveal existence of private tweets or tweets that were posted and deleted).  Some tools for collecting from social media are X1, Nextpoint, BIA, and Hanzo.  They are all different and are suitable for different purposes.  You may want to collect metadata for analysis, but may want to present a screenshot of the webpage in court because it will be more familiar.  Does the account really belong to the person you think it does?

The Essence of E-Discovery Education
I couldn’t attend this one.

Women to Know: What’s Your Pitch / What’s Your Story
I couldn’t attend this one.

Establishing the Parameters of Ethical Conduct in the Legal Technology Industry – LTPI Working Session
I couldn’t attend this one.

Tracking Terrorism in the Digital Age & Its Lessons for EDiscovery – Judicial Perspectives
Judges Francis, Hedges, Rodriguez, and Sciarrino discussed legal issues around Apple not wanting to crack the iPhone the San Bernardino killers used and other issues around corporate obligations to aid an investigation.  The All Writs Act can compel aid if it is not too burdensome.  The Communications Assistance for Law Enforcement Act of 1994 (CALEA) may be a factor.  Apple has claimed two burdens: 1) the engineering labor required, and 2) its business would be put at a competitive disadvantage if it cracked the phone because of damage to its reputation (though nobody would have known if they hadn’t taken it to court).  They dropped (1) eventually.  The government ultimately dropped the case because they cracked the phone without Apple’s help.  Questions that should be asked are how much content could be gotten without cracking the phone (e.g., from cloud backup, though the FBI messed that up by changing a password), and what do you think you will find that is new?  Microsoft is suing to be allowed to tell a target that their info has been requested by the government.  What is Microsoft’s motivation for this?  An audience member suggested it may be to improve their image in privacy-conscious Germany.  Congress should clarify companies’ obligations.

EDna Challenge Part 2
The EDna challenge attempted to find low-cost options for handling e-discovery for a small case.  The challenge was recently revisited with updated parameters and requirements.  SaaS options from CSDisco, Logikcull, and Lexbe were too opaque about pricing to evaluate.  The SaaS offering from Cloudnine came in at $4,660, which includes training.  The SaaS offering from Everlaw came in at $2,205.  Options for local applications included Prooffinder by Nuix at $600 (which goes to charity) and Intella by Vound at $4,780.  Digital WarRoom has apparently dropped their express version, so they came in above the allowed price limit at $8,970.  FreeEed.org is an open source option that is free aside from AWS cloud hosting costs.  Some questioned the security of using a solution like FreeEed in the cloud.  Compared to the original EDna challenge, it is now possible to accomplish the goal with purpose-built products instead of cobbling together tools like Adobe Acrobat.  An article by Greg Buckles says one of the biggest challenges is high monthly hosting charges.

“Bring it In” House
I couldn’t attend this one.

The Living Dead of E-Discovery
I couldn’t attend this one.

The Crystal “Ball”: A Look Into the Future of E-Discovery
Craig Ball pointed out that data is growing at 40% per year.  It is important to be aware of all of the potential sources of evidence.  For example, you cannot disable a phone’s geolocation capability because it is needed for 911 calls.  You may be able to establish someone’s location from their phone pinging WiFi.  The average person uses Facebook 14 times per day, so that provides a record of their activity.  We may be recorded by police body cameras, Google Glass, and maybe someday by drones that follow us around.  Car infotainment systems store a lot of information.  NFC passive tags may be found in the soles of your new shoes.  These things aren’t documents–you can’t print them out.  Why are lawyers so afraid of such data when it can lead to the truth?  Here are some things that will change in the future.

  • Changing of the guard: Judge Facciola retired and Judge Scheindlin will retire soon.
  • Retraction of e-discovery before it explodes: new rules create safe harbors–need to prove the producing party failed on purpose.
  • Analytics “baked into” the IT infrastructure: Microsoft’s purchase of Equivio.
  • Lawyers may someday be able to look at a safe part of the source instead of making a copy to preserve the data.
  • Collection from devices will diminish due to data being held in the cloud.
  • Discovery from automobiles will be an emerging challenge.

Traditional approaches to digital forensics will falter.  Deleted files may be recovered from hard disk drives if the sectors are not overwritten with new data, but recovering data from an SSD (solid-state drive) will be much harder or impossible.  I’ll inject my own explanation here:  A data page on an SSD must have old data cleared off before new data can be written to it, which can be time-consuming.  To make writing of data faster, newer drives and operating systems support something called TRIM, which allows the operating system to tell the drive to clear off content from a deleted file immediately so there will be no slowness introduced by clearing it later when new data must be written.  So an SSD with TRIM will erase the file content shortly after the file is deleted, whereas a hard drive will leave it on the disk and simply overwrite it later if the space is needed to hold new data.  For more on forensics with SSDs see this article.

Encryption will become a formidable barrier.  Lawyers will miss the shift from words to data (e.g., fail to account for the importance of emoticons when analyzing communications).  Privacy will impact the scope of discovery.

Metrics That Matter
I couldn’t attend this one.

Avoiding Sanctions in 2016
I couldn’t attend this one.

Master Class: Interviewing in eDiscovery
I couldn’t attend this one.

E-Discovery & Pro-Bono Workshop
I couldn’t attend this one.