Search terms are a lot like the game Codenames, where parties find themselves guessing terms of art to discover relevant electronically stored information. In a lawsuit between news-aggregator cell-phone apps, the Plaintiff learned the Defendants referred to the Plaintiff’s app by the codename “Ajax.” The Defendants admitted there were 5,126 unique documents, including family members, containing the term “Ajax” that had not been reviewed for responsiveness. The Plaintiff brought a motion to compel the Defendants to review the documents and produce the responsive ones. Updateme Inc. v. Se, 2018 U.S. Dist. LEXIS 175562, at *2 (N.D. Cal. Oct. 11, 2018).
The Defendants claimed that the codename referred to the threatened lawsuit, not to the Plaintiff’s product. Updateme, at *2. The Defendants even claimed they had sampled the data and confirmed their position. The problem: they did not explain any details of how they sampled the data. Updateme, at *2-3.
The Plaintiff offered 93 documents produced by the Defendants, focusing on two of them directly, to refute the Defendants’ claim that “Ajax” was just a codename for the litigation. The Defendants argued that the 93 documents were subject to pending clawback requests and, in the original German, referred more clearly to the dispute. Updateme, at *3. The Defendants further argued that reviewing the search hits was not proportional to the needs of the case and that the term “Ajax” was not in the ESI Protocol. Id.
Judge Laurel Beeler stated that whether “Ajax” referred to the Plaintiff or the litigation was a “distinction without a difference.” Updateme, at *4. Either way, the term would return responsive hits for the Plaintiff’s request for production of communications concerning the Plaintiff. Id.
The Court ordered the Defendants to randomly sample 10% of the un-reviewed documents and review them with their families for responsiveness. Responsive documents were to be produced, and if any were withheld on privilege claims, a privilege log was to be produced. The Defendants were further ordered to produce, within one week, a chart listing the number of documents and families reviewed and the rate of responsiveness. Updateme, at *5.
Bow Tie Thoughts
Document review of search term hits can be described as “trust but verify.” No matter how confident a party is that its search terms will identify relevant documents, it is dangerous to trust the results without verifying the accuracy of the hits. The Court’s order to review a random sample of the un-reviewed documents with their families was a smart one. Many review applications can generate a random sample of data for review, and doing so at the start of document review is a good practice for validating the effectiveness of search terms.
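To make the sampling step concrete, here is a minimal Python sketch of how a 10% random sample might be drawn at the family level, so attachments travel with their parent emails. The document IDs and family groupings are hypothetical placeholders; in practice, a review platform would supply this data from its own database.

```python
import random
from collections import defaultdict

# Hypothetical input: each un-reviewed document ID mapped to its family ID
# (a parent email and its attachments share one family ID).
documents = {
    "DOC-0001": "FAM-01", "DOC-0002": "FAM-01",  # email plus attachment
    "DOC-0003": "FAM-02",
    "DOC-0004": "FAM-03", "DOC-0005": "FAM-03",
}

# Group documents by family so a sampled family is reviewed as a unit.
families = defaultdict(list)
for doc_id, fam_id in documents.items():
    families[fam_id].append(doc_id)

# Draw a 10% random sample of families (at least one).
sample_size = max(1, round(len(families) * 0.10))
sampled_families = random.sample(sorted(families), sample_size)

# The review set is every document in each sampled family.
review_set = [doc for fam in sampled_families for doc in families[fam]]
print(f"Sampled {len(sampled_families)} families, {len(review_set)} documents to review")
```

Sampling by family rather than by individual document mirrors the Court’s order to review documents “with their families,” so a sampled attachment is never read without its parent email.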
The chart the Court ordered, identifying whether documents were responsive, non-responsive, or privileged, can be generated from review applications that allow results to be exported to a CSV or Excel file. This requires setting up issue coding that captures responsiveness and privilege, so the results can be exported and produced to the opposing party.
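As a rough illustration of that export, the Python sketch below tallies hypothetical coding decisions and writes the kind of summary chart the Court ordered to a CSV file. The coding labels and file name are assumptions for the example, not the output of any particular review platform.

```python
import csv
from collections import Counter

# Hypothetical coded review results exported from a review platform:
# (document ID, family ID, coding decision).
coded = [
    ("DOC-0001", "FAM-01", "Responsive"),
    ("DOC-0002", "FAM-01", "Responsive"),
    ("DOC-0003", "FAM-02", "Non-Responsive"),
    ("DOC-0004", "FAM-03", "Privileged"),
]

# Tally coding decisions, count distinct families, and compute the rate.
decisions = Counter(code for _, _, code in coded)
families = {fam for _, fam, _ in coded}
responsive_rate = decisions["Responsive"] / len(coded)

# Write the summary chart to a CSV file suitable for production.
with open("review_chart.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Documents Reviewed", "Families Reviewed",
                     "Responsive", "Non-Responsive", "Privileged",
                     "Responsiveness Rate"])
    writer.writerow([len(coded), len(families),
                     decisions["Responsive"], decisions["Non-Responsive"],
                     decisions["Privileged"], f"{responsive_rate:.0%}"])
```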
Properly used, eDiscovery review applications can meet the demands of responding to requests for production. Doing so requires understanding the software and developing strategies to find the responsive data.