US Magistrate Judge Sarah Cave entered the fray over search terms in a lawsuit alleging the targeting of employees to join a competitor. After discovery requests had been propounded, the parties exchanged proposed search terms to identify relevant information. As one can expect when there is a motion to compel, it did not go well. Precision Med. Grp. v. Blue Matter, LLC, CIVIL ACTION NO.: 20 Civ. 2974 (PGG) (SLC) (S.D.N.Y. Dec. 15, 2020).
Much to the parties’ credit, search term efficiency reports were discussed for the proposed search terms. The Plaintiff’s first set of proposed search terms had 40,000 hits, and over 60,000 when families were included. The Defendant claimed reviewing 60,000 records was unduly burdensome. The Plaintiff provided a second set of search terms with proximity searches, which had 30,000 hits and 55,000 including families. Precision, at *3-4.
The Defendants provided their own search terms, which the Plaintiff claimed were deficient because they excluded terms relating to non-compete and disclosure. Further exchanges of proposed terms and deletions of terms followed. Precision, at *4-5.
The Plaintiff argued that it was not unreasonable for the Defendant to review 47,000 records, either manually or with technology-assisted review. The Defendant [naturally] disagreed with the Plaintiff. Precision, at *5.
The Defendant claimed it would take 470 hours to review the 47,000 records, assuming a reviewer averaged 100 records an hour. It argued for eliminating 12 terms that were designed to identify general business activities, which it claimed were irrelevant. Precision, at *5.
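The math behind that estimate is simple enough to sketch out. The record count and the 100-records-per-hour pace below come from the Defendant’s position as described in the opinion; the Python is only an illustration:

    # Review-burden math from the Defendant's estimate; the review rate
    # is the Defendant's assumption, not a measured pace.
    records_to_review = 47_000
    records_per_hour = 100

    review_hours = records_to_review / records_per_hour
    print(f"Estimated review time: {review_hours:,.0f} hours")  # 470 hours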
The Court ordered the Defendant to conduct searches using the Defendant’s second set of proposed search terms, which revised the Plaintiff’s third set of proposed search terms, plus a proximity search that contained an “OR” connector for relevant individuals in the lawsuit. The Court found this specific search string to be narrowly tailored and not overly burdensome for the Defendants, as it added only 1,190 records for review. Precision, at *5-6.
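For readers who do not spend their days in review platforms, a proximity search joined with an “OR” connector generally looks something like the sketch below. The terms, names, and w/10 distance are hypothetical, written in the dtSearch-style syntax many review tools use; the opinion does not reproduce the actual string:

    # Hypothetical proximity search with an "OR" connector; neither the terms
    # nor the individuals come from the opinion.
    terms = ["recruit*", "solicit*", "non-compete"]
    individuals = ["Smith", "Jones", "Garcia"]

    # dtSearch-style syntax: (term OR term) w/10 (name OR name)
    query = f"({' OR '.join(terms)}) w/10 ({' OR '.join(individuals)})"
    print(query)  # (recruit* OR solicit* OR non-compete) w/10 (Smith OR Jones OR Garcia)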
Bow Tie Thoughts
Disputes over search terms can go down a rabbit hole quickly. The parties in this case proposed search terms and proximity searches, followed by discussion of search term efficiency reports. Seeing that level of detail is refreshing, because it shows the parties discussing specifics. It is not a matter of one party claiming “document review is hard” without any evidence of how many records needed to be reviewed and a time estimate.
A dispute over 50,000 records does not sound unreasonable out of the gate. It would have been helpful to have information such as the number of message senders, domain names, date ranges, and other analytics to evaluate the burden.
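That kind of analysis takes minutes once a hit report is exported. Here is a hedged sketch assuming a CSV export with hypothetical “Sender” and “DateSent” columns; none of these field names come from the case:

    import pandas as pd

    # Hypothetical export of the documents hitting on the search terms.
    hits = pd.read_csv("search_hits.csv", parse_dates=["DateSent"])

    print(hits["Sender"].nunique(), "unique senders")
    print(hits["Sender"].str.split("@").str[-1].value_counts().head(10))  # top domains
    print(hits["DateSent"].min(), "to", hits["DateSent"].max())           # date range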
The other issue is that just because there are 50,000 records does not mean 50,000 records need to be reviewed. Review applications with clustering can help identify relevant and irrelevant ESI. Moreover, a predictive coding model could also help focus review on relevant information.
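As a rough illustration of the clustering idea, the scikit-learn sketch below groups placeholder documents by similarity; review platforms do the same thing under the hood at far greater scale, letting reviewers set aside clusters of plainly irrelevant material. The sample text is invented, not from the case:

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Placeholder text standing in for extracted email bodies.
    docs = [
        "Lunch order for the team on Friday",
        "Offer letter and start date for the new hire",
        "Quarterly budget spreadsheet attached",
        "Call to discuss joining the other firm",
    ]

    # Vectorize the text and group similar documents; a cluster of lunch orders
    # can be set aside before anyone spends an hour of eyes-on review on it.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
    for label, doc in zip(labels, docs):
        print(label, doc)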
Is it possible there really were 50,000 records to review manually? Yes, absolutely. However, we have tools to narrow that volume of ESI down to a manageable number beyond lawyers conducting document review one email at a time. Analytical tools would have been helpful in determining what was reasonable in this case beyond the search term efficiency reports.