Proteus Blog | eDiscovery & Managed Review

How Generative AI is Changing What It Means to Be “Review Ready” in eDiscovery

Written by Sarah Barth | Dec 2, 2025 1:30:00 PM

Last month, I attended a session titled Uniquely LLM: How Generative AI is Transforming eDiscovery. While the event centered on large language models (LLMs), what stood out most was what the technology means for the future of document review and, more importantly, for how we lead and deliver that work.

As Director of Managed Review, my job is to ensure that every review we deliver is defensible, efficient, and tailored to our client’s case strategy. I left this session thinking deeply about how the capabilities of generative AI are reshaping not just the "how" of review, but the very definition of "review ready."


Smarter Tech, Better Questions

Traditional predictive-coding approaches such as TAR (technology-assisted review) and CAL (continuous active learning) have long helped reduce document volumes. But LLMs go further: we can give them natural language instructions, like “Identify documents related to off-label marketing strategies,” and get reasoned, citation-backed answers across massive datasets.

As someone who has supervised many reviews, this matters. It’s about asking better questions and getting better insights earlier in the case lifecycle.

One of the most compelling takeaways from the session was how LLMs are broadening the definition of what’s possible in document review. Instead of reviewers needing to manually identify inconsistencies, LLMs can now analyze the internal logic of a document – flagging mismatched names, dates, or figures without requiring line-by-line scrutiny. That kind of contextual awareness also allows these tools to operate effectively even when data richness is low. In other words, they can surface rare but critical issues, like references to whistleblower complaints or unusual billing activity, that traditional models might overlook for lack of enough training examples.
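To make “analyzing the internal logic of a document” concrete, here is a toy sketch that flags an invoice whose stated total disagrees with the sum of its line items. Everything in it (the `check_totals` name, the regex, the sample text) is an illustrative assumption, not any vendor’s implementation; an LLM performs this kind of check contextually rather than with pattern matching.

```python
import re

def check_totals(text: str) -> list[str]:
    """Flag a document whose stated total disagrees with the sum of its
    line items. A toy stand-in for the internal-consistency checks an
    LLM can perform contextually."""
    # Pull every dollar figure out of the document, in order.
    amounts = [
        float(m.replace(",", ""))
        for m in re.findall(r"\$([\d,]+(?:\.\d+)?)", text)
    ]
    issues = []
    if "total" in text.lower() and len(amounts) >= 2:
        *items, stated_total = amounts
        if abs(sum(items) - stated_total) > 0.01:
            issues.append(
                f"stated total ${stated_total:,.2f} "
                f"!= sum of items ${sum(items):,.2f}"
            )
    return issues

doc = "Item A: $200.00\nItem B: $250.00\nTotal due: $500.00"
print(check_totals(doc))
```

The point of the sketch is what a keyword search can never do: the document contains no “hit” term signaling a problem; the problem only exists in the relationship between its own figures.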

These models also eliminate longstanding barriers in multilingual matters. With LLMs, we no longer need to translate documents just to determine their relevance; the models interpret content directly, regardless of language. And they’re not limited to text: AI can interpret handwritten notes, images, and embedded visual content (formats that have historically been cumbersome in review).

For me, that kind of flexibility is a major advantage because it allows us to adapt review workflows to the realities of each matter and gives cases more fluidity, nuance, and precision within the technology.


Human Judgment, Still Required

One concept I particularly appreciated was the emphasis on “RAG” (retrieval-augmented generation). Essentially, it’s what turns a chat with an AI model into a documented, traceable research process, which is critical for defensibility. The model doesn’t just summarize testimony or identify contradictions; because its answers are grounded in documents retrieved from the record, it shows its work. This transparency is why I remain cautiously optimistic about integrating these tools into our managed review protocols.
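For readers curious what the retrieval half of RAG looks like mechanically, here is a minimal sketch under stated assumptions: a toy in-memory corpus and simple term-overlap scoring stand in for the embedding search a production system would use, and the `retrieve` function and document IDs are illustrative. The generation step (not shown) would pass these passages to an LLM, which answers while citing the same IDs.

```python
def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[tuple[str, str]]:
    """Return the k passages sharing the most terms with the query,
    paired with their document IDs so any answer can cite its sources."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(q_terms & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

corpus = {
    "DOC-001": "email discussing off-label marketing strategies for product X",
    "DOC-002": "quarterly travel expense report",
    "DOC-003": "memo on marketing budget and off-label promotion risks",
}

# The IDs returned alongside each passage are the audit trail: the
# generated answer can quote them, making the result traceable.
for doc_id, passage in retrieve("off-label marketing", corpus):
    print(doc_id, "->", passage)
```

The design point is the pairing of passage and ID: because every retrieved snippet keeps its provenance, the final answer can be checked against the underlying record, which is what makes the workflow documented rather than a black box.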

That said, this session reinforced something I’ve always held at Proteus: trust, but verify. This principle applies just as much to AI as it does to any first-pass review – someone has to double-check the work.


Final Thought

LLMs aren’t a magic wand, and I’m not planning to turn over our review queues to “the robots”. But the ability to interact with data conversationally, uncover hidden context, and audit the results is something I believe will meaningfully elevate how we serve our clients. As always, it’s not about replacing human reviewers, but rather equipping them with the smartest tools to deliver the highest quality work product.