The initial benefit of generative AI is more basic than many in-house lawyers might realize, corporate legal specialists say.
A generic generative AI tool like ChatGPT won’t be much help with substantive legal work, but it’s perfectly fine for brainstorming ideas.
“It’s a very powerful use case,” Julian Tsisin, director of legal and compliance technology for Meta, said in a webcast hosted by New York Law School. “It can help you with a lot of ideations — help you with this blank-page syndrome when you have an idea but you don’t know how to express it yet. When you ask the model, it will come back to you with something. It’s not going to be good quality initially, but it’s a good first step to start your drafting.”
You can expect a high degree of hallucinations in a case like this, because the AI tool is drawing from the generic corpus of data it was trained on, so you wouldn’t use it for anything beyond getting something on paper, Tsisin said.
For advanced work, you want specialized software that builds on top of the generative AI model but limits its searches to documents you provide: a law you want to analyze, the contracts your organization has entered into, your company’s policies or the legal memos your team has requested from outside counsel over the years, he said.
Now the quality of the output is higher, the information more useful and the risk of hallucinations lower. This kind of use case — what Tsisin calls grounded interrogations of documents — is the most popular today and can lead to efficiencies in in-house operations.
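In software terms, what Tsisin calls grounded interrogation is commonly implemented as retrieval-augmented generation: the tool first retrieves the passages most relevant to the question from the documents you supplied, then instructs the model to answer only from those passages. Below is a minimal sketch of that pattern; the document names and texts are hypothetical, and a real product would use semantic search and an actual model call in place of the word-overlap stand-in shown here.

```python
import re

# Sketch of "grounded interrogation" (retrieval-augmented generation).
# All document names and texts are hypothetical, for illustration only.

def _words(text: str) -> set[str]:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question: str, documents: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the question, a crude
    stand-in for the semantic search a real tool would perform."""
    q = _words(question)
    ranked = sorted(documents,
                    key=lambda name: len(q & _words(documents[name])),
                    reverse=True)
    return ranked[:top_k]

def build_grounded_prompt(question: str, documents: dict[str, str]) -> str:
    """Assemble a prompt that limits the model to the retrieved passages,
    which is what keeps hallucination risk lower than a generic query."""
    context = "\n\n".join(f"[{name}]\n{documents[name]}"
                          for name in retrieve(question, documents))
    return ("Answer using ONLY the documents below. "
            "If the answer is not in them, say you don't know.\n\n"
            f"{context}\n\nQuestion: {question}")

docs = {
    "travel_policy.txt": "Global travel requires manager approval in advance.",
    "retention_policy.txt": "Contracts are retained for seven years after expiry.",
    "holiday_memo.txt": "The office closes early the Friday before a holiday.",
}
print(build_grounded_prompt("What is our policy on global travel?", docs))
```

The grounding happens entirely in the prompt: the model never sees anything but the retrieved passages and an instruction not to reach beyond them.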
“If you want to generate an outline for a PowerPoint, based on documents you already have in your possession, you can upload those documents and prompt a model to generate an outline based on information in those documents,” he said.
Probably 80% of use cases in legal departments today are some version of that, he said.
“You used to provide a few key words and [through] semantic matching, receive documents back,” he said. “You read those documents to find the answer to your question. Now the tool synthesizes that answer for you.
“If we can gain 20% to 30% efficiency,” he added, “that’s more than enough to justify whatever investment we need to make into this technology.”
These types of use cases are essentially limitless, said Ashley Christakis, senior manager of e-discovery and legal operations at CrowdStrike. “Use cases mostly fall under interrogating documents [as long as] you have a highly trusted source of content that you’re looking to create a chatbot against,” she said. “What is our retention policy for xyz? What’s the policy on global travel?”
The alternative is to search documents yourself or, if your organization uses a ticketing system, submit a ticket with the question and wait for someone to do a manual search and get back to you with the answer.
“I’ve heard of amazing results already with this,” she said. “People are seeing major cost reductions in resources and time.”
E-discovery has been the low-hanging fruit of generative AI, she said. E-discovery teams have been using the tool to conduct first-level reviews of documents. “You’ll have to cull it down a bit, just like you do with search terms,” she said, but it effectively creates “a protocol for document reviewers” that saves time and effort.
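The first-level review Christakis describes can be pictured as a triage loop: a standing review protocol becomes a per-document prompt, the model labels each document, and human reviewers then cull the flagged subset. The sketch below illustrates that workflow; the protocol text, issue terms and `classify` stub are hypothetical, and a real pipeline would send each prompt to a generative AI service rather than use the keyword stub here.

```python
# Sketch of a generative-AI first-pass e-discovery review.
# Protocol wording, issue terms and the classify() stub are hypothetical.

REVIEW_PROTOCOL = (
    "First-level review protocol: label the document RESPONSIVE if it "
    "relates to {issue}; otherwise label it NOT RESPONSIVE.\n\nDocument:\n{text}"
)

def build_prompt(text: str, issue: str) -> str:
    """Turn the standing protocol plus one document into a model prompt."""
    return REVIEW_PROTOCOL.format(issue=issue, text=text)

def classify(text: str, issue_terms: set[str]) -> str:
    """Keyword stub standing in for the model's judgment; a real system
    would send build_prompt(text, issue) to a generative AI service."""
    lowered = text.lower()
    return "RESPONSIVE" if any(t in lowered for t in issue_terms) else "NOT RESPONSIVE"

def first_pass(documents: dict[str, str], issue_terms: set[str]) -> dict[str, str]:
    """Label every document so reviewers can cull the flagged subset."""
    return {name: classify(text, issue_terms) for name, text in documents.items()}

corpus = {
    "email_001.txt": "Re: delayed shipment of widgets to the Denver warehouse.",
    "email_002.txt": "Lunch plans for Friday?",
}
labels = first_pass(corpus, {"shipment", "delayed", "delay"})
flagged = [name for name, label in labels.items() if label == "RESPONSIVE"]
print(flagged)
```

The culling step Christakis mentions corresponds to a human reviewer working only through `flagged` rather than the full corpus.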
AI legal assistant tools like those available from Westlaw, LexisNexis and Casetext, among others, follow the same principle of grounded interrogation, in which the tool restricts its searches to a defined set of documents. But these tools come with the document set built in: laws, court decisions, agency regulations and so on.
“They all fall in the … category” of grounded interrogation, Tsisin said. “The difference is, these vendors provide that predefined set of documents to ground the model. It’s obviously a large set of documents, opinions and case law, but conceptually it’s the same thing” as providing the documents yourself.
The accuracy of these early use cases depends on the data the model searches, but each one can make legal teams more efficient if it’s used with an understanding of how the technology can, and can’t, help, the specialists suggested.