After a year of hype over natural-language generative AI, legal teams will be putting use cases to the test this year even as trust concerns persist, law and technology specialists say.
For in-house teams looking to save on outside counsel costs, AI can help them build a knowledge base on matters that they would typically outsource to law firms, says Carlos Garcia-Egocheaga, CEO of knowledge management company Lexsoft.
One Japanese carmaker that Garcia-Egocheaga spoke with is looking to build a database of memos, briefs and other work product from the outside counsel matters the company has outsourced over the years, so its in-house attorneys can do more of the initial legal work on cases themselves.
“They were sending work outside of the company to external law firms,” Garcia-Egocheaga told Legal Dive. “Now, if they can get all of the answers they’ve gotten over the years into a well organized, well classified knowledge base, with proper taxonomy so they can search all of them, that would save substantial money.”
At a big multinational like the automaker, with in-house lawyers spread across countries and departments, attorneys don’t always know what their legal colleagues are working on or whether similar matters have been handled before.
“If suddenly your cars are having problems with a particular part, that’s something that could have happened over the years many times,” Garcia-Egocheaga said. “There could be different departments of that same automobile company, asking very similar questions and could be using different law firms to get the same answer.”
In another example, a company that does a lot of M&A can build a knowledge base from all of the outside counsel work done on previous deals, giving in-house lawyers a ready-made workflow for the next one.
“Take all of those final M&A deals you’ve done, all in a single place, so the next time you do an M&A deal, you can search for similar ones to know exactly what the documentation is that you need,” he said. “So, you’re going to have savings, because you’re going to use less time of your own lawyers and if you’re referring out work to external firms, you can ask for less in billable hours.”
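In code, the search side of such a knowledge base might look something like the minimal sketch below, which ranks prior deal documents against a query using TF-IDF similarity. The document names and contents are hypothetical, and a production system would use a dedicated search index or embedding model rather than this toy approach; this is not Lexsoft's product.

```python
# Minimal sketch: ranking a small knowledge base of past deal documents
# against a query with TF-IDF similarity. Documents are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus of prior outside-counsel work product
documents = {
    "2021_acquisition_checklist.txt": "Share purchase agreement, disclosure schedules, HSR filing ...",
    "2022_asset_deal_memo.txt": "Asset purchase agreement, bill of sale, assignment of contracts ...",
    "2023_merger_due_diligence.txt": "Due diligence request list, data room index, antitrust analysis ...",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(documents.values())

def search(query: str, top_k: int = 2):
    """Return the top_k documents most similar to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, matrix).ravel()
    ranked = sorted(zip(documents, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]

print(search("what documentation do we need for an asset purchase deal?"))
```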
Any kind of repetitive in-house legal task can benefit from using AI to tap into a knowledge base and create workflow templates, the specialists say.
Filing an H-1B work permit petition is a good example, says Vishal Sunak, CEO of enterprise legal technology company LinkSquares.
Many companies, especially in the technology sector, rely on work permits to bring in specialists from outside the country to work on cutting-edge products and services. For companies that rely on this outside talent, having an AI-assisted workflow template and checklists can help in-house counsel get the petitions across the finish line more quickly and at less cost than they otherwise could.
“If you don’t know all 12 steps for getting an H-1B, you can ask the LLM [large language model] to say, ‘I need the standard steps to getting an H-1B done. What do we have to do?’ and it’ll make the suggestion,” says Sunak. “You can then turn that into subtasks of a project and that’s helpful because it gains tons of depth of knowledge on topics that you can have instantly and supplements the knowledge that people have internal to the company.”
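LinkSquares has not published how its tooling works internally, but the pattern Sunak describes (ask an LLM for the standard steps, then turn each one into a subtask) can be sketched with any general-purpose LLM API. The example below uses OpenAI's Python client purely for illustration; the prompt, model choice and Task structure are assumptions.

```python
# Sketch of the pattern Sunak describes: ask an LLM for the standard steps
# of a process, then treat each step as a subtask of a project. The prompt,
# model, and Task structure are illustrative assumptions.
from dataclasses import dataclass, field
from openai import OpenAI

@dataclass
class Task:
    title: str
    subtasks: list = field(default_factory=list)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "List the standard steps for filing an H-1B petition, one per line.",
    }],
)

# Turn each returned line into a subtask, stripping list numbering
steps = [line.lstrip("0123456789.- ").strip()
         for line in response.choices[0].message.content.splitlines()
         if line.strip()]

project = Task(title="H-1B petition", subtasks=steps)
for i, step in enumerate(project.subtasks, 1):
    print(f"{i}. {step}")
```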
Sunak said his company is building that kind of task assistance into its Prioritize line of legal operations tools, with the goal of freeing up attorneys to focus on the substance of the documentation rather than the process.
“Technology that makes the team more effective and efficient to get more work done in a smaller amount of time is still the North Star for everyone,” Sunak told Legal Dive.
Internal HR investigations are another example. “You can set up a template with all the checklists and checkboxes you need, subtasks, and use the LLM to help suggest what should go into this standard template,” he said.
In the e-discovery space, AI can comb through hundreds of documents in seconds to help attorneys spot patterns, prepare a sentiment or fact analysis and surface documents that contain information needed in a matter.
“An area that’s really breaking open is fact analysis,” says Chuck Kellner, strategic discovery advisor at e-discovery software company Everlaw. “You take a group of 100 or 200 key documents that attorneys have identified as hot docs and say, ‘Give me a position paper on these documents. Give me a fact analysis in chronological order on these documents.’ And instead of it taking 2-3 days to prepare that draft, it’s taking 2-3 minutes, and another 20 minutes to have a first draft completed, including with hyperlinks and references to pieces of evidence.”
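Everlaw's pipeline is proprietary, but the basic shape of the workflow Kellner describes can be approximated with a single prompt: hand the model the key documents with stable IDs and ask for a chronological account that cites each fact back to its source. The exhibit IDs and document text below are invented for illustration.

```python
# Rough sketch of a chronological fact analysis over a set of "hot docs",
# with each fact cited back to a document ID. IDs and text are invented;
# this approximates the workflow described, not Everlaw's actual pipeline.
from openai import OpenAI

client = OpenAI()

hot_docs = {  # hypothetical exhibit IDs -> extracted text
    "EX-014": "Email dated 2021-03-02: engineering flags defect in part A-12 ...",
    "EX-027": "Memo dated 2021-06-18: supplier acknowledges tolerance issue ...",
}

doc_block = "\n\n".join(f"[{doc_id}]\n{text}" for doc_id, text in hot_docs.items())

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Prepare a fact analysis in chronological order from the documents "
            "below. Cite the document ID in brackets after each fact.\n\n"
            + doc_block
        ),
    }],
)
print(response.choices[0].message.content)
```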
Lingering concerns
Even as in-house teams move these and other novel use cases from ideas to practice, trust issues remain in the background.
An analysis by Stanford University researchers found that raw generative AI tools like ChatGPT, Llama and PaLM are notoriously error-prone when used for legal case analysis, producing errors in almost 90% of instances for some tasks and, when asked to make complex precedential analyses, yielding results no better than random guessing.
Users should have “significant concerns about the reliability of LLMs in legal contexts, underscoring the importance of careful, supervised integration of these AI technologies into legal practice,” the researchers say in their report, called Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models.
Although the error-rate finding is alarming, and reinforces news stories about hallucination-filled legal filings, the reality is different for purpose-built AI tools that sit on top of LLMs, legal technology specialists say. These tools, like those for e-discovery, contract management or drafting, use LLMs as their operating engines but don’t pull from the LLM’s unwieldy training corpus; they pull from restricted sources of vetted data.
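This architecture is commonly implemented as retrieval-augmented generation, or RAG: the system retrieves passages from a curated store and instructs the model to answer only from what was retrieved. The sketch below shows the idea in miniature; the vetted passages, prompt wording and model choice are illustrative assumptions, not any vendor's actual implementation.

```python
# Minimal sketch of the "restricted sources" pattern (RAG): the model is
# instructed to answer only from a vetted store, not its open training data.
# Passages, prompt, and model are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

vetted_passages = [  # in practice: a curated clause library or case-law index
    "Clause 7.2 (Limitation of Liability): liability capped at fees paid ...",
    "Playbook note: indemnification must exclude consequential damages ...",
]

def answer_from_vetted_sources(question: str) -> str:
    context = "\n".join(f"- {p}" for p in vetted_passages)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                f"Answer using ONLY the vetted passages below. If the answer "
                f"is not in them, say so.\n\nPassages:\n{context}\n\n"
                f"Question: {question}"
            ),
        }],
    )
    return response.choices[0].message.content

print(answer_from_vetted_sources("What is our standard liability cap?"))
```

The point of the design is that the model is steered toward the curated material rather than the open-web training data the Stanford researchers tested, which is the distinction the specialists draw.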
Purpose-built legal tools
In the case of contract lifecycle management (CLM), the corpus the AI draws on is the CLM vendor’s own playbook and clause library or, for customers with deeper resources, their own playbook and clause library, not the wider LLM training corpus.
“If the customer has a tightly defined playbook, [the AI] can play with that,” said Sunak. “If they don’t have a tightly defined playbook, and they want generative AI to make suggestions on how to make a contract position more favorable for the company, or less problematic in certain areas, you can use [our playbook].”
It’s similar with tools that assist in drafting documents. Purpose-built tools, like Thomson Reuters’ plugin for Microsoft 365 Copilot or Lexis+ AI, draw on established legal sources like Westlaw and LexisNexis, which helps ensure the material they work from is part of a standard, well-vetted corpus.
“It’s our job as tech developers to deliver those findings, those recommendations, in ways that attorneys can readily verify, whether those recommendations are valid or not,” Kellner told Legal Dive. “That’s the big challenge for 2024.”
The trust issue cuts both ways: it’s not just about bad information entering the corpus the tools draw from; it’s also about keeping an organization’s sensitive and proprietary information from being pulled into the broader LLM training corpus.
Established legal tech companies have built mechanisms into their tools to keep sensitive information within the user’s knowledge base, which should go a long way toward reassuring legal leaders that their teams are pulling only from pools of good information and that proprietary data stays secure.
“Our generative AI strategy factors in the security and privacy and knowledge that is proprietary to each one of our customers in the form of how we’re building this technology for our suite,” Sunak said.
His company’s partnership with AWS is a big part of its security approach, he said.
“AWS solved [information leakage problems] with its Bedrock service,” he said. “That’s why we partnered with AWS. They already figured out how to prevent LLM providers from using proprietary information to make their LLM.”
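On the developer side, calling a model through Bedrock looks something like the sketch below, using AWS’s boto3 SDK. Under AWS’s documented Bedrock data policy, prompts and outputs are not used to train the underlying models, which is the isolation Sunak is referring to; the region, model ID and prompt here are illustrative.

```python
# Sketch of invoking a model through Amazon Bedrock with boto3. Per AWS's
# documented policy, Bedrock does not use customer prompts or outputs to
# train the underlying models. Region, model ID, and prompt are illustrative.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request body follows Anthropic's documented Bedrock message format
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": "Summarize the key terms of this NDA: ..."}
    ],
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=body,
)
print(json.loads(response["body"].read())["content"][0]["text"])
```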
To keep their data secure, legal teams must invest in enterprise licenses when subscribing to an AI service, because those licenses are what keep enterprise data within the enterprise’s knowledge base.
“With the enterprise license model, the data only goes one way,” said Kellner. “It can be submitted [to the LLM], but it’s quickly erased, anywhere from seconds, minutes, hours, days or maybe up to a month, depending on the service level into which you buy for that enterprise license.”
Given the expanding menu of use cases and the safety measures built into purpose-built tools, in-house legal leaders can expect 2024 to be the year AI moves from hype to on-the-ground experiments that help teams do more with less, more quickly and with better accuracy, as technology champions have promised.