It’s better for attorneys to think about AI as artificial instinct rather than artificial intelligence, says David Wang, chief innovation officer at Wilson Sonsini Goodrich & Rosati.
Until it matures, AI is good at mining data and generating quick responses in a way that’s analogous to people relying on instinct when they don’t have time to make a considered decision, Wang said in a webcast hosted by contract automation company Lexion.
That makes the technology useful in an assistive capacity, such as quickly surfacing contract clauses to accelerate a workflow, and a little trickier in an advisory capacity, such as writing the first draft of a motion.
“I see people put in prompts like, ‘Draft a motion to dismiss,’” he said. “Guess what? It’s not going to come up with a good answer just as if you told a junior associate to draft a motion to dismiss in an email” with little additional context. “It requires much more than people expecting a magic wizard.”
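As a rough illustration of the difference context makes – the prompt fields and wording below are hypothetical, not drawn from Wang’s remarks or any particular product – a drafting request carries far more signal when the matter’s details travel with it:

```python
# Hypothetical illustration only: a bare drafting request vs. one that carries
# matter context. The field names are invented for this sketch.

bare_prompt = "Draft a motion to dismiss."

matter = {
    "court": "U.S. District Court, N.D. Cal.",
    "claims": "breach of contract; fraud",
    "grounds": "failure to state a claim under Rule 12(b)(6)",
    "key_facts": "Plaintiff alleges an oral promise contradicted by the signed agreement.",
}

contextual_prompt = (
    "Draft a motion to dismiss.\n"
    f"Court: {matter['court']}\n"
    f"Claims at issue: {matter['claims']}\n"
    f"Grounds: {matter['grounds']}\n"
    f"Key facts: {matter['key_facts']}\n"
    "Cite only the authorities provided; flag anything uncertain for attorney review."
)
```

Even with that context, the output is a starting point for the guardrails described below, not a finished filing.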
Guardrails needed
Any use of an AI tool in a legal context has to have guardrails built in, Wang said. That means the workflow must include a process for documents to be checked and improved by the right legal specialist.
The more knowledgeable the attorney, the better the tools work, Wang believes, because an attorney with deep knowledge of a legal area can see immediately which parts of a document need more work or include incorrect references.
The tools can help new attorneys, too, to the extent they use them to get started on a draft about a topic they’re still learning, but only if guardrails are in place to ensure more experienced eyes review the output, he said.
Purpose-built tools
ChatGPT is the most widely available generative AI tool, but it’s not what you want to use for legal work, Wang suggested. It’s better to use a purpose-built tool that sits on top of the large language model that ChatGPT uses.
The purpose-built tool, whether it’s for document drafting or some other purpose, will typically incorporate your organization’s knowledge base into its training corpus, he said. That means your organization’s clause library and contract archive get built into the model, so contracts drafted with the tool pull from what the organization has already done. What’s more, as new contracts are drafted, the latest language gets built in as well.
The purpose-built tool will also integrate with the organization’s enterprise platform and will have safety systems in place to keep any organization-specific data from leaking into the wider ChatGPT data corpus.
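To make that architecture concrete, here is a minimal sketch of how a purpose-built drafting tool might ground the underlying model in an organization’s own clause library. Every name in it – the library, the retrieval helper, the call_llm placeholder – is an assumption for illustration, not a description of any actual product.

```python
# Minimal, hypothetical sketch of a purpose-built drafting tool layered on a
# general-purpose LLM. All names (clause_library, retrieve_clauses, call_llm)
# are invented for illustration.

clause_library = [
    {"title": "Limitation of Liability",
     "text": "Neither party's aggregate liability shall exceed the fees paid..."},
    {"title": "Confidentiality",
     "text": "Each party shall protect the other's Confidential Information..."},
    {"title": "Governing Law",
     "text": "This Agreement is governed by the laws of the State of Delaware..."},
]

def call_llm(prompt: str) -> str:
    """Placeholder for the third-party large language model. In a real deployment,
    this boundary is where the data-isolation controls would sit."""
    return f"[model draft grounded in {len(prompt)} characters of prompt]"

def retrieve_clauses(request: str) -> list[dict]:
    """Naive keyword match standing in for a real search or embedding index."""
    words = set(request.lower().split())
    return [c for c in clause_library if words & set(c["title"].lower().split())]

def draft_with_org_knowledge(request: str) -> str:
    """Ground the draft in the organization's own prior language."""
    grounding = "\n\n".join(f"{c['title']}:\n{c['text']}" for c in retrieve_clauses(request))
    prompt = f"Using only the organization's approved language below, {request}\n\n{grounding}"
    return call_llm(prompt)

print(draft_with_org_knowledge("draft a confidentiality clause for a vendor agreement"))
```

As new contracts are added to the library, that same retrieval step is what lets the latest approved language show up in future drafts.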
Liability risks
For counsel concerned about liability from use of the tools, what can and can’t be controlled from a performance standpoint will depend on where the organization fits into the data pipeline.
If the organization adds data to a model that other entities use – if yours is a company that offers an AI product that adds its own data to the model, for example – that could raise issues an end user of your product wouldn’t face. And if the data you add gets pushed all the way up into the foundational model, that raises issues of its own.
“What are you doing with respect to the data, if anything?” Wang said. “If you’re with an application and literally all you’re doing is hooking it up to a third-party large language model, and your end-users are putting prompts in and going directly to the model and coming back out, that’s a different scenario than if you’re embedding a model into an application. How you mitigate the risks in those cases is going to be different.”
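A rough sketch of the two scenarios Wang contrasts might look like the following; every function name here is invented for illustration. In the pass-through case the application adds nothing, while in the embedded case the application shapes what crosses the model boundary and therefore owns more of the mitigation.

```python
# Hypothetical contrast between the two pipeline positions described above.
# All function names are invented for this sketch.

def call_third_party_llm(prompt: str) -> str:
    """Stand-in for the foundational model the application relies on."""
    return f"[model response to {len(prompt)} characters]"

def passthrough_app(user_prompt: str) -> str:
    """Thin wrapper: the user's prompt goes straight to the third-party model and back."""
    return call_third_party_llm(user_prompt)

def embedded_app(user_prompt: str) -> str:
    """The application enriches and filters the prompt with its own data first,
    so it controls, and is accountable for, what crosses the model boundary."""
    scrubbed = user_prompt.replace("CONFIDENTIAL", "[redacted]")  # toy data-handling step
    context = "Approved firm guidance: respond conservatively and cite sources."
    return call_third_party_llm(f"{context}\n\n{scrubbed}")

print(passthrough_app("Summarize the indemnity clause."))
print(embedded_app("Summarize the CONFIDENTIAL indemnity clause."))
```

The more the application shapes or stores what flows through it, the more of the risk mitigation it has to own.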
For end users that don’t add data to the corpus, the main concern will be doing the due diligence to ensure the output doesn’t include problematic content or the hallucinations that have been widely reported in the media, including cases in which made-up content has ended up in legal filings.
“Do you understand the provenance of how this model works, step by step?” Wang said.
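One concrete form that diligence can take is a mechanical spot check of a draft’s citations before any attorney relies on them. The approved list and the single citation pattern below are assumptions for illustration, not a prescribed workflow.

```python
import re

# Hypothetical spot check for made-up authorities in a model-generated draft.
# The approved list and the single U.S. Reports pattern are illustrative
# assumptions; a real workflow would cover every reporter format it relies on.
approved_citations = {
    "556 U.S. 662 (2009)",   # Ashcroft v. Iqbal
    "550 U.S. 544 (2007)",   # Bell Atlantic Corp. v. Twombly
}

def flag_unknown_citations(draft: str) -> list[str]:
    """Return any U.S. Reports citations not on the approved list, for human review."""
    found = re.findall(r"\d+ U\.S\. \d+ \(\d{4}\)", draft)
    return [c for c in found if c not in approved_citations]

draft = ("Under Ashcroft v. Iqbal, 556 U.S. 662 (2009), and the invented "
         "Smith v. Jones, 999 U.S. 123 (2031), the complaint fails.")
print(flag_unknown_citations(draft))  # -> ['999 U.S. 123 (2031)']
```

A check like this doesn’t replace reading the authorities; it just narrows where the human reviewer has to look first.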
Where in the data pipeline you fall is also important to know if you’re negotiating a contract with, say, a SaaS startup that’s providing an AI application for your organization to use, Wang said. How much leverage you have in negotiating who takes on what liability, and how much of it, depends in part on who’s adding what data to the model.
OpenAI, which built the LLM behind ChatGPT, and Microsoft, which offers it through its cloud platform, for example, will have different leverage than a small SaaS startup that’s offering a purpose-built application that sits on top of the LLM.
“Especially if you’re in a B2C or B2B context, do you really have leverage to negotiate with, say, Microsoft?” Wang said. “That’s more of a take-it-or-leave-it scenario. In a case like that, it’s an understanding of what you know and don’t know [about what goes into the model], and what comes out of the box in terms of rights and obligations the provider assumes. By using those capabilities, you’re assuming these risks, so it’s making sure your organization understands those risks and communicates them appropriately to people within your organization.”