Johannes (Jan) Scholtes is chief data scientist at IPRO and Geoffrey Vance is a partner at Perkins Coie. Views are the authors’ own.
Recent breakthroughs in artificial intelligence make it clear that a new generation of platforms will dramatically change the legal industry over the next three to five years.
But how to harness AI responsibly without damaging the trust and credibility of the legal field remains an open question, as does how to ensure the technology is defensible and transparent.
One of the many challenges is that lawyers do not fully understand where these AI models come from, how they work and what their limitations are — and that is a risk.
That is why we are jointly proposing in this column that law firms and legal operations teams recognize that they don’t know what they don’t know and work more closely than ever before with legal technology providers.
Lawyers aren’t supposed to captain the application of generative AI in the legal industry.
In our view, their role is more like that of a first officer, whose responsibility is to assist the pilot in reaching the destination.
The law firms and lawyers who don’t reach that understanding quickly will be left behind. Those who do will lead the future of the legal industry.
Without understanding, transparency and a proper framework for (legal) defensibility, there will be no trust. Without trust, the legal industry will not accept AI, and that, our friends, is a good thing, a stance often called “professional skepticism.”
Lawyers are not alone in their lack of a technical understanding of AI. Very few people in the world truly understand these large language models.
People who rely on the models do not fully understand the technology, and many of the companies involved in the models have not fully disclosed the specifics of the algorithms.
Put another way, the technology was the easy part; understanding it is much, much more challenging.
We also believe that generative AI models can be used today for myriad tasks, but they must be monitored closely.
AI, in its current form, should be viewed as a “legal copilot” — meaning AI can be used to do some of the work, but its work must be constantly monitored, reviewed and corrected.
Using AI as a legal copilot has the potential to revolutionize the legal industry by enabling law firms and legal departments to work more efficiently, accurately and cost-effectively, while also expanding access to legal services for a broader range of clients.
While there are still challenges to be addressed, such as ensuring the transparency and accountability of AI-based systems, the adoption of legal copilots by leading law firms suggests that this technology is likely to play an increasingly important role in the future of the legal profession.
The generative AI models we currently have, while amazing breakthroughs, are not intended to be the reliable legal search assistant that lawyers are looking for, clients expect and courts demand.
AI models take shortcuts and cannot explain why they make certain decisions and not others.
Even so, ChatGPT can already assist with legal applications it was never specifically designed for.
ChatGPT is a generative language model, designed to produce language based on a statistical process mimicking human language.
ChatGPT on its own is not a search engine, nor a source of knowledge, nor a translator, nor a linguistic classifier.
Indeed, because ChatGPT returns results so quickly and in such a confident tone, it “feels” as if it is always providing accurate information, and it gives the impression that its results will keep improving over time.
Despite its limitations, ChatGPT can be used to assist with the following tasks:
Contract drafting: This is likely one of the most promising applications of textual generative AI, because contracts are typically highly structured documents that contain specific legal language, terms and conditions.
Query expansion: Humans are inherently biased, and when they define (Boolean) search queries, the keywords they choose reflect that bias. Generative AI can help users define a search query and suggest keywords end-users would not have thought of on their own (a brief sketch follows this list).
Summarization: Legal documents can be lengthy and often contain boilerplate text. A summary provides a quick overview of a document’s most important aspects, and ChatGPT is very good at producing one.
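For readers who want to see what query expansion might look like in practice, below is a minimal sketch in Python. It assumes access to the official OpenAI Python client with an API key in the environment; the expand_query helper, the model name and the prompt wording are our own illustrative choices, not a prescribed workflow, and any keywords the model suggests would still need a lawyer’s review before use.

```python
# Minimal sketch: asking a generative model to suggest additional
# keywords for an e-discovery-style Boolean search query.
# Assumes the official OpenAI Python client and an OPENAI_API_KEY
# set in the environment; model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def expand_query(boolean_query: str) -> str:
    """Return suggested synonyms and related terms a human drafter
    might have missed. The output is a suggestion, not a final query:
    a lawyer must still review every proposed keyword."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; use whatever model you have access to
        messages=[
            {
                "role": "system",
                "content": (
                    "You help lawyers broaden keyword searches. "
                    "Suggest synonyms, abbreviations and related terms."
                ),
            },
            {
                "role": "user",
                "content": (
                    "Suggest additional keywords for this Boolean "
                    f"search query: {boolean_query}"
                ),
            },
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(expand_query('("breach of contract" OR "non-performance") AND damages'))
```

The point of the sketch is the division of labor: the model proposes, the human disposes. The same pattern, with a different prompt, would apply to summarization or first-draft contract language.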
By using generative AI models, law firms and in-house legal teams can relieve staff lawyers of mundane tasks like those listed above.
Humans are not cognitively suited to quickly find relevant case law in paper binders, manually review 100 million emails consistently or redact hundreds of thousands of documents containing privileged information without making errors in the process.
That is why AI and humans working in tandem can be more effective than either working alone: they bring different strengths and abilities to the table.
AI is good at processing large volumes of data quickly, identifying patterns and trends and making predictions based on statistical analysis. It is also not subject to emotional reactions, which can sometimes cloud human judgment.
On the other hand, humans are better at tasks that require creativity, critical thinking and the ability to interpret complex information. They also bring a wealth of experience, knowledge and intuition that cannot be replicated by AI.
By combining AI and human capabilities, organizations can leverage the strengths of both to improve decision-making and achieve better outcomes.
Finally, we believe AI and humans can work together to improve the quality of AI models over time. Humans can provide feedback on the accuracy and relevance of AI-generated recommendations, which can be used to refine and improve the AI models.
The law firms and in-house lawyers that begin using this technology today will be better positioned to use it effectively as it continues to evolve and moves into more widespread use.