Seth Price is founder and general counsel of Price Benowitz. Views are the author’s own.
In a survey of lawyers conducted by Thomson Reuters last spring, 82% of respondents said ChatGPT and generative AI can be applied to legal work, and 51% said it should be.
Since then, ChatGPT has received mixed reviews. An MIT study found that workers using it completed writing projects 40% faster, with nearly 20% higher quality. Yet research by Boston Consulting Group found users performing nearly 20% worse than their peers on certain tasks.
Why the discrepancy?
It boils down to leadership. Some workers aren’t sure how to apply these tools, while others use them secretly for fear of being seen as lazy.
When it comes to law, many are concerned about accuracy. The last thing in-house counsel want is to repeat the embarrassment of the two attorneys who submitted a legal brief to a judge laden with fictional cases and citations generated by ChatGPT.
Still, the potential gains can’t be ignored. Risk-averse legal teams that take a wait-and-see approach could miss out on cost reductions while enabling competitors to leapfrog ahead. As an IDC analyst noted when the firm projected that organizations would increase their AI spending by roughly 30% in 2023: “Companies that are slow to adopt AI will be left behind – large and small.”
Tips for harnessing tools
It’s time for corporate legal leaders to encourage in-house staff to experiment with generative AI – as long as guardrails are in place to keep them from veering into trouble. The following four tips can help you get started:
- Put it in their hands. One of the best ways to learn is through hands-on experience. Have your staff start using ChatGPT – but never with privileged material, and only on select databases. This will help team members discover safe ways to use these tools, while undoubtedly producing material that illustrates why thorough reviews are needed before anything is shared outside the organization.
- Set policies and police users. Develop policies and guidelines for how these tools should be used. Prohibit the external use of free or open-source tools, because they are trained on the public internet and can pull in undesirable and inaccurate data. General counsel should also see that usage is policed to ensure teams aren’t working in areas that are off limits, such as sensitive client information.
- Create an evaluation team. Assemble a team to evaluate the tools and how they’re used, and to create and refine policies as needed. Make sure all in-house functions using the technology are represented, and encourage participation from team members of all ages – from digital natives like Gen Z to technology-cautious boomers. This will help you integrate the tools across multigenerational teams, determine best practices and spur adoption.
- Review religiously. There is a “black box” element to generative AI because you can’t always trace the origin of the information a tool draws on, which makes its accuracy hard to verify. With this in mind, it’s critical that all materials created by ChatGPT are reviewed to ensure they’re accurate, inoffensive and free of plagiarism.
Turning risks into rewards
The rewards of generative AI are well worth the risks. Tools like ChatGPT can analyze legal documents and automate first drafts of materials, from briefs to contracts to email responses, saving significant time and money. Paired with chatbots, they let customers get the answers they need easily, making services simpler to use.
The trick for general counsel and legal leaders is integrating generative AI safely across operations. It’s doable with the right approach, and when you get it right, you’ll have a means to continually lower overhead, speed up processes and make staff’s lives much easier.