Jeffrey Solomon is vice president and general manager, LegalVIEW BillAnalyzer, at Wolters Kluwer. Views are the author’s own.
ChatGPT quickly became the fastest-growing consumer application in history, reaching more than 100 million users in its first two months.
Given that number, chances are you or one of your colleagues has at least tried it out, adding to an everyday AI arsenal that includes asking questions of Siri and Alexa, letting Google Docs automatically complete a phrase or sentence, or receiving a restaurant recommendation based on past OpenTable activity.
Even if you’re an AI luddite in your personal life, that’s probably not the case when it comes to managing your legal matters.
According to FTI’s 2023 General Counsel Report, 20% of survey respondents are using some form of AI today, and 67% believe that the use of AI will increase over the next five years.
Indeed, AI is embedded in many of the workflows you and your colleagues use today to create first drafts of simple contracts, monitor for billing invoice irregularities or guideline violations and more.
But while AI is ideal for many things, it’s not right for every use case. Thus, the trick is not only to use AI — it’s to understand how to use it in the right situations so you can maximize its benefits for your corporate legal department.
AI: What is it good for?
You’ve probably heard a lot of people say that AI has been a game-changer for the legal industry, and that’s true.
AI is great at dealing with concrete, explicit, heavily data-driven problems and challenges. For example, AI is excellent for bill review and for spend and matter management. It can quickly find patterns in invoices, flag potential guideline violations and inaccuracies, and create predictive budgets for legal matters and legal spend based on historical data culled from past matters.
Its predictive analytics capabilities can provide recommendations on which law firms to use for certain matters based on past performance and other factors, leading to more favorable matter outcomes.
In short, the more specific the problem, and the more data behind it, the more accurate and effective the AI will be in solving the challenge and the more likely it will deliver tangible benefits.
Budgeting, invoicing and other key corporate legal responsibilities can all be greatly improved through AI’s ability to pull actionable intelligence from large amounts of data. This results in more strategic decision-making, better outcomes and the ability for attorneys to spend time practicing law, not engaging in tedious tasks like scanning for billing guideline violations or errors.
Not everything is made for AI
As amazing as AI can be, it shouldn’t be considered for every use case. If you are one of the millions of people who have used ChatGPT, you probably know that while the output is impressive, it’s no replacement for the human touch. There are many instances where having a person in the loop or not using AI at all is a better option.
For instance, corporate attorneys typically spend years developing relationships with outside counsel. They understand their strengths and weaknesses. No AI can, should or will ever replace the rapport that these parties have built with one another.
Likewise, while AI is undoubtedly intelligent, and has the power to become even smarter over time, it cannot match the depth of expertise contained within your own legal team. This expertise has been built over years and across multitudes of cases and matters. It’s what allows attorneys to spot subtleties that a machine would never recognize.
So, while AI can make recommendations, it’s still up to your team to act upon them and make decisions based on their own experience, relationships and other intangibles that software simply cannot grasp.
Building trust
Still, when AI is applied in the right manner, it can be extraordinarily effective, so long as the system’s data is accurate and unbiased.
This is where, historically, things have gotten a bit tricky. Many attorneys are understandably concerned about the trustworthiness of AI, especially since they work with highly sensitive data. But building trust in AI can be achieved by following a few guidelines.
First, it’s important to ensure that data comes from high-quality sources and that the AI system you’re using was trained properly. Well-trained AI systems are typically built on large, well-curated datasets.
The idea is that the more information the AI is pulling from, the higher the degree of accuracy and the less likelihood of bias since the system will be drawing insights from many different sources.
AI systems designed for legal purposes should also be trained not only by data scientists but by experts in the legal field, so the technology can deliver accurate recommendations for different legal use cases.
Second, AI should have the ability to learn, adapt, and scale. Humans must be able to input feedback into the system, and the system should have the capacity to ingest that feedback, apply it, learn from it and become smarter over time.
Today you might be using it to scan invoices; tomorrow you might use it to predict matter outcomes or for even more advanced use cases. The technology must be able to grow commensurate with your team’s evolving needs.
While those needs will change over time, one thing that won’t change is the fact that AI has become part of our daily lives. There’s no going back. The challenge is to make sure you’re using it correctly and in a way that will give you and your team the most benefit. If you can do that, you’ll achieve greater productivity and better outcomes.