Meredith Brown is senior managing director and Brandon Lee is managing director at FTI Technology. Views are the authors’ own.
Since early 2023, a new wave of artificial intelligence hype has saturated nearly every market and field of practice — to the point that as many as 72% of organizations are currently exploring ways to use generative AI.
This latest wave of advancement has come to dominate news headlines, conference keynotes, boardroom discussions and a growing list of questions and concerns within corporate legal departments.
As a result, legal operations professionals are weighing how generative AI will affect their functions, which risk areas they need to watch and how soon the potential opportunities will materialize in real-world use cases.
One important point that often goes overlooked in the AI conversation is that while recent advancements in generative AI, such as the mainstream release of ChatGPT, have fueled renewed hype, there are many forms of AI that may be applied to legal department functions.
Before legal operations professionals can participate in serious assessment of AI solutions and tools, they must first understand the nuances between different forms of AI.
Types of AI
In basic terms, AI has been described as “machines acting in ways that seem intelligent” (Thomas Malone, Director, MIT Center for Collective Intelligence).
This functionality is generally categorized into three groups:
1) Narrow AI, which describes a learning algorithm that performs a single task and does not automatically apply learnings from that task to other tasks.
2) General AI, which replicates generalized human abilities so that a system can solve tasks it has not encountered before (today’s latest advancements sit in this category).
3) Super AI, a still-hypothetical form of technology that surpasses human intelligence, can perform any task better than a human and may even develop emotions of its own.
More specifically, machine learning, natural language processing and large language models are all subfields within AI:
- Machine learning — gives computers the ability to learn without explicitly being programmed.
- Natural language processing — the process of teaching computers to understand human language.
- Large language models — a family of machine learning (deep learning) models that encode enough information about a human language to enable multiple tasks, such as text classification and generation, conversational question answering and translation. Generative AI is derived from these models.
AI capabilities
The most powerful generative AI algorithms are built upon foundation models, which are trained on vast quantities of unlabeled data in a self-supervised way to identify underlying patterns applicable to a wide range of tasks.
Capabilities include the generation of new content and ideas across a range of modalities, improved efficiency through the acceleration of manual and repetitive tasks, and creation of content tailored to a specific audience, providing personalized experiences.
In the legal field, 82% of corporate legal professionals said they think generative AI can be used for legal work, according to a Thomson Reuters survey. Perhaps even more notable is that 54% think generative AI should be used for legal work.
All the same, most feel generative AI won’t replace the strategic work of lawyers, and only a small percentage of legal teams are already using these tools for department operations tasks.
As adoption catches up to perceived opportunity, many legal professionals are looking to understand which routine, high-volume tasks these tools can improve.
According to the same Thomson Reuters report, the legal use cases considered most likely include:
- Contract drafting and review
- Legal research
- Question-answering services
- Memo drafting and review
- Knowledge management
- Back office functions
Regulatory landscape
Beyond exploring practical applications for generative AI, legal professionals must also pay close attention to the evolving regulatory environment around new technologies.
Globally, regulatory agencies and lawmakers are looking to implement risk-based approaches to AI regulation.
The European Union, which to date has been the most proactive government body on AI regulation, has defined four tiers of risk:
- Unacceptable risk (prohibited)
- High risk (permitted and subject to requirements)
- Limited risk (permitted and subject to transparency obligations)
- Minimal or no risk (unrestricted)
In the 2023 legislative session, at least 25 states, Puerto Rico and the District of Columbia introduced artificial intelligence bills, and 15 states and Puerto Rico adopted resolutions or enacted legislation.
Legal ethics
Ethical obligations are even murkier than the evolving regulatory landscape, yet remain important considerations for any legal department implementation of AI.
American Bar Association Resolution 112 urges courts and lawyers to address the emerging ethical and legal issues related to the usage of AI in the practice of law. It specifically calls for the courts and attorneys to contemplate the bias, explainability and transparency of automated decisions made by technology, the ethical usage of AI and the controls and oversight involved.
Model rules from the ABA and state bar associations also include ethical obligations for the use of technology, such as competence and diligence, communication, confidentiality and supervision, which should likewise be applied to uses of AI in legal functions.
Because the technology’s reliability and functionality remain largely unproven, it’s difficult to forecast the long-term effects and inherent risks. At a minimum, organizations should consider several risk factors when experimenting with generative AI. These include:
- Black box features and capabilities
- Vulnerabilities that could lead to data leakage
- Lack of transparency in how results are generated
- Bias and data provenance
- Hallucination
- Copyright and data ownership
Concluding thoughts
For the foreseeable future, AI’s implications for legal departments and legal operations functions will remain fluid and should be expected to change at a rapid pace.
For now, most legal departments are taking an approach of cautious optimism, recognizing that blanket blacklisting or whitelisting of tools will not be effective.
Many are attempting to explore the potential for generative AI while mitigating risk by developing new policies, training employees and refreshing governance structures.
Thorough assessment of what’s under the hood with new technologies, attention to ethical considerations and a keen eye on regulatory requirements will be essential to embracing AI in a safe and effective manner.