Corrie Hurm is head of claims at Embroker. Views are the author’s own.
Artificial intelligence has rapidly become a cornerstone of modern business practices, offering unparalleled efficiencies and innovation. However, its deployment introduces new legal and financial risks, making it imperative for in-house counsel to address AI-related liability through robust policies, risk assessments, and appropriate insurance coverage.
AI use cases such as automated hiring tools, resume screening, and employee performance evaluations are common but fraught with legal exposure. These tools often rely on algorithms that unintentionally perpetuate bias, leading to potential violations of employment laws such as the Americans with Disabilities Act (ADA).
Additionally, the creative outputs of AI — whether text, images, or deepfake technology — carry risks of copyright and trademark infringement. These liabilities are no longer theoretical; they are increasingly resulting in high-profile litigation.
The legal industry remains cautious about AI adoption.
While AI is transforming industries at large, only 22% of lawyers report using AI at work, reflecting ongoing concerns about risks and ethical implications. This hesitation is partly due to the uncertain regulatory environment. President Joe Biden’s Executive Order on AI, issued in October 2023, was the first U.S. government effort to provide guardrails for mitigating AI risks. President Donald Trump rescinded Biden’s AI order and is instead pressing for U.S. dominance in AI.
More guidance is needed: over half of lawyers expect government oversight of AI to increase, and 44% cite policy and litigation changes as significant risks going forward.
Ethical and legal obligations
For in-house counsel, hesitation surrounding AI adoption is justified, given their fiduciary, regulatory, and ethical obligations to safeguard client information. Under ABA Model Rule 1.6, Confidentiality of Information, attorneys must make reasonable efforts to prevent breaches and avoid the loss of client data. Failure to comply can result in ethical violations under ABA Formal Opinion 483 and expose companies to costly lawsuits.
Consider a notable case from the summer of 2023: two New York City lawyers were sanctioned for submitting a legal brief containing fabricated citations generated by ChatGPT. The lawyers had relied on ChatGPT for research but failed to verify its outputs, leading to a severe ethical lapse. The presiding judge noted that while there is nothing “inherently improper” about using AI for assistance, ethical standards require attorneys to act as gatekeepers, ensuring the accuracy of their filings.
This cautionary tale for in-house counsel underscores the need for robust oversight when leveraging AI in legal workflows.
Insurance gap
The insurance industry is still exploring potential solutions to AI risks, creating some uncertainty during this development period. While policies such as Errors and Omissions (E&O) and Directors and Officers (D&O) insurance can offer some protection, specific AI-related exclusions are beginning to surface in response to evolving claims. For instance, insurers are considering excluding liabilities tied to discriminatory hiring algorithms or IP infringement by generative AI tools. However, widespread adoption of these exclusions is likely still 18–24 months away, as carriers wait for significant financial losses to justify policy revisions.
This presents both a challenge and an opportunity for in-house counsel. To proactively mitigate risks, counsel should evaluate existing insurance policies for coverage gaps and explore emerging options tailored to AI liability. It’s also critical to assess whether insurers demonstrate expertise in underwriting AI-related risks, particularly for businesses in regulated industries or those heavily reliant on AI technologies.
Managing AI liability
Beyond securing appropriate insurance, in-house counsel must develop internal safeguards to minimize AI-related exposure. Key steps include:
- Conducting comprehensive risk assessments. Evaluate how AI is integrated across operations and identify specific areas of liability, such as employment practices, data privacy, or intellectual property.
- Establishing AI governance policies. Draft clear, written policies that define acceptable AI use, require human oversight, and establish review protocols to address biases or errors in AI outputs. Such policies not only reduce liability but may also serve as a defense in litigation.
- Performing vendor due diligence. If utilizing third-party AI tools, conduct thorough audits to ensure compliance with legal standards and verify that tools do not introduce unintended risks. Vendors should also be held accountable for their products’ reliability and biases.
- Leveraging external expertise. Partnering with third-party auditors or consultants specializing in AI can provide an additional layer of protection by identifying and mitigating risks before they materialize.
Changing insurance landscape
As AI adoption grows, the insurance market will evolve to reflect its unique risks. In-house counsel should anticipate these changes by monitoring policy trends and advocating for coverage that aligns with their organization’s specific needs. This includes staying informed about AI-related exclusions that could limit future claims and weighing the cost-benefit of using AI tools that increase liability exposure.
For example, while developing proprietary AI tools may reduce some risks, it can be cost-prohibitive. Conversely, relying on third-party AI tools requires stringent vetting and ongoing oversight to mitigate liability. The ultimate goal is to balance the benefits of AI innovation with the potential risks it introduces.
Call to action
AI offers transformative potential, but its use must be tempered by careful consideration of legal, ethical, and financial implications. By proactively addressing AI risks through sound governance, due diligence, and strategic insurance planning, in-house counsel can protect their organizations from costly litigation and reputational damage.
In an era where technology outpaces regulation, the role of in-house counsel is more critical than ever. Ensuring organizational readiness for AI-related liability is not just about managing risks — it’s about enabling responsible innovation.