Ryan Costello is engagement director, data privacy services, at ProSearch. Views are the author’s own.
The use of artificial intelligence in legal processes brings opportunities to streamline discovery in ways that benefit clients and reduce costs.
However, while AI holds promise as a driver of business growth and productivity, it also presents concerns that are attracting regulatory scrutiny. Now is the time to take practical steps to prepare for compliance.
Why regulate AI?
Risks of misuse or unintended consequences are front and center with regulators.
AI has the potential to perpetuate and exacerbate biases if safeguards are not in place.
Concerns about AI resemble privacy concerns, in part because of the risks AI poses to data privacy and security. It’s notable that much of the language being used around the concept of responsible AI – notice, transparency and accountability – is the same language we find in the EU’s General Data Protection Regulation and other privacy laws.
Overall, more oversight is needed to ensure AI does not reinforce biases and is deployed ethically. Key considerations include evaluating training data, auditing algorithms and providing verification of outcomes or decisions made by AI. With vigilance, AI can be shaped to promote fairness and avoid prejudice.
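As a concrete illustration, a first pass at evaluating training data might simply measure how each demographic group is represented and what outcomes the data records for it. The sketch below is a minimal Python example; the DataFrame, its "group" and "label" columns, and the 10% representation floor are hypothetical assumptions for illustration, not regulatory standards.

```python
import pandas as pd

# Hypothetical training data; in practice this would be loaded
# from the dataset used to train the model under review.
df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "A", "A", "B", "B", "C", "C"],
    "label": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
})

# Share of training examples contributed by each group.
representation = df["group"].value_counts(normalize=True)

# Base rate of positive labels within each group; large gaps here
# can propagate into biased model outputs.
base_rates = df.groupby("group")["label"].mean()

print(representation)
print(base_rates)

# Flag any group below a chosen representation floor (the 10%
# threshold is an arbitrary illustration, not a standard).
underrepresented = representation[representation < 0.10]
if not underrepresented.empty:
    print("Underrepresented groups:", list(underrepresented.index))
```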
Regulatory efforts
Just as the EU took the lead in data privacy with the adoption of the GDPR, it has since focused on regulating AI.
On December 8, 2023, the Council of the EU and the European Parliament reached a provisional agreement on the European Union’s Artificial Intelligence Act. As a result, the EU AI Act is likely to become the world’s first comprehensive law regulating artificial intelligence.
The act imposes transparency requirements on “high-risk” AI systems, allowing regulators to assess compliance. It focuses on limiting the potential for AI systems to cause harm through inaccurate outputs, discrimination, lack of transparency and diminished human control.
In addition to the new AI rules, the EU regulatory playbook includes the Digital Services Act (DSA), which targets the spread of misinformation, hate speech and terrorist propaganda. The DSA came into effect for very large online platforms and search engines in August 2023 and becomes fully applicable to other covered entities in February 2024.
In the U.S., federal guidance and state-level initiatives are tackling AI concerns through similar transparency and accountability measures.
In July 2023, the Biden administration announced it had secured voluntary commitments from leading artificial intelligence companies to manage the risks posed by AI. In October 2023, the administration followed with a presidential executive order on safe, secure and trustworthy artificial intelligence, intended to ensure that the United States leads the way in seizing the promise and managing the risks of AI. The order establishes standards for AI safety and security, protects Americans’ privacy, and advances equity and civil rights.
Regulatory approaches
Regulators have several methods for monitoring AI systems and ensuring compliance with regulations.
Transparency
Transparency rules require disclosures about how AI systems are being used, their limitations, and technical details of their architecture. For example, the EU AI Act will mandate algorithmic transparency provisions for high-risk AI systems, expanding existing requirements under the GDPR. Companies will need to clearly communicate when an individual is interacting with an AI system and provide some explanation of the logic behind AI-generated decisions or outcomes.
Public transparency includes sharing statistics on an algorithm’s accuracy and fairness, its underlying data and technical design, and comprehensive impact assessments. Cross-organizational transparency – between businesses, or between businesses and regulators – allows more detailed sharing of private information about proprietary algorithmic systems. This helps organizations adapt algorithms to new purposes and enables regulators to better understand those systems, with reduced risk of intellectual property theft.
System audits
Algorithm audits allow regulators to directly investigate and evaluate AI systems for potential flaws. Previous audits have uncovered issues like inaccuracy, discrimination, distortion of information, misuse of data and legal violations.
Under the EU AI Act, regulators will be empowered to demand information from companies to comprehensively assess high-risk algorithmic systems. This enables an in-depth analysis beyond just reviewing documentation. The FTC already has the authority to audit algorithmic systems using its investigative powers.
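To make the idea of an algorithm audit concrete, the fragment below sketches one statistic an auditor might compute: per-group selection rates and a disparate impact ratio over hypothetical model decisions. The column names are assumptions for the example, and the 0.8 threshold echoes the “four-fifths rule” from U.S. employment guidance rather than anything prescribed by the EU AI Act.

```python
import pandas as pd

# Hypothetical audit inputs: model decisions and the protected
# attribute of each affected individual.
results = pd.DataFrame({
    "group":    ["A"] * 6 + ["B"] * 6,
    "selected": [1, 1, 1, 0, 1, 0,   1, 0, 0, 0, 1, 0],
})

# Selection rate per group: P(selected | group).
rates = results.groupby("group")["selected"].mean()
print(rates)

# Disparate impact ratio: lowest selection rate divided by the
# highest. The 0.8 cutoff mirrors the "four-fifths rule" used in
# U.S. employment contexts; it is shown here purely as an example.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact; flag for deeper review.")
```

Real audits would of course examine many more metrics – error rates, calibration, robustness – but even a simple disparity check of this kind can surface issues worth escalating.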
Regulatory sandboxes
A regulatory sandbox allows new technologies that are not yet fully compliant with the existing legal and regulatory framework to be tested in a real-life environment. AI regulatory sandboxes aim to improve ongoing communication between regulators and companies developing AI systems. By participating in a sandbox, companies can receive feedback from regulators earlier and more frequently. This collaboration is meant to ease compliance issues and provide more legal certainty for AI tools still in development.
Assurance
Third-party auditors are increasingly being used to certify AI systems and validate responsible development practices. As demands grow for explainability and assurances around AI, a rapidly emerging AI assurance industry is stepping in to provide these services. Certifications from independent auditors can give regulators, clients, and the public confidence that the systems are operating as intended.
Whistleblower reports
Finally, regulators are welcoming whistleblowers and reports from impacted individuals to identify real-world harms arising from problematic AI systems. Developers who build these systems are often best positioned to report AI issues, as they have intimate knowledge of how their systems work. Affected individuals who believe an AI system has wronged them in some way can also trigger investigations, although the opacity of many algorithms makes this difficult.
Best practices
Organizations are advised to keep an eye on regulatory decisions related to AI. Tracking the progress of these efforts will prepare legal teams to understand and comply with new requirements as they take effect. This will take some dedicated attention, as regulatory approaches vary across jurisdictions – and, in the U.S., from state to state.
Reduce regulatory risk by adopting responsible AI principles in business initiatives as early as possible. Doing so can help organizations stay ahead of impending regulatory changes.
For example, prioritize fairness by proactively mitigating biases through inclusive data practices and rigorous testing. Ensure reliability by confirming consistent performance across different scenarios. Be transparent about actual capabilities, limitations and data handling practices. Collaborate closely with IT to strengthen privacy and security protections.
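As a minimal sketch of that kind of reliability testing, the function below scores a model’s accuracy separately on each scenario slice of a held-out dataset. The "scenario" column and the model.predict interface are assumptions for the example, loosely following scikit-learn conventions.

```python
import pandas as pd

def accuracy_by_scenario(model, X: pd.DataFrame, y: pd.Series) -> pd.Series:
    """Compute accuracy separately for each scenario slice.

    Assumes X carries a 'scenario' column describing the setting each
    example came from, and that model.predict follows the scikit-learn
    convention of mapping a feature table to predicted labels.
    """
    features = X.drop(columns=["scenario"])
    preds = pd.Series(model.predict(features), index=X.index)
    correct = preds == y
    return correct.groupby(X["scenario"]).mean()

# Usage sketch: flag any scenario whose accuracy trails the best
# slice by more than an (illustrative) 10-point margin.
# per_slice = accuracy_by_scenario(model, X_test, y_test)
# gaps = per_slice.max() - per_slice
# print(per_slice[gaps > 0.10])
```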
Perhaps most importantly, incorporating meaningful human oversight throughout the entire life cycle of an AI system, from development to deployment, maintains accountability. Taking a thoughtful approach that integrates these AI best practices enables legal teams to harness the benefits of AI while remaining nimble in the face of regulatory changes to come.