The U.S. Equal Employment Opportunity Commission has outlined steps it says employers should take to audit AI and other automated solutions for “disparate impact” under Title VII of the Civil Rights Act of 1964.
The technical assistance document, published Thursday, isn’t binding but is instead meant to “encourage employers to conduct an ongoing self-analysis to determine whether they are using technology in a way that could result in discrimination,” the agency’s chair, Charlotte A. Burrows, said in a statement.
Title VII prohibits discrimination in the workplace based on race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), or national origin. “Disparate impact” refers to discrimination that occurs when a policy or practice has a significant negative effect on members of a Title VII-protected group and is not job-related and consistent with business necessity, according to EEOC enforcement guidance.
The document defines key terms and offers Q&As for AI audits. It also cautions employers that they may be responsible for such tools even if they’re designed and administered by a vendor.
“Therefore, employers that are deciding whether to rely on a software vendor to develop or administer an algorithmic decision-making tool may want to ask the vendor, at a minimum, whether steps have been taken to evaluate whether use of the tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII,” the document suggests. “Further, if the vendor is incorrect about its own assessment and the tool does result in either disparate impact discrimination or disparate treatment discrimination, the employer could still be liable.”
EEOC has made clear in recent months that it intends to keep a close eye on employer AI use. Vice Chair Jocelyn Samuels said at a February event that the commission’s goal was to ensure “we can enjoy the benefits of new technology while protecting the fundamental civil rights that are enshrined in our laws.”
The workplace bias watchdog is not alone, either.
The Consumer Financial Protection Bureau, the U.S. Department of Justice and Federal Trade Commission have also stated a commitment to rooting out discriminatory uses of the tech, and the White House earlier this month said it plans to examine the use of automated tools that track and manage workers.
Regulation is already in the works elsewhere, too. New York City recently adopted a law requiring that employers conduct audits of such tools. Enforcement is slated to begin July 5.