Dive Brief:
- Rite Aid, the major drugstore chain reorganizing under bankruptcy protection, has agreed to terms set by the Federal Trade Commission over a now-ended AI-based facial recognition program it piloted to reduce crime in select stores.
- The company agreed not to restart the program for five years and, should it restart it after that, to put safeguards in place to protect customer privacy and limit the false positives that marred the program before it was voluntarily shut down in 2020.
- “We respect the FTC’s inquiry and are aligned with the agency’s mission to protect consumer privacy,” the company said in a statement. “However, we fundamentally disagree with the facial recognition allegations in the agency’s complaint. The allegations relate to a facial recognition technology pilot program the Company deployed in a limited number of stores. Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC’s investigation.”
Dive Insight:
The FTC’s enforcement action is part of a broader push the agency announced earlier this year to step up its scrutiny of companies’ use of automated facial recognition and other biometric-based surveillance systems.
In its complaint against Rite Aid, which owns some 1,900 stores in the United States, the agency accuses the company of failing to put adequate checks on facial recognition software that it hired vendors to deploy, and which it used to build a database of people suspected of shoplifting or other criminal behavior in its stores.
The system was populated with security camera and other images of people the company called “persons of interest” and sent an alert to employees whenever the algorithm made what it thought was a match between a person in the database and a person entering the store.
The FTC says it gathered evidence showing the system was rife with inaccuracies, producing thousands of false positives that led to in-store confrontations that left customers feeling humiliated and their reputations harmed.
In one case, an employee stopped and searched an 11-year-old girl because of a false match, causing the girl’s mother to miss work because of the trauma her daughter experienced. In another, an employee called the police on a customer because the technology generated an alert, even though the database entry showed a white woman with blonde hair and the customer who was targeted was Black.
“Consumers complained to Rite Aid that they had experienced humiliation and feelings of stigmatization as a result of being confronted by Rite Aid’s employees based on false positive facial recognition matches,” the FTC said.
The pilot was concentrated in market areas with predominantly minority populations, and the technology made what the FTC considered obvious mistakes. For example, it wasn’t unusual for the system to send alerts to stores in one or several cities at once even though the person of interest was based in another city.
The FTC says the company kept poor records on the system’s accuracy, hindering the software’s ability to learn from its mistakes, and it failed to train employees adequately on the system’s use.
“Rite Aid’s training materials … did not address the risks to consumers from using the technology,” the agency says.
In the settlement, which must be approved by a court, the company agreed to delete all of the data it accumulated in the program and make sure vendors and other third parties that have access to the data also delete it.
Should it restart the program, Rite Aid must have a plan in place for protecting people’s data and improving the program’s accuracy through regular testing and employee training, among other things, and must monitor whether the system disproportionately affects people by race or gender.
The order also includes provisions relating to another order the company has been subject to since 2010, which stems from its earlier failures to meet data protection requirements.
“Today’s groundbreaking order makes clear that the Commission will be vigilant in protecting the public from unfair biometric surveillance and unfair data security practices,” Samuel Levine, director of the FTC’s consumer protection bureau, said in a statement.
In a separate statement, FTC Commissioner Alvaro Bedoya said the system’s bias against minorities is part of a broader problem with AI algorithms, and he called on lawmakers to step in with legislation while the technology is new.
“Many technologies should never be deployed in the first place,” he said. “I urge legislators who want to see greater protections against biometric surveillance to write those protections into legislation and enact them into law.”