
Auditor raises alarm over inequalities in law enforcement AI tool

An artificial intelligence tool used to identify individuals in law enforcement investigations, airport security and public housing surveillance disproportionately harms people of color and women, according to a new government watchdog report.

Facial recognition technology, criticized by civil rights advocates and some lawmakers for privacy violations and inaccuracies, is increasingly being used by federal agencies with little regulation, the U.S. Commission on Civil Rights found.

“The unregulated use of facial recognition technology poses significant civil rights risks, particularly for marginalized groups that have historically borne the brunt of discriminatory practices,” Commission Chair Rochelle Garza said. “As we work to develop AI policies, we must ensure that facial recognition technology is rigorously tested for fairness and that any disparities detected across demographic groups are promptly addressed or its use is suspended until the disparity is addressed.”

Rapidly evolving facial recognition tools are increasingly being used by law enforcement, but there are no federal laws regulating their use.

At least 18 federal agencies use facial recognition technology, according to the Government Accountability Office. Beyond federal use, the Justice Department has allocated $4.2 million since 2007 to local law enforcement agencies nationwide for programs that use facial recognition tools at least in part, public records show.

FBI turns facial recognition software on extensive databases

The 184-page report, released this month, details how federal agencies have quietly deployed facial recognition technology across the U.S. and the potential civil rights violations it poses. The commission specifically scrutinized the Justice Department, the Department of Homeland Security and the Department of Housing and Urban Development.

“While there is intense debate about the benefits and risks associated with the use of FRT at the federal level, many agencies already use this technology,” the report said, adding that it could lead to serious consequences such as wrongful arrests, unfair surveillance, and discrimination.

A facial recognition system uses biometric software to map a person’s facial features from a photo. The system then tries to match the face against a database of images to identify the person. Accuracy depends on a variety of factors, including the quality of the algorithm and of the images used. The commission said that even with the highest-performing algorithms, tests have shown that false matches are more likely for certain groups, including older adults, women and people of color.

The U.S. Marshals Service uses facial recognition tools in investigations of fugitives, missing children, major crimes and protective security duties, the commission’s report said, citing the Justice Department. The marshals have had a contract with facial recognition software company Clearview AI for several years. In February 2022, members of Congress called for banning the use of Clearview AI products and other facial recognition systems, citing potential civil rights violations, including threats to privacy.

The FBI’s use of facial recognition technology dates back to at least 2011. The Justice Department told commissioners that the FBI has run facial recognition software against a wide range of images, including booking photos, driver’s licenses, public social media accounts, public websites, cellphones, security camera footage and photos held by other law enforcement agencies.

The U.S. Government Accountability Office has been investigating the FBI’s use of facial recognition technology since 2016. In a report that year, the office said the FBI “must better ensure confidentiality and accuracy.”

The Justice Department, which oversees the FBI and the Marshals Service, announced an interim policy on facial recognition technology in December 2023, saying the tools should be used only to generate leads in an investigation. The commission said there was not enough data on the department’s use of FRT to confirm whether the policy has been followed.

The FBI declined to comment on the report when reached by USA TODAY. The Justice Department and the U.S. Marshals Service did not respond to requests for comment.

Artificial intelligence tool used in border control and immigration investigations

The commission found that the Department of Homeland Security, which oversees immigration enforcement and airport security, uses facial recognition tools at many agencies.

The report notes that U.S. Immigration and Customs Enforcement has been conducting searches using facial recognition technology since it signed a contract with biometric defense firm L-1 Identity Solutions in 2008.

The contract allowed ICE to access the Rhode Island Department of Motor Vehicles’ facial recognition database to find undocumented immigrants accused or convicted of crimes, the commission wrote, citing a 2022 study from the Georgetown Privacy and Technology Law Center.

Facial recognition technology is also being used to verify people’s identities at airports, seaports and pedestrian crossings at southwest and northern border ports of entry. The report noted that in 2023, civil rights groups reported that the U.S. Customs and Border Protection mobile app had difficulty identifying Black asylum seekers who were trying to book appointments. This year, CBP said the app had an accuracy rate of over 99% for people of different ethnicities, the commission’s report said.

Department of Homeland Security spokeswoman Dana Gallagher told USA TODAY that the department values the commission’s input and that DHS has been at the forefront of rigorous testing for bias.

According to the report, the department opened a 24,000-square-foot lab in 2014 to test biometric systems. The Maryland Test Facility, which the commission visited and documented, served as “a model for testing facial recognition systems in real-world environments,” Gallagher said.

“DHS is committed to protecting the privacy, civil rights and civil liberties of all individuals we interact with as we carry out our mission to keep the homeland secure and the traveling public safe,” Gallagher said.

Public housing agencies are using facial recognition tools

The commission said some surveillance cameras in public housing contain facial recognition technology that has led to evictions for minor infractions, a concern that has been raised since at least 2019.

The report notes that the U.S. Department of Housing and Urban Development did not develop any of the technology itself, but gave grants to public housing agencies to purchase cameras using the technology and then “put the FRT in the hands of the grantees without any regulation or oversight.”

Public housing tenants are disproportionately women and people of color, meaning that use of the technology could lead to Title VI violations, the commission warned. In April 2023, HUD announced that Emergency Safety and Security Grants could no longer be used to purchase the technology, but the report noted that the change did not restrict recipients already using the tool from continuing to do so.

The commission cited a May 2023 Washington Post investigation that found cameras were being used to catch residents committing minor infractions, such as smoking in the wrong area or removing a cart from the laundry room, in order to pursue evictions. Attorneys defending evicted tenants also reported an increase in cases citing surveillance footage as evidence to remove people from their homes, the Post reported.

The Department of Housing and Urban Development did not respond to USA TODAY’s request for comment.

Civil rights group hopes report will prompt policy changes

Tierra Bradford, senior program manager for justice reform at the Leadership Conference on Civil and Human Rights, told USA TODAY she was excited to see the report and hopes it leads to more action.

“I think this addresses a lot of the concerns that we’ve had in the justice area for some time,” Bradford said.

Bradford added that the U.S. criminal justice system has a history of disproportionately targeting marginalized communities, and facial recognition tools appear to be another example of that problem.

“There should be moratoriums on technology that has been shown to have real bias and disparate impacts on communities,” she said.

National debate over facial recognition tools

The commission’s report comes after years of debate over the use of facial recognition tools in the public and private sectors.

The Detroit Police Department announced in June that it would revise its policies on how the technology is used to solve crimes as part of a federal settlement with a Black man who was wrongfully arrested for theft in 2020 based on facial recognition software.

The Federal Trade Commission last year banned Rite Aid from using AI facial recognition technology after the retailer subjected customers, particularly people of color and women, to unfair searches. The FTC said the system based its alerts on low-quality images, resulting in thousands of false matches and in customers being searched or kicked out of stores for crimes they did not commit.

A Texas man who was wrongfully arrested and spent nearly two weeks in jail filed a lawsuit in January blaming facial recognition software for misidentifying him as a suspect in a store robbery. According to the lawsuit, AI software used at a Sunglass Hut in Houston matched low-quality surveillance footage of the crime to Harvey Murphy Jr., resulting in a warrant for his arrest.

Nationally, members of the Civil Rights Commission said they hope the report will inform lawmakers about the use of the rapidly evolving technology. The agency is pushing for a testing protocol that agencies can use to check how effective, fair and accurate their software is. It also recommends that Congress provide a “legal mechanism for redress” for people harmed by FRT.

“I hope this bipartisan report helps inform public policy that will address the multitude of issues related to artificial intelligence (AI) generally, but specifically in this case, facial recognition technology,” said Commissioner Stephen Gilchrist. “Our country has a moral and legal obligation to ensure that the civil rights and civil liberties of all Americans are protected.”