San Francisco becomes first US city to ban use of facial recognition technology by police
Comes amid mounting fears of misuse of the technology
By a vote of 8-1, the city's Board of Supervisors moved to ban the use of a technology that has become increasingly commonplace in places such as airports, and which is also being used by some police forces.
Those who support the ban say the technology is not only flawed, but a serious threat to civil rights, especially in a city that is celebrated for public protest and privacy.
They also worry that people will one day be unable to visit a mall, a park or a school without being identified and tracked.
“Good policing does not mean living in a police state,” said supervisor Aaron Peskin, who introduced the measure, at a hearing last week. “Living in a safe and secure community does not mean living in a surveillance state.”
Groups such as the ACLU point to the organisation's own test of Amazon's facial recognition programme, Amazon Rekognition, as proof: it scanned images of members of Congress and compared them to archived arrest photos. Twenty-eight legislators were incorrectly matched, including six members of the Congressional Black Caucus.
Meanwhile, critics of the ban say police need all the help they can get, especially in a city with high-profile events and high rates of property crime. Expecting privacy in public spaces is unreasonable given the proliferation of cell phones and surveillance cameras, said Meredith Serra, a member of Stop Crime SF, a resident public safety group.
“To me, the ordinance seems to be a costly additional layer of bureaucracy that really does nothing to improve the safety of our citizens,” she said at the same hearing.
San Francisco’s new rule, which is set to go into effect in a month if a second vote, considered a formality, goes ahead, forbids the use of facial recognition technology by the city’s 53 departments, including the San Francisco police department, which does not currently use such technology, according to CNN. However, the ordinance carves out an exception for federally controlled facilities at San Francisco International Airport and the Port of San Francisco.
Late last year, the president of Microsoft called for greater government regulation of AI facial recognition technology, because of the risk of it discriminating against women and people of colour.
Brad Smith said such regulation would help avoid “a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success”.
The comments of Mr Smith, 59, which were released at the same time as a report by a research group that includes both Microsoft and Google employees and that also called for more regulation, are especially noteworthy because of the controversy the company triggered earlier this year over remarks about its work with AI.
In June, the company’s general manager, Tom Keane, wrote about how proud Microsoft was to be working with the US Immigration and Customs Enforcement agency (ICE), using facial recognition technology to help identify immigrants and process applications.
Additional reporting by Associated Press