
San Francisco bans police use of facial recognition

San Francisco – a tech-forward metropolis that nonetheless finds pervasive facial recognition (FR) to be “psychologically uncomfortable” – on Tuesday became the first major US city to ban police use of the technology.

Aaron Peskin, the city supervisor who sponsored the bill, told the New York Times that the Board of Supervisors’ 8-to-1 vote sends a strong message to the nation, coming as it does from a city whose DNA has been rewritten by technology.

Plenty of these technologies are birthed here, and their parent companies live here, he said, so it falls to San Francisco to rein them in when they run amok:

I think part of San Francisco being the real and perceived headquarters for all things tech also comes with a responsibility for its local legislators. We have an outsize responsibility to regulate the excesses of technology precisely because they are headquartered here.

Peskin pointed out that FR’s shortcomings lead to plentiful misidentifications. Case in point: the American Civil Liberties Union (ACLU) tested facial recognition technology used by police in Orlando, Florida, and found that it falsely matched 28 members of Congress with mugshots.

There are plenty of other cases in point when it comes to this error-prone technology. Here’s one: after two years of dismal failure rates at Notting Hill Carnival, London’s Metropolitan Police finally threw in the towel in 2018. In 2017, the “top-of-the-line” AFR system they’d been trialling couldn’t even tell the difference between a young woman and a balding man.

San Francisco’s new ordinance says that the city doesn’t think that FR is worth it:

The propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits, and the technology will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring.

The reference to racial injustice alludes to multiple reports, including one oft-cited study from Georgetown University’s Center for Privacy and Technology that found that automated facial recognition (AFR) is an inherently racist technology. Black faces are over-represented in face databases to begin with, and FR algorithms themselves have been found to be less accurate at identifying black faces.

In another study, published earlier this year by the MIT Media Lab, researchers confirmed that the popular FR technologies they tested have gender and racial biases.

Plus, pervasive surveillance is straight-up nasty, Peskin said:

It’s psychologically unhealthy when people know they’re being watched in every aspect of the public realm. On the streets, in parks… that’s not the kind of city I want to live in.

The ordinance bans the use of FR by police and city agencies and requires city departments to disclose any surveillance technologies they currently use or plan to use, as well as to spell out policies regarding them that the Board of Supervisors must then approve.

It doesn’t affect personal, business or federal government use of facial recognition technology. That means the use of FR at San Francisco International Airport and the Port of San Francisco, both controlled by the federal government, won’t be affected.

The ordinance won’t become law until the Board of Supervisors ratifies the vote next week, but the second vote is seen as a formality.

Critics say that an outright ban goes too far and doesn’t take into account the positive uses of the technology. NPR quoted Daniel Castro, vice president of the industry-backed Information Technology and Innovation Foundation, who says that other US cities shouldn’t follow San Francisco’s lead:

They’re saying, let’s basically ban the technology across the board, and that’s what seems extreme, because there are many uses of the technology that are perfectly appropriate.

We want to use the technology to find missing elderly adults. We want to use it to fight sex trafficking. We want to use it to quickly identify a suspect in case of a terrorist attack. These are very reasonable uses of the technology, and so to ban it wholesale is a very extreme reaction to a technology that many people are just now beginning to understand.

Similar legislation is under consideration in the nearby city of Oakland, and the Massachusetts Senate is considering a bill that would impose a moratorium on FR software in the state until the technology improves.

Notwithstanding error rates, plenty of police forces are still gung-ho about adopting, or expanding, use of the technology. One of those would be London’s Metropolitan Police.

In spite of abandoning it at the Notting Hill Carnival, and in spite of other high-profile failures, the Met says the technology is helping it catch violent criminals, and that it continues to improve.

Source: Naked Security
