It wasn’t so long ago that opposition to facial recognition technology was confined to a small number of privacy activists who were dismissed by the tech industry as irrelevant and alarmist. The protests that have exploded in the wake of the George Floyd killing have changed all that, and now Silicon Valley is changing, too.
Last week, the giants of the industry acted. Amazon announced a one-year moratorium on selling its facial recognition platform Rekognition to police and called on the government to regulate its use. IBM told the U.S. Congress that it would stop selling facial recognition products and end research and development in the field. Microsoft said that it too would stop selling the technology to U.S. police departments until regulations were in place.
The change isn’t coming just from sellers but from buyers as well. Several U.S. cities have banned their police departments from using facial recognition tools. Earlier this year, the European Union said it was weighing restrictions on the technology for the next five years. Amazon, Google and Microsoft employees have protested their companies’ sales of the technology to the U.S. military.
Yet in many parts of the world, security forces are rushing to adopt the technology, even though at this stage it is not sufficiently reliable and in certain cases raises serious ethical issues.
Facial recognition is not a new technology. It’s used on a daily basis for innocent purposes, such as unlocking smartphones or automatically tagging images of people in social media and apps. It is also used in biometric passports and identity cards.
But its biggest potential, as well as its biggest danger, lies in its use by law enforcement authorities and the military. The dream of every police force is to be able to input the photo of every suspect, or even of ordinary people deemed problematic, into a database connected to a network of security, body and drone cameras, and instantly identify them when their image is captured.
However, apart from the issue of accuracy, research has shown over and over how the technology – most of it designed by white male engineers – suffers from racial and gender biases. One study conducted by Massachusetts Institute of Technology researchers Joy Buolamwini and Timnit Gebru in 2018 found that the error rates for determining the sex of light-skinned men were never worse than 0.8%, while for darker-skinned women they rose to as much as 34%.
Other research found that facial recognition tools had varying levels of success according to the age, sex and race of the subjects. A study by the American Civil Liberties Union, for instance, showed that the Amazon product incorrectly matched 28 members of Congress with photos of people who had been arrested.
The use of the software during the protests that have erupted across the United States over the past two weeks, together with heightened racial tensions, created a perfect storm for the technology. Facial recognition tools are seen as yet another manifestation of discrimination against black Americans, which is what forced technology companies to pull back from it, even if only partially and temporarily.
For the big technology companies, facial recognition is a small and marginal part of their business that they can afford to jettison if it is causing them too many problems. The tech news site The Information said Amazon generated just $3 million in revenues in 2018 from facial recognition.
The segment is, in fact, dominated by a handful of startup companies, most of them located outside the United States. For them, facial recognition is their core business and they don’t plan to give up on it. For example, Japan’s NEC said over the weekend that law enforcement authorities needed the technology to protect the public.
The market research firm IHS Markit estimates that about half of the global market is controlled by Chinese companies such as Hikvision Digital Technology, Dahua Technology, Huawei and Megvii. The Carnegie Endowment for International Peace estimates that 52 governments in Asia and Africa use Chinese facial recognition technology.
The Israeli startup AnyVision also has no plans to leave the business. Alex Zilberman, the company’s chief operating officer, said demand for its technology has been growing lately.
“IBM’s declaration sounds to me a bit puzzling, a little like it’s raising its hands in surrender,” he said. “To say you shouldn’t use facial recognition technology is ridiculous. I admit it isn’t easy because the technology is so powerful and it has risks, but compared to cloud computing companies, which provide raw capability and have no control over what’s done with their technology, we provide solutions that allow us to control what’s done with it. You can’t stop technology and innovations. You need to find a way to harness them and incorporate the proper safeguards.”
AnyVision has developed products and algorithms that are used, for instance, to control entry to stadiums, airports, stores and casinos and to cross borders. Since the outbreak of the novel coronavirus, its technology is also being used by hospitals. Zilberman said annual sales are in the tens of millions of dollars in 45 countries.
The company says it works to reduce the risks of bias and privacy violations. “We check who our customers are and don’t sell to countries that don’t have good governance and don’t sufficiently respect privacy rights,” he said. “We addressed the issue of bias long before it was cool to talk about it. Five years ago, we realized that in order to provide a system that does a good and accurate job, it has to be trained using balanced data for all types of populations and people.”
Nevertheless, AnyVision drew controversy last year after an investigation by TheMarker and America’s NBC television claimed its technology was being used to surveil West Bank Palestinians. The company denied it, and Zilberman noted that Microsoft, which has invested in AnyVision, denied the accusations, too.
In any case, Microsoft pulled out of AnyVision. “Because they don’t have control over what we do or transparency, they thought it would be better to continue as a commercial partnership, but ended the investment – which we agreed to,” said Zilberman. He added that the change had no effect on its business “apart from questions that come up from time to time.”
Another company that has faced controversy is the U.S. startup Clearview AI, which has sold hundreds of facial recognition systems to police departments in the United States based on a database of 3 billion photos gathered from social media and websites. A police officer can enter a picture of a suspect and have it matched with other pictures of the suspect on the internet.
In the wake of a recent exposé by The New York Times, Clearview AI faces multiple lawsuits. But over the weekend, it came out in defense of the technology. “We strongly believe in protecting our communities, and with these principles in mind, look forward to working with government and policy makers to help develop appropriate protocols for the proper use of facial recognition,” company CEO Hoan Ton-That said in a statement.
Outside the United States, facial recognition remains a popular tool for law enforcement agencies. The police in London recently began operating a system that identifies criminal suspects on the streets. But the technology is most popular among nondemocratic governments, first and foremost China, the world leader in the size and scope of its use, most notoriously to monitor citizens belonging to its Uighur minority. As a result, it’s hard to imagine that the recent announcements by America’s technology giants will have a global impact. The ball is now in the court of governments around the world.