Facebook will stop using its facial recognition system and delete the biometric data of more than a billion people, the social media giant has announced.
The company has been using facial recognition since 2010 to automatically detect people in photos and videos, creating one of the largest known repositories of biometric information in the world.
According to a blog post by Jerome Pesenti, vice-president of artificial intelligence at Facebook’s parent company Meta, the self-imposed moratorium is part of a company-wide move to limit the use of facial recognition in its products.
“We need to weigh the positive use cases for facial recognition against growing societal concerns, especially as regulators have yet to provide clear rules,” he wrote, adding the change would “result in the deletion of more than a billion people’s individual facial recognition templates”.
Pesenti added that facial recognition could be helpful in a narrow set of use cases, including identity verification for financial products or accessing personal devices, which the company will continue working on while ensuring “people have transparency and control over whether they are automatically recognised”.
The move follows Facebook’s change of its corporate name to Meta at the end of October 2021, part of a rebrand designed to push the company’s “metaverse” – its vision of a future internet that uses augmented reality (AR) and virtual reality (VR) to change how people interact both online and in the real world.
However, while Facebook said the facial recognition templates would be deleted by December 2021, the company will retain its use of the DeepFace algorithm that powers the system. It has also not ruled out incorporating facial recognition in any future products.
“We believe this has the potential to enable positive use cases in the future that maintain privacy, control and transparency, and it’s an approach we’ll continue to explore as we consider how our future computing platforms and devices can best serve people’s needs,” wrote Pesenti. “For potential future applications of technologies like this, we’ll continue to be public about intended use, how people can have control over these systems and their personal data, and how we’re living up to our responsible innovation framework.”
In 2020, Facebook was forced to pay $650m to settle a class action privacy lawsuit (originally filed in 2015) for allegedly using the biometric data of nearly 1.6 million users in Illinois without their consent, in contravention of the state’s Biometric Information Privacy Act.
When the Federal Trade Commission (FTC) fined Facebook $5bn in 2019, the company’s confusing controls and settings around how and when facial recognition would be used were cited as one of the reasons for the penalty.
The move to limit facial recognition makes Facebook the latest major technology firm to voluntarily restrict its own use of the technology.
In June 2020, in the wake of mass protests against the police murder of George Floyd, tech giants Amazon, Microsoft and IBM all agreed to halt sales of their respective facial recognition technologies to US law enforcement agencies.
Calls to either legislate against or outright ban the use of facial recognition technology, especially in public spaces, have picked up pace throughout 2021.
In the UK, for example, the former commissioner for the retention and use of biometric material, Paul Wiles, told the House of Commons Science and Technology Committee in July 2021 that while there was currently a general legal framework governing the use of biometric technologies, their pervasive nature and rapid proliferation meant an explicit legal framework was needed.
While most of the committee’s discussion centred around police use of biometrics, Wiles said the pervasiveness and use of such technologies in the private sector would also need to be addressed by new legislation.
“It will be possible in the future to use live facial recognition purely for a private commercial profit motive interest, without necessarily making the individual aware that it is going on,” he said. “This is simply the analogue of what we’re already seeing in the use made of the data that every day all of us give, not just to big tech companies but to small companies as well, and the fact that they are exploiting that and selling that data on without us really understanding.”
In June 2021, information commissioner Elizabeth Denham said she was “deeply concerned” about the inappropriate and reckless use of live facial recognition (LFR) in public spaces, prompting her to publish an official information commissioner’s opinion to act as guidance for companies and public organisations looking to deploy biometric technologies.
In an accompanying blog post, she noted: “It is telling that none of the [private] organisations involved in our completed investigations were able to fully justify the processing and, of those systems that went live, none were fully compliant with the requirements of data protection law. All of the organisations chose to stop, or not proceed with, the use of LFR.”
In the same month, two pan-European data protection bodies – the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) – jointly called for a general ban on the use of automated biometric identification technologies in public spaces, arguing that they present an unacceptable interference with fundamental rights and freedoms.