Is facial recognition good? It provides a reliable form of digital identification that does not require every user to have a smartphone or other authentication device. It removes the need for physical contact with shared biometric devices, such as fingerprint scanners, supporting virus mitigation efforts during the pandemic. By flooding our buildings and cities with facial recognition-enabled video surveillance, we can identify dangers before they materialise, create unprecedented efficiencies, and provide personalised services like never before. Facial recognition technology is a fundamental element in most human-centric visions of futuristic buildings and cities.
Is facial recognition bad? No other technology erodes user privacy more than one which can automatically recognise a face and link it to a database of information about each user. Facial recognition can be misused for socio-economic and racial profiling that unfairly limits opportunities for certain users; it allows the data-holder to drive their own agenda with limited oversight, and it can be used to control people through fear. By flooding our buildings and cities with facial recognition-enabled video surveillance, we foster a divergent and dangerous “underground” class seeking to avoid over-monitoring. Facial recognition technology is a fundamental element in most dystopian visions of our future buildings and cities.
It is left to lawmakers to reconcile these different perspectives, to ensure society benefits from the good that facial recognition can provide, while preventing or limiting the bad side of this powerful technology. While the same technology is being developed around the world, legislation governing it varies widely between regions, creating a complex landscape for the global introduction and operation of facial recognition tech. Where privacy is protected, the market suffers, and where the market is protected, privacy suffers, leaving each nation and bloc to find the right balance for its society and industry.
“Facial recognition and other forms of biometric data analysis are still relatively new technologies, and current regulations around them are very much a work in progress. However, given the sensitive nature of biometric data, many regulators are quickly moving to firm up legislation on how that data is collected and stored,” explains our latest physical security market research. “Increasingly stringent regulations governing the usage of the technology in various nations around the world could naturally impede market growth over time.”
In the EU, lawmakers presented official documentation for a new risk-based model for regulating AI within the bloc’s single market in April 2021. Under the proposed legislation, most uses of AI would not face any new regulation, but a subset of “high risk” use cases will be subject to limitations or outright bans. Facial recognition technology, as well as any involuntary human-scanning tech or subliminal techniques, will likely fall into that high-risk category and face strict regulation. As with GDPR, the impact will not be felt only in Europe: the new regulations will be extraterritorial in scope, so they will apply to any company selling an AI product or service into the EU.
“On AI, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” says Margrethe Vestager, Executive Vice-President for A Europe fit for the Digital Age. “By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”
In the UK, the government is consulting on revisions to the Code of Practice to reflect changes in legislation. This is the first revision to the code since its introduction in June 2013. The code, issued under section 30 of the Protection of Freedoms Act 2012 (PoFA), provides guidance on the appropriate use of surveillance camera systems by local authorities and the police. Subject to the comments received, the government’s intention is to lay the draft code before parliament in late autumn 2022. Biometrics and Surveillance Camera Commissioner, Professor Fraser Sampson, is responsible for encouraging compliance with the code and continuously reviewing its performance.
“There can be no doubt that technologies using surveillance and biometric data are progressing at a rapid pace. Clearly, the use of such technologies can be intrusive to privacy and raises other human rights considerations. However, when used ethically and accountably, the technology can also provide significant opportunities for law enforcement,” said Prof. Sampson in a formal DCMS consultation. “Finding the right balance between the privacy concerns and entitlements of the individual, while harnessing new technology ethically, accountably, and proportionately, is proving a significant challenge for policing today; tomorrow’s technology will make it even more so.”
In the US, there have been federal, state, and municipal-level efforts to regulate AI technology, as well as class-action lawsuits taking aim at high-profile companies that use facial recognition. While there is no federal law to specifically regulate the technology, numerous bills have been proposed, such as the Commercial Facial Recognition Privacy Act of March 2019, the Ethical Use of Facial Recognition Act of 2020, and the Facial Recognition and Biometric Technology Act, also of 2020. However, none of these federal proposals moved past the introduction phase, leaving consumer protection regulation to individual cities and states and adding to the complexity of the technology’s use and the privacy landscape in the US.
“US facial recognition regulation, in the form of bans, is on the rise at the city level. Some bans are broader and include private actors, like stores and restaurants, while most only prevent the use of facial recognition by public actors, like police departments,” reads our comprehensive security report. “A total of 14 cities, 1 county, and 1 state have not only banned government use of facial recognition, but are effectively boycotting the purchase of any product that includes facial recognition, with the City of Oakland even declaring that they are ‘sending a message to the market not to develop these products’. The state of Illinois has been a leader in biometric data privacy, and other states, including Texas and Washington, have followed in Illinois’ footsteps.”
China serves as an example of “flooding our buildings and cities with facial recognition-enabled video surveillance” and, as such, the nation is rife with accusations of socio-economic and racial profiling, while also triggering divergent groups opposed to the technology and the regime. However, by operating the most advanced facial recognition tech in the world, China has also reaped the benefits of virus mitigation, law enforcement, efficiency, and population control. As major facial recognition systems in China are commissioned and operated by the government itself, regulation becomes somewhat irrelevant, allowing the industry and technology to advance much further than in other parts of the world; however, the contrasting approach to privacy has created challenges for Chinese firms in Europe and the US.
“Chinese AI unicorns have benefited from huge capital injections and government support. Their capabilities will surely become even stronger if they are able to successfully list and continue investment via the amounts of new capital raised,” reads our new physical security report. “What remains to be seen [after many were blacklisted in the US and EU], is whether they can effectively market their solutions outside the Chinese domestic market as well as sustain their technological advantage, growth and profitability while at the same time addressing global privacy concerns related to largely unregulated public surveillance in smart cities.”
As long as there is uncertainty around facial recognition regulation, there will be hesitancy in the funding and development of the technology; that is the case in Europe and the US. The EU will likely see more certainty soon, as new regulation is enforced, allowing companies to develop appropriate technology for the market. It is unclear when certainty on facial recognition will arrive in the US, meaning user privacy and tech development will suffer for the foreseeable future. China, meanwhile, will continue to show what a market flooded with facial recognition looks like and how it works, albeit in isolation. Today, the diversity of the global market is providing a test bed for the good, the bad, and the legislation around facial recognition technology.