“Artificial Intelligence (AI) is a means, not an end. It has been around for decades but has reached new capacities fueled by computing power. This offers immense potential in areas as diverse as health, transport, energy, agriculture, tourism or cyber security. It also presents a number of risks,” said Thierry Breton, the EU’s internal market commissioner, during the release of a new set of AI regulatory proposals for the union. “Today's proposals aim to strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”
The new EU proposals focus on the growing field of computer vision, defined as the ability of machines to recognize and process images and videos, much like the human brain but many orders of magnitude faster. While the human brain is still far superior in many ways, computers have the advantage of being able to identify multiple objects in an image simultaneously and process multiple streams of footage endlessly without needing a break. Where computer vision has been limited is in the full comprehension of images, as such systems depend on humans to train them to find only certain objects rather than understand the footage as a whole. This is where AI comes in, giving the computer the ability to learn what is worth identifying.
“Video analytics based on computer vision techniques for applications such as facial recognition were, until recently, prohibitively expensive, and required a high degree of customization, immense computing power, and expensive cameras,” explains our latest report – AI & Machine Learning in Smart Commercial Buildings. “However, due to tremendous advances in algorithm performance, falling costs for computing power, and the availability of high-resolution cameras at reasonable costs, computer vision-based video analytics have seen a surge in popularity, and have now become one of the fastest-growing fields of AI.”
These developments are now forcing regulators to take action to guide the growing market for AI-enabled video surveillance products in a way that protects citizens and drives economic growth. The potential of the technology for a wide range of smart building applications is undeniable, from health and safety to productivity enhancement, to space utilization and, of course, security. However, the power of AI-enabled video surveillance is so great that it also poses a threat to the privacy and consumer freedom of building occupants and wider society. With their latest proposal, the EU has positioned itself as a global leader in finding the balance between the good and bad aspects of the emerging technology.
“On Artificial Intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way,” said Margrethe Vestager, Executive Vice-President for A Europe fit for the Digital Age. “Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”
The new EU regulation analyzes AI solutions using a risk-based approach. Each system is classified as unacceptable, high-risk, limited-risk, or minimal-risk according to its capability to do harm to citizens. Unacceptable-risk systems are considered a clear threat to the safety, livelihoods and rights of people, and will be banned under the EU proposals. These include systems with the ability to manipulate human behavior to circumvent users' free will or those that would allow forms of “social scoring” by governments. These are, essentially, the doomsday AI applications that opponents of the technology have warned against for many years.
High-risk systems may have similar technological abilities to the unacceptable-risk group but appear to be categorized as high-risk by their field of application. Systems used in critical infrastructure, education, medical fields, employment, law enforcement, immigration, and justice, for example, may offer huge benefits but also pose significant threats when misused. For the high-risk group, the EU proposals would demand that systems adhere to strict obligations before being approved for use. It would be safe to assume that many forms of AI video surveillance currently being applied by the Chinese government would fall into this category, despite their ability to reduce crime rates or manage epidemics such as COVID-19.
Limited-risk AI solutions are less defined but are likely to include the use of AI for chatbots and other forms of human-machine interaction. Under the EU proposals, these systems would be subject to specific transparency obligations, namely the requirement to ensure that users are aware they are interacting with an AI-enabled machine. Minimal-risk solutions, meanwhile, would include applications such as AI-enabled video games or spam filters, and the EU proposals would allow them relatively free entry into the market. The new European Artificial Intelligence Board would support national bodies in the categorization and implementation of the various AI systems.
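The tiered scheme described above can be sketched as a simple lookup. This is a hypothetical illustration only: the domain labels and the conservative fallback to high-risk below are assumptions for the sketch, not terms or rules taken from the regulation itself.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers under the EU proposal."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict obligations before approval
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # relatively free market entry


# Illustrative mapping from application domain to risk tier, loosely based
# on the examples cited in the proposal. Domain names are hypothetical.
DOMAIN_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "behavioral_manipulation": RiskTier.UNACCEPTABLE,
    "critical_infrastructure": RiskTier.HIGH,
    "law_enforcement": RiskTier.HIGH,
    "employment_screening": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "video_game_ai": RiskTier.MINIMAL,
    "spam_filter": RiskTier.MINIMAL,
}


def classify(domain: str) -> RiskTier:
    """Return the risk tier for a domain.

    Unclassified domains default to HIGH here, an assumption reflecting a
    conservative reading of the proposal rather than anything it states.
    """
    return DOMAIN_TIERS.get(domain, RiskTier.HIGH)
```

In this framing, the interesting policy question is exactly the fallback case: which tier applies when a new application does not fit an enumerated category, a decision the proposal leaves to national bodies supported by the new European Artificial Intelligence Board.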
“Horror stories and fear mongering about the unethical use or future threats of AI are a common staple in today’s media. Despite being increasingly aware that AI is being used for practical applications that directly impact their lives, the public does not necessarily understand how AI technologies work, which can negatively impact their willingness to trust AI,” explains our comprehensive report. “As the complexity and volume of data being used to drive AI systems rises, it becomes increasingly important to develop AI in a way that’s transparent, explainable, and empathetic in addressing these fears and confusions.”
That is what the new EU proposals are all about. However, in the union’s efforts to balance the social and economic benefits of the technology against the fears of misuse, the new regulations include some concerning elements. One major loophole is that AI systems released within one year after the law is passed will be grandfathered in, thereby avoiding the rules even if they would otherwise be considered high-risk. Critics suggest this will empower larger providers who already have many existing AI products and disadvantage newer entrants. Despite this, and other criticized elements of the proposals, the new EU regulations look set to lead the union’s policy and even become a global standard for AI development.
“Any uses of AI that are deemed as a threat to people’s safety or rights such as live facial scanning or uses that are used to manipulate behavior, exploiting children’s vulnerabilities or use subliminal techniques are likely to be banned under the new regulations,” continues our brand new report. “As with prior GDPR regulations, the impact will also not just be felt in Europe, the new regulations will be extraterritorial in scope, so will apply to any company selling an AI product or service into the EU, not just to EU-based companies and individuals – making them likely to become somewhat of a global de-facto standard.”