Recently, Stanford researchers Michal Kosinski and Yilun Wang trained an artificial intelligence (AI) system to detect people’s sexual orientation with 81% accuracy, simply by scanning photos of faces. Kosinski and Wang created the algorithm only to highlight the power, and the potential dangers, of AI; however, in a world where the persecution of homosexuals is still widespread, the backlash against their creation was fierce.
Our JPSP paper warning that sexual orientation can be predicted from faces is now available at https://t.co/d1AAc6t67O pic.twitter.com/6E5cfPYx7m
— Michal Kosinski (@michalkosinski) September 8, 2017
It’s “junk science” that “threatens the safety and privacy of LGBTQ and non-LGBTQ people alike,” said LGBTQ advocacy groups GLAAD and the Human Rights Campaign. The researchers have “invented the algorithmic equivalent of a 13-year-old bully,” wrote Greggor Mattson, director of the Gender, Sexuality and Feminist Studies Program at Oberlin College. In the wrong hands, such software could threaten the freedom and safety of millions of people.
More recently, Facebook developed a “proactive detection” AI that scans Facebook posts for patterns that may indicate suicidal thoughts. The company hopes the system will reduce incidents of suicide and self-harm by sending mental health resources to at-risk users or their friends, or even contacting local first responders when the AI identifies an immediate risk. Like the first example, this AI impinges on privacy and freedom, but it could also save lives. “We have an opportunity to help here, so we’re going to invest in that,” said Facebook VP of Product Management Guy Rosen.
Police forces are also using artificial intelligence. In one case, 18-year-old Brisha Borden and her friend were caught stealing an unguarded scooter and bike worth a total of $80. Earlier that year, 41-year-old Vernon Prater had been arrested for shoplifting $86.35 worth of tools from a nearby Home Depot store. Prater was a seasoned criminal, previously convicted of two counts of armed robbery and one of attempted armed robbery, for which he had served five years in prison. Borden’s record included only some minor offences as a juvenile.
Nevertheless, the police AI scored the teenage scooter thief Borden eight out of 10, labeling her a high risk of committing a future crime, while Prater was given three out of 10; the AI considered the serial armed robber a low risk of reoffending. The crucial difference between these two offenders, and according to critics the reason for the AI’s strange result, was that Borden is black and Prater is white. Two years later, Borden has no new charges, but Prater is serving an eight-year prison term for stealing thousands of dollars’ worth of electronics. The AI system got it entirely, and concerningly, wrong.
“Although these measures were crafted with the best of intentions, I am concerned that they inadvertently undermine our efforts to ensure individualized and equal justice,” said then US Attorney General Eric Holder. “They may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society,” he added.
“At the very best, it’s a highly inaccurate science,” said Clare Garvie of Georgetown University’s Center on Privacy and Technology, about AI’s promises to predict criminal behavior, intelligence and other character traits from faces. “At its very worst, this is racism by algorithm.”
These examples, and the new applications arising every day, show both the great advantages and the serious concerns we as a society need to weigh as we continue to develop ever more powerful AI systems. As with explosives, biological toxins or personal medical information, good regulation is needed to promote the advantages while addressing the concerns. That, however, is proving a challenge for such a broadly applicable and poorly understood technology.
“No countries have yet worked out how to answer the innumerable questions posed by rapid advances in AI,” said Geoff Mulgan, CEO of global innovation foundation Nesta. “That will change in 2018, as governments take the first serious steps towards regulating AI and guiding it towards safer and more ethical uses. They’ll try to speed up the development of new technologies, firms and industries, while also protecting the public from harm,” he wrote in an article this month.
In November 2017, the UK Government announced a new Centre for Data Ethics and Innovation, described as "a world-first advisory body to enable and ensure safe, ethical innovation in artificial intelligence and data-driven technologies." Aside from a £9 million ($12.5 million) budget commitment, few details have been offered on the new organization's role, powers or staffing.
The Centre for Data Ethics and Innovation, and the similar organizations that will inevitably emerge around the world, face an uphill challenge in uncharted territory. We already use many forms of AI in our lives, and that use is increasing rapidly, meaning any regulatory body will be playing catch-up from day one. Our smartphones, smart buildings, cities and grids, for example, have been making AI part of our day-to-day lives for years. Any fundamental review of the technology may also raise questions about the way AI is being used by the companies that have come to define the current era: Google, Facebook, Amazon and Apple.
Outside of government, a lively debate on the topic is already under way. Groups including the IEEE, the Future of Life Institute, the ACM and the Oxford Internet Institute have voiced the need for regulation.
Britain’s Nuffield Foundation has even set up a Convention on Data Ethics. In the US, the AI debate is led by organizations like AI Now and Danah Boyd’s Data & Society, as well as notable individuals such as academic Ryan Calo and entrepreneur Elon Musk.
“There is no playbook on how to do this well. AI will require responses at many levels, from regulation to self-regulation, law to standards, health to warfare, tax to terrorism. The debate is likely to swing between excessive complacency and excessive fear,” said Mulgan, who once proposed the formation of a Machine Intelligence Commission.
However, Mulgan believes that “there’s no alternative to getting serious. And although some countries may be tempted by a race to the bottom - an ‘anything goes’ environment for AI - history suggests that the most successful places will be the ones that can shape clear, firm and fair regulations that are good at coping with rapid change.”