Which is more intelligent, the dog that fetches the newspaper from the front yard or the dog that runs away as soon as the door is opened? Is the dog that earns its food through obedience and affection smarter than the cat that offers neither but gets its food anyway? Is a chimpanzee that creates a tool to extract termites from their mound cleverer than the anteater that uses its claws and snout to achieve the same goal?
It is impossible to answer these questions without first defining intelligence, and to assess a specific animal’s intelligence we must separate it from human intelligence. Our natural urge is to anthropomorphize animals when judging their intelligence: an animal that learns to press buttons in a laboratory to get food is seen as more intelligent than one that refuses to participate in its captors’ experiments. The same tendency shapes how we define the intelligence of our machines, buildings and cities.
The leading artificially intelligent smart assistants, for example, are the most human-like ones. If we are creating a machine to assist us, aren’t we limiting ourselves by simply trying to make it as human as possible, when in fact it could be anything? As we race to create our artificially intelligent built environment, we are restricted by our definition of intelligence, our imagination and our bias that human intelligence is somehow the ultimate goal.
“We build machines with narrowly human properties, skills and behaviors – this is a terribly restrictive idea of intelligence,” suggests Alex Taylor, a sociologist at Microsoft Research who is witnessing, firsthand, the entanglements between humans and machines. “We design machines to think and act in ways that only mirror humans. That’s too restrictive, limiting what AI might be capable of,” he adds.
According to data analytics firm Quid, private investment in AI grew from $1.5bn in 2010 to more than $5bn in 2015. Commercial applications of these investments appeared in industries ranging from autonomous vehicles to medical diagnostics; notably, as much as $1.7bn of this AI investment went into smart buildings.
Much of the smart building innovation in AI is emerging through a field called deep learning. “Deep learning is a form of artificial intelligence that relatively mimics how our brain hierarchically understands objects and environments,” says Ruggero Altair Tacchi, lead data scientist at Quid. “This allows us to approach problems from different scales, for example, in computer vision, where a computer makes sense of an image at different layers.” Since 2010, deep-learning companies with applications for smart buildings have raised $273m, according to the firm.
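To make the layered idea concrete, here is a minimal sketch in Python with numpy. It is illustrative only, not how production deep-learning systems work: the edge-detecting kernel is set by hand, whereas a real network would learn its kernels from data. A first layer responds to local contrast in a toy image, and a second pooling layer summarizes those responses into a coarser, more abstract map.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Downsample by taking the max of each size-by-size block."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    return feature_map[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A toy 8x8 "image": a bright vertical stripe on a dark background.
image = np.zeros((8, 8))
image[:, 3:5] = 1.0

# Layer 1: a hand-set vertical-edge kernel responds to local contrast.
edge_kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
edges = conv2d(image, edge_kernel)   # 6x6 map of edge responses

# Layer 2: pooling summarizes responses over larger regions,
# trading spatial detail for abstraction.
summary = max_pool(edges)            # 3x3 coarse map
```

Each layer sees the image at a different scale: the first picks out pixel-level edges, the second keeps only the strongest response in each region, which is the hierarchical structure Tacchi describes.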
This is promising for the smart building industry, where computers can learn to recognize patterns, from the number of people in a room to the temperatures that correlate with high performance and efficiency, and then recommend those conditions. “We’re already seeing this applied to retail stores and offices,” says Tacchi. “In retail, this is helping with inventory protection. In offices, we see firms optimizing office dynamics by matching people on teams to enhance productivity.”
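The recommend-the-correlated-conditions step can be sketched very simply. In this hypothetical example (the sensor log below is invented, and real building systems would use far richer models), a building logs room temperature alongside a productivity proxy, fits a curve, and recommends the setpoint where the fitted curve peaks.

```python
import numpy as np

# Hypothetical sensor log: room temperature (°C) vs. a productivity proxy.
temps = np.array([18.0, 19.5, 21.0, 22.5, 24.0, 25.5, 27.0])
throughput = np.array([71.0, 78.0, 84.0, 86.0, 83.0, 76.0, 68.0])

# Fit a quadratic: productivity tends to peak at some comfortable setpoint.
a, b, c = np.polyfit(temps, throughput, deg=2)

# Recommend the temperature at the parabola's vertex, -b / (2a).
recommended = -b / (2 * a)
```

The design choice here is deliberate in its crudeness: even a two-parameter curve fit over logged conditions yields an actionable recommendation, which is the pattern-then-recommend loop described above, scaled down to a few lines.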
All these applications remain based on recreating human intelligence rather than developing a new form of intelligence better suited to machines and our objectives. I cannot give you examples of what a smart, truly AI-enabled building might do, say or look like without those limitations, because I am also human. This conundrum suggests we may not have the ability to design without such bias, and if we did, would that not amount to giving AI complete freedom from human ideals and rules, leaving it nothing but an objective?
In science fiction, AI with complete freedom often “realizes” that human limitations are the problem, a problem Hollywood usually imagines is best solved by eradicating or enslaving our society. Such fearful predictions are even backed by influential figures such as Stephen Hawking and Elon Musk. However, a new Stanford University report on the social and economic implications of artificial intelligence dispels these ideas as unfounded.
The study, coauthored by 20 AI experts, is part of a project intended to last 100 years and was commissioned in response to our rapid advancements in computer science. Its first task, it seems, is to subdue such alarmism. “No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future,” the report reads.
The report does, however, believe that AI is set to fundamentally change our society. It predicts that automated trucks, flying vehicles, and personal robots will be commonplace by 2030. It also warns of the social and ethical implications of AI, such as unemployment and the erosion of privacy driven by new forms of surveillance and data mining.
The report also states that “the characterization of intelligence as a spectrum grants no special status to the human brain. But to date human intelligence has no match in the biological and artificial worlds for sheer versatility.” It goes on to suggest that “this makes human intelligence a natural choice for benchmarking the progress of AI.”
In the 1940s Alan Turing conceived the idea that a computer could develop the “intelligence” to beat a human at chess, and when the term AI was coined in 1956, few would have predicted the path it has followed since. We are constantly redefining machine intelligence; in fact, we seem to reach a new level of AI and then call it something else – pattern recognition, machine learning, deep learning – always reserving the term “artificial intelligence” for a perpetually unattainable future development.
When we achieve a new level of AI, it is labeled as something else, and we continue our search for true AI. Perhaps human intelligence does serve as a benchmark, and when we achieve truly artificial human intelligence we will then begin our search for something more advanced, something less human, perhaps something that goes beyond the word “intelligence”.