Artificially intelligent robots could, in the not too distant future, make human labor obsolete, according to Tesla CEO Elon Musk. To counter the threat to jobs and freedom, Musk co-founded a nonprofit organization to promote safe artificial intelligence (AI) research and has begun advocating for the benefits of a universal basic income to assist displaced workers.
Some artificial intelligence technology is raising serious questions around privacy, security and productivity in the work environment. One London-based startup, and recipient of support from a cyber-security accelerator run by British intelligence agency GCHQ, is creating just such technology.
StatusToday’s AI platform utilizes a continuous supply of employee metadata: which files you access, how long and how often you look at them, when you swipe a key card or go for lunch. Deviate from your normal rhythm and your new big robot brother will sound the alarm.
“All of this gives us a fingerprint of a user, so if we think the fingerprint doesn’t match, we raise an alert,” says Mircea Dumitrescu, StatusToday’s chief technology officer. “We’re not monitoring if your computer has a virus,” says Dumitrescu, “We’re monitoring human behavior.”
“Privacy!” you may scream in protest, and you wouldn’t be alone. Employers don’t have the right to watch your every move and store data on your behavior, even during work hours, do they?
Your boss may already note down when you get to work; he or she may notice what you’re wearing or how disheveled you look; your boss may even be monitoring how much time you spend on social media, in the toilet, or playing solitaire on your computer. So is this new tech such a big change?
Yes, according to many concerned with privacy in the workplace. These AI spying systems take employee monitoring to a whole new level, and storing information on employee behavior needs legal justification or consent. However, as in so many privacy debates, employee-monitoring technology has found its justification in security.
These AI systems gather metadata to build a picture of how each employee normally behaves, allowing them to flag anomalies in real time. The idea is that the system can detect when an employee steps outside their usual behavioral patterns and may therefore pose a security risk. Downloading a large number of unusual files might suggest the theft of confidential information, for example, while accessing areas of a building without an obvious reason may indicate a physical security threat.
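In broad strokes, this kind of behavioral baselining can be illustrated with a simple statistical check: compare today’s activity against a user’s own history and alert on large deviations. The sketch below is purely illustrative and is not StatusToday’s actual method; the metric, threshold, and function names are all assumptions for the example.

```python
from statistics import mean, stdev

def is_anomalous(history, todays_count, threshold=3.0):
    """Illustrative check: flag a daily metric (e.g. file downloads)
    that deviates sharply from this user's own historical baseline.
    The z-score threshold of 3.0 is an arbitrary assumption."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # No historical variation at all: any change is a deviation.
        return todays_count != mu
    z = abs(todays_count - mu) / sigma
    return z > threshold

# A user who normally downloads ~10 files a day suddenly pulls 80:
baseline = [9, 11, 10, 12, 8, 10, 11, 9, 10, 12]
print(is_anomalous(baseline, 80))  # the spike is flagged
print(is_anomalous(baseline, 11))  # a normal day is not
```

A real deployment would track many signals at once (logins, badge swipes, email volume) and use far more sophisticated models, but the principle is the same: the alert is relative to each individual’s fingerprint, not a fixed rule.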
The physical security business is a rapidly growing sector. In fact, the total value of world production of physical security products at factory gate prices in 2016 reached $28.44 billion, according to our recent report. Access control, intruder alarms and video surveillance are already very present technologies in the modern smart office building.
Under the auspices of security, AI could turn the gaze of these products onto loyal employees, not just slackers and intruders.
“It seems like they’re just using the aura around AI to give an air of legitimacy to good old-fashioned workplace surveillance,” says Javier Ruiz Diaz, policy director at digital campaigning organization the Open Rights Group. “You have a right to privacy and you shouldn’t be expected to give that up at work.”
While security may be the justification, the mere presence of employee-monitoring data will allow employers to track their staff’s productivity. In recent years, smart building technology has been adapting light levels and air quality to increase employee productivity.
The introduction of AI monitoring tools, however, may have the opposite effect, says Paul Bernal at the University of East Anglia: “The general creepiness will bother people, and that could be counterproductive if it affects their behavior.”
Phil Legg at the University of the West of England goes further, suggesting that attempting to track unusual behavior will never catch every security risk. “If people know they’re being monitored, they can change their behavior to game the system,” he says. By understanding the monitoring system, a disgruntled employee could collect a set of damaging files one-by-one over the course of a year, under the AI’s radar.
Prioritizing security over privacy may be acceptable to employers, despite the concerns of their employees. However, security measures that negatively affect productivity would make such technology much less desirable for companies. Furthermore, if employees know AI monitoring is there, it may actually create the unusual behavior it seeks to identify, undermining its very purpose.
Increased monitoring is a reality in every part of our lives, and while AI employee-monitoring tech still needs to work out some issues with privacy and consent, it is likely to find its place in the smart building.