This article was written by Owen Kell, Senior IoT Research Associate at Memoori.
AI has a communication problem. There, I said it, and I think I’m in a credible position to say it now, having spent the last four months interviewing industry insiders and poring over company websites, white papers, and annual reports in preparation for our new report into AI’s applications in Smart Buildings.
The way that the value propositions for AI are being communicated to prospective customers really isn’t doing the trick. And it’s not just me: numerous commentators and end users we’ve spoken to feel the same, and Gartner reported in September 2019 that even among CIOs, 42% of respondents “aren’t fully understanding AI benefits and use in the workplace”.
For me, the communication issues break down into three key areas:
The first and most frustrating issue from a research point of view is the overzealous or misinformed marketing of AI solutions. For many years I, like many others, have been firmly in the “AI Sceptic” camp, in part due to the various historic cycles of over-hype around AI I’ve seen over the course of my career, but also in part due to overzealous marketing. If these last few months have taught me anything, however, it’s that there are now some genuinely transformational and highly innovative commercial AI solutions emerging in the smart buildings space. Solutions that can make significant positive contributions to the performance of our buildings and the ways we interact with them - but adoption is still being hindered by a combination of fear, confusion and mistrust.
Marketing teams also repeatedly conflate AI with non-AI solutions, and further interrogation often reveals that a solution is actually driven by more traditional forms of analytics, which further dilutes the messaging and muddies the waters. In March 2019, The Verge released a revelatory article indicative of this trend. Based on the findings of an MMC Ventures whitepaper, the article led with the headline that “Forty percent of AI Startups in Europe don’t really use AI”.
Overzealous marketing around the detection accuracy of computer vision systems, or the potential operational savings offered by other AI-powered building systems, can shoot vendors (and the industry as a whole) in the foot over the long term. All too often, end users we’ve engaged with feel “once bitten, twice shy” after investing in AI solutions that fail to live up to expectations. After disappointing results, they are left wary and mistrustful of any solutions marketed as “AI” and much less likely to invest in similar solutions in future.
The accuracy and capabilities of an AI system depend not only on the hardware and software used to deliver the solution, but also on the particular environmental conditions where it is deployed. Results achieved during testing, training and development in a vendor’s own controlled environment are often not reflected in reality out in the field.
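As a minimal sketch of why headline accuracy figures can mislead, consider scoring the same trained classifier on a clean, held-out “lab” set and on data with the kind of sensor noise and calibration drift a real deployment introduces. The data, model and noise levels below are entirely synthetic placeholders, not taken from any vendor’s system:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "lab" data: clean readings with a simple underlying rule.
X_lab = rng.normal(size=(1000, 4))
y_lab = (X_lab[:, 0] + X_lab[:, 1] > 0).astype(int)

# Synthetic "field" data: the same underlying rule, but the model only
# sees readings corrupted by sensor noise (simulated drift/interference).
underlying = rng.normal(size=(1000, 4))
y_field = (underlying[:, 0] + underlying[:, 1] > 0).astype(int)
X_field = underlying + rng.normal(scale=0.8, size=(1000, 4))

# Train on clean lab data only, as a vendor's benchmark typically would.
model = RandomForestClassifier(random_state=0).fit(X_lab[:800], y_lab[:800])

lab_acc = accuracy_score(y_lab[800:], model.predict(X_lab[800:]))
field_acc = accuracy_score(y_field, model.predict(X_field))
print(f"lab accuracy:   {lab_acc:.2f}")
print(f"field accuracy: {field_acc:.2f}")
```

The gap between the two scores is exactly the gap between a controlled demo and a live building, and it is what a single marketed accuracy number hides.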
Over the course of our research for this report, we’ve also been frustrated by the lack of detail provided by vendors to explain the design and capabilities of their AI solutions, and prospective buyers often experience the same frustrations. Publicly available materials for “AI powered”, “AI enabled” or “AI driven” solutions are all too often vague, opaque, poorly differentiated, cloaked in pseudo-science or dominated by unsubstantiated claims around accuracy or performance. All of which makes it incredibly hard for end users to understand what makes one AI solution stand out from the next.
Vendors are, of course, wary about divulging detailed information about the “special sauce” that makes their AI tick, but they don’t need to give away the crown jewels to better explain what makes their solution a cut above the competition. Providing solid, digestible background information helps build the investment case: what machine learning approach was used to train the AI; whether and how the system improves over time; how training data was obtained; and how the solution would positively impact business operations and support better, more informed decisions. Leading vendors are doing this, publishing white papers that explore the science behind what makes their solutions unique, as well as documenting case studies that show how and where their solutions have been successfully deployed in the past, with clear metrics and measurement of project outcomes. But such an approach remains the exception rather than the rule.
Secondly, the industry’s lack of universally accepted definitions for the most common AI terms adds to the confusion. A plethora of terms including cognitive computing, machine learning, deep learning, predictive analytics, chatbots, natural language processing (NLP), and facial recognition are used liberally and almost interchangeably with AI to market solutions, typically without any definitions being provided.
Part of the problem is also that the definitions of what AI is and what it is capable of accomplishing have been constantly changing over the years along with corresponding changes in technological capabilities.
Without universally agreed definitions and more widespread education about the capabilities and limitations of different AI techniques and approaches, it can be challenging for end users to differentiate between one AI solution and the next. Again, vendors’ marketing and communications materials should provide clarity wherever possible about what they mean when they refer to AI and any related terms, to better communicate the value contribution and market differentiation being offered through the use of AI in their offerings.
Finally comes an issue that is growing in profile, scale and complexity: the challenge of “AI explainability”, which refers to the tools and frameworks used to ensure that the results of machine learning models can be interpreted by humans. Machine learning algorithms, especially deep neural networks, are very good at ferreting out subtle patterns in huge sets of data, but they struggle to make simple causal inferences. Interpretability workflows can highlight why a model arrived at a particular decision.
Refinitiv reports that there tends to be somewhat of a trade-off with explainable AI systems, as models whose predictions are totally transparent tend to suffer from a decline in predictive capability, or can be inflexible and computationally cumbersome. Nonetheless, explainability is key to building trust in AI outputs, as it allows human operators to explain to colleagues and customers what’s going on under the hood, as well as providing a clearer view of any potential biases that exist in the model’s data.
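To make “interpretability workflow” concrete, here is one common, model-agnostic technique: permutation importance, which shuffles one input at a time and measures how much the model’s score drops. The building-telemetry feature names and the synthetic energy-use model below are invented purely for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)

# Hypothetical building telemetry; feature names are illustrative only.
features = ["outdoor_temp", "occupancy", "hour_of_day", "co2_ppm"]
X = rng.normal(size=(500, 4))
# Synthetic energy use, driven mainly by temperature and occupancy.
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + 0.1 * rng.normal(size=500)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Shuffle each feature and measure the score drop. A large drop means
# the model leans heavily on that feature -- a first, human-readable
# answer to "why does the model predict what it does?"
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in sorted(zip(features, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:12s} {imp:.3f}")
```

A ranking like this won’t fully open the black box of a deep neural network, but even a simple, model-agnostic view of which inputs drive the output gives operators something defensible to show colleagues and customers.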
Our research shows that the market for AI solutions in smart buildings, though still nascent, is gaining genuine traction despite the challenges we’ve listed here, and we forecast a period of sustained growth. More concerted industry efforts to overcome the communications challenges we’ve covered here would nevertheless go a long way to fostering trust, understanding and end-user confidence – leading in turn to more widespread market adoption.