Can we learn to trust AI?

When moderator Chris Clague mistakenly introduced Poppy Gustafsson—CEO of artificial intelligence cyber-security outfit Darktrace—as the CEO of "Darkforce" at our recent Open Mic event, it was without doubt an honest mistake. While Gustafsson laughed off the slip of the tongue, there are doubtless some who suspect that seemingly autonomous AI systems are something of a "dark force". Perhaps not surprisingly, suspicions abound about this esoteric new technology: AI has been the villain at the heart of various dystopian tales over the years. But is there any merit to a fear of AI, or is it just another case of humans being wary of uncharted territory?

Part of this apprehension is perhaps attributable to the fact that few understand how artificial intelligence comes to the decisions that it does. Moreover, as Clague put to Gustafsson, it often goes deeper than this, to a suspicion that even those who created the algorithms don’t fully understand what they’ve created—the so-called “black box”. Gustafsson conceded the problem, but replied: “it’s not about making all of our users experts in computational science…You’ve got to show—not necessarily how—but why it’s come to the decisions it has…and then the human being can say ‘okay, I don’t necessarily know how it came to that decision but I can see the decision it made, and I can see in the context of my understanding that that makes sense.’”

And this drive to turn the output of artificial intelligence into something more, well… intelligible, could be instrumental in building trust. A poll of attendees revealed that 76% felt that, in their line of work, it was very important that facts and evidence outweighed conventional wisdom. Will Moy, CEO of Full Fact, noted that it would be interesting to ask next whether attendees felt they had sufficient facts and data available to them in a “decision-ready” format. “One of the challenges,” he explained, “is transforming data—which we have lots of—and research & analysis into decision-ready information.”

Jacek Olczak, CEO of Philip Morris International, highlighted the need for empathy in helping those who have reservations to become comfortable with new technologies. He likened the process to that of change management within a large business: “Describe where you want to go, what is your new direction, what are the steps you’re going to take. And then take everyone, almost like in kindergarten, step-by-step by hand, and build in them that trust that it’s perfectly ok, that we’re changing our perception of the world because new facts are now on the table.”

All of which points to a helpful roadmap to better understanding and widespread trust of AI, but critical questions remain: is such trust justifiable, and indeed is it desirable?

Gustafsson certainly thinks so, but admits that the risks can be sizable. She points to the active nature of her company’s AI product: it doesn’t just identify suspected cyber-attacks, it also takes immediate (and autonomous) action to mitigate them. That requires a huge investment of trust from her customers, as she explains: “You cannot get it wrong. You only need to get it wrong once, and they won’t go with it.” She also highlights that, because the requirement for trust is so high, customers need a good reason to commit to it: you have to be solving a real-world problem.

Yet AI has some way to go to fully deserve that trust, says Moy, himself well versed in the topic (Full Fact is one of the world leaders in using AI to tackle false and harmful information online). Machine learning algorithms essentially fall into two categories, known in industry jargon as supervised and unsupervised. While unsupervised models seek to organize data in ways that humans may not instinctively think to, supervised machine learning can be broken down further into regression (predicting outcomes based on historical trends in continuous data) and classification (predicting which category something falls into). Both of these require “training data”, and herein lies a risk, suggests Moy. “Clearly, the selection of that training data is an enormously sensitive topic,” he explains. “I don’t think—in socially sensitive uses of AI—there is enough scrutiny of what that training data is, and what its unintended consequences are for different groups of society, different topics of discussion…We believe that the internet companies need to step up and conduct open evaluation of the kind of technology they’re using and the kind of unintended consequences it might have.”
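
To make that distinction concrete, here is a minimal sketch of the approaches described above, using made-up toy data and the widely used scikit-learn library purely as an assumed example; neither the data nor the library choice reflects Full Fact’s or Darktrace’s actual systems.

```python
# Illustrative sketch only: toy data and scikit-learn are assumptions,
# not the systems discussed by the panelists.
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.cluster import KMeans

# Toy training data: a single feature (e.g. hours of unusual network activity).
X_train = [[1], [2], [3], [4], [5]]

# Regression: predict a continuous outcome from historical trends.
y_continuous = [1.1, 2.0, 2.9, 4.2, 5.1]
regressor = LinearRegression().fit(X_train, y_continuous)
print(regressor.predict([[6]]))   # outputs a number (roughly 6)

# Classification: predict which category something falls into.
y_labels = [0, 0, 0, 1, 1]        # 0 = "benign", 1 = "suspicious"
classifier = LogisticRegression().fit(X_train, y_labels)
print(classifier.predict([[6]]))  # outputs a category label

# Unsupervised: no labels at all; the model looks for structure on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_train)
print(clusters)                   # group assignments, not predictions

# The risk Moy points to lives in X_train and y_labels themselves: whatever
# gaps or biases the training data contains, the model will faithfully learn.
```

The point is not the particular algorithms but where human judgement enters: someone chooses the training data, and the model inherits whatever that choice leaves in or out.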

Poppy Gustafsson, too, acknowledges this risk, particularly when humanity is undergoing seismic changes: “A couple of months ago all of the global businesses suddenly transformed themselves from being ones that were office based to ones where we’re all just working from our home. That was a huge shift and how do you make sure that you don’t see that as ‘the world is undergoing a massive cyber-attack, everything has changed’ when it’s not, it’s just the consequence of behavioural norms.” Nevertheless, she feels that the prize of AI’s insight is worth navigating these risks for. In an increasingly noisy digital ecosystem, she points to the immense help of a computer’s perspective: “I think that something that we rely on quite heavily is using our own technology to see through that noise to the underlying patterns.”

Perhaps this gives some clue to the tension at the heart of this discussion. Olczak suggested that one of the problems with ‘artificial intelligence’ could be the nomenclature. “Artificial” suggests that it’s not natural, somehow counter to our humanity. And yet it’s easy to imagine that an AI black box in the corner of one’s room would be rendered instantly more agreeable if it were dubbed, say, a “decision-making helper”. Who among us would not welcome such a gizmo? AI without humanity could certainly be a daunting idea, possibly deserving of the aforementioned “dark force” moniker. Yet when AI’s insights are placed in the context of our own wisdom, the combination can make us stronger. After all, the dystopian stories mentioned above often hinge on AI deciding it doesn’t need the human element after all. But given the benefits AI could bring, it may be just as dystopian to imagine a future where we decide that we can manage perfectly well without this powerful tool.

In the end then, perhaps humanity and AI represent two sides of the same coin, each trying ever harder to comprehend the other. Do heads trust tails? Who knows. But how many decisions have been helped by a coin toss?

"You cannot get it wrong. You only need to get it wrong once, and they won’t go with it."

– Poppy Gustafsson, CEO of Darktrace

Views and opinions voiced by panelists are their own. They do not necessarily reflect views and positions of PMI.