If the recent, very public war of words between two of the world’s most prominent technology CEOs, Elon Musk and Mark Zuckerberg, has taught us anything, it’s that the realm of artificial intelligence (AI) remains a highly contentious one. For the uninitiated, Musk made headlines recently when he publicly stated that AI, in his opinion, posed a significant threat and was in dire need of regulation, going so far as to call it a “fundamental risk to the existence of civilisation”. Zuckerberg, meanwhile, called such warnings “pretty irresponsible”, choosing instead to accentuate the benefits AI could provide in saving lives through medical diagnoses and driverless cars.
Clearly, it’s very easy for technologists to put AI into a box and wax lyrical about their vision of how it will impact humanity. The truth is that the debate is far more nuanced than you might think – largely because AI comes in a number of shapes and sizes. To be clear, the form being discussed by Musk and Zuckerberg relates primarily to artificial intelligence with ‘human level’ cognitive skills, otherwise known as AGI or ‘Artificial General Intelligence’, which, despite impressive progress in a number of specialities (from driving cars to playing Go), is nowhere near imminent.
However, what this debate ignores is that AI is already in widespread business use today, and that the current risks associated with it have nothing to do with whether it will leave us all in a smouldering pile of rubble. Instead of worrying about such apocalyptic doomsday scenarios, we should be focusing our energies on the very real dangers posed by this technology in the here and now if it is used incorrectly. These risks include regulatory violations, diminished business value and significant brand damage. While not cataclysmic in their repercussions for humanity, they can still have a major impact on the success or failure of organisations.
When addressing AI risks in a business context, it’s important to remember that not all AI is created equal. Specifically, artificial intelligence comes in two distinct flavours – Transparent and Opaque – and the two have very different uses, applications and impacts for businesses and users in general. Here’s the technical bit: Transparent AI is a system whose insights can be understood and audited, allowing one to reverse engineer each of its outcomes to see how it arrived at any given decision. Opaque AI, on the other hand, is a system that cannot easily reveal how it works. Not unlike the human brain, it can struggle to explain exactly how it arrived at a certain insight or decision.
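The distinction can be made concrete with a small sketch. Below is a toy ‘Transparent’ scoring model in Python – the weights, feature names and approval threshold are all invented for illustration – in which every decision can be reverse engineered by inspecting each feature’s contribution to the score, exactly the audit trail an Opaque system cannot readily produce.

```python
# Illustrative sketch of a "Transparent" model: a linear scorer whose
# outcome can be fully decomposed into per-feature contributions.
# All weights, features and the threshold are hypothetical.

WEIGHTS = {"income": 0.4, "years_at_address": 0.2, "existing_debt": -0.5}
APPROVAL_THRESHOLD = 1.0

def score_applicant(features):
    """Return (decision, audit_trail): the trail shows exactly how the
    decision was reached, feature by feature."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    total = sum(contributions.values())
    decision = "approve" if total >= APPROVAL_THRESHOLD else "decline"
    return decision, contributions

decision, trail = score_applicant(
    {"income": 3.0, "years_at_address": 2.0, "existing_debt": 1.0}
)
print(decision)
print(trail)  # each entry explains its share of the final score
```

An Opaque system – a deep neural network, say – might reach a more accurate decision, but it offers no equivalent of `trail` to show an auditor.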
While it’s true that the names ‘Opaque’ and ‘Transparent’ each carry emotive connotations, it’s important that we do not let these influence us. Remember that there’s no such thing as ‘good’ or ‘bad’ AI – only appropriate or inappropriate use of each system, depending on your own needs. Opaque AI has a number of positive aspects which can prove useful in the right circumstances. Transparency is a constraint on AI that limits its power and effectiveness, so in some instances an Opaque system might be preferable.
In highly regulated industries this choice becomes even more important. For example, in the financial services industry, proper use of Opaque AI in lending will result in improved accuracy and fewer errors. However, if banks are required to demonstrate how these operational improvements were achieved by reverse engineering the decision process (as mandated by the EU General Data Protection Regulation – or GDPR – for instance), an Opaque system becomes a challenge, or even a liability.
Another potential problem with an Opaque system is that of bias creeping in. Without your knowledge, an Opaque AI system could start favouring policies that break your organisation’s brand promise. Believe it or not, it’s easy for an AI system to use neutral data to infer customer details, which it can then use to make non-neutral decisions. So, for example, an Opaque AI in a bank could interpret customer data and start offering better deals to people based on race, gender or other demographics – all of which would, for obvious reasons, be a disastrous outcome.
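To see how neutral data can drive non-neutral outcomes, here is a deliberately simple Python sketch – the postcodes, groups and records are all hypothetical. The model decides purely on a ‘neutral’ feature and never sees the protected attribute, yet because that feature correlates with group membership, approval rates diverge by group anyway.

```python
# Hypothetical illustration of proxy bias: the model only ever looks at
# postcode, but postcode happens to correlate with group membership,
# so its decisions end up skewed by group. All data is invented.

applicants = [
    {"postcode": "N1", "group": "a"},
    {"postcode": "N1", "group": "a"},
    {"postcode": "N1", "group": "a"},
    {"postcode": "E9", "group": "a"},
    {"postcode": "N1", "group": "b"},
    {"postcode": "E9", "group": "b"},
    {"postcode": "E9", "group": "b"},
    {"postcode": "E9", "group": "b"},
]

def model(applicant):
    # A rule "learned" from historical data: favour postcode N1.
    # Note the protected attribute ("group") is never consulted.
    return applicant["postcode"] == "N1"

by_group = {}
for a in applicants:
    by_group.setdefault(a["group"], []).append(model(a))

for group, outcomes in sorted(by_group.items()):
    rate = sum(outcomes) / len(outcomes)
    print(f"group {group}: approval rate {rate:.0%}")
```

In this toy data, one group is approved three times as often as the other, even though the model was never told who belongs to which group.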
So, how do you know whether or not you are using AI correctly, and whether Transparent or Opaque AI is best for you? The answer lies in how much organisations are willing to trust it. To fully trust an AI system, one of two things needs to happen. Either the AI needs to be transparent so that business management can understand how it works or, if the AI is Opaque, it needs to be tested before it is taken into production. These tests need to be extensive and go beyond verifying that the system delivers viable business outcomes; they must also search for the kind of unintended biases that I’ve just outlined.
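One way to make such pre-production testing concrete is an audit that compares outcomes across groups and blocks deployment when the gap is too large. The sketch below assumes a simple demographic-parity check with an invented 10% tolerance; the function names and threshold are illustrative, and real audits would be considerably more thorough.

```python
# Minimal sketch of a pre-production bias audit for an Opaque model:
# compare approval rates across groups and refuse to promote the model
# if the gap exceeds a tolerance. The 10% tolerance is an assumption.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> {group: approval rate}."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def passes_bias_audit(decisions, max_gap=0.10):
    """True only if the widest approval-rate gap between groups is small."""
    rates = approval_rates(decisions).values()
    return max(rates) - min(rates) <= max_gap

# Simulated test-set decisions: group x approved 2/3, group y only 1/3.
test_decisions = [
    ("x", True), ("x", True), ("x", False),
    ("y", True), ("y", False), ("y", False),
]
print(passes_bias_audit(test_decisions))  # the ~33% gap fails the audit
```

A check like this doesn’t open the black box, but it does provide the evidence of fairness that lets an organisation place justified trust in an Opaque system.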
As suggested earlier, there are other considerations to take into account, particularly for organisations that use AI as part of a customer engagement system. When GDPR comes into effect in Europe in May 2018, it will mandate that companies be able to explain exactly how they reach certain algorithm-based decisions about their customers. This means that organisations with some sort of switch – let’s call it a ‘T-Switch’ – that can force the methods their AI uses to make decisions from Opaque to Transparent will have a distinct advantage, as they’ll find it much easier to comply.
The fact remains that no matter what Messrs Musk and Zuckerberg may argue, we’re not yet at a stage where AI will prove to be either the downfall or the salvation of human civilisation. However, what’s becoming clear is that businesses are increasingly finding themselves at a crossroads when it comes to selecting which system is right for them. In theory, you might think a transparent system would be the preferred choice of many if the choice were unencumbered, but in reality it may be a very tough decision to make.
Would you, for example, insist on a ‘Transparent’ AI doctor to diagnose patients if you knew that an Opaque alternative was available which was more likely to diagnose correctly and save lives? The point is that in some cases the deciding factor may be marginal, with a number of issues relating to profitability, customer experience and regulation to consider before organisations are able to make a decision. Dystopian views of AI rising up against humanity tomorrow may well grab the headlines, but let’s not overlook the risks posed by artificial intelligence in the here and now. Make sure you’re asking yourself the all-important question: is your AI right for you?
Guest Author: Dr. Rob Walker, Vice President, Decision Management & Analytics at Pegasystems