Voices in AI – Episode 27: A Conversation with Adrian McDermott

In this episode, Byron and Adrian discuss intelligence, consciousness, self-driving cars and more.

Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Adrian McDermott. He is Zendesk’s President of Products, where he works to build software for better customer relationships, including, of course, exploring how AI and machine learning impact the way customers engage with businesses. Adrian is a Yorkshireman living in San Francisco, and he holds a Bachelor of Science in Computer Science from De Montfort University. Welcome to the show, Adrian!

Adrian McDermott: Thanks, Byron! Great to be here!

My first question is almost always: What is artificial intelligence?

When I think about artificial intelligence, I think about AI as a system that can interact with and learn from its environment in an independent manner. I think that’s where the intelligence comes from. AI systems have traditionally been optimized for achieving specific tasks. In computer science, we used to write programs using procedural languages, and we would tell them exactly what to do at every stage. With AI, a system can actually learn and adapt from its environment and, you know, reason to a certain extent, and build the capabilities to do that. Narrowly, I think that’s what AI is, but societally the term takes on a series of connotations, some scary and some super interesting, and exciting meanings and consequences, when we think about it and when we talk about it.

We’ll get to that in due course, but back to your narrow definition, “It learns from its environment,” that’s a pretty high bar, actually. By that measure, my dog food bowl that automatically refills when it runs out, even though it’s reacting to its environment, is not learning from its environment; whereas a Nest thermostat, you would say, is learning from its environment and therefore is AI. Did I call the ball right on both of those, kind of the way you see the world?

I think so. I mean, your dog bowl, perhaps, it learns, over time, how much food your dog needs every day, and it adapts to its environment, I don’t know. You could have an intelligent dog bowl, dog feeding system, hopefully one that understands the nature of most dogs is to keep eating until they choke. That would be an important governor on that system, let’s be honest, but I think in general that characterization is good.

We, as biological computational devices, learn from our environment and take in a series of inputs from those environments and then use those experiences, I think, to pattern match new stimuli and new situations that we encounter so that we know what to do, even though we’ve never seen that exact situation before.

So, and not to put any words in your mouth, but it sounds like you think that humans react to our environment and that is the source of our intelligence, and a computer that reacts to its environment, it’s artificial intelligence, but it really is intelligent. It’s not artificial, it’s not faking it, it really is intelligent. Is that correct?

I think artificial intelligence is this ability to learn from the environment, and come up with new behaviors as a result of that learning. There are a tremendous number of examples of AI systems that have created new ways of doing things and have learned. I think one of the most famous is move thirty-seven, played by Google’s AlphaGo in its Go match against Lee Sedol, one of the greatest players in the world. It performed a move that was shocking to the Go community and the Go intelligentsia, because it had learned and it had evolved its thinking to a point where it created new ways of doing things that were not natural for us as humans. I think artificial intelligence, really, when it fulfills its promise, is able to create and learn in that way, but currently most systems do that within a very narrow problem domain.

With regard to an artificial general intelligence, do you think that the way we think of AI today eventually evolves into an AGI? In other words, are we on a path to create one? Or do you think a truly generalized intelligence will be built in a completely different way than how we are currently building AI systems today?

I mean, there are a series of characteristics of intelligence that we have, right, that we think about. One of them is the ability to think about a problem, think about a scenario, run through different ways of handling that scenario in our heads and imagine different outcomes, and almost to self-actualize in those situations. I think that modern deep-learning techniques are, you know, constructed such that they are looking at different scenarios to come up with different outcomes. Ultimately, I believe it’s true to say, we don’t necessarily understand a great deal about the nature of consciousness and the way that our brains work.

We know a lot about the physiology, not necessarily about the philosophy. It does seem like our brains are sort of neuron-based computation devices that take a whole bunch of inputs and process them based on stored experiences and learnings, and it does seem like that’s the kind of systems that we’re building with artificial-intelligence-based machines and computers.

Given that technology gets better every year, year over year, it seems like a natural conclusion that ultimately technology advancements will be such that we can reach the same point of general intelligence that our cerebral cortex reached hundreds of thousands of years ago. I think we have to assume that we will eventually get there. It seems like we’re building the systems in the same way that our brains function right now.

That’s fascinating because that description of humans’ ability to imagine different scenarios is in fact some people’s theory as to how consciousness emerged. And, not putting you on the spot because, as you said, we don’t really know, but is that plausible to you? That being able to essentially, kind of, carry on that internal dialogue, “I wonder if I should go pull that tiger’s tail,” you know, is that what you think made us conscious, or are you indifferent on that question?

I only have a layman’s opinion, but, you know, there’s a test—I don’t know if it’s in evolutionary biology or psychology—the mirror test where if you put a dog in front of a mirror it doesn’t recognize itself, but Asian elephants and dolphins do recognize themselves in the mirror. So, it’s an interesting question of that ability to self-actualize, to understand who you are, and to make plans and go forward. That is the nature of intelligence and from an evolutionary point of view you can imagine a number of ways in which that consciousness of self and that ability to make plans was essential for the species to thrive and move forward. You know we’re not the largest species on the planet, but we’ve become somewhat dominant as a result of our ability to plan and take actions.

I think certain behaviors that we manifest came from the advantageous nature of cooperation between members of our species, and the way that we act together and act independently and dream independently and move together. I think it seems clear that that is probably how consciousness evolved, it was an evolutionary advantage to be conscious, to be able to make plans, to think about oneself, and we seem to be on the path where we’re emulating those structures in artificial intelligence work.

Yeah, the mirror test is fascinating because only one bird passes it and that is the magpie.

The magpie?

Yeah, and there’s recent research, very recent, that suggests that ants pass it, which would be staggering. It looks like they’ve controlled for so many things, but it is unquestionably a fascinating thing. Of course, people disagree on what exactly it means.

Yeah, what does it mean? It’s interesting that ants pass because ants do form a multi-role complex society. So, is it one of the requirements of a multi-role complex society that you need to be able to pass the mirror test, and understand who you are and what your place is in that society?

Yeah, that is fascinating. I actually emailed Gallup and asked him, “Did you know ants passed the test?” And he’s like, “Really? I hadn’t heard that.” You know, because he’s the originator of it.

The argument against the test goes like this: If you put a red dot on a dog’s paw, the dog knows that’s its paw and it might lick it off its own paw, right? The dog has a sense of self, it knows that’s its foot. And so, maybe all the mirror test is doing is testing to see if the dog is smart enough to understand what a mirror is, which is a completely different thing.

By extension, and again with your qualification that it’s a layman’s viewpoint: I asked you a question about AGI and you launched into a description of consciousness. Can I infer from your answer that you believe that an AGI will be conscious?

You can infer from my answer that, to have a truly artificial general intelligence, I think consciousness is a requirement, or some kind of ability to have freedom in thought direction. I think that is part of the nature of consciousness, or one way of thinking about it.

I would tend to agree, but let me just… Everybody’s had that sensation where you’re driving and you kind of space, right, and all of a sudden you snap to a minute later and you’re like, “Whoa, I don’t have any memory of driving to this spot,” and, in that moment, you merged traffic, you changed lanes, and all of that. So, you acted intelligently but you were not, in a sense, conscious at that moment. Do you think that saying, “Oh, that’s an example of intelligence without consciousness,” is the problem? Like, “No, no you really were conscious all that time,” or is it like, “No, no, you didn’t have, like, some new idea or anything, you just managed off rote.” Do you have a thought on that?

I think it’s true that so much of what we do as beings is managed off rote, but probably a lot of the reason we’re successful as a species is because we don’t just go off rote. Like, if someone had driven in front of you or the phone had rung, if any of these things had happened, that would have created an event important enough to be stored in short-term memory while you were driving, and you would have moved into a different mode of consciousness. I think the human brain takes in a massive amount of input in some ways but filters it down to just this, quote unquote, “stream of consciousness” of experiences that are important, or things that are happening. And it’s that filter of consciousness, or the filter of the brain, that puts you in the moment where you’re dealing with the most important thing. That, in some ways, characterizes us.

When we think about artificial intelligence and how machines experience the world, I mean, we have five sensory inputs falling into our brains and our memories, but a machine can have, yes, vision and sound, but also GPS, infrared, just some random event stream from another machine. There are all of these inputs that act in some ways as sensors for an artificially-intelligent machine, and they are, in some ways, much richer and more diverse, or could be. And that governor, that thing that filters all of that down, figures out what the objective is for the artificial intelligence machine, takes the right inputs, does the right pattern matching, and does the right thinking, is going to be incredibly important for achieving, I think, artificial general intelligence. Where it knows how to direct, if you like, its thoughts, and how to plan and how to do and how to act, how to think about solving problems.

This is fascinating to me, so I have just a few more questions about AGI, if you’ll just indulge me for another minute. The range of time that people think it’s going to take us to get it, by my reckoning, is five years at the soonest and five hundred at the longest. Do you have any opinion of when we might develop an AGI?

I think I agree with five years at the soonest, but, you know, honestly one of the things I struggle with as we think about that is, who really knows? We have so little understanding of how the brain actually works to produce intelligence and sentience that it’s hard to know how rapidly we’re approaching or replicating it. It could be that, as we build smarter and smarter non-general artificial intelligence, eventually we’ll just wander into a greater understanding of consciousness or sentience by accident, just because we built a machine that emulates the brain. That’s, in some ways, a plausible outcome; like, we’ll get enough computation that eventually we’ll figure it out or it will become apparent. I think, if you were to ask me, that’s ten to fifteen years away.

Do you think we already have computers fast enough to do it, we just don’t know how to do it, or do you think we’re waiting on hardware improvements as well?

I think the primary improvements we’re waiting on are software, but software advances are often constrained by the power and limits of the hardware we’re running on. Until you see a more advanced machine, it’s hard to practically imagine or design a system that could run upon it. The two things improve in parallel, I think.

If you believe we’ll, maybe, have an AGI in fifteen years, that if we have one it could very easily be conscious, and that if it’s conscious it would presumably have a will, are you one of the people who worries about that? The superintelligence scenario, where it has different goals and ambitions than we have?

I think that’s one of many scenarios that we need to worry about. In our current society, any great idea, it seems, is weaponizable in a very direct way, which is scary. The way that we’re set up, locally and globally, is intensely competitive, where any advantage one could eke out is then used to dominate, or take advantage of, or gain advantage from our position against our fellow man in this country and other countries, globally, etcetera.

There’s quite a bit of fear-mongering about artificial general intelligence, but artificial intelligence does give the owner of those technologies, the inventor of those technologies, innate advantages in terms of using them for great gain. I think there are many stages along the way where someone can very competitively put those technologies to work without even achieving artificial general intelligence.

So, yes, there is the moment of singularity, when artificial general intelligence machines can invent machines that are considerably faster in ways that we can’t understand. That’s a scary thought, and technology may be out-thinking our moral and philosophical understanding of its implications, but at the same time some of the things that we’re building now—that, like you said, are just fifty percent better or seventy-seven percent smarter—could actually, through weaponization or just through extreme mercantile advantage-taking, have serious effects on the planet, humankind, etcetera. I do believe that we’re in an AI arms race, and I do find that a little bit scary.

Vladimir Putin just said that he thinks the future is going to belong to whoever masters AI, and Elon Musk recently said, “World War Three will be fought over AI.” It sounds like you think that’s maybe a more real-world concern than the rogue AGI.

I think it is, because we’ve seen tremendous leaps in the capability of technology just in the last five years, certainly in the last five to ten years. More and more people are working in this problem domain; that number must be doubling every six months, or something ridiculous like that, in terms of the number of people who are starting to think about AI and the number of companies deploying some kind of the technology. As a result, there are breakthroughs that are going to begin happening, either in public academia or, more likely, in private labs, that will be leverageable by the entities that create them in really meaningful ways.

I think by one count there are twenty different nations whose militaries are working on AI weapons. It’s hard to get a firm grip on it because: A, they wouldn’t necessarily say so, and, B, there’s not a lot of agreement on what the term AI means. In terms of machines that can make kill decisions, that’s probably a reasonable guess.

I think one shift that we’ve seen, and, you know, this is just anecdotal and my own opinion, is that traditionally so much of the base research in computer science or artificial intelligence has been done in academia, basically publicly, publishable, and for the public good. But if you look at artificial intelligence now, the greatest minds of our generation are not necessarily working in the public sphere; they’re locked up, tied up in private companies, generally very, very large companies, or they’re working in the military-industrial complex. I think that’s a shift; that’s different from scientific discovery, medical research, all these things in the past.

The closed-door nature of this R&D effort, and the fact that it’s becoming almost a national or nationalistic concern, with very little… You know, there are weapons treaties, there are nuclear treaties, there are research weapons treaties, right? I think we’re only just beginning to talk about AI treaties and AI understanding, and we’re a long way from any resolution, because the potential gains for whoever goes first, or makes the biggest discovery first, makes the great breakthrough first, are tremendous. It’s a very competitive world, and it’s going on behind closed doors.

The thing about atomic bombs is that they were hard to build; even if you knew how to build one, it was hard. AI won’t be that way. It’ll fit on a flash drive, or at least the core technology will, right?

I think building an AGI, some of these things require web-scale computational power that currently, based on today’s technology, requires data centers, not flash drives. So there is a barrier to entry to some of these things, but, that said, the great breakthrough more than likely will be an algorithm or some great thinking, and that will, yes, indeed, fit on a modern flash drive without any problem.

What do you think of the OpenAI initiative, which says, “Let’s make this all public and share it all. It’s going to happen, we might as well make sure everybody has access to it and not just one party”?

I work at a SaaS company; we build products to sell, and through open-source technologies and cloud platforms we get to stand on the shoulders of giants, use amazing stuff, shorten our development cycles, and do things that we would never be able to do as a small company founded in Copenhagen. I’m a huge believer in those initiatives. I think that part of the reason open source has been so successful on the problems of computer science and computer infrastructure is that, to a certain extent, there’s been a maturation of thought, where not every company believes its ability to store and retrieve its data quickly is a defining characteristic for it. You know, I work at Zendesk and we’re in the business of customer service software; we build software that tries to help our customers have better relationships with their customers. It’s not clear that having the best cloud hosting engine or being able to use NoSQL technology is something that’s of tremendous commercial value to us.

We believe in open source, so we contribute back, and we contribute because there’s no perceived risk of commercial impairment in doing that. This isn’t our core IP; our core IP is around how we treat customers. While I’m a huge believer in the OpenAI initiative, I don’t think that same belief is necessarily widespread among the parties investing at the highest levels in AI research, at the forefront of thinking. For some of those entities, there’s a clear notion that they can gain tremendous advantage by keeping anything that they invent inside of the walled garden for as long as possible and using it to their advantage. I would dearly love that initiative to succeed. I don’t know that right now we have the environment in which it will truly succeed.

You’ve made a couple of references to artificial intelligence mirroring the human brain. Do you follow the Human Brain Project in Europe, which is taking that approach? They’re saying, “Why don’t we just try to replicate the thing that we know can think already?”

I don’t really. I’m delighted by the idea, but I haven’t read too much about it. What are they learning?

It’s expensive, and they’re behind schedule, but it’s been funded to the tune of one and a half billion dollars; I mean, it’s a really serious effort. The challenge is going to be if it turns out that a neuron is as complicated as a supercomputer, that things go on at the Planck level, that it is this incredible machine. Because I think the hope is that if you take it at face value, that is something maybe we can duplicate, but if there’s other stuff going on it might be more problematic.

As an AI researcher yourself, do you ever start with the question, “How do humans do that?” Is that how you do it when you’re thinking about how to solve a problem? Or do you not find a lot of corollaries, in your day to day, between how a human does something and how a computer would do it?

When we’re thinking about solving problems with AI, we’re at the basic level of directed AI technology, and what we’re thinking about is, “How can we remove these tasks that humans perform on a regular basis? How can we enrich the lives of, in our case, the person needing customer service or the person providing customer service?” It’s relatively simple, and so the standard approach is to, yes, look directly at the activities of a person, and look at ways that you can automate them and take advantage of the benefits that the AI is going to buy you. In customer-service land, you can easily remember every interaction that every customer has had with a particular brand, and then you can look at the outcomes that those interactions have had, good or bad, through the satisfaction, the success, and the timing. And you can start to emulate those things, remove friction, remove the need for people altogether, and build out really interesting things to do.

The primal way to approach the problem is really to look at what humans are doing, and try to replace them, certainly where it’s not their cognitive ability that is necessarily to the fore or being used, and that’s something that we do a lot. But I think that misses the magic, because one of the things that happens with an AI system can be that it produces results that are, to use Arthur C. Clarke’s phrase, “sufficiently advanced to be indistinguishable from magic.” You can invent new things that were not possible because of the human brain’s limited bandwidth, because of our limited memories and other things. You can basically remember all experiences all at once and then use those to create new things.

In our own work, we realize that it’s incredibly difficult, with any accuracy, given an input from a customer, a question from a customer, to predict the ultimate customer satisfaction score, the CSAT score that you’ll get. But it’s an incredibly important number for customer service departments, and knowing ahead of time that you’re going to have a bad experience with this customer based on signals in the input is incredibly useful. So, one of the things we built was a satisfaction-prediction engine, using various models, that allows us to basically route tickets to experts and do other things. There’s no human who sits there and gives out predictions on how a ticket is going to go, how our experience with the customer is going to go; that’s something that we invented because only a machine can do that.
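
To make that concrete, here is a minimal sketch of what a satisfaction-prediction model can look like, assuming a history of tickets labeled with the CSAT rating they eventually received. The pipeline, toy data, threshold, and queue names below are illustrative assumptions, not Zendesk’s actual implementation.

```python
# Hypothetical sketch of CSAT prediction: train a simple text classifier on
# historical tickets labeled with the satisfaction rating they received,
# then use the predicted probability of a bad outcome to route risky
# tickets to an expert queue. All names, data, and thresholds are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: ticket text paired with the CSAT rating it received.
tickets = [
    "Thanks, the password reset link worked perfectly!",
    "This is the third time I've asked and nobody has answered.",
    "Where can I download last month's invoice?",
    "The update wiped my data and support keeps ignoring me.",
]
ratings = ["good", "bad", "good", "bad"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tickets, ratings)

def route(ticket_text, threshold=0.5):
    """Route tickets predicted to end badly to senior agents."""
    proba = model.predict_proba([ticket_text])[0]
    p_bad = proba[list(model.classes_).index("bad")]
    return "expert_queue" if p_bad >= threshold else "standard_queue"

print(route("Still waiting on my refund, this is unacceptable"))
```

A production system would of course draw on far richer signals, such as channel, customer history, and response times, and a stronger model trained on millions of rated interactions, but the routing idea is the same.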

So, yes, there is an approach to what we do which is, “How can we automate these human tasks?” But there’s also an approach of, “What is it that we can do that is impossible for humans that would be awesome to do?” Is there magic here that we can put in place?

In addition to there being a lot of concern about the things we talked about, about war and about AGI and all of that, in the narrow AI, in the here and now, of course, there’s a big debate about automation, and what these technologies are going to do for jobs. Just to, kind of, set the question up, there are three different narratives people offer. One is that automation is going to take all of the really low-skilled jobs, and there’ll be a group of people who are unable to compete against machines, and we’ll have, kind of, permanent unemployment at the level of the Great Depression or something like that. Then there’s a second camp that says, “Oh, no, no, you don’t understand, it’s far worse than that. They’re going to take everybody’s job, everybody, because there’ll be a moment when the machine can learn something faster than a human.” Then there’s a third one that says, “No, with these technologies, people just take the technology and use it to increase their own productivity, and they don’t actually ever cause unemployment.” Electricity and mechanization and all of that didn’t increase unemployment at all. Do you believe one of those three, or maybe a fourth one? What do you think about the effects of AI on employment?

I think the parallel that’s often drawn is a parallel to the Industrial Revolution. The Industrial Revolution brought us a way to transform energy from one form into another, and allowed us to mechanize manufacturing, which altered the nature of society from agrarian to industrial, which created cities, which had this big transformation. The Industrial Revolution took a long time, though. It took a long time for people to move from the farms to the factories; it took a long time to transform the landscape, comparatively. I think that one of the reasons there’s trepidation and nervousness around artificial intelligence is that it doesn’t seem like it will take that long. It’s almost fantastical science fiction to me that I get to see different vendors’ self-driving cars mapping San Francisco on a regular basis, and I see people driving around with no hands on the wheel. I mean, that’s extraordinary. I don’t think even five years ago I would have believed that we would have self-driving cars on public roads; it didn’t seem like a thing, and now it seems like automated driving machines are not very far away.

If you think about the societal impacts of that, well, according to an NPR study in 2014, I think, truck driving is the number one job in twenty-nine states in America. There are literally millions of driving jobs, and I think it’s one of the fastest growing categories of jobs. Things like that will all disappear, or to a certain extent will disappear, and it will happen rapidly.

It’s really hard for me to subscribe to the… Yes, we’re improving customer service software here at Zendesk in such a way that we’re making agents more efficient, and they’re getting to spend more time with customers and upping the CSAT rating, and consequently those businesses have better Net Promoter Scores and they’re thriving. I believe that that’s what we’re doing and I believe that that’s what’s going to happen. But if we can automatically answer ten percent of a customer’s tickets, that means you need ten percent fewer agents to answer those tickets, unless they’re going to invest more in customer service. The profit motive says that there needs to be a return-on-investment analysis between those two things. So, in my own industry I see this, and across society it’s hard not to believe that there will be a fairly large-scale disruption.

I don’t know that, as a society, we’re necessarily in a position to absorb that disruption yet. I know in Finland they’re experimenting with a guaranteed minimum income, to take away the stress of having to find work or qualify for unemployment benefits and all these things, so that people have a better quality of life and can hopefully find ways to be productive in society. Not many countries are as progressive as Finland. I would put myself in the “very nervous about the societal effects of large-scale removal of sources of employment” camp, because it’s not clear what the alternative structures are that are set up in society to find meaningful work and sustenance for the people losing those jobs. We’ve been on a trajectory since, I think, the 1970s, of polarization in society and growing inequality. And I worry that the large-scale creation of an unemployed mass could be a tipping point. I take a very pessimistic view.

Let me give you a different narrative on that, and tell me what’s wrong with it, how the logic falls down. Let’s talk just about truck drivers. It would go like this: “That concern that you’re going to have all these unemployed truck drivers en masse is beyond ill-founded. To begin with, the technology’s not done, and it will still need to be worked out. Then the legislative hurdles will have to be worked out, and that’ll be done gradually, state by state. Then, there’ll be a long period of time when law will require that there be a driver, and self-driving technology will kick in when it senses the driver’s making a mistake, but there’ll be an override; just like we can fly airplanes without pilots now but we insist on having a pilot.

“Then, the driving part of the job is actually not the whole job, and so, like any other job, when you automate part of it, like the driving, that person takes on more things. Then, on top of that, the existing equipment isn’t retrofitted for it, so you’re going to have to figure out how to retrofit all this stuff. Then, on top of that, having self-driving cars is going to open up all kinds of new employment, and because we talk about this all the time, there are probably fewer people going into truck driving, and there are people who retire from it every year. And, just like everything else, it’s going to work itself out gradually as the economy reallocates resources.” Why do you think truck driving is this big tipping-point thing?

I think driving jobs in general are a tipping point because, yes, there are challenges to rolling it out, and obviously there are legislative challenges, but it’s not hard to see interstate trucking going first, and then drivers meeting those trucks and driving them through urban areas, and various things like that happening. I think people are working on retrofit devices for trucks. What will happen is truck drivers who are not actually driving will be allowed to work more hours, so you’ll need fewer truck drivers. In general, as a society, we’re shifting from going and getting our stuff to having our stuff delivered to us, and so the voracious appetite for more drivers, in my opinion, is not going to abate. Yeah, the last mile isn’t driven by trucks; it’s smaller delivery vehicles, or things that can be done by smarter robots, etcetera.

I think the challenges you communicated are going to be moderating forces on the disruption, but when something reaches the tipping point of acceptance and cost acceptability, change tends to be rapid if driven by the profit motive. I think that is what we’re going to see. The efficiency of Amazon, and the fact that every product is online in that marketplace, is driving a tremendous change in the nature of retail. I think the delivery logistics behind that are going to go through a similar turnaround, and the companies driving it are going to be very aggressive about it, because the economics are so appealing.

Of course, again, the general answer to that is that when technology does lower the price of something dramatically—like you’re talking about the cost of delivery, self-driving cars would lower it—that that in turn increases demand. That lowering of cost means all of a sudden you can afford to deliver all kinds of things, and that ripple effect in turn creates those jobs. Like, people spend all their money, more or less, and if something becomes cheaper they turn around and spend that money on something else which, by definition, therefore creates downstream employment. I’m just having a hard time seeing this idea that somehow costs are going to fall and that money won’t be redeployed in other places that in turn creates employment, which is kind of two hundred and fifty years of history.

I wouldn’t necessarily say that as costs fall in industries all of those profits are generally returned to the consumer, right? Businesses in the logistics and retail space generally run at low margins; retailers run at a two percent margin, right? So there’s room for those people to optimize their own businesses and not necessarily pass all those benefits down to the consumer. Obviously, there’s room for disruption, where someone will come in, shave the margins back down, and pass on those benefits. But, in general, you know, online banking is more efficient and we prefer it, and so there are fewer people working in banking. Conversely, when banks shifted to ATMs, banking became much more a part of our lives, and more convenient, so we ended up with more bank tellers, because personal service was a thing.

I think that there just are a lot of driving jobs out there that don’t necessarily need to be done by humans, but we’ll still be spending the same amount on getting driven around, so there’ll be more self-driving cars. Self-driving cars crash less, hopefully, and so there’s less need for auto repair shops. There’s a bunch of knock-on effects of using that technology, and for certain classes of jobs there’s clearly going to be a shift where those jobs disappear. There is a question of how readily the people doing those jobs are able to transfer their skills to other employment, and is there other employment out there for them.

Fair enough. Let’s talk about Zendesk for a moment. You’ve alluded to a couple of ways that you employ artificial intelligence, but can you just kind of give me an idea of, like, what gets you excited in the morning, when you wake up and you think, “I have this great new technology, artificial intelligence, that can do all these wondrous things, and I want to use it to make life better for the people in charge of customer relationships”? Entice me with some things that you’re thinking of doing, that you’re working on, that you’ve learned, and just kind of tell me about your day-to-day.

So many customer service inquiries begin with someone who has a thirst for knowledge, right? Seventy-six percent of people try to self-serve when trying to find the answer to a question, and many people who do get on the phone are online at the same time, trying to discover the answer to that problem. I think often there’s a challenge in terms of having enough context to know what someone is looking for, and having that context available to all of the systems that they’re interacting with. I think technology, not just artificial intelligence technology, but artificial intelligence especially, can help us pinpoint the intention of users, because the goal of the software that we provide, and the customer service ethos that we have, is that we need to remove friction.

The thing that really generates bad experiences in customer service interactions isn’t that someone said no, or we didn’t get the outcome that we want, or we didn’t get our return processed or something like that, it’s that negative experiences tend to be generated from an excess of friction. It’s that I had to switch from one channel to another, it’s that I had to repeat myself over and over again because everyone I was talking to didn’t have context on my account or my experience as the customer and these things. I think that if you look at that sort of pile of problems, you see real opportunities to give people better experiences just by holding a lot more data at one time about that context, and then being able to process that data and make intelligent predictions and guesses and estimations about what it is they’re looking for and what is going to help them.

We recently launched a service we call “Answer Bot,” which uses deep learning to look at the data we have when an email comes in and figure out, quite simply, which knowledge-base article is going to best serve that customer. It’s not driving a car down to the supermarket; it sounds very simple, but in another way these are millions and millions of experiences that can be optimized over time. Similarly, the people on the other side of that conversation generally don’t know what it is that customers are searching for or asking for, for which there is no answer. And so, by using that same analysis of the incoming queries and the knowledge bases we have, we can give them cues as to what content to write and, sort of, direct them to build a better experience and improve their customer experience in that way.
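
The retrieval idea behind a service like this can be sketched in a few lines. Answer Bot’s actual model is proprietary, so the sketch below stands in off-the-shelf sentence embeddings, hypothetical articles, and an assumed score floor purely for illustration.

```python
# Illustrative sketch of knowledge-base retrieval: embed the incoming
# question and every article, suggest the closest article, and treat
# low-scoring questions as cues for content that needs to be written.
# The model choice, articles, and threshold are assumptions, not Zendesk's.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

articles = {  # hypothetical knowledge-base content
    "Resetting your password": "Use the 'Forgot password' link on the sign-in page...",
    "Updating billing details": "Open Settings, then Billing, to change your card...",
    "Exporting your data": "Admins can export everything as CSV from the dashboard...",
}
titles = list(articles)
article_vecs = model.encode(list(articles.values()), convert_to_tensor=True)

def suggest(question, min_score=0.4):
    """Return the best-matching article title, or None if nothing is close."""
    q_vec = model.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(q_vec, article_vecs)[0]
    best = int(scores.argmax())
    # Questions that match nothing well become cues for new content.
    return titles[best] if float(scores[best]) >= min_score else None

print(suggest("How do I change the credit card on my account?"))
```

Questions that fall below the score floor are exactly the cues described above: things customers keep asking for which no good article yet exists.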

I think from an enterprise software builder’s point of view, artificial intelligence is a tool that you can use at so many points of interaction between brand and consumer, between the two parties basically on either side of any transaction inside of your knowledge base. It’s something that you can use to shave off little moments of pain, and remove friction, and apply intelligence, and just make the world seem frictionless and a little smarter. Our goal internally is basically to meander through our product in a directed way, finding those experiences and making them better. At the end of the day we want someone who’s deploying our stuff and giving a customer experience with it, and we want the consumers experiencing that brand, the people interacting with that brand, to be like, “I’m not sure why that was good, but I did really enjoy that customer service experience. I got what I wanted, it was quick. I don’t know how they quite did that, but I really enjoyed it.” We all have had those moments in service where someone just totally got what you were after and it was delightful because it was just smooth and efficient, good, and no drama—prescient almost.

I think what we are trying to do, what we would like to do is adapt all of our software and experiences that we have to be able to be that anticipatory and smart and enjoyable. I think the enterprise software world—for all types of software like CRM, ERP, all these kind of things—is filled with sharp edges, friction, and pain, you know, pieces of acquisitions glued together, and you’re using products that represent someone’s broken dreams acquired by someone else and shoehorned into other experiences. I think, generally, the consumer of enterprise software at this point is a little bit tired of the pain of form-filling and repetition and other things. Our approach to smoothing those edges, to grinding the stone and polishing the mirror, is to slowly but surely improve each of those experiences with intelligence.

It sounds like you have a broad charter to look at, kind of, all levels of the customer interaction and look for opportunity. I’m going to ask you a question that probably doesn’t have an answer, but I’m going to try anyway: would you prefer to find places where there was an epic fail, where it was so bad it was just terrible and the person was angry and it was just awful, or would you rather fix ten instances of a minor annoyance, where somebody had to enter data too many times? I mean, are you working to cut the edges off the bad experiences, or just generally make the system phase-shift up a little bit?

I think, to a certain extent, I like to think of that as a false dichotomy, because with the person who has a terrible experience and gets angry, chances are there wasn’t a momentary snap; there was a drip feed of annoyances that took them to that point. So our goal, when we think about it, is to pick out the most impactful rough edges that cumulatively are going to engulf someone in the red mist of homicidal fury on the end of the phone, complaining about their broken widget. I think most people do not flip their anger bit over a tiny infraction, or even over a larger infraction; it’s over a period, it’s a lifetime of infractions, a lifetime of inconveniences, that gets you to that point, or the lifetime of that incident and that inquiry and how you got there. We’re generally, sort of, emotionally-rational beings who’ve been through many customer service experiences, so exhibiting that level of frustration generally requires a continued and sustained effort on the part of a brand to get us there.

I assume that you have good data to work off of. I mean, there are good metrics in your field and so you get to wade through a lot of data and say, “Wow, here’s a pattern of annoyances that we can fix.” Is that the case?

Yeah, we have an anonymized data set that encompasses billions of interactions. And the beauty of that data set is that they’re rated, right? They’re rated either implicitly, by the time it took to solve the problem, or by an explicit rating, where someone said that was a good interaction or that was a bad interaction. When we did the CSAT prediction, we were really leveraging the millions of scores that we have that tell us how customer service interactions went. In general, though, that’s the data asset we have available to us, that we can use to train on, learn from, query, and analyze.

Last question: you quoted Arthur C. Clarke, so I have to ask you, is there any science fiction about AI that you enjoy or like or think could happen? Like Her or Westworld or I, Robot or any of that, even books or whatnot?

I did find Westworld to be probably the most compelling thing I watched this year, and just truly delightful in its thinking about memory and everything else, although it was, obviously, pure fiction. I think Her was also just, you know, a disturbing look at the way that we will be able to identify with inanimate machines and build relationships with them; it was all too believable. I think you quoted two of my favorite things, but Westworld was so awesome.

It, interestingly, had a different theory of consciousness from the bicameral mind, not to give anything away.

Well, let’s stop there. This was a magnificently interesting hour, I think we touched on so many fascinating topics, and I appreciate you taking the time!

Adrian McDermott: Thank you, Byron, it was wonderful to chat with you too!

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
