Voices in AI – Episode 34: A Conversation with Christian Reilly


In this episode, Byron and Christian talk about AGI, AI assistants, transfer learning, ANI and more.


Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today our guest is Christian Reilly. He is the Vice President of Global Product and Technology Strategy over at Citrix. Before joining Citrix, Reilly was at Bechtel Corporation for eighteen years, where he was responsible for the strategic planning, enterprise architecture, and innovation program within the corporate information systems and technology group. Welcome to the show, Christian.

Christian Reilly: Thanks Byron, great to be here, thanks for having me.

I love to start off with a simple question, which isn’t really so simple: what is artificial intelligence?

So, it is very interesting actually. I mean, when I think about artificial intelligence, I kind of think about it in two different ways. There is the general intelligence, which is very broad and, I’d suggest, is a technology way of trying to re-create the human brain. And then we have this other idea and notion, which is artificial narrow intelligence, which is really about breaking down what I consider to be relatively mundane, programmable, repetitive tasks that are much simpler in concept, but are effective ways of either augmenting or, kind of, replacing the humans doing certain tasks. That’s kind of the way that I like to look at it.

Well, I think it’s really good. So let’s talk about those as two separate things, and let’s start with general intelligence. But before we start I want to ask you, do you believe that general intelligence is an evolutionary development from narrow AI? Like, does it just get a little broader, a little broader, a little broader, and that’s how it becomes general? Or is an AGI a completely different technology, one that looks completely different, and we haven’t even really started building it yet?

Well, it’s a great question, Byron. The first thing to, perhaps, realize is that when we talk about AI, it’s really not that new. I think the instantiation of the current example of it is new. As the technologies have become easier to adopt and easier to consume, I think that’s given a whole new birth to the area. I guess it’s been around since the 50s and the 60s, the ideas of science fiction back then, that, you know, robots and computers would take over and think for us.

And then if you go back to Asimov and the whole of I, Robot and the very basic principles that, you know, a machine should never harm a human and those kinds of things. It feels a little bit like science fiction, but I think it’s very real. I think the “general” side of it, I don’t know whether we’ll ever really truly get to the full scope of general intelligence the way we like to think about it. Which is, effectively, that a computer or a series of computers can be programmed and learn to feel emotion and to have a conscience and those kinds of things that we have as humans by hundreds of thousands of years of evolution. The narrow thing to me seems much more like an automation angle, and I’m not sure that you would ever start with automating tasks and you suddenly become super human.

I think it’s highly likely that as humans we figure out the things that are off-loadable, if you like, to ANI that can be repeated, that can be, in fact, more efficient, more effective, and allow us to go off and think about different problems in different ways and leave that, kind of, automation element to it. So, I think they’re, kind of, two completely different things. I just have a feeling that the AGI or the general intelligence is a much broader aspect. Whether we’ll ever get there, I don’t know. I’m sure statistically we could say that computers are capable of making all of the decisions. They can add up better than we can, but they don’t understand the reason that they’re adding up, right?

So whether they’re performing a simple additive task, or they’re performing a household budget, and, say, if I have my household budget and I either can afford or can’t afford that extra bit, then there’s an emotion attached to that that computers just don’t understand.

That’s really fascinating. I’ve had fifty-something guests on the show as of this taping, and I think you’re only the fifth one to say we may not be able to build a general intelligence. That really surprises me, that there are so few, because we don’t even understand human intelligence, we don’t understand the human mind, we don’t understand consciousness. All of these things, and yet there seems to be, at least from most of my guests, this basic assumption that we can build this, we will build it, and we may build it very soon. So tell me, what would be the argument that we cannot build a general intelligence, from your standpoint?

Yeah, I mean I just think there are just some things that even with the best machine learning techniques, there are so many emotional elements to the way that our brain functions, plus the fact that we’ve got these hundreds of thousands of years of evolution. There are just certain things that I don’t think it’s possible we can actually do with all the technology.

I think it’s fair to say that with the whole of AGI, even though it may be conceptually fifty, sixty, or maybe even approaching seventy years old, fundamentally, my heart tells me that even the smartest robot with the best of AGI capability can’t emulate a human being. If you think about the number of things that we have to process in the context of making decisions, we’re not doing these in sequence, right, we’re kind of doing these all at once; we’re wrestling with this idea and this “what if.” We can look forward, and we can look backwards with experience and emotion and learning. To me, it just feels that there’s something about the human psyche that I don’t think we’ll ever replicate.

It’s interesting, the roboticist Rodney Brooks says that there’s some basic fundamental thing about life that we don’t understand. He calls it “the juice.” And he says that if you put an animal in a cage, the animal is desperate to get out; it scratches and it gets more and more frantic. But if you put a robot in a cage and you program it to get out, it just kind of goes through the motions, and it lacks this “juice,” and we don’t really know what that juice is. So, it sounds like you think there’s some intellectual juice, some knowledge juice, that we don’t understand, that we have and that a machine may or may not be able to have.

I think that’s a great phrase actually, the juice. I mean, I think it is absolutely that. If you were to put a human in a room, you know, the number of calculations that go through the human’s mind, and not just how I’m going to get out of here, but if I don’t then what’s the impact on my family, what’s the impact in the people that love me; there’s an emotional set of criteria that I don’t think—I mean, yeah, we can program it, of course—but I think that “juice” is something that I don’t know how we would replicate.

And, of course, with our ANI, you could argue a similar thing. Is a bot or a digital assistant capable of emotion when dealing with an irate customer? That’s an interesting question. I’ve never seen evidence of it, because they’re really not programmed to do that. It’s a very small set of functions, you know. Bots are a great example of ANI for either interacting or getting recommendations, but when they give you the recommendation for the restaurant, as an example, is that based on their personal experience, or is that based on coalescing all the data that they’ve been able to access and synthesize around other people’s opinions? I think, generally, if you are going to predicate something upon other people’s opinions and not your own, then I think that’s where the barrier is between ANI and AGI.

So, let’s switch lenses for a moment and talk about narrow intelligence. If somebody asked you where we are at, how would you assess our progress in building narrow AI at this moment?

I think we’re in a great time. Again, it depends how you classify it, but I think if you take—bots are a great example—some of the popular digital assistants that are out there, whether that’s Siri, Cortana, Google, Samsung, all the big guys have made huge investments in that because they see, obviously, voice and the natural language processing, and then the machine learning that’s behind that as a key factor to engage the next generation of the human computer interface. So, I think we’re actually in pretty good shape.

Again, whether you take simple things like integrated voice response and say, “Okay, is that really ANI or is it not?” Yeah, it is a form of ANI, but it’s a very small, almost like a closed-loop, system that will only respond in the way that it’s programmed—so, press one for this, press two for that. In a way that’s kind of a mechanism for replacing humans. But I think the things that have a much more conditional background—when you’re asking a question about where’s the best restaurant, or how should I get to the nearest tube station, or what’s the best way to get from A to B—that’s really a different form of ANI. And I think that’s much more about building up the learnings, and the statistical analysis, and interpreting that in the best way so that it can give you an intelligent response, versus press one for this, press two for that. You are, kind of, automating it in some respects, and arguably that’s a good approach for some customer service angles, but when we think about the modern-day digital assistants, the modern-day bots, I think we’re actually making pretty good progress.

Now, the question is—and that’s okay, it’s very consumer-centric today—has that really found its way into the enterprise? Certainly not that I’ve seen. I mean, there are some elements that are growing within enterprise use cases, or certain other areas of ANI that are not always about bots, of course. But I think the key to all of that is really the arrival of well-understood machine learning techniques that provide the algorithms powering the analysis of this data, and those are yielding some particularly interesting results in different areas.

Do you think it’s a mistake to personify these devices? Taking your view of these devices (and I can’t say any of their names because I have them all on my desk next to me and they’ll perk up here): Amazon has named their device, Apple has named their device; they’ve given them human names. Google, interestingly, hasn’t; it’s called the Google Assistant. Do you think that it’s a mistake, and does it set false expectations, if you make these things sound like people and give them names and all of that? Is it, maybe, setting the bar too high, or setting them up to constantly be failing because they’re never really going to be all that great at that?

Well, I guess it’s interesting to ask, do I come from a consumer angle or do I come from a business angle? So, I mean, if you think about the relationships that you have today, we all have nicknames for people, we all have real names of course, but to our nearest and dearest, we call them different names and we have different emotional attachments to those names. And if you think about it going back to some of the early robots that we saw—the Japanese have been brilliant at this, of course—over the years, they’ve always had cutesy names. So, whether you were talking to a fixed device that was, quote, “human on the other end,” or whether you were interacting with a cute robot that would do certain things when you spoke to it, I think there’s always been a need to create some kind of connection with that robot or that voice.

I think it’s pretty interesting. Where the Bixby name comes from, I don’t know, but it’s pretty interesting what Samsung did with that. Obviously we’ve got Siri, Cortana, and other things, and then Google came up with “Assistant,” as you say, so, maybe there’s a master plan from Google to be much more about business, over time, which would be kind of ironic coming from a consumer search company.

I mean, I think maybe it’s another one of these things where, when you think about it in terms of potential applicability further down the line (and this is one of the things I always hold near and dear), I can imagine this playing out in, let’s say, facilities for the elderly, as an example. Unfortunately, these people may be in sheltered accommodation, or whatever it is, and need to connect with somebody or something, maybe to ask for some help or ask for shopping to be delivered. Wouldn’t it be great if that person felt a connection to a device, whether that device looks like a cylinder on their table or whether it’s a small robot? Maybe that, again, is part of this question around emotional support and emotional connection, which is effectively using the technology for a great result—making people feel better about the world around them.

I want to come back to that, but before we get off on another topic, you have to think that Star Wars would be different if C-3PO were named Gary and R2-D2 were named Sam. You know, “that’s Gary and Sam over there.”

I guess my mind immediately goes to the story of the robot in Japan that they were training to be able to navigate a mall. It was programmed so that when it came up to people, it would ask them to move, and if they didn’t move it just tried to go around them. And what happened was, kids would mess with it. They would jump in front of it when it tried to move, and then they would grow increasingly violent especially if there were multiple kids around. And so, the roboticists had to program it to say, if you see two or more small people, i.e. children, with no large people around, then turn around and run for a large person because that will protect you from the small people. 

And the interesting thing to me was when they asked the children, “Did you think that robot acted like a machine or an animal, or was it a human?” they overwhelmingly said they thought it was human. And then when they asked, “Do you think it was suffering when you were hitting it with your water bottles and doing all that?” the majority of them said, “Yes, I thought it was feeling distress.” And so, one wonders if the more we make these machines like people, the more we, in essence, cheapen what it is to be people. Do you think there’s any danger of that, or am I just off in left field?

You know, I mean, it’s a good question. I think maybe that strikes a little bit, Byron, to the heart of the question about how we teach these things to learn. Because, again, going back to some of the concepts around the personalization element of it, the unsupervised learning techniques that are at the core of some of the AI and core machine learning concepts—both unsupervised and predictive learning—are intended to try and emulate the way that humans, and the animals in the example you gave earlier, learn.

Typically, we learn in a very unsupervised manner, by immersing ourselves in the world around us, and watching how it works, and then looking at how our parents or grandparents and other people in our close communities react to certain things. So there’s a very interesting difference, I think, between that and supervised learning, which is: I’m going to tell you a thousand times that this is a car, until you understand that this is a car. So it gets to be quite interesting, the differences between the learnings themselves.
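As a rough, purely illustrative sketch of the distinction Christian is drawing (supervised learning works from labelled examples, while unsupervised learning just groups what it sees), a minimal Python example might look like the following; the toy dataset and the model choices are my own assumptions, not anything discussed in the conversation.

    # Supervised vs. unsupervised learning on the same toy data (illustrative only).
    from sklearn.datasets import make_blobs
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    # 300 points drawn from three clusters; y holds the "true" labels.
    X, y = make_blobs(n_samples=300, centers=3, random_state=42)

    # Supervised: we show the model examples *with* labels ("this is a car")
    # until it can predict the label for new points.
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("supervised prediction:", clf.predict(X[:1]))

    # Unsupervised: no labels at all; the model groups similar points together,
    # the way we absorb structure from the world around us.
    km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
    print("unsupervised cluster:", km.predict(X[:1]))

The supervised model only knows what to call things because it was told, over and over; the clustering model discovers the groups on its own but has no names for them.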

But do I think we’re in danger? It’s interesting, you know, because I’m sure that there are elements of humanity where the perpetrators of those same things—I’m going to hit you with a bottle or whatever else—draw no distinction between hitting a person and hitting the robot. But maybe that’s a failure of their own neural programming, that they think it’s okay to do that. So, I actually think, philosophically, from my perspective, that the more we can make technology engaging and the more we can make technology seamless, the more we can weave it into the fabric of what we do every day.

I think it’s fascinating to see, as we mentioned before, the digital assistants and the way people use them. The fact that they’ve become so woven into the fabric now, that there’s not even an app for many of these digital assistants, it’s just kind of built into the fabric, I think that could potentially tell us something about where this goes. And to get that true acceptance over time, I think we have to make these things as engaging as we can, because they’re definitely here to stay. I mean, I don’t see it as a threat to humanity, frankly. I know other guys out there, Professor Hawking as an example, have said that it’s possibly the worst thing that could ever happen to humanity, the advent and the speed at which AI is coming into the world. But, again, I think if we can make it part of the fabric of what we do, and this is going to happen in cars, it’s going to happen in aircraft, it already is. It’s kind of part of what we do.

And to your point, Professor Hawking is talking, not about our PDAs, but about a general intelligence, which you’re, at the very least, saying is very far away.

So, let’s talk about supervised and unsupervised learning for a minute. How far away do you think we are from a general learner that we can just point and say, “Here is the Internet, go learn everything”? I mean, that’s the Holy Grail, isn’t it?

Absolutely, and wouldn’t that be great? But I think you have to step back and appreciate the differences between the different types of machine learning. You know, of course, we say, “Hey, here’s the Internet, go learn everything.” There are stories out there about the length of time it takes to actually provide enough data sets, and to provide those with the right algorithms, so that when you look at a picture of a cat you realize it’s not a birthday cake. That sounds like a silly thing to say, but that’s not an insignificant piece of learning. Then you add in things like anomaly detection, regression, text analytics, and distinguishing between different images—I mean, that’s not easy.

Imagine taking every image that you could find on the internet. Take twenty, thirty, forty, fifty common items; there’s a high probability that pretty much everybody, from a five-year-old kid to a one-hundred-year-old great grandfather, would be able to articulate what they are—and that’s not an insignificant piece of learning for a machine. You’ve got to teach the model fifty different iterations of that until you get to the point where, ninety-nine percent of the time, it can tell you that this is a cat, this is a birthday cake, this is the Eiffel Tower.

To me it’s a very interesting question about structured versus unstructured learning capability, but I think you have to understand just how much goes into that from a model perspective in the background. The things that we take for granted as part of our cognitive world—part of our own AGI as humans, if you want to call it that—are built on this unstructured, unsupervised learning that we have, which is very different from, obviously, the structured learning. It is also something that we take for granted because it’s in our everyday world; it’s the way we do it, we don’t have to program ourselves consciously to learn the differences between things. It would be great, wouldn’t it, to be able to just say, “Hey, here’s everything on the internet, here’s everything in the deep web, this is how you get to it all, go and assimilate all that.” And then when I ask you a question you would be able to go to page 407 of this thesis document that would give you the answer. I think we’re a long way from that.

To your point, I can train a computer to recognize that that’s a unicorn, and a person can recognize it’s a unicorn. Then you say, “Okay, make it a cake in a unicorn shape.” And a human, even if they have never seen a unicorn cake, would say, “Oh, that’s a cake.” And then it’s like, “Okay, make it a cake in a unicorn shape with a piece missing,” and then a human could look at it and say, “Oh, yeah, that’s a unicorn cake with a piece missing,” even if they have never seen one of these. Then it’s like, “Okay, make it stale like it’s been sitting out for a week,” and then a human can look at it and say, “Yep.” So even though we’ve never seen any of those combinations, we’re able to magnificently do transfer learning between all these different things. Is that a breakthrough? Is that a hundred little tricks we’re doing? Or is that just something we’re going to need to figure out for computers, and maybe in a very broad way we can solve that?

Yeah, I think it comes down to, again, the human element versus what we can impart and teach. One of the interesting things from my background, in the world I came from, was the breakthrough in 3D design. So, obviously, I came from an engineering and construction background, and I was around at the advent of 3D design, and one of the ironic things that used to strike me about 3D design is that we as humans see the world in 3D, and yet we always designed in two dimensions, and then we had this breakthrough of 3D design, and we’re designing in exactly the way that we see the world.

So I think there’s a few elements that are part of what we have as humans, which gets really interesting, because with the unicorn cake analogy and the missing piece, does the computer know that that’s a three dimensional object or does it see it in 2D? And if it sees it in 2D, would it have a different interpretation of what we see, because we can see that the cake’s base is this shape and the unicorn should look like this, etcetera.

I don’t know how far away we are, and I don’t know how quickly we could get there. And maybe we start at the “juice” that we talked about earlier. You know, how do you set a baseline, and what is that baseline? Is it to say, you must have the following five things every time you want to make an interpretation of an object or make a decision? And every one of those things changes. So, I don’t know how big or wide that baseline is for us to get to the point where we say, “Hey, if you have these basic building blocks in place, this is how you get to that AGI, this is how you get to represent the human brain in as many use cases as you could think of that we have every day.”

Take that robot that we talked about earlier. Say you’re going past a ladder and you see a guy up there cleaning a window. As humans, we would look at it and say, “Oh, there’s a risk that this guy is going to fall here.” Would the robot stop, and would it have the cognitive power to say, “Actually, I’m going to stop here, and I’m going to make a recommendation that this guy get somebody to hold the bottom of the ladder”? So, these are the kinds of things that I wrestle with and try to figure out, you know: how much of that building block would you have to have to make the rest of it be almost like replicating what we would do naturally?

It’s interesting, because on your 3D vision point, humans only see true 3D for, like, twelve or thirteen feet, right? And then beyond that it’s all visual cues, right? We’re not actually seeing it; we’re kind of faking it in the software of the brain, aren’t we?

Yeah, I think that’s true. But again, that’s why I think it’d be very interesting to see some of the big technology companies out there investing in some 3D things, right? So if you think about what we’ve heard from Apple, as an example, what we’ve heard from Google—you know, I don’t think we’re anywhere yet in terms of our capability to deal with 3D through an augmented or virtual world.

And I think—again, obviously, with some of the machine learning and intelligence in the background—that’s going to open up a significant set of opportunities for design, for construction, from my background, of course, but for tons of other things. I believe we really do see the world in different dimensions. You know, maybe there are even more dimensions that help, like the fourth dimension. If we decide that that’s time, can we actually see things, machine-learned, before they happen? And is it better that they augment what we do as humans, rather than try to replace us?

So again, in the world that I came from, we spoke a lot about different dimensions—two dimensions, three dimensions, and adding different dimensions for imagining massive facilities, oil refineries, airports, power stations or whatever it is—but we never really had the machine learning capabilities in there. So you think of all these things that can be built over time, all the operational data that we have, all the design mistakes that we’ve made; all of that just gets left on the cutting room floor because there’s no mechanism to deal with it. I think, fundamentally, that’s what happened with big data, in my opinion.

Nobody that I meet anymore talks about big data. You know, that whole concept of big data was a question from five years ago. It was about analytics; it was about business intelligence being done in a different way. And now that conversation has shifted completely to machine learning. What can we learn? How can we make better decisions? How can we feed different data into the machine learning algorithms? How can we iterate on those? How can we build models bigger, better, faster?

And I think there’s so much opportunity that’s out there. When you add in these other types of immersion, whether they be augmented, mixed, or virtual reality, as an example, what’s going to come next? Based on the fact that we have the data, we have the algorithms, and now perhaps all we need is a little bit more inspiration and a little bit more perspiration to really drive what I think could be some absolutely incredible applications of this technology in the future.

You maintain, just reading about you online, that enterprises really have to adopt AI today. This isn’t the time to wait. Assuming that that is true, why do you think that is? Make that case, please.

I’ll give you some examples. We have customers in extremely large parts of the financial sector, we have customers who do things like online gambling, as an example, and we have all sorts of other customers in healthcare, pharmaceuticals, retail, and manufacturing—and I’ve failed, so far, to see a single industry where I think that some applied machine learning couldn’t help them significantly with their digital transformation efforts. We talk about this phrase “digital transformation,” and yeah, it’s a great buzzword, but really, to me, it’s a set of very distinct constructs where you say, “Hey, I have to move to being data driven, I have to deal with that data in a different way, and I have to apply some of these techniques and technologies that we’ve been talking about that actually help to drive different business outcomes.”

So, if you think about it in the context of, say, pharmaceutical, what are the next generation biotech companies doing to actually speed up the time of trials, and speed up the times of new drugs and bringing those to market? Knowing that in certain parts of the world there’s a very finite time on the license that you have to sell those drugs as a sole operator before they become generic. So, you’ve got a small window of advantage.

You know, it’s the same with banking, and the same with finance. How can I get better at predicting what may happen? How can I get better at assessing risk? And then, also, how can I get better at customer engagement, by using ANI to drive better engagement, defining better products, making the products more personal, making them more relevant and more timely?

I think all of these come down, in my mind, to the foundation that was laid with big data. I think that is a good foundation, but to me it was missing the “so what?” I think now, with the availability of the machine learning algorithms, we know the “so what?”

The other bit of that which gets really interesting is that these are becoming commoditized very quickly—and people look at me with a scary face when I say that. But you think about where Microsoft is going, and you think about what IBM is trying to do, think about what AWS is doing, ultimately what Google is doing—these guys see the AI elements and the machine learning elements as the next frontier, and they want to provide those as a set of consumable services in the same way that you can go and get a blob of storage or you can go and buy a virtual machine.

I think that, to me, is a critical element. So yes, of course, you need data scientists, and you need people who understand what the data can do for you, and what the machine learning can do for you as a business outcome. But it’s rapidly becoming commoditized, and it’s getting to the point now where you can, with a little bit of understanding, choose what kind of machine learning service you want and for what reason, and then add that into your next generation of, quote, “application,” which really is going to drive some pretty interesting results.

It’s not a case that people can no longer afford to do it; it’s a case that they just can’t afford not to do it. You know, as I mentioned before, there are lots and lots of different types—I think Microsoft alone have half a dozen or more different types of machine learning concepts that they offer as services within Azure. But the speed at which that has arrived, and the speed at which it is becoming a commodity, will, I think, ultimately be the game changer.
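To make the “consumable service” idea concrete, here is a minimal, hypothetical sketch of what calling a hosted machine learning service can look like from application code. The endpoint, credential, and response fields are placeholders invented for illustration, not any specific vendor’s API.

    # Hypothetical example: consuming image classification as a managed service,
    # the same way you would consume storage or a virtual machine.
    import requests

    ENDPOINT = "https://api.example-cloud.com/v1/vision/classify"  # placeholder URL
    API_KEY = "YOUR-KEY-HERE"                                      # placeholder credential

    with open("photo.jpg", "rb") as f:
        resp = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()

    # Assume the service returns a ranked list of labels with confidences.
    for item in resp.json().get("labels", []):
        print(item["name"], item["confidence"])

The point is simply that the application treats classification like any other managed service: a request and a response, rather than a model the team had to build and train itself.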

You know, there’s a talent shortage that everybody talks about, a shortage of people who are up on these techniques. Is that how you see that talent shortage being solved? That the tools essentially are made more accessible to existing coders, or do you think we’re about to have a surge of new talent come in, or a combination of both? Do you think the talent deficit is going to go away anytime soon?

Well, I mean, it’s interesting. If you believe some of the stories in the recent press about Google, they went out and hired an entire class of computer science graduates who specialized in statistics and machine learning. So, if you believe that, then okay, that would make a ton of sense; investing in that next generation of talent is a great thing. I wonder, frankly, if there aren’t existing roles that will get repurposed. I mean, if you go back years and years and think about it, this is not a new challenge. It’s certainly not new in terms of industry in general, and it’s not even new within IT.

I mean, it would be extremely unlikely now that you could walk into any large IT organization within any large global enterprise and expect to see PBX phone systems sitting in dedicated rooms, because all of that converged on the network almost a decade ago now. And as that became more and more accepted and more and more de facto, we saw the end of that skill set. So, the people that were the command line interface guys for huge telephone systems reskilled to become network people.

And if you think about that in parallel: some people used to be developers in organizations, writing applications that the organization had decided needed to be bespoke. That’s ebbing away a little bit now with software-as-a-service adoption and standardization on things like Salesforce or Workday or Concur or whatever it is. And so, I think, those developers are either going off to find new jobs in other locations, or in many cases they’re kind of retraining as integration specialists or business process people.

I think it’s a combination of different things, Byron, but absolutely that skill set needs to come in. You know, people who are in data science roles have statistics backgrounds, either applied or pure math in some cases, and that’s all great, but do they have the business knowledge and the business process understanding to actually get the value, and demonstrate the value, from the algorithms that they create or take onboard as part of services from the different cloud providers?

I think it’s a combination of everything. I think, fundamentally, there’s going to be a mixed skill set. I think there is going to be a fight for data scientists, for sure. I think there’s going to be a fight for people who can write algorithms and especially ones who can write it in the context of the business. But I don’t think it’s an exclusive club, I think, like all these things, that we are gradually turning the crank on yet another major cycle of technology.

I think what’s happened is that the relative time for that technology to be adopted is definitely getting shorter, on one axis, and the value derived from that is actually getting higher, on another axis. So it feels like all this is coming at once, but I don’t think it’s a mutually exclusive world, because I think we’re going to rely on combinations of those skills—business skills and traditional database skills and then the more advanced data science skills—to really come together and drive the true value.

There’s, obviously, a larger conversation going on around the world about the effect of automation and ANI on employment. What is your view on that? How is that going to unfold?

Well I’m sure the same conversation happened a hundred years ago with the automation of the car plant, which was led by the Ford Motor Company. And I’m sure at the same time there was as much uproar that this would be the end of humans, effectively, in the automotive industry. We now know that that wasn’t the case, of course. Yes, of course, there have been jobs displaced by automation, but they created other roles that we didn’t necessarily know about.

So, I think, absolutely, there will be some displacement. Take the case of call centers as a good example. If we could come up with a sufficiently well-balanced ANI that was able to, very quickly, displace eighty percent of what you would call standard calls, then of course there’s a concern. But I think that perhaps the bigger concern is that those jobs—and I don’t want to use the phrase “low end” because it sounds a little bit trite—are the kind of jobs that we would associate with non-academia, people who haven’t got a bunch of different qualifications for this, that, and the other, which you need, right?

It’s the same argument, in a weird way, that’s been raging through Europe and the US about immigration, and the question that, “Well, if you take all of these jobs away, jobs that people wouldn’t do by choice, what happens?” The fact is that you’ll never get to a scenario where everybody wants every job, but there ought to be room for everyone. So it gets to be a very social question. It gets to be quite a moralistic question as well, in many cases. You know, would you, as an organization, prefer to employ people, or would you prefer to have a machine do that work, so it can keep your costs down, improve your competitiveness, improve your profitability? That’s a hard business question.

So I think the answer is, yes, there will be some displacement of jobs. They’re highly likely to be the entry level jobs, or ones that are ripe for automation. But does that mean that that will give us a huge global socioeconomic problem? I don’t know. I mean I think it’s highly likely that there will be different jobs—whether that’s in the same industries or in different industries—that are created as part of this.

I hear lots of people saying, “Well, we’re now building robots that can maintain themselves, that can replace their own parts.” Yeah, kind of, but CNC milling machines have long been capable of fabricating every part you need to build them, and you still need somebody to put them together and to maintain them and to look after them, right? So, I think it’s a very interesting question. There will, certainly, in my opinion, be some displacement, but my hope is that, like we’ve seen before in different phases of “industrial revolutions,” in quote marks, we’ve always managed to find new industries or find new things to do that are a direct result, in some cases, of that automation. So I’m hopeful it will play out the same way.

I’m very sympathetic with that position. I mean, we can even look to more recent history. I doubt Tim Berners-Lee, when he invented the web, said, “This will create trillions of dollars in wealth, and it’s going to create Etsy and eBay and Google and Amazon and Uber and everything else.” And AI is so much bigger. And it is true what you say, an assembly line is a form of artificial intelligence, and it must have been a very threatening time. Then you can look and say, “Yeah, we’ve replaced all animal power on the planet with machines in a very short amount of time, but that didn’t cause a surge in unemployment.” And so you’re right that history, up until 2000, supports that view.

Of the arguments that people put forth in the “this time it’s different” camp, the first one is something you just said a minute ago, which is that the speed of the adoption of these technologies is much faster, and it’s that speed that’s going to get us. Do you give any credence to that?

Oh, absolutely. With that speed, I think, comes the potential for exponential growth in different areas, different parts of the business, which, from a fundamental operating concept of running a business, is either a blessing or a curse. Because, if you’re not ready for it… And I think that there are some questions out there about whether the adoption of AI and machine learning will actually drive the speed of new business, or business growth, so that it turns exponential.

There’s a famous story, which I’m sure you’ve heard before, Byron, but I’ll share it with the listeners, about the football stadium, which asks the question: do you really understand exponential growth? The analogy goes something like this: it’s 1 o’clock in the afternoon, and you’re sat in the best seat at the very top of a medium-sized football stadium, and for the sake of illustration the stadium is actually watertight. And so the question is, if a drop of water is added to the stadium on the halfway line, and then one minute later it doubles in size to two drops, and then after one more minute it doubles to four drops, and so on—basically, it doubles in size every minute—if you’re there at 1:00 in the afternoon, what time is it before the water reaches the very top of the stadium and effectively engulfs the seat you’re sat in? And people say, “Oh, it’s going to be months, it’ll be years.” It’s actually 49 minutes.

So, from that very first drop of water doubling and doubling and doubling every minute, by the time the 50-minute mark comes, the entire stadium is full of water. If you can picture that mentally, that’s the point about speed: it takes only 49 minutes for that to happen, because exponential growth is not the way that we imagine, you know, double-digit growth to be in the traditional ways that we look at compound annual growth rates of businesses, or that kind of thing.
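A quick back-of-the-envelope check of the stadium story, purely for illustration; the drop size and stadium volume below are my own assumptions, and changing them only shifts the answer by a few minutes.

    # Doubling-water-drop story: how many minutes until the stadium overflows?
    DROP_ML = 0.05                    # assumed volume of one drop, in millilitres
    STADIUM_ML = 3.5e6 * 1_000_000    # assumed ~3.5 million cubic metres, in millilitres

    minutes = 0
    volume = DROP_ML                  # one drop on the halfway line at 1:00 pm
    while volume < STADIUM_ML:
        volume *= 2                   # the water doubles every minute
        minutes += 1

    print(f"The stadium overflows after about {minutes} minutes")  # roughly 46-50

The striking part is the tail end: with one minute to go, the stadium is still only half full.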

So the question is, if that does come along, to your point about the speed and does that speed equal exponential growth, then the question is: are we ready for that if indeed that size and scale is predicated upon some of these new technologies? And I think that’s a fascinating conversation.

Another discussion that’s been had, especially in Europe, is this idea of the right to know. If an artificial intelligence makes a decision about you, like a declined loan or something, you have a right to understand why that is. What is your view of that? First of all, is that a good thing? And second, is it a possible thing? Are these neural nets just inherently impossible to understand?

Well, I think, certainly in the UK, we’ve seen examples of that, you know, the decision-making systems that are used by banks for approving personal loans and mortgages. Things that once would have required you to visit the branch and sit down with the branch manager, for him to understand your aspirations and for him to have the final decision as the empowered person from the bank, I think those days are pretty much gone. Now there is the neural construct that makes the decision based on a bunch of factors that are employed at the point of the decision—you know, credit reference, age, time at your company, your salary, your available free funds and a lot of others—and, I think, the personal side of it is gone.

I think removing that emotion is a challenge because—there’s a phrase in England that came from a TV comedy series that says, “computer says no”—and, so, it’s literally a case of if I get declined, what do I do? Do I have the same problem if I go to another financial institution? Should I really have the right to know what factors were part of the decision making process, and ultimately where I failed to meet those criteria that were set by either underwriters or some of the mitigation steps? So I think it’s definitely very visible here in the UK.

You know, we tend to accept that the power for those kinds of decisions, life changing decisions in some cases through mortgages or loans, has really gone from the hands of the local bank branch—and in fact many of those local bank branches no longer exist, you know, we’ve seen those disappear from towns and villages and cities across the UK routinely—to the decision being made by an ANI, and certainly not with the emotion and the considerations that we talked about from an AGI perspective. But people will tell you, “Hey, we’ve got lots and lots of statistical models on this. You see how we build up risk analyses. We do this routinely to see if you are considered to be a risk or a safe bet.” And that’s how we make the decision on you, and it really isn’t very personal anymore.

And what do you think about the use of this technology in warfare and in weapons? That seems to be another area where there’s rapid adoption. Do you have any views on that?

Well, I think this becomes a very interesting question if you take the fact that in battlefield operations very recently, and in the ones that are unfortunately still going on in some parts of the Middle East, it’s extremely conceivable that some of the weaponry being used, and some of the drones, are being flown from literally thousands and thousands of miles away from the theatre of war, from the scene of the battle.

Now, I suppose one of the answers is that it’s possibly a good thing for the coalition, or for the people on this side of the conversation, because the fewer people you can put in harm’s way, the more you can neutralize the enemy without putting people at risk, then… Is it a good thing? Is it a bad thing? I mean, I have to say, from a personal perspective, I don’t think any war is a good thing, no matter what technology or historical weaponry you use, but I think it’s a fact of life.

If you think about that from a drone perspective, or from an aviation perspective in general, we don’t call aviation “artificial aviation” because it’s not birds. You know, so should we really be calling artificial intelligence “artificial” at all, if it constitutes some kind of intelligence that helps with the decision-making process? So, my philosophy on that is that the fewer people you can put in harm’s way, in any situation, the better.

And having come from, obviously, a construction background, where construction sites are inherently dangerous, there’s a case for having drones do tasks where you would usually put humans in harm’s way. Construction sites are different from the theatre of war, but there’s an element of risk there, there is an element of potential fatalities. And I think anytime we can employ technology to go and do surveys, to go and calculate how much concrete has been poured, how much asphalt has been laid, you know, how much land has been reclaimed—I mean, these are things that we should be employing this technology to do, and then feeding all of that data and that intelligence back into, ultimately, providing a better opportunity to do more reliable design, and more cost-effective design, and, hopefully, more robust design, which will continue to make the world a safer place.

Only one more question along those lines, this one from a cyber-security standpoint. We see more and more of these security breaches in big companies and governments, and they seem to be getting bigger and bigger and more and more frequent. Do you think artificial intelligence, at least in the foreseeable future, is enabling the bad actor to attack, or is it enabling the good actor to defend? 

Unfortunately, I think it’s both. I would love to tell you that I think we—and I say we as an industry—have the advantage, but I guess we’ve seen examples of where that’s been very much in the hands of the bad actors. You know, we’ve heard a lot about different state-sponsored attacks that have used all sorts of sophisticated techniques. But, I guess, if you think about it from the point of view of where the industry is, where some of the focus areas are within the industry in general, I think it’s high time we actually focused on user behavior. Our weakest link has, kind of, always been users.

You know, we’ve thrown technology at security problems for a very long time, but I think about it in a very simple way: if we can build up an idea of what we would consider to be normal user behavior, then the more data points that we collect, the more we can feed in, the more we can train these models, and the easier we can spot anomalies. And I think that’s true for other types of network traffic and monitoring.

If you think about it from the user perspective, an analogy I like to use with that, Byron, is, I travel a lot with my job. I’m very fortunate to go to all sorts of places around the world and meet all sorts of fantastically interesting customers, partners, and so on. But I can’t get away from the fact that every time I step off the plane, and I go to the ATM machine, the first thing that happens is that I get an “access denied” message. Then I have to call the bank, and they have to send me a one-time password, and I have to actually say, “Hey, I’m in Turkey, I’m in Portugal, I’m in the United States. It’s really me. I’m trying to make a valid transaction.” So, even though it’s a little bit of a pain, I actually prefer it that way, more than for somebody to have cloned my card and be using it all the way around the world and leaving me with the headache of trying to figure it out with the bank.

I actually like to think about it in a similar way. If we can build up a good set of rich data about what we would classify typical user behavior, so, “Christian logs in from this place, he always uses this device, he always accesses these kinds of applications,” build that up, iterate on it, and then when something is outside of that, allow decisions to be made—either closed loop or through some human interaction—that says, “Hey, this doesn’t look right, I think you need to do something.”

I think, when we get that, we can apply it in a bunch of different contexts. In healthcare, where we’re doing patient monitoring at home, you know, “I’m looking at your vital statistics and I consider this to be normal, but if your blood pressure drops or your heart rate increases, I’m going to flag it to your physician.” And there are a bunch of other things that we could imagine that are all about the user, and all about what we would classify as normal behavior or normal characteristics, and then we’ll be able to either action things automatically, or action things with human augmentation, when things don’t look like they’re normal.
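As an illustrative sketch only (my own, not a description of how Citrix or any vendor implements this), here is one simple way to learn “what Christian usually does” and flag logins that fall outside it; the features, data, and contamination rate are assumptions made up for the example.

    # Flagging logins that deviate from a user's established pattern (illustrative only).
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Historical "normal" behaviour: [hour of login, device id, rough location id]
    normal_logins = np.array([
        [9, 1, 10], [10, 1, 10], [9, 2, 10], [11, 1, 10], [8, 1, 10],
        [9, 1, 11], [10, 2, 10], [9, 1, 10], [12, 1, 10], [9, 2, 11],
    ])

    # Train on the user's usual pattern; contamination is the assumed anomaly rate.
    model = IsolationForest(contamination=0.1, random_state=0).fit(normal_logins)

    # A 3 a.m. login from an unseen device and location should look suspicious.
    new_events = np.array([[9, 1, 10], [3, 7, 99]])
    for event, label in zip(new_events, model.predict(new_events)):
        status = "looks normal" if label == 1 else "doesn't look right, flag for review"
        print(event, "->", status)

The same shape of model extends to the healthcare example: learn a baseline from a patient’s normal readings, and flag anything that sits outside it for a human to review.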

So, I think that’s the one thing that I look at in terms of the next frontier of security. It really has to focus on that. Because you can build a castle and a moat, and you can argue that, to keep the bad guys out you just need to keep building the walls higher. But the reality is that we don’t live like that. We live in metropolitan cities, we don’t live in castles in forests anymore. So I think we have to approach that a different way.

And certainly, by building up a very rich set of data and training these models on what we would call normal user behavior, I think we’ve got a much better chance of spotting things that don’t look normal, which could obviously be the impact of an account takeover, or a credential-harvesting attack, or somebody impersonating me in either a personal or a business way.

Tell me a little bit about your role at Citrix. What do you do there, and how is Citrix using artificial intelligence? What are you doing in this area that might be of interest to a general business audience?

There’s a couple of things. One, that I’ve just talked about, is what we now call the Citrix Analytics Service. So, at Citrix, we’re very privileged to be a very key part of most of our customers’ application delivery, from either inside their offices or for that mobile workforce, or their home workers, or contractors, or partners, or whatever that is. So, we sit in a very key position in terms of the user interaction, where users come from, what devices they’re on, and we’re able to build up this rich set of information around the user. So, that’s absolutely what we’re focused on within the Citrix Analytics Service. What you’ll see towards the end of this year and then early into 2018 are releases of that Citrix Analytics Service based on our Citrix cloud platform. That will be something that we bring to market very quickly.

That’s a security thing, that’s all about protecting, but what about enablement? We build these secure digital workspaces that aggregate different types of applications and different types of services across different types of clouds, but how can we actually mine what people do, so that we provide them with context? Depending on who you are, depending on where you are physically, depending on which device you’re coming from, and depending on what you’re trying to do to be productive and get your job done, we should be able to deliver that content, that context, and that information in a real-time way.

So, if you’re a maintenance engineer working on this particular part of an airport, or you’re a physician working in an MRI review room in a health care environment, we should know all of the information around you—not just from a security perspective. So, it’s not really always about just trying to figure out what’s going wrong, but using similar approaches and similar models to actually deliver what you should expect at that point of engagement. So, based on the time that you log in, the place that you log in, the device that you log in from; delivering the context so that you can be productive.

It’s, kind of, two different things which are based on the same end user philosophy. One is very much about helping IT to deal with security compliance control, and then the other one is really about the end user experience and helping to drive individual and ultimately business productivity, across pretty much every customer in every vertical that we provide services to.

How do you, from an organizational standpoint, think about artificial intelligence implementation? When the web first came out, people had a web department, but, of course, now you wouldn’t do that. Just in terms of general structure, do you even talk about AI, or is it just kind of assumed that it’s driving all of your future product development?

Yeah, it’s absolutely an integral part. You know, there’s a phrase that I use, that “we’re very data rich but very information poor.” That’s because the ways in which we gathered data were on a product-by-product basis. So, we’ve kind of changed the model with that, and turned the pyramid around, effectively, by thinking about data first, thinking about how we capture it, how we interface with other vendors that we work very closely with. You know, how do we bring all that data together to have an environment where we can leverage it?

That sounds like an easy thing to do, but it’s actually quite difficult. So, we have a bunch of very smart data science guys who are intrinsic to our product development, intrinsic to the analytics side that I talked about. These are the guys who are helping us to pull all that data together, to bring it all into one place, so that we can apply these new algorithms and these new techniques on that. But, yeah, absolutely, it’s a core part of our security and our productivity and performance offerings going forward.

And we believe that it’s a big differentiator for us, because of where we sit, because of the longevity we have in our customer environments, and because our customers trust Citrix to deliver mission critical applications, and they will hopefully continue to put that same trust in us when it comes to security and all sorts of productivity. So, we’re really excited about what that means going forward.

We’re coming up to the close here, and it sounds like, overall, you’re very optimistic about the future. Is that true? Tell me what you think, overall, life will be like in ten years?

You know, I think we are going to get more and more things powered by AI than we realize. And I think the true measure of success will be when we stop talking about the AI as being part of x, y, and z, and talk instead about the benefit that it brings. I can very easily imagine that when you wake up in the morning you’ll talk to your digital assistant and say, “Hey, how many meetings have I got today?” You know, all the videos where the guy’s brushing his teeth and saying, “Hey, what am I going to do today?” That’s all very real.

I think what will happen is that those worlds of work and life, if they’re not already completely blended, will effectively continue to blend. I think if you take some views into the future—and it’s certainly not ten years out, it’s much less than that—there are going to be some significant shifts. Millennials will make up around seventy to seventy-five percent of the workforce by, like, 2022 or 2023. That’s significant. That’s a really big change. And I think organizations are already adapting to that and adopting new philosophies around the way that people work, where people work, the environments that are created; the devices that they’re allowed to use will continue to evolve and continue to change. So, I think we’ll see work as we know it evolve from where it is today at that exponential rate that I talked about earlier, and I think organizations have to get ready for it.

I don’t think it’s a ten-year thing. I think it will be up to organizations to decide how to deploy and adopt, but I think the technology, the offerings, will be ready way before that. And again I think it’s one of these things where you look at my past twenty-something years in this industry as a customer, and now as a technology provider, and I think if you take on balance all the things that we’ve seen, this feels like a seismic shift, it really does.

I think the fact that we’re going to be dealing with intelligent machines alongside intelligent humans is going to be hugely beneficial. And I think it’s also going to be extremely impactful in developing countries where they don’t have a legacy to deal with, where they haven’t gone through the thirty, forty years of technology that we’ve had in enterprise.

So, I think what it will also do is level the playing field for a lot of people, and I think that will also drive some very interesting prospects and some very interesting statistics for a whole new middle class of people, which I think is long overdue. And I think that will be great. Ultimately, I hope it will be extremely beneficial, literally, in every corner of the globe.

All right, well, that’s a great place to leave it. I want to thank you for a wide-ranging conversation on a bunch of these topics. I appreciate your time, Christian.

Thanks Byron, it’s been a pleasure.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.

