Yeah. So that’s interesting, and I think it actually breaks down into a couple of different components: one of them being the fundamentals of the processor, so to speak, the technology (not to dehumanize humans), the platform on which the learning is occurring, and the other being the process of learning. I mean, one thing I would say is that, you know, to go into the more psychological, biological side of it, by the age you reach five, you’ve actually done an awful lot of experimental learning. I know from my own experience, I spent far too many hours bent over my two-year-old trying to keep them from doing something silly that they’d probably already done before, and there have actually been a lot of experiments run around this.
I mean, I am not a behavioral psychologist, but one experiment I remember involved children and a grasshopper in a glass of milk. It turned out in this particular experiment that up until about the age of three, children were quite happy to drink the glass of milk; they didn’t mind. But it was around the age of three that children started deciding that, no, actually, I don’t want to drink milk with a grasshopper in it, that’s disgusting. And the principle behind this research, as I understood it, was that disgust is a learned behavior and it tends to kick in around the age of three.
So, the rate at which humans build up information, knowledge, and extensible understanding in the early years is just massive. Being able to identify a dog or a cat or a piece of pie with a huge bite out of it, even by age two, you’ve got a huge amount of data that’s already gone into that. Now, behind that is a question of whether or not the human brain and the machine have the same capacity or the same capability. I think that’s a much more significant question, and it kind of gets down to the fundamentals of the machine versus the human. And it reaches back a lot to the question of what intelligence is, and that’s again where I see a continuum of things.
So in terms of being able to identify objects, from a personal perspective, I think what we are seeing now in machine learning is really just the tip of the iceberg. We are working in a space where models are very static and, as you say, they typically involve a vast amount of data in order to train. Even more so than that, they often involve very particular setups in terms of the models that they are trained against. So, at the moment, it’s a bit of a static world in machine learning. I would expect that it’s only a matter of time until that space of static machine learning is well understood, and the natural place to go from there is into a domain of more general-purpose, more dynamic, or more versatile machine learning algorithms.
So models which can not only deal with identification of particular classes of objects, but can actually be extended to do recognition of orthogonal types of things, to models which can dynamically update to learn as they experience. So I think, in terms of what we can do with machine learning, it really does have a long way to go, a long way towards what human beings appear to do, which is to be able to assimilate data that isn’t obviously alike and to form useful conclusions that are more general purpose. I think the technology, or the wave we are on at the moment, has the legs to get there. But whether this is the technology that’s going to take us into other aspects of human intelligence, such as the ability to imagine, the ability to feel or intuit, it’s not obvious at the moment that it lends itself to that at all.
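The distinction drawn here, between a model trained once on a static dataset and one that updates as it experiences new data, can be sketched with a toy online learner. This is purely illustrative: the tiny perceptron below stands in for any incrementally updated model, and the numbers are made up for the example.

```python
# A minimal sketch of "dynamic" learning: a model that updates itself
# one observation at a time, rather than being trained once on a
# fixed dataset. Toy perceptron, not any specific production system.

def predict(weights, x):
    """Linear score thresholded at zero -> class +1 or -1."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return 1 if score >= 0 else -1

def update(weights, x, label, lr=0.1):
    """If the current prediction is wrong, nudge weights toward the label."""
    if predict(weights, x) != label:
        weights = [w + lr * label * xi for w, xi in zip(weights, x)]
    return weights

# The model "experiences" a stream of labelled points and keeps adapting,
# never seeing the whole dataset at once.
stream = [([1.0, 2.0], 1), ([2.0, 1.5], 1), ([-1.0, -2.0], -1), ([-2.0, -1.0], -1)]
weights = [0.0, 0.0]
for x, label in stream:
    weights = update(weights, x, label)

print(predict(weights, [1.5, 1.5]), predict(weights, [-1.5, -1.5]))  # prints: 1 -1
```

A "static" model, by contrast, would freeze `weights` after an offline training run; the point of the sketch is only that the update step can keep running for the model's whole lifetime.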
If anything, technology continues to surprise us, and surprise me. I like Arthur C. Clarke’s line that “any sufficiently advanced technology is indistinguishable from magic,” and I certainly believe that’s true. We’ve seen again and again that what we think is possible is simply a matter of time. A colleague of mine was on a flight with me and said they had watched the original Space Odyssey and were amazed by how much of what seemed like the inconceivable future at the time is now just a technical practicality. So I think there is a long way to go with the current wave of machine learning, but I am not sure it’s the right harness to take us into the domains of some of the further-out aspects of human intelligence. But that falls in line with the fact that this is a pretty exciting wave that is going to change things, but it’s probably not the last wave.
So, if I can rephrase that, it sounds to me like you are saying that the narrow AI we have today is still nascent and we are still going to do amazing things with it, but it may have nothing whatsoever in common with a general intelligence other than that they happen to share the word intelligence. That general intelligence may be a completely different, quantum-based or who-knows-what technology that we haven’t even started building. Is that true?
Correct. Yeah, that’s certainly my opinion, and I have been proved wrong repeatedly in my life, so we will see where the technology takes us. The space of machine learning is a new capability for machines which is not to be underestimated at all; it’s pretty amazing. But it lends itself to certain types of things and not to others. I am not clear on where its limits will be found, but I don’t think this is the tool that’s going to solve all problems. It’s a tool that can impact everything in a positive way, but it’s not going to take us to the ends of the earth.
So, assuming that’s true, I want to get back to my five-year-old again, because it sounds like you think the kinds of things I was just marveling at the five-year-old doing, like the cat with no tail, sit squarely in your bucket of things narrow AI can do. And so I would put the question to you slightly differently. A computer should be able to do five years’ worth of living, maybe not in five minutes, but certainly in five days or five weeks.
Even if you built a sensor onto a computer that a kid could wear around their neck 24 hours a day, and you set it free in the world until age five, right now the kid would still know a whole lot more than that device would. Is that, in your mind, a software problem or a hardware problem? Do we not have the chip that can do it, or do we not have the software, or the sensors, or the embodiment which we may need in order for it to teach itself? What is it you think we are missing that would at least allow that narrow AI to track with the development of that growing child?
Yeah. So, my answer is roughly all of those. So, I think it’s important to bear in mind that the human brain is an amazing thing. What we do in my company is, we spend a lot of time thinking about power efficiency and you know, sort of part of our DNA is to try to push the boundaries in terms of processing capability but to make sure that we are doing it in a very, very energy efficient way and with that goal in mind, we are always looking for a beacon. And the beacon in terms of raw processing capability and efficiency, for us in many ways, is the human brain.
I don’t have the exact numbers at hand, but there have been estimates as to the rough digital equivalent of the human brain’s ability to process information, and the sheer bandwidth at which we can digest information is just massive. So I think we would be arrogant in the extreme to say that we’ve got a processor capable of supporting the same amount of information processing as a human brain. We have certainly made great strides forward in the last couple of decades, but the human brain is still the gold standard in terms of what can be achieved, and the software kind of flows on from that. So I think there’s still a long way to go. That said, I have yet to see the limits of what can be achieved on either the hardware or the software side of things; the pace at which they’ve been progressing has accelerated, if anything. So, there is still a long way to go to be able to match a five-year-old, or even a two-year-old, but capability is definitely increasing over time.
Yeah. It’s funny, because you’ve got this brain, and it’s a marvel in itself, and then you ask, what are its power requirements, and it’s 20 watts. Wow, how are we going to duplicate that in 20 watts, because everything we do right now is more energy intensive. So, some of the techniques in machine learning are, of course: fit things to a linear regression; do classification, is that an A or a B or a C, or a dog or a cat, or whatever; and then there is clustering, where the machine is fed a lot of data and it finds clouds in this n-dimensional space, where it says something in that cloud has some likelihood of being such and such. So, if you basically said, “Here is a credit card transaction; is it fraudulent?” then the AI is going to say, “Well, how much was it, and where was it purchased, and what is the item, and what kind of day,” and who knows how many different things, and then it says, “This is maybe fraud and this isn’t.”
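The clustering picture described here, a new point being judged by which “cloud” in n-dimensional feature space it falls into, can be sketched in a few lines. The transaction features (amount, hour of day, merchant code), the labels, and all the numbers below are hypothetical, purely for illustration.

```python
import math

# Hypothetical 3-feature transactions: (amount, hour_of_day, merchant_code).
normal_cloud = [(45.0, 13.0, 3.0), (52.0, 15.0, 3.0), (48.0, 14.0, 2.0), (55.0, 12.0, 4.0)]
fraud_cloud = [(900.0, 3.0, 9.0), (850.0, 2.0, 9.0), (920.0, 4.0, 8.0)]

def centroid(points):
    """Mean of each feature dimension: the centre of a 'cloud'."""
    dims = len(points[0])
    return tuple(sum(p[d] for p in points) / len(points) for d in range(dims))

def distance(a, b):
    """Euclidean distance in the n-dimensional feature space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(tx, clouds):
    """Assign a transaction to the label of the nearest cloud."""
    return min(clouds, key=lambda label: distance(tx, centroid(clouds[label])))

clouds = {"ok": normal_cloud, "suspicious": fraud_cloud}
new_tx = (880.0, 2.5, 9.0)
print(classify(new_tx, clouds))  # prints "suspicious": the point sits in the fraud cloud
```

It also makes the explainability problem raised next concrete: the only “why” this model can offer is distances to cluster centres, which is exactly the “you were in the cloud” answer.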
You know, there is a sentiment, and a legal reality, that if an AI makes a decision that affects your life, you have the right to know why it made that decision. So my question to you is: is that inherently going to limit what we are able to do with it? Because in the n-dimensional space of clustering, it would be really hard to explain; the short answer is, you were in the cloud and this other person wasn’t in the cloud. If you were to go to Google and say, I rank #5 for such-and-such a search and my competitor ranks #1, why? They might very well say: we don’t know. So, how do you thread that needle?
So that’s a fascinating question. You are absolutely right. There’s been a trend in society of saying, well, we think we understand what computers are capable of (we do understand what computers are capable of), and we try to build a human world around that which is enjoyable or meets our social norms. And that has, to date, largely been based on the fact that computers are deterministic, that they work on classical deterministic algorithms, and that those are reproducible, and so on. We, as human beings, have molded our world around those principles; it’s a progressive society, and we continually mold our expectations and the rules of social norms to make us comfortable in that space.
Now, you are very right that when you get into the domain of machine learning, you are dealing with a technology whose decisions are largely irreproducible. So the traceability and the determinism of the decisions becomes a problem, or at least a shift in terms of what’s possible. From my perspective, this plays out across a range of different domains. I mean, one of the places where they are talking about this is automobiles, for example: machine learning moves the capabilities of computing forward, and it opens up a huge range of benefits that can be delivered into the automotive space. A lot of accidents and fatalities are caused by human error, and being able to hand more and more support to the driver, or do many things for them on a machine basis, potentially has the capability to save a lot of lives and a lot of distress. So, that’s fantastic, but at the same time, it’s a heavily regulated industry that has become used to determinism. And suddenly you have this thing which can produce a huge amount of benefit for humankind, but which doesn’t follow the social norms that we’ve constructed around us to date.
I think this is causing a quandary in a lot of different spaces, even at some government levels. From my perspective, it’s interesting, because a lot of the discussion today has been around what needs to be put in place around the technology, what the constraints around the technology are, how we mold our views of the world today to get this new technology to fit into it. Personally I think that’s the wrong way to look at it, because what we’ve got in front of us with machine learning is a huge shift in what we can achieve with machines. And, as I said, it’s a principle which is now established but which is only really getting started in terms of what it’s capable of and what it can be applied to.
And you know, there’s a lot of debate around whether it is good or bad, and you can find examples that are inherently good or inherently bad, but if you abstract far enough away from it, there are a couple of principles I think are important. One of them is that technology, in and of itself, is effectively inert. It’s not a question of it being good or bad. It can be used for positive or it can be used for negative; it doesn’t inherently have a view on that. It’s about how human beings normalize it in society, and you can look at examples like speech synthesis.
So machine learning brings speech recognition to a level where it can be used for security purposes. It’s also capable of synthesizing speech from limited samples in order to circumvent that security. So, that’s a good example of a zero-sum game. From my perspective, the real question around machine learning isn’t how we get this technology to mold itself into our society. It’s about recognizing the fact that what we can achieve has suddenly changed, and getting society and human beings to move with that, to remold their world around these new capabilities and rebuild the social norms so that we can harness the huge benefits this technology can bring, while at the same time making sure that social norms are in place for the cases where, as with chemical weapons, we say as a society: that’s not allowed, we are not going to tolerate that.
So, I think that the question around the technology, around machine learning, really is that human beings and societies need to recognize that this is a shift in capabilities. We need to look at these capabilities and reconstruct our social norms so that we are again happy with the positives we can get, and we can benefit from those, but at the same time put the barriers up against what could be done negatively; that’s something that has to happen with any technology advancement. I do think the focus really needs to be on society. And as for reproducing the reasoning behind a particular decision, I think we can view that in terms of our existing social norms: we can look at it again as human beings and say, right, what do we consider to be acceptable. I am pretty confident that we will be able to reach those social norms; it’s just a question of the approach we take and how quickly we get there. Personally, I feel that it starts with embracing the technology and appreciating that it’s here: let’s understand it and mold it into something that’s positive for us.
So let’s talk a little bit about IoT devices. You know that there’s been this struggle for 2,500 years between code makers and code breakers and there’s a longstanding unsettled debate as to who has the easier job. And then in computers, you had the same thing where you have people who make viruses and Trojan horses, then people who try to detect and prevent them. And they largely stay in check because when one makes an advance, the other one figures out how to counter it and then they patch the software and then they find the hole in that and then there’s another patch and we muddle through. I had a security person on the show and I said, you know, what’s your biggest concern about the future and he said, oh, you know that we are connecting billions of devices to the internet that we do not have the capability of upgrading and therefore if vulnerabilities are found in them, we don’t have a way to fix them.
So if somebody finds a way to turn on a toaster oven that’s connected to the internet, there is not really a way to fix that. What are your thoughts on that? Is that a real concern and is it intractable, is there a solution, what would you say?