Rob May is the CEO and Co-Founder of Talla, a platform for intelligent information delivery in Slack and HipChat. Previously, Rob was the CEO and Co-Founder of Backupify (acquired by Datto in 2014). Before that, he held engineering, business development, and management positions at various startups. Rob has a B.S. in Electrical Engineering and an MBA from the University of Kentucky.
He is also a well-known angel investor, a venture partner at Pillar, and the creator and writer of the widely read and highly regarded AI newsletter, Technically Sentient.
Rob May will be speaking at Gigaom AI in San Francisco, February 15-16th. In anticipation of that, I caught up with him to ask a few questions.
Byron Reese: How did you first come to get involved with AI?
Rob May: I was in college at the University of Kentucky; I had a Bachelor’s Degree in Electrical Engineering and I was working on my MBA. So I had to take a class for business school that was on information technology. This was 1998, probably, and in that class we had to write a paper about some important topic in IT. Being an engineering major, I found the whole class pretty boring, so I was flipping through the textbook to look for something to write about, and the very last chapter was on artificial intelligence and the future of IT. So I decided to make that my project, and I read a lot at the time about AI and about things like fuzzy logic. There was a guy who was a pretty prominent writer at the time called Bart Kosko, who wrote some cool stuff. Yeah, I just fell in love with it, in part because it wasn’t just a technological problem; it was very cross-functional, multi-dimensional. I had to read a lot about linguistics, philosophy of mind, cognitive science, neuroscience, computer science, robotics, and electrical engineering. It was a really fascinating cross-discipline subject.
If you were to describe the state of the art right now, where are we in AI?
That is a complex question to answer. I think you’ve got two things. We don’t have a good generalized anything yet; there are probably some technological breakthroughs that need to happen to get there. What we do have is a lot of very awesome specific types of intelligent agents, and I think those are really impressive, so that is one side of it. On the second side, though, a lot of even the impressive work is still academic and research-driven, so we’re not very good yet at productionizing a lot of that into real software. There aren’t best practices around neural networks the way there are around databases, in terms of how you engineer them, how you scale them, how you break them up into pieces, how you diagnose their problems. A lot of the academic papers that come out are very difficult to replicate in a real production software environment. So in terms of solving general AI, we’re pretty far away from that still, but we’re on a good path. In terms of bringing intelligence into the real world we’re further along, but there is still a lot of work to do on bridging the academic side to the software engineering side.
So it seems that you think that AGI is kind of along an evolutionary path from where we are. That we basically know what we’re doing and AGI is just going to come about with our existing techniques done better, stronger, and faster. Do you think that? Do you think AGI is just better AI than what we have now?
When I say better, I don’t necessarily mean better in terms of we just need faster computers or bigger neural networks or anything like that. The way I would phrase it is I think AGI is going to be very hierarchical and I think that we have built the early building blocks for that and we need to understand the next levels of abstraction to be able to solve the problem. I do think that there are big technology leaps to make, but I feel pretty good that we’ll make them.
When do you think we will see something like an AGI? The range of guesses among people who merit having a guess is between five and five hundred years. Where would you be in that span?
I would be on the lower side of that. I don’t think a lot about when specifically it might happen, but my guess would be in twenty to twenty-five years because these things are accelerating. The way I would work through this problem in my own mind is I would say, well, if I was going to do a linear extrapolation of when we might get there, then yeah, I would probably say it is a hundred to two hundred years out, but it’s an accelerating trend and so that is going to be compressed and build on itself, so that is why I would make a shorter guess.
Take things that we would normally think would be very hard for AGI, like creativity. Do you think that an AGI in twenty or thirty years’ time could write a great novel?
I don’t think it will even take that long. I think that is a specific use case that will be solved before we have AGI. There will be a lot of problems that seem difficult now that will be solved with artificial intelligence before we have a generalized artificial intelligence, just because it will be a matter of having the right data sources and having some new breakthrough in creating algorithms that make it possible to attack that domain.
Yeah, I would say some of the really interesting areas, some of the areas that might be challenging to solve, would be things like analogy-making across disciplines, right? So, how do you abstract and say, “Well look, I know this principle of complexity theory based on how ant colonies work, and I am going to apply that to a social phenomenon of humans, or I’m going to apply that to a computer program that I’m going to write.” How does somebody look at biologists studying evolution and say, “Well, this is something we could use to evolve better computer programs,” and come up with genetic algorithms? Those kinds of cross-domain analogy-making applications will be one of the last things to fall, and I think that’s one of the things that’s really important in building an AGI.
Do you think that computers are going to become conscious?
I do. I think maybe not in my lifetime, but maybe in the next hundred years. I think we will be asking ourselves questions about whether or not machines have certain rights.
Elon Musk, Bill Gates, and Stephen Hawking have all recently voiced concerns about AI. Elon Musk famously referred to it as summoning the devil. Are you afraid of an AGI?
At this point, I am no more afraid than not afraid. I am pretty balanced, because I don’t know that we have evidence of why we should be one way or the other. We really don’t understand human motivation very well at a biological and neurobiological level, and I think we don’t understand yet how you would program motivation into an AGI. People could argue, well, the AGI might change and reprogram its own motivations, but I don’t know, you get into all these second-order problems. I mean, we humans don’t necessarily do that. Maybe we go to therapy and wish that we could change our motivations, but we can’t. I don’t really have any more reason to think it will be negative than to think it will be positive; they might just ignore us and not care about us.
One fear is that even if they don’t naturally become dangerous, bad actors in the world will program one to be.
Right, so that I am concerned about. In fact, I am a little bit concerned about what Elon Musk is doing there, because the whole premise behind OpenAI is: let’s share this with the world to make sure it is not in the hands of big, elite companies who might use it nefariously. But here’s the thing – we don’t know what the breakthrough is going to be that leads to generalized AI, right? It might be that some college kid in a dorm room in fifteen years has the last small step of insight that we need to do it. So what you are doing with OpenAI is creating a scenario where everybody has equal access, and you are raising the possibility that somebody bad or nefarious might be the person who makes the final contribution, and they don’t have to share that last contribution, that last small step in what makes a generalized AI. I personally would feel much better if Larry Page or Mark Zuckerberg or Jeff Bezos were in charge of deciding what to do with the first generalized AI, because these people have thought a lot about ethical issues in running their own companies. Now, I’m not saying that I would always trust that they are doing the right thing, but they have had to think about legal issues and ethical issues as part of running big companies. They are more prepared to deal with this than a college kid somewhere. So by opening up access, I worry that OpenAI risks causing the very problem it is trying to solve. Nobody says, “Well, look, the way to solve the nuclear weapons problem is to really open up nuclear research so anybody can have access to it.” That doesn’t make any sense.
How do you think AI is going to change business?
That’s a great question. In the first wave of investment in the latest version of AI technologies, which came in 2013 to 2015, people didn’t think a lot about underlying use cases; they just knew there was something here, and they knew there was a small group of people who understood it, so the main way to raise funding was based on your academic credentials. You saw a lot of PhDs get funded, and you saw a lot of people building broad AI platforms—MetaMind, Clarifai, Indico, companies like that. Those are difficult places to start, because you are building these platforms when the end-use cases aren’t well defined yet. Some companies have been really smart; I think Clarifai has done a good job of making their platform more targeted, packaging it up into more product use cases, and that is why they just raised another round of funding. Now you’re starting to see a lot of smart technologists who may not understand the math behind the backpropagation algorithm, but who understand conceptually what some of the opportunities in AI are, moving into this space. I have seen a lot of smart technologists start companies.
Where the real explosion is going to happen is in another few years, when much of the technical work to do AI gets packaged away and abstracted in a way that’s really easy. People started with these platforms, and once there are enough use cases, those platforms will make a comeback and be very useful, and people will start to abstract that way. Then any software engineer will be able to add AI to their platform, and that will be pretty cool. I think it is going to change businesses in a lot of ways.
I think you are going to see a major economic boom driven by AI. The reason is that, historically, what has driven economic growth in this country is productivity growth, and a lot of that productivity growth is driven by technology. In the last wave of technology, the consumer wave and the cloud wave, the social stuff and the cloud stuff, the cloud stuff was mainly a cost rearrangement issue, right? It wasn’t like, wow, we’re so much more productive because our stuff runs in the cloud now instead of on premise. Maybe marginally, maybe a little bit, but it wasn’t a step function in productivity. Then the consumer stuff, the social stuff, really made things more efficient on the demand side. It fragmented our time. So hey, I watch less TV because I’m tweeting and I’m streaming music and I am doing whatever at the same time. It made the demand side more efficient, and if you believe that demand drives a lot of economic growth, it’s part of the reason growth has been anemic. We didn’t have to buy tapes and CDs; we can just stream the same songs. We didn’t have to buy as many cars and hotel rooms because of Uber and Airbnb. So there have been a lot of those issues that—I wouldn’t say they have been a drag on growth, but there has been an absence of drivers of productivity growth. What you’re going to see with AI is the initial augmentation of workers to make them much more productive, and I think that’s really going to lead to an explosion of productivity. I wouldn’t be surprised if three or four years from now we’re looking out and seeing a couple of years of three, four, five percent productivity gains again as people start to do more and more.
The way it is going to impact businesses: first, you’re going to have the ability to automate away tasks that are monotonous, mundane, simple. You’re going to see augmented information gathering and decision-making. You’re going to see computers supporting humans in the things they do, and then the computers are going to slowly inch their way up that stack of cognitive capability and take away more and more of the human work. I don’t know how long it takes to get through that whole stack, but when you’re eating the bottom part of that stack, in the early phases, and you are augmenting the humans primarily, that’s going to be explosive. I think the growth is going to be fantastic. On the other side, maybe it comes around and hurts us in the long, long run, but it’s unavoidable.
So you are an angel investor. What do you look for? What’s your investment thesis on AI?
Yes, great question. I’ve got about fifteen angel investments, and ten of them are in the AI space, so that’s really where I’m mostly focused. I look for a couple of things. I am very entrepreneur-focused, so: is this entrepreneur hungry, is this the kind of person who can deal with a lot of setbacks and obstacles and get through it all and make things happen? I look a lot at that. I look at long-term access to data sets; data is very important in AI. I don’t need a company to have the data sets they need on day one, but I need them to have a creative way to get them, or to build them over time. I look for applications of the technology that are going to stay out of the way of where Google and Facebook and Amazon and the big guys are going in the near term. Then I look at the upside of the market opportunity. I’m less concerned with downside risk and whether I lose money, and more concerned with the idea that, if I am right, can this be a really big company? Those are the factors I evaluate, but I’m more agnostic about the specific underlying AI technology people use. I don’t have a strong bias for or against, say, deep learning versus phased-in approaches or whatever. I think they’re all going to change a lot over the next five or six years as new techniques emerge.
So if you were to think through every industry, consumer industries as well as B2B, where do you think the biggest gains, the best low-hanging fruit, can be found? Is it going to be pure knowledge work, or logistics, or things like customer service where you interact with other people? Where is an area where you think, “Oh, that’s a place where we’re going to see huge gains pretty quickly, without a doubt”?
Two things. I think anything robotics-driven is really going to start to see an explosion. Anything where a machine couldn’t do it before because it needed some autonomy, it needed to be able to learn in its environment, is now going to be open. So I do think a lot of logistics. I think a lot of the self-driving car stuff is going to have broad and deep implications, even far beyond what we think right now. Then I think knowledge workers are going to be heavily impacted, but not on the part where they interact with consumers or interface with customers. I actually think that will be the last part to be automated away, and that will be a long time coming, because if I think about it as a CEO: if I have a salesperson or a support person or somebody like that, I would rather automate away the other parts of their job and have them spend as much time as they can doing the face-to-face work that they’re good at, rather than automate that piece away. So I think you’ll see that happen from the bottom up. You’ll see a lot of the reporting, the data collecting, the analytics and prediction and decision-making, all that kind of stuff, being taken over by the machines before the actual communications with customers and users are.
Where do you think the United States is compared to other countries in terms of research and development and implementing and investments in AI and so forth?
I think we’re probably still tops in the world if you think about it from an ownership-of-IP perspective. I say that because a lot of the Google DeepMind group is run out of London, but it is owned by Google, so if you want to look at it from an ownership perspective, I think we are pretty powerful. If you want to look at it from a data perspective, I think American companies have some of the best and most interesting data sets in the world, which are going to enable a lot of this training. I worry that in the next five or six years we could slip to number two behind China, because China is making a tremendous amount of investment in these fields, and possibly, in a while, even behind Japan, which is investing a lot. I think that is going to reinforce what’s happening in Asia, and culturally they are very different in how they think about AI and robots and some of that stuff. They were earlier to mobile phones because they skipped a lot of the landline infrastructure; they didn’t have a lot of the legacy problems that we had to suffer through. Messaging was there earlier, and I think that has driven the success of WeChat. So I think the United States is still in the lead. There is a chance we keep the lead, but who knows?
I guess all of your comments are absent regulatory hurdles that impede development.
Yes, that is a really big challenge for the United States. With this technology accelerating and taking over fast, and the potential for misuse if North Korea or Iran or someplace despotic becomes a leader in AI, how do you regulate it appropriately in ways that keep things going well and moving forward? In this country we’re very pro-regulation, maybe too much so, and I worry about a lot of this work moving overseas if we regulate it too much. But I also worry that our politicians tend to be lawyers; they don’t come from science and technology backgrounds, so they don’t understand this stuff, and a lot of the scientists and technologists don’t have any ethical or philosophical or political training to think about the impact of government and everything else.
Tell me about Technically Sentient. How did you happen to pick that name?
Well, because you have this idea that you’re going to build machines that are self-aware, and it was a word play on “technically,” meaning technical, computers, and all that kind of stuff, and also the “Well, technically…” sense of the word. I looked at it as sort of a double entendre, and a nod to passing the Turing test.
And what are you trying to do with it?
My last company was in the backup and security space, so when I started an AI company I wanted to rebrand myself. I have a lot of AI experience in my past, but that’s not how I’m known to the community, and I saw a real absence of AI thought leadership on the business side and on the applied technology side; a lot of it was deep technology. So I decided, hey, nobody’s out there, I like to write, I follow a lot of this stuff, and I have strong opinions about everything. It was an opportunity to do three things, really. One, brand myself and be able to promote my company sometimes, when appropriate; I try not to do that too much. Two, it’s a very scalable way to keep in touch with people. I have almost seven thousand subscribers; some of them are my friends, some of them aren’t, but they hear from me every week. They feel like they know me, they feel like we’ve been in touch even if we haven’t, so that’s kind of good. Three, it’s a great source of deal flow for angel investing, because a lot of entrepreneurs read it, and a lot of VCs read it and will often share deals with me or ask me to come into a deal. So I get a lot of benefits from it, and I feel like as long as I keep it curated at a high quality, the readers get a lot out of it as well.
You are the CEO of Talla. Tell us about that.
Yes, Talla started out in the chat bot space, focused on building a chat bot for HR teams. What we found as we started going down that path is that a lot of the workflows you might try to build into a bot actually need to be highly customizable. So we ended up abstracting the platform to a higher level, so that the way you can think about it now is that a bot is just another interface, like a mobile app or web app or anything like that. What you can really do with Talla is build and automate intelligent, communication-driven or data-driven workflows that are HR related. Let me give you some examples. You can install Talla in Slack or HipChat or whatever communications system you use. Then you can use Talla to run polls that look at employee net promoter scores. Talla can do on-boarding for your new employees: on day one, Talla can drip in, “Hey, you should fill out these forms. You should read this article. You should do this thing.” “On day five, you should do this.” “On day seven, you should check in with your manager and talk about this.” You drip that in over time. You can give Talla commands like, “Hey, generate a letter for this person,” and Talla will ask you a couple of questions and go off and do it, or suggest pay ranges. I look at it as: Talla can automate a lot of your busy-work, a lot of your monotonous work, and really help you focus on the stuff that is more strategic, more important. So I hope that we will play a big part in the future of work, and really where that goes with respect to artificial intelligence.
Alright, well, thank you for your time.
Join us at Gigaom AI in San Francisco, February 15-16th where Rob May will speak more on the subject of AI.