Voices in AI – Episode 44: A Conversation with Gaurav Kataria

In this episode, Byron and Gaurav discuss machine learning, jobs, and security.

Byron Reese: This is Voices in AI brought to you by GigaOm. I am Byron Reese. Today our guest is Gaurav Kataria. He is the VP of Product over at Entelo. He is also a guest lecturer at Stanford. Up until last month, he was the head of data science and growth at Google Cloud. He holds a Ph.D. in computer security risk management from Carnegie Mellon University. Welcome to the show, Gaurav!

Gaurav Kataria: Hi Byron, thank you for inviting me. This is wonderful. I really appreciate being on your show and having this opportunity to talk to your listeners.

So let’s start with definitions. What is artificial intelligence?

Artificial intelligence, as the word suggests, starts with "artificial," and at this stage we are in the mode of creating an impression of intelligence; that's why we call it artificial. What artificial intelligence does is learn from past patterns. You keep showing patterns to the machine, to a computer, and it will start to understand those patterns, and it can say: every time this happens I need to switch off the light, every time this happens I need to open the door, and things of this nature. So you can train the machine to spot these patterns and then take action based on those patterns. A lot of this is being talked about right now in the context of self-driving cars. When you're developing an artificial intelligence technology, you need a lot of training data for that technology so that it can learn the patterns in a very diverse and broad set of circumstances and build a more complete picture of what to expect. Then, whenever it sees the same pattern in the future, it knows from its past what to do, and it will do it.
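The pattern-to-action loop Gaurav describes is, in effect, supervised classification. Here is a minimal sketch, assuming scikit-learn; the sensor features and action labels are hypothetical, purely for illustration:

```python
# A minimal sketch of "show the machine patterns, then act on them."
# The sensor readings and action labels here are invented examples.
from sklearn.tree import DecisionTreeClassifier

# Each row is a pattern: [ambient_light, motion_detected, hour_of_day]
X = [
    [0.9, 0, 10],  # bright, no motion, morning
    [0.1, 1, 21],  # dark, motion, evening
    [0.2, 0, 23],  # dark, no motion, night
    [0.8, 1, 14],  # bright, motion, afternoon
]
y = ["light_off", "light_on", "light_off", "light_off"]

model = DecisionTreeClassifier().fit(X, y)

# Having seen these patterns, the model maps a new observation to an action.
print(model.predict([[0.15, 1, 20]]))  # likely ['light_on']
```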

So…

Artificial intelligence is not built…sorry, go ahead.

So, that definition, or the way you are thinking of it, seems to preclude other methodologies that in the past would have been considered AI. It precludes expert systems, which aren't trained off datasets. It precludes classic AI, where you try to build a model. Your definition really is a definition of machine learning, is that true? Do you see those as synonymous?

I do see a lot of similarity between artificial intelligence and machine learning. You are absolutely right that artificial intelligence is a much broader term than just machine learning. You could create an artificially intelligent system without machine learning by just writing some heuristics, and we could call that an expert system. In today's world, there is a lot of intersection happening between the fields of AI and machine learning, and the consensus, or the opinion of a lot of people in this space today, is that machine learning techniques are the ones that will drive artificial intelligence forward. However, we will continue to have many other forms of artificial intelligence.

Just to be really clear, let me ask you a different question. What you just said is kind of interesting. You say we've happened on machine learning and it's kind of our path forward. Do you believe that something like a general intelligence is an evolutionary development along the line of what we are doing now? Are we going to get a little better with our techniques, then a little better, and a little better, until one day we have a general intelligence? Or do you think general intelligence is something completely different that will require a completely different way of thinking?

Thanks for that question. I would say today we understand artificial intelligence as a way of extrapolating from the past. We see something in the past, and we draw a conclusion for future based on what pattern we have seen in the past. The notion of general intelligence assumes or presupposes that you can make decisions in the future without having seen those circumstances or those situations in the past. Today, most of what’s going on in the field of artificial intelligence and in the field of machine learning is primarily based on training the machine based on data that already exists. In [the] future, I can foresee a world where we will have generalized intelligence, but today we are very far from it. And to my knowledge most of the work that I have seen and I have interacted [with] and the research that I have read speaks mostly in the context of training the systems based on current data—current information so that it can respond for similar situations in the future—but not anything outside of that.

So, humans do that really well, right? Like, we are really good at transfer learning. You can train a human with a dataset of one thing. You know, say "this is an alien, a grog," and show them a drawing, and they could pick out a photograph of one, they could pick out one hanging behind a tree, they could pick out one standing on its head. How do you think we do that? I know it's a big question. Is that machine learning? Is that something you can eventually train a machine to do solely with data, or are we doing something there that's different?

Yeah, so you asked about transfer learning. In transfer learning, we train the machine or the system for one set of circumstances or conditions, and then it is able to transfer that knowledge, or apply that knowledge, in another area. It can still act based on that learning, but the assumption is that there is still training in one setup before you transfer that learning to another, new area. So when it goes to the new area, it feels like there was no training and the machine is just acting, as if with general intelligence. But that's not true, because the knowledge was transferred from another dataset, another condition where there was training data. So I would say transfer learning does start to feel like, or mimic, generalized intelligence, but it's not generalized, because it's still learning from one setup and then trying to extrapolate to a newer or different setup.
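In code, the pattern Gaurav describes usually looks like reusing a network trained on one large dataset and retraining only its last layer on the new task. A minimal sketch, assuming PyTorch and torchvision (0.13 or later); the class count for the new domain is a hypothetical placeholder:

```python
# Transfer learning sketch: reuse a network trained on ImageNet and
# retrain only its final layer for a new task.
import torch.nn as nn
from torchvision import models

num_new_classes = 5  # hypothetical: five categories in the new domain

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the learned feature extractor: this is the "transferred" knowledge.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head so only it learns the new task.
model.fc = nn.Linear(model.fc.in_features, num_new_classes)

# Training would now update model.fc alone, so the system appears to
# learn the new domain from very little data -- but only because the
# bulk of the knowledge was transferred from the original training set.
```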

So how do you think humans do it? Let me try the question in a different way. Is everything you know how to do, everything a human knows how to do by age 20, something we learned from seeing examples of data? Could a human be thought of as a really sophisticated machine learning algorithm?

That's a very good point. I would like to think of humans, all of us, as doing two things. One is learning: we learn from our experiences, and as you said, going from birth to 20 years of age, we do a lot of learning. We learn to speak, we learn the language, we learn the grammar, and we learn the social rules and protocols. In addition to learning, or let me say separate from learning, humans also do another thing: humans create where there was no learning or repetition of what was taught to them. They create something new—as the expression goes, "create from scratch." This creating something from scratch, creating something out of nothing, is what we call human creativity or innovation. So humans do two things: they are very good learners, they can learn from even very little data, but in addition to being good learners, humans are also innovators, creators, and thinkers. The second aspect is where I think artificial intelligence and machine learning really don't do much. On the first aspect, you're absolutely right: humans could be thought of as a very advanced machine learning system. You could give it some data, and it will pick [it] up very quickly.

In fact, one of the biggest challenges in machine learning today, in the context of AI, is that it needs a lot of training data. If you want to make a self-driving car, experts have said it could take billions of miles of driving data to train that car. The point being, with a lot of training data you can create an intelligent system. But humans can learn with less training data. When you start learning to drive at the age of sixteen, you don't need to drive a million miles before you learn how to drive, but machines will need millions and millions of miles of driving experience before they can learn. So humans are better learners, and there is something going on in the human brain that's more advanced than typical machine learning and AI models today. I'm sure the state of artificial intelligence and machine learning will advance to where machines can probably learn as fast as a human and will not require as much training data as they do today. But the second aspect of what a human does—creating something out of nothing, from scratch, the pure thinking, the pure imagination—there I think there is a difference between what a human does and what a machine does.

By all means! Go ahead and explain that, because I have an enormous number of guests on the show who aren't particularly impressed by human creativity. They think it's kind of a party trick, just kind of a hack; there's nothing really all that interesting about it, we just like to think there is. So I'd love to talk to somebody who thinks otherwise, who thinks there's something positively quite interesting about human creativity. Where do you think it comes from?

Sure! I would like to consider a thought experiment. Imagine that a human baby was taken away from civilization, from the middle of San Francisco or Austin, a big city, and put on an island all by herself: just one human child, alone on an island. That child will grow over time, will learn to do a lot of things, and will learn to create a lot of things on her own. That's where I am trying to take your imagination. Consider what that one individual, without having learned anything from any other human, could be capable of doing. Could she be capable of creating a little bit of shelter for herself? Could she be capable of finding food for herself? There may be a lot of things that humans would be able to do, and we know that from the history of our civilization and the history of mankind.

Humans have invented a lot of things, from basic things like fire and the wheel to much more advanced things like sending rocket ships into space. So I do feel that humans do things that are just not learned from the behavior of other humans. Humans create completely new and novel things, independent of what was done by anybody who lived on this planet before them. So I definitely have a view here: I am a believer in human creativity, ingenuity, and intuition. Humans do create a lot of things; it is humans who are creating all the artificial intelligence systems and machine learning systems. I would never count out human creativity.

So, somebody arguing on the other side of that would say, well, no: she's on this island, it's raining, and she sees a spot under a tree that didn't get wet, or she sees a fox going into a hole when it starts raining, and, therefore, that's a data point that she was trained on. She sees birds flying down, grabbing berries and eating them, so it's just training data from another source; it's just not from other humans. We saw rocks roll down the hill, and we generalized from that to how round things roll. I mean, it's all just training data from the environment; it doesn't have to be specifically human data. So what would you say to that?

No, absolutely! I think you're giving very good counterexamples, and there is certainly a lot of training and learning. But if you think about sending a rocket to the moon, and you ask, okay, did we just see some training data around us, create a rocket, and send it to the moon? There it starts to become harder to say that there is a one-to-one connection from some training data to sending a rocket to the moon. There are much more advanced and complicated things that humans have accomplished than just finding shelter under a tree or watching rolling rocks. So humans definitely go way further in their imagination than any simple example I could give would illustrate.

Fair enough! We'll move on to another issue here in just a minute, but I find this fascinating. Is your contention that the brain is not a Turing machine? That the brain behaves in fundamentally different ways than a computer?

I'm not an expert on how the human brain, or any mammal's brain, actually behaves, so I can't comment on all the technical aspects of how a human brain functions. I can say from observation that humans do a lot of things that machines don't, because humans come up with things completely from scratch. They come up with ideas out of nowhere, whereas machines don't come up with ideas out of nowhere. They either learn very directly from the data or, as you pointed out, they learn through transfer learning: they learn from one situation, and then they transfer that learning to another situation.

So, I often ask people on the show when they think we will get a general intelligence, and the answers I get range between five and five hundred years. It sounds like, not to put words into your mouth, you're on the farther end of that range. You think we're pretty far away, is that true?

I do feel that it will be further out on that dimension. In fact, what I'm most fascinated by, and I would love your listeners to also think about this, is that we talk a lot about human consciousness. We talk about how humans become creative, and what that moment is of getting a new idea or thinking through a problem where you're not just repeating something you have seen in the past. That consciousness is a key topic that we all think about very, very deeply, and we try to come up with good definitions for what consciousness is. If we ever create a system which we believe can mimic or show consciousness-level behavior, then at the very least we would have understood what consciousness is. Today we don't even understand it. We try to describe it in words, but we don't have perfect words for it. With more advances in this field, maybe we will come up with a much crisper definition of consciousness. That's my belief, and my hope is that we will continue to work in this area. Many, many researchers are putting a lot of effort and thought into this space, and as they progress, whether it takes five years or five hundred years, we will certainly learn a lot more about ourselves in that time period.

To be clear though, there is widespread agreement on what consciousness is. The definition itself is not an issue. The definition is the experience of the world. It’s qualia. It’s the difference [between] a computer sensing, measuring temperature and a person feeling heat. And so the question becomes how could a computer ever, you know, feel pain? Could a computer feel pain? If it could, then you can argue that that’s a level of consciousness. What people don’t know is how it comes about, and they don’t even know, I think to your point, what that question looks like scientifically. So, trying to parse your words out here, do you believe we will build machines that don’t just measure the world but actually experience the world?

Yeah, I think when we say experience it is still a lower level kind of feeling where you are still trying to describe the world through almost like sensors—sensing things, sensing temperatures, sensing light. If you could imagine where all our senses were turned off, so you are not getting external stimuli and everything was coming from within. Could you still come up with an idea on your own without any stimulus? That’s a much harder thing that I’m trying to understand. As humans, we do try to strive to get to that point where you can come up with an idea without a stimulus or without any external stimuli. For machines, that’s not the bar we are holding for them. We are just holding the bar to say if there is a stimulus, will they respond to that stimulus?

So just one more question along these lines. At the very beginning when I asked you about the definition of artificial intelligence, you replied about machine learning, and you said that the computer comes to understand, and I wrote down the word “understand” on my notepad here, something. And I was going to ask you about that because you don’t actually think the computer understands anything. That’s a colloquialism, right?

Correct!

So, do you believe that someday a computer can understand something?

I think for now I will say computers just learn. Understanding, as you said, has a much deeper meaning. Learning is much more straightforward: you have seen some pattern, and you have learned from that pattern. Whether you understand it or not is a much deeper question, and today, with most of our machine learning systems, all we are expecting them to do is learn.

Do you think that there is a, quote, "master algorithm"? Do you think that there is a machine learning technique, one we haven't discovered yet, that in theory can do unsupervised learning? Like you could just point it at the internet, and it could just crawl it and end up figuring it all out, understanding it all? Do you think that there is an algorithm like that? Or do you think intelligence is going to be found to be very kludgy, and we are going to have certain techniques to do this and then this and then this? What do you think that looks like?

I see it as a version of your previous question: is there going to be generalized intelligence, and is that in five years or five hundred years? I think where we are today is the more kludgy version, where we have machines that can scan the entire web and find patterns, and they can repeat those patterns, but nothing more than that. It's more like a question-and-answer type of machine, a machine that completes sentences. There is no sense of understanding, only a sense of repeating the patterns it has seen in the past.

So if you’re walking along the beach and you find a genie lamp, and you rub it, and a genie comes out, and the genie says I will give you one wish: I will give you vastly faster computers, vastly more data or vastly better algorithms. What would you pick? What would advance the science the most?

I think you hit the nail on the head: those are the three things we need to improve machine learning. We need more and better data, we need more computing power, and we need better algorithms. In the state of the world as I experience it today within the field of machine learning and data science, our biggest bottleneck, the biggest hurdle, is usually data. We would certainly love to have more computational power. We would certainly pick much better and faster algorithms. But if I could ask for only one thing, I would ask for more training data.

So there is a big debate going on about the implications that these technologies are going to have on employment. You know the whole setup, as do the listeners. What's your take on that?

I think as a whole our economy is moving into much more specialized jobs, where humans do something specialized rather than something repetitive and very general or simple. Machine learning systems are certainly taking a lot of repetitive tasks away. So if there is a task that a human repeats, say, a hundred times a day, those simpler tasks are definitely getting automated. But humans, coming back to our earlier discussion, do show a lot of creativity and ingenuity and intuition, and a lot of jobs are moving in the direction where we rely on human creativity. So for the economy as a whole, and for everybody around us, I feel the future is pretty bright. We have an opportunity now to apply ourselves to more creative things, and machines will do the repetitive things for us. Humans can focus on more creative work, and that brings more joy and happiness and satisfaction and fulfillment to every human than repetitive tasks, which become very mundane and not very exciting.

You know, Vladimir Putin famously said, and I'm going to paraphrase here, that whoever dominates in AI will dominate the world. There is this view from some who want to weaponize the technology, who see it strategically in this kind of great geopolitical world we live in. Do you worry about that? Or are you like, well, you could say that about every technology (you could say that whoever controls metallurgy controls the future), or do you think AI is something different that will really reshape the geopolitical landscape of the world?

So, as you said, every technology definitely gets weaponized, and we have seen many examples of that, not just going back a few decades. We have seen it for thousands of years: a new technology comes up, and as humans we get very creative in weaponizing it. I do expect that machine learning and AI will be used for these purposes, but like any other technology in the past, no one technology has destroyed the world. As humans, we come up with interesting ways to still reach an equilibrium, to still reach a world of peace and happiness. So while there will be challenges, and AI will create problems for us in the field of weapons technology, I would still bet that humans will find a way to create equilibrium out of this disruptive technology. This is not the end of the world, certainly not.

You're no doubt familiar with the European initiatives saying that when an artificial intelligence makes a decision that affects you—it doesn't give you a home mortgage or something like that—you have a right to know why it did that. You're an advocate, it seems, for the idea that that is both possible and desirable. Can you speak to that? Why do you think that's possible?

So, if I understand the intent of your question, the European Union, and probably other jurisdictions around the world, have put a lot of thought into a) protecting human privacy and b) making that information more transparent and available to everyone. I think that is truly the intent of the European regulation, as well as similar regulation in many other parts of the world: we want to make sure we protect human privacy, and we give people an opportunity to either opt out or understand how their data and information is being used. I think that's definitely the right direction. So if I understand your question, that's what Entelo as a company is looking at. Every company in the space of AI and machine learning is also looking at creating that respectful experience where, if any person's data is used, it's done in a privacy-sensitive manner, and the information is very transparent.

Well, I think I might be asking something slightly different, rather poorly it seems. Let me use Google as an example. If I have a company that sells widgets and I have a competitor—they have a company that sells widgets, and there are ten thousand other companies that sell widgets—and if you search for "widget" in Google, my competitor comes up first and I come up second, [then] I say to Google, "why am I second and they are first?" I kind of expect Google's response to be, "what are you talking about?" Who knows? There are so many things, so many factors. And yet that's a decision that AI made that affected my business. There's a big difference between being number one and number two in the widget business. So if you say that for every decision it makes, you've got to be able to explain why it made that decision, it feels like it puts shackles on the progress of the industry. Can you comment?

Right. I think I understand your question better now. That burden is on all of us, because it is a slippery slope: as artificial intelligence and machine learning algorithms become more and more complex, it becomes harder to explain them, and that's a burden we all carry, anybody who is using artificial intelligence, which nowadays is pretty much all of us. If we think about it, which company is not using AI and ML? Everybody is. It is a responsibility for everybody in this field to make sure they have a good understanding of their machine learning and artificial intelligence models, so that you can start to understand what triggers certain behavior. Every company that I know of (I can't speak for everybody, but based on my knowledge) is certainly thinking about this, because you don't want to put a machine learning algorithm out there when you can't even explain how it works. So we may not have a perfect understanding of every machine learning algorithm, but we certainly strive to understand it as best we can and explain it as clearly as we can. That's a burden we all carry.
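One common way teams probe "what triggers certain behavior" in a trained model is feature attribution. A minimal sketch, assuming scikit-learn and a synthetic dataset; this is a generic illustration of the technique, not how any particular company explains its models:

```python
# Permutation importance: shuffle each feature in turn and measure how
# much accuracy drops. The bigger the drop, the more the model's
# decisions depend on that feature. Data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: {importance:.3f}")
```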

You know, I'm really interested in the notion of embodying these artificial intelligences. One of the use cases is that someday we'll have robots that can be caregivers for elderly people. People can talk to them, and over time the robots learn to laugh at their jokes, learn to tell jokes like the ones they tell, and emote with them when they're telling some story about the past: "oh, what a beautiful story," and all of that. Do you think that's a good thing or a bad thing, to build that kind of technology that blurs the line between a system that, as we were talking about earlier, truly understands and a system that just learns how to, let's say, manipulate the person?

Yeah, right now my understanding is that we are more in the field of learning than full understanding, so I'll speak from my area of knowledge and expertise, where our focus is primarily on learning. Understanding is something that I think we as a community of researchers will definitely look at. But most of the systems that exist today, and most of the systems that I can foresee in the near future, are learning systems; they are not understanding systems.

But even in a really simple case: you know, I have the device from Amazon that, if I say its name right now, is going to start talking to me, right? And when my kids come into the studio and ask it a question, once they get the answer and can tell it's not what they're looking for, they just tell it, you know, to be quiet. I have to say it somehow doesn't sit right with me to hear them cut off something that sounds like a human, something that would be rude in any other [context]. So, does that worry you? Is that teaching them something? Am I just an old fuddy-duddy at this point? Or does that somehow numb their empathy with real people, so they really would be more inclined to say that to a real person now?

I think you are asking a very deep question here as to do we as humans change our behavior and become different as we interact with technology? And I think some of that is true!

Yeah!

Some of that is true for sure. Like, when you think about SMS when it came out, like 25 years ago, as a technology, and we started texting each other: the way we would write a text was different from how we would write handwritten letters. By the standards of, let's say, 30 years ago, the texts were very impolite; they would have all kinds of spelling mistakes, they would not address people properly, and they would not end with the proper punctuation and things like that. But as a technology it evolved, it is still seen as useful to us, and we as humans are comfortable adapting to that technology. Every new technology, whether it is a smart speaker or texting on cell phones, will introduce new forms of communication, new forms of interaction. But a lot of human decency and respect comes from us, not just from how we interact with a speaker or on a text pad. A lot of it comes from much deeper-rooted beliefs than just an interface. So I do feel that while we'll adapt to new, different interfaces, a lot of human decency will come from a much deeper place than just the interface of the technology.

So you hold a Ph.D. in computer security risk management. When I have a guest on the show, sometimes I ask them, "what is your biggest worry?" or "is security really, you know, an issue?" And they all say yes. They're like, okay, we're plugging in 25 billion IoT devices, none of which, by the way, can we upgrade the software on. So you're basically cementing in whatever security vulnerabilities you have. And you know of all the hacks that get reported in the industry, in the news—stories of election interference, all this other stuff. Do you believe that the concern for security around these technologies is, in the popular media, overstated, understated, or just about right?

I would say it's just about right. This is a very serious issue: more and more data is out there, and more and more devices are out there, as you mention, a lot of IoT devices as well. The importance of this area has only grown over time and will continue to grow. So it deserves due attention in this conversation, in our conversation, in any conversation. I think bringing it into the limelight, drawing attention to this topic, and making everybody think deeply and carefully about it is the right thing, and I believe we are certainly not doing any fearmongering. All of these are justified concerns, and we are spending our time and energy on them in the right way.

So, just talking about the United States for a moment, because I'm sure all of these problems are addressed differently at a national level, country by country. Just talking about the US for a minute, how do you think we'll solve it? Do you just say, well, we'll keep the spotlight on it and hope that the businesses themselves see that they have an incentive to make their devices secure? Or do you think that the government should regulate it? How would you solve the problem now if you were in charge?

Sure! First of all, I am not in charge, but I do feel that there are three constituents in this. First are the creators of technology: when you are creating an IoT device, or any kind of software system, the responsibility is on the creator to think about the security of the system they are creating. The second constituent is the users: the general public and the customers of that technology. They put pressure on the creator that the technology and the system should be safe. If you don't create a good, safe system, you will have no buyers and users for it. So people will vote with their feet, and they will hold the company, the creators of the technology, accountable. And as you mentioned, there is a third constituent, and that is the government, or the regulator. I think all three constituents have to play a role. It's not any one stakeholder that can decide whether the technology is safe and good enough; it's an interplay between the three. So the creators of technology (companies, research labs, academic institutions) have to think very deeply about security. The users of technology hold the creators accountable, and the regulators play an important role in keeping the overall system safe. So I would say it's not any one person or entity that can make the world safe. The responsibility is on all three.

So let me ask Gaurav the person a question. You got this Ph.D. in computer security and risk management. What are some things that you personally do because of your concerns about security? For instance, do you have a piece of tape over your webcam? Or are you like, I would never hook up a webcam? Or, I never use the same password twice? What are some of the things that you do in your online life to protect your security?

So, you mentioned all the good things, like not reusing passwords, but one thing I have always mentioned to my friends and colleagues, and I would love to share it with your listeners, is: think about two-factor authentication. Two-factor authentication means that in addition to a password, you are using a second means of authentication. So if you have a banking website, or a brokerage website, or for that matter even your email system, it's a good tactic to have two-factor authentication: you enter your password, but in addition the system requires a second factor. That second factor could be that it sends a text message to your phone with a code, and then you have to enter that code into the website or the software. Two-factor authentication is many, many times more secure than one-factor authentication, where we just enter a password, and the password can get stolen or breached or hacked. Two-factor is a very good security practice, and almost all companies and most creators of technology now support two-factor authentication, so the world is moving in that direction.
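For the curious, the codes behind most authenticator-app second factors come from a small, open algorithm: TOTP (RFC 6238). A self-contained sketch using only Python's standard library; the base32 secret below is a made-up example:

```python
# Time-based one-time password (TOTP, RFC 6238 / RFC 4226), from scratch.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # changes every 30 seconds
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and your phone share the secret, so both derive the same
# short-lived code; a stolen password alone is no longer enough.
print(totp("JBSWY3DPEHPK3PXP"))
```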

So, up until November you were the head of data science and growth of Google Cloud, and now you are the VP of Product at Entelo. So two questions: one, in your personal journey and life, why did you decide now is the time to go do something different, and then, what about Entelo got you excited? Tell us the Entelo story and what that’s all about.

Thanks for asking that. Entelo is in the space of recruiting automation. The idea is that recruiting candidates has always been a challenge; it's hard to find the right fit for your company. Long ago we would put classified ads in the newspaper, and then technology came along, and we could post jobs on our website and on job boards, and that certainly helped in broadcasting your message to a lot of people so that they could apply for your job. But when you are recruiting, people who apply for your job are only one means of getting good people into your company. You also sometimes have to reach out to candidates who are not looking for a job, who are not applying on your website or on a job board; they're just happily employed somewhere else. But they are so good for the role you have that you have to go tap on their shoulder and say, would you be interested in this new role, in this new career opportunity? Entelo creates that experience. It automates the whole recruiting process, and it helps you find the right candidates who may not apply on your website or a job board, who are not even looking for a job. It helps you identify those candidates and engage with them: reach out to them, tell them about your role, see if they are interested, and then engage them further in the recruiting process. All of this is powered by a lot of data, a lot of AI, and, as we discussed earlier, a lot of machine learning.

And so, I've often thought about what you're describing—AI has done really well at playing games because you've got these rules, and you've got points, and you've got winners and all of that. Is that how you think of this? Like, you have successful candidates at your company and unsuccessful candidates, and those are good points and bad points, so you're looking for people who look more like your successful candidates? On an abstract, conceptual level, how do you solve that problem?

I think you're describing the idea that not everybody is a good fit for your company and some people are. So the question is, how do you find the good fit? How do you learn who is a good fit and who is not? Traditionally, recruiters have been combing through lots and lots of resumes. If you think back a few decades, a recruiter would have a hundred or a thousand resumes stacked on their desk and would go through each one of them to say whether it was a fit or not. Then, about 20 years or so ago, a lot of keyword search engines were developed, so that as a human you don't have to read the thousand resumes. Let's just do a keyword search: if a resume has this word, then it's a good resume, and if it doesn't have that word, then it's not a good resume. That was a good innovation for scoring or finding resumes, but it's very imperfect, because it's susceptible to many problems. It's susceptible to resumes getting stuffed with keywords. And it's susceptible to the problem that there is more to a person, and more to a resume, than just keywords.

Today, the technology for identifying the right candidate is basically just keyword search on almost every recruiting platform. What a recruiter would do is say, "I can't look through a thousand or a million resumes, let me just do a keyword search." Entelo is trying to take a very different approach. Entelo is saying, "let's not think about just keyword search; let's think about who is [the] right fit for a job." When you as a human look at a resume, you don't do a keyword search; computers do keyword search. In fact, if I were to put a resume in front of you for an office manager you're hiring for your office, you would probably scan that resume, you would have some heuristics in mind, you would look through some information and then say whether it is a good resume or not. I can bet you are not going to do a keyword search on that resume and say, "oh, it has the word office, and it has the word manager, and it has the word furniture in it, so it's a good resume for me."
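To make the keyword-stuffing problem concrete, here is a tiny sketch of the naive scoring Gaurav describes. The job keywords and resumes are invented, and this is a generic illustration rather than Entelo's actual scoring method:

```python
# Naive keyword scoring, and why it rewards a stuffed resume.
job_keywords = {"office", "manager", "vendors", "furniture", "scheduling"}

resumes = {
    "stuffed": "office manager office manager furniture furniture vendors",
    "genuine": "ran daily operations for a 30-person team, handled vendor "
               "contracts, scheduling, and supply orders",
}

def keyword_score(text: str) -> int:
    # Count tokens that match a posting keyword -- the naive approach.
    return sum(token.strip(",.").lower() in job_keywords for token in text.split())

for name, text in resumes.items():
    print(name, keyword_score(text))
# The stuffed resume wins (7 hits vs. 1) even though the genuine one
# describes the actual experience -- and note "vendor" misses "vendors"
# entirely. "There is more to a resume than keywords" is exactly the
# gap a learned fit model tries to close.
```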

There is a lot that happens in the mind of a recruiter as they think through whether a person is a good fit for a role. We are trying to learn from that recruiter experience, so they don't have to look through hundreds or thousands of resumes, nor do they have to do a keyword search. We can learn from that experience which resumes are good for a role and which are not, find that pattern, and then surface the right candidates. And we take it a step further: we reach out to those candidates and engage them, so the recruiter only sees the candidates who are interested. They don't have to think, okay, now do I have to do a keyword search across a million resumes and try to reach out to a million candidates? All of that process gets automated through the system we have built here at Entelo and the system we are further developing.

So at what level does the training happen? For instance, if you have, you know, Bob's House of Plumbing across the street from Jill's House of Plumbing, and both are looking for an office manager, and both have 27 employees, do you say that their candidate pools are exactly the same? Or is there something about Jill and her 27 employees that's different from Bob and his 27 employees, which means they don't necessarily get one-for-one the exact same candidates?

Yeah, so historically most of the systems were built with no fit or contextual information and no personalization. Whether Bob did the search or Jill did the search, they would get the exact same results. Now we are moving in the direction of really understanding the fit for Bob's company and the fit for Jill's company, so that each gets the right candidates, because one candidate is not right for everybody, and one job is not right for every candidate. It is that matching between the candidate and the job.

Another aspect to think about, as to why using a system is sometimes better than relying on one person's opinion: if it was one recruiter deciding who's a good fit for Bob's company or Jill's company, that recruiter may have their own bias, and whether we like it or not, all of us tend to have unconscious biases. This is where the system, the machine, tends to perform much better than a single human, because it's learning across many humans rather than from only one. If you were learning by copying one human, you would pick up all of their biases, but if you learn across many humans, you tend to be much less biased, or at least the biases tend to average out, as opposed to being very biased toward one recruiter's point of view. So that's another reason why this system performs better than relying on Bob's or Jill's individual judgment.
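The averaging claim is easy to see numerically. A toy sketch with all numbers invented: each simulated recruiter scores candidates with a personal bias offset, and pooling their labels shrinks the error that any one rater's bias introduces.

```python
# Pooling labels from several biased raters washes out individual bias.
import numpy as np

rng = np.random.default_rng(0)
true_fit = rng.random(1000)              # hypothetical "true" candidate quality

# Each recruiter labels with small noise plus a personal bias offset.
biases = [0.30, -0.10, 0.05, -0.20, 0.0]
labels = [true_fit + b + rng.normal(0, 0.05, 1000) for b in biases]

one_rater = labels[0]
pooled = np.mean(labels, axis=0)

print("one rater's mean error:  ", np.abs(one_rater - true_fit).mean().round(3))
print("pooled raters' mean error:", np.abs(pooled - true_fit).mean().round(3))
```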

It's interesting; it sounds like a really challenging thing. As you were telling the story about looking for an office manager, there are things you're looking for when you're scanning, and most often there is some form of abstraction. If my company needs an office manager for an emergency room, I'm looking for people who have been in high-stress situations before. Or if my company is, you know, a law firm, I'm looking for people with a background in things that are very secure, where privacy's super important. Or if it's a daycare, I maybe want somebody who's got a background in dealing with kids. So the requirements are always kind of one level of abstraction away, and I bet that knowledge is really hard to extract. I could tell you I need somebody who can handle the pace at which we move around here, but for a system to learn that sounds like a real challenge, not beyond machine learning or anything, but a challenge. Is it?

Yes, you're absolutely right. It is a challenge, and we have just recently launched a product called Entelo Envoy that's trying to learn what's good for your situation. What Entelo Envoy will do is find the right candidates for your job posting or job description, send them to you, and then learn from you as you accept or reject certain candidates. You say that this candidate is overqualified, or comes from a different industry, and as you categorize those as fit and non-fit, it learns, and over time it starts sending you candidates that are much more fine-tuned to your needs. The whole premise of the system is that, initially, it's trying to find information that's relevant to you: you are looking for office managers, so you should get office manager resumes and not nurses or doctors. That's the first element. The second element is to remove bias: a human might say, well, we want only males or only females; let's have the system be unbiased in finding the right candidate. And then at the third level, if we do have more contextual information, as we pointed out, say you are looking for experience in a high-stress situation, or for expertise in child care because your office happens to be the office for a daycare, then there is a third degree of personalization, a third degree of matching, that you tune at the system level. Entelo Envoy allows you to do that third level of tuning. It'll send you candidates, and as you approve and reject those candidates, it will learn from your behavior and fine-tune itself to find you the perfect match for the position you are looking for.
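The accept/reject loop described here is a standard online-learning pattern. Entelo Envoy's real internals are not public; this is a hedged sketch of the general idea, assuming scikit-learn, with hypothetical candidate features:

```python
# Online learning from recruiter feedback: the model updates on every
# accept/reject decision instead of being retrained in batch.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # 0 = rejected, 1 = accepted

def on_recruiter_feedback(candidate_features, accepted):
    """Fold one accept/reject decision into the model immediately."""
    X = np.array([candidate_features])
    y = np.array([1 if accepted else 0])
    model.partial_fit(X, y, classes=classes)

# Hypothetical features: [years_experience, industry_match, stress_env_score]
on_recruiter_feedback([8.0, 1.0, 0.9], accepted=True)
on_recruiter_feedback([2.0, 0.0, 0.1], accepted=False)

# Future candidates get ranked by the continually updated model.
print(model.predict_proba(np.array([[6.0, 1.0, 0.8]])))
```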

You know, this is a little bit of a tangent, but when I talk to folks on the show about whether there really is this huge shortage of people with technical skills and machine learning backgrounds, they're all like, "oh yeah, it's a real problem." I assume for them it's like, "I want somebody with a machine learning background, and, oh, they need to have a pulse; other than that, I'm fine." So is that your experience, that people with these skills are right now in this incredibly high demand?

You're absolutely right, there is high demand for people [with] machine learning skills, but I have been building products for many years now, and I know that to build a good product, you need a good team. It's not about one person. Intuitively, we have all known that whether you are in machine learning, finance, or healthcare, it takes a team to accomplish a job. When you are operating on a patient in an operating theatre, it's not only the doctor that matters; it's the whole team of people that makes an operation successful. The same goes for machine learning systems. When you are building a machine learning system, it's a team of people working together, not just one engineer or one data scientist, that makes it all possible. So you want to create the right team, a team that works well together, that respects each other and builds on each other's strengths; with a team that's constantly fighting with each other, you will never accomplish anything. So you're right, there is high demand for people in the field of machine learning and data science. But every company and every project requires a good team, and you want the right fit of people for that team, rather than just individually good people.

So, in a sense, Entelo may invert the setup you described at the start, where you post the job and get a thousand resumes. You may be somebody like a machine learning guru and get a thousand companies that want you. Will that happen? Do you think that people with high-demand skills will get heavily recruited by these systems in that kind of outreach way?

I think it comes back to this: if all we were doing was keyword search, then you'd be right. One resume looks good because it has all the right keywords. But we don't do that. When we hire people onto our teams, we are not just doing [a] keyword search. We want to find the person who is the right fit for the team, a person who has the skills, attributes, and understanding. It may be that you want someone who is experienced in your industry, or someone who has worked on a small team, or someone who has worked in a startup before. So there are many, many dimensions along which candidates are found by companies, and a good match happens. I feel it's not one candidate who gets surfaced to a thousand companies and has a thousand job offers. It's usually that every candidate has the right fit, every role has the right need for the right candidate, and it's that matching of candidate and role that creates a win-win situation for everyone.

Well, I do want to say, you're right that this is one of those areas where we still largely do it the old-fashioned way: somebody looks at a bunch of people and makes a gut call. So I think you're right that it's an area where technology can be deployed to really increase efficiency, and what better place to increase efficiency than building your team, as you said. So I guess that's it! We are running out of time here. I would like to thank you so much for being on the show and wish you well in your endeavor.

Thank you, Byron. Thanks for inviting me and thank you to your listeners for humoring us.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
