Session Name: Innovating Algorithms through Human Competition.
Speakers: S1 Mathew Ingram
S2 Karim Lakhani S3 Mike Lydon S4 Audience member
MATHEW I 02:34
Thanks a lot for joining us, both of you. Let’s jump right in. Karim, the two of you come to this topic from slightly different perspectives, so tell us a little bit about what you do, your research, and how you came to do this.
KARIM L 02:46
Sure. I am a professor at the business school, and I study innovation and innovation happening outside traditional organizations. All of my work initially was on open source communities, and now I’m focused a lot on competition-based models for driving innovation. Two of our major projects are with NASA, where we’re taking NASA’s toughest algorithmic challenges and getting them solved through Mike’s platform, and working with our medical school and taking genomic and life sciences data and also getting them solved through Mike’s platform. The objective for me from a social science perspective is to think about the design of contests and how well they work, and what we can learn about both contests and the general process of innovation.
MATHEW I 03:40
So Mike, for those who don’t know, tell us about TopCoder. What is it, how long has it been around, and what do you do?
MIKE L 03:42
TopCoder is a company that essentially builds enterprise software by breaking down a large software development effort into lots of individual units that can then be run as challenges and competitions. The history of TopCoder, the quick elevator pitch history, is back in 2001, a couple of the co-founders and myself started TopCoder as a way to address a problem at a previous company, a traditional software consultancy. The problem was finding the right people, the superstars, the ones who are going to contribute most of the value to the company and to the customers. We created a kind of for-fun competition format, something like a puzzle. It was a platform that incentivized individuals to participate in trying to solve computational and logical thinking problems as quickly as possible head to head.
It was successful, but the problem we ran into was that the intellect level of the individuals participating in the competition got so high that it became impossible for myself and my team to create the kinds of problems that would continuously stimulate these people. That’s when we realized that we were going to have to leverage the community that we were building to help us. We realized that what we should be doing was creating a platform and a workflow around competitions, so that this community could help us build software. That’s what TopCoder is.
MATHEW I 05:28
This leads me into discussing the study the two of you worked on together. I understand you effectively disassembled a biological problem with large datasets into chunks. Could you tell us a bit more about how that happened?
KARIM L 05:43
Sure. Part of the blessing and curse of being at Harvard is that we have really smart people. When we say that there are smarter people outside Harvard, they scoff at us. What we said to our medical school was that this approach has a lot of currency in the commercial world, so perhaps we could apply it to the academic science setting. We found a collaborator at the medical school who works in immunogenomics, which involves a lot of big data. He had a tough problem: it was basically a sequence alignment problem using genomic data. We told him that one of the things we had learned with TopCoder was to abstract the problem out of its context, and then design a scoring algorithm that people could write code to compete on.
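The abstraction Karim describes can be sketched roughly as follows (the function name and the scoring rule here are hypothetical illustrations; the actual challenge used a more elaborate metric). The point is that competitors see only abstract strings and a number to maximize, never the genomics behind them.

```python
def score_alignment(predicted: str, truth: str) -> float:
    """Score a submitted alignment against a held-out reference answer.

    Returns the fraction of positions where the submission matches the
    reference. Competitors optimize this number without needing to know
    that the strings represent genomic sequences.
    """
    if len(predicted) != len(truth):
        return 0.0  # length mismatch: the alignment is unusable
    matches = sum(p == t for p, t in zip(predicted, truth))
    return matches / len(truth)

# The platform runs each submission on hidden inputs and publishes the score.
print(score_alignment("ACGT-ACGT", "ACGTTACGT"))  # 8 of 9 positions match
```

A scorer like this is what makes the contest objective: every entry gets the same inputs and a single comparable number.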
MATHEW I 06:49
So the individuals themselves don’t have to know the problem domain; in this example, they don’t have to know what genomics is.
KARIM L 06:42
Exactly. So we ran this challenge on TopCoder for two weeks. The basic theory of a contest is that because there are multiple independent attempts at the problem, extreme values emerge very quickly. It’s like sampling from a normal distribution: as you draw more trials from the distribution, you’ll eventually get an extreme value coming out of it. But you need multiple independent trials. That’s what a contest platform provides.
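Karim’s point about independent trials and extreme values can be sketched with a quick simulation (a made-up illustration, not data from the actual contests): the more independent attempts you draw, the further out in the tail the best one tends to land.

```python
import random

random.seed(42)

def best_of(n_attempts: int) -> float:
    """Best score among n independent attempts, each drawn from the
    same standard normal 'performance' distribution."""
    return max(random.gauss(0, 1) for _ in range(n_attempts))

# Average the best score over 200 simulated contests of each size:
# more independent attempts -> a more extreme best value, on average.
for n in (1, 10, 100, 1000):
    avg_best = sum(best_of(n) for _ in range(200)) / 200
    print(f"{n:5d} attempts -> average best score {avg_best:.2f}")
```

The single-attempt average hovers near zero, while the best of a thousand attempts lands well out in the right tail, which is the statistical argument for running a contest rather than relying on one team.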
So we did this, and we didn’t get just one extreme value, we got 20. Twenty people far exceeded the benchmark, and their code was an improvement of multiple orders of magnitude over the code that the researcher himself had written. He’s no slouch: he has a PhD from Oxford and was an undergraduate at MIT. We also improved on what the NIH had done in the same space.
MATHEW I 07:43
Orders of magnitude improvement.
KARIM L 07:43
That’s right. I keep trying to break their system, and we haven’t had any breaks yet.
MIKE L 07:55
We’ve certainly had to be more accommodating than we were expecting for the Harvard-level experimentation, but what Karim and his team have helped us with is understanding the theory behind what motivates individuals to participate in such a project, in competitive crowdsourcing. There’s no guarantee of getting paid to participate in these competitions.
MATHEW I 08:20
So the rewards are exclusively financial, or is it a point system, or does reputation play a part in it?
MIKE L 08:26
Not exclusively. Financial rewards play a big part, but the rewards also involve social standing. What TopCoder does is make sure we’re measuring and providing statistics about every aspect of every competition and making those statistics available on people’s profile pages so they can prove their skill.
We have competitions for software architecture, algorithms, data analytics, bug fixing, bug finding, lots of different types of competitions. These statistics play a big role in motivating people to participate in these competitions. As well as T-Shirts. We found out that T-Shirts are hugely motivating.
MATHEW I 09:05
People love T-Shirts! Have you found any sort of qualitative difference between what’s produced based on monetary rewards versus what’s produced for things like reputation and social standing?
KARIM L 09:24
I think the key finding is that people have heterogeneous motivations. There are a lot of things that motivate people. Someone might be highly motivated by monetary rewards but not so much by community rewards, or vice versa, so you need both.
What’s great about this is that participation is based on self-selection. Somebody shows up, sees an interesting problem and an interesting reward structure, and decides to participate. Mike isn’t going out there trying to find the people and asking them to participate. As soon as you go into a self-selection model, the organizers don’t need to worry about motivation. The cash just serves as a way to transfer the intellectual property and make the transaction happen, but people are self-motivated. We find that the self-selection process is a big driver of performance, as big as cash rewards.
MATHEW I 10:22
If you’re a company and you have a huge data problem of some kind, then, like Harvard, you probably think the people inside your company are pretty smart. What is the benefit of using this model as opposed to just crunching away at it yourself?
KARIM L 10:31
I think about this whole big data explosion in three ways. One is that we need managers. There’s a McKinsey report out on big data that said there will be 180,000 missing programmers but 1.5 million missing managers who know how to ask the right questions. So the first thing is having the ability to ask the right questions.
The second is having the right people, and the question of what is the right approach. Often what happens in most organizations is that they match the approach to the people they have, and that is often the wrong approach. What the competition model forces you to do is ask the questions first, because whatever question you ask you’ll get a response to. It forces the managers to think about what it is that they want an answer for. It also assumes that because we have large numbers of people participating, somebody somewhere will have the right combination of skill and motivation to come up with the right approach to crack open the problem.
In this genomics challenge that we ran, we discovered that behind all these great solutions were 89 different approaches to solving the problem, which is amazing. Once we exhaustively read through all the code submissions, we found that the NIH had one perspective, my medical school colleague had another perspective, and then we discovered another 87 perspectives on how to solve it. That’s what these mechanisms put into play.
MATHEW I 12:10
So the potential is that there’s a better process, solution, or way of looking at the problem that you literally aren’t aware of until you open it up.
KARIM L 12:16
Absolutely. Finding those better ways: if you’re working with a team of ten people on, say, a data analytics problem, then it’s statistically very unlikely that those are the best ten people to be working on it, considering the entire talent pool of the world. So using a competition, incentivizing the crowd, to use a cliché, to work on that problem increases the likelihood of having the best people working on it. As you increase that likelihood, you increase the quality level.
MATHEW I 12:54
But I imagine one of the problems is just what you said about Harvard, getting past the feeling that we’re smart, and this is our problem, so why should we let some random dude sign up to try and solve our problem?
KARIM L 13:02
I think that’s the biggest challenge. What’s happened is that there’s been a decade’s worth of platform development. What TopCoder has done, other platforms have done as well in terms of creating these crowdsourcing approaches. The opportunity now is to somehow say that these platforms are a complement to our innovation efforts.
MATHEW I 13:29
Not necessarily a replacement.
KARIM L 13:29
Exactly. It’s a real complement, and that’s what we’re finding with our work with NASA. They’re rocket scientists; I’m working with these really smart people and I feel like I’m on the other side of the distribution.
MATHEW I 13:50
So when they say “It’s not rocket science”, they know what that means.
KARIM L 13:52
Exactly. But what’s been interesting for them is that these platforms actually help them ask better questions, and help them get answers that they can then implement. It complements their work instead of substituting it.
MATHEW I 14:07
I was actually thinking when you were describing the researcher and his genomics problem, just breaking that problem down into understandable chunks might actually help you understand it in a different way.
KARIM L 14:16
Absolutely, because I think often what happens is that from a problem solving point of view people just dive right into finding the solution, and they do that on the basis of what they already know and the skills they already have. But if you step back and work on the abstraction, what the problem is and how you’ll know when it’s solved, that changes the game completely.
MATHEW I 14:36
And even asking, like you said, what questions to ask can lead you in different directions.
MIKE L 14:40
Especially when you have hundreds of people asking you questions as well. You can learn a lot about what you’re trying to present as a challenge: are you asking the right question, are you working on the right problem?
MATHEW I 14:50
So if you were advising a company that wanted to try this outsourcing, crowdsourcing, obviously you’d tell them to use TopCoder. But let’s say they wanted to do it themselves in some way. What are some tips or things you’ve learned, either from the research you’ve worked on or just from watching the community and how it does that sort of thing, ways of managing it? Sometimes, particularly in the open source movement, which I know you’ve looked at a lot in your research, it can be a kind of chaotic thing, and managing a lot of remote people is a difficult problem in its own right. Any tips you can share on how to do that?
MIKE L 15:33
I guess the biggest factor, in my opinion, in getting a community to participate in this kind of exercise of competitive crowdsourcing is integrity. The community has to believe that you have their best interests in mind, they have to believe that you’re capable of executing a competition that’s as objective as possible, and that you’re not going to have major problems. If they’re going to spend their time doing this and play by the rules, then if they win, they’re going to win.
MATHEW I 16:10
So there’s a level of trust there.
MIKE L 16:10
That’s right, and TopCoder has spent an awful lot of time building that trust level, to the point where we can leverage the community to participate in these contests. What I would say is: don’t do it yourself, use a platform. They’ve built the infrastructure, they know how to transfer IP from around the world, and there’s not just one platform, there are several out there. They’ve got the people on board, all the systems are in place. This is more about thinking about your work from a problem perspective, thinking about the problems you need to be solving, and then deciding whether it should be done internally or externally.
MATHEW I 16:56
Or you could do both.
MATHEW I 16:59
If anyone has any questions… yes. Do you want to go to the mic? It should be right behind you there. All lit up and everything.
AUDIENCE MEMBER 17:07
These models have been around for quite some time. In fact InnoCentive, which started at P&G and Eli Lilly, this R&D, call it competition or crowdsourcing, has not really worked. I mean the company is not going anywhere, because if you repeatedly give these challenges with some amount of monetary reward, after all it’s not a hardware challenge or an [inaudible] challenge. Under what circumstances do you think such high-degree research problems would work, and where does this whole model fit, given that an average company, or not even an average company, an Eli Lilly, is not able to make InnoCentive really flourish?
KARIM L 17:53
That’s a great question. So first, these competition models have been around since the 1500s, so there’s nothing new in the contest model. I would question your stance that they haven’t worked. They have worked, but the issue becomes the internal resistance by scientists to taking these external ideas and implementing them. So the question is a bit more about the cultural shock that happens. Imagine going to Lilly, the same as what we have at Harvard, and telling them that some kid in Estonia is going to solve their problem for them; the sheer resistance to that notion is unbelievable. So it’s not as if they don’t get solutions, they get solutions, but the question is one of implementation, of putting them into place.
One of the reasons I think that TopCoder has done well is that you can create objective metrics around performance. In information technology problems, in software and data, you can see how fast it is, how good it is, how well it solves the problem, which may be more difficult in a chemical engineering or organic chemistry problem. What we’ve seen, and why the open source model has done so well and also why the competition models have done so well is because they have been in software, where we can create objective metrics and we can do the side-by-side comparison, and there’s nothing to debate about.
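The objective, side-by-side comparison Karim describes is straightforward in software. A toy illustration (both functions are hypothetical, not entries from the actual contests): two submissions solving the same task can be ranked purely on correctness and measured speed, leaving nothing to debate.

```python
import timeit

def baseline(data):
    """Reference entry: quadratic pairwise duplicate check."""
    return any(data[i] == data[j]
               for i in range(len(data))
               for j in range(i + 1, len(data)))

def submission(data):
    """A competitor's entry: linear-time duplicate check via a set."""
    return len(set(data)) != len(data)

# Same inputs, same metric: wall-clock time over five runs.
data = list(range(2000))  # worst case for the quadratic version
for fn in (baseline, submission):
    t = timeit.timeit(lambda: fn(data), number=5)
    print(f"{fn.__name__:10s} {t:.4f}s")
```

Because both entries are measured the same way on the same data, the ranking is mechanical, which is harder to replicate for, say, a wet-lab chemistry result.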
AUDIENCE MEMBER 19:20
Is it possible that monetary rewards can hurt these kinds of competitions?
KARIM L 19:23
No, I think that’s a myth that I want to disabuse people of. What you should be thinking of is that none of us work for free. Monetary rewards work for us as well, and so it’s more that there are heterogeneous motivations and we want to account for them, and the big thing is self-selection. People are self-selecting for these things. It’s not the monetary aspect at all. Some people may be motivated by money, but it could also be other things as well.
MATHEW I 19:57
Any other questions…? OK, I don’t think 24 seconds is enough to ask any other questions. So thank you very much for coming, both of you. Please give our panelists a round of applause.