[protected-iframe id="46cd8fd2b466fa423cbdd9406385d474-14960843-25766478" info="http://new.livestream.com/accounts/74987/events/2497095/videos/34028184/player?autoPlay=false&height=360&mute=false&width=640" width="640" height="360" frameborder="0" scrolling="no"]
Session Name: Interaction Design and Robotics
Video, Willow Garage
[chuckles] I thought an alarm was going off there for a second. Two things you should know before moving on to the next speaker. One, the only thing that would have made that talk better is if Jack had done a Vine of himself. And the other is, you should compliment Om on his socks, because they are the new GigaOM blue. Always matching, Om.
So we’re going to bring out our next speaker, Ms. Leila Takayama. She is going to be talking about interaction design and robotics. Please welcome Leila out to the stage.
Thanks for the intro. I’m going to be talking to you today about robots, but I myself am not a robot; I am a social scientist. So we’re going to be talking about the fuzzy things: how we get people to interact more smoothly with robots, and how we get robots to learn some social skills and some manners. As a research scientist, my job is to understand how we can design these systems to be more human-friendly. Not necessarily more human-like, although some people do have that goal, but my goal is really to figure out how to make them live in our world and become part of the human world, as opposed to forcing us into a robotic world.
At Willow Garage we made a robot called the PR2, which stands for Personal Robot 2. The fancy way of describing it is as a mobile manipulation platform, and what that really means is it’s a human-sized robot that rolls around and grabs stuff in human environments. This is one vision of the personal robotic future; it’s sort of like Rosie the robot from The Jetsons. One thing that’s really important to know is that these robots cannot be built by any one group alone. One thing that I really like here at this conference is this theme of different disciplines working together, and just to make one single robot, you need mechanical engineers, electrical engineers, software engineers, and then you throw these messy humans into the mix [chuckles] and you need all of us social scientists, designers, entrepreneurs, and people who understand the larger ecosystem around the technology.
Something that we did at Willow Garage in order to build that community out and to make this personal robotics vision a reality was to invite other universities and companies to play. So the PR2 was actually made as an open-source platform, meaning that these groups could borrow the hardware for free – and it was a $500,000 piece of hardware – as long as they contributed software to the open-source community, as long as they played nicely with others and helped others use their code, because robotics is too complex for any single person to solve on their own. And so we decided we’re going to try it the open-source way [music]. And because of that work, we were able to build quite a few awesome things, and of course none of these were done just at Willow Garage. A lot of these were done in collaboration with many of those other universities.
So this is what our robots look like. They would roll around the office doing things like dancing and picking up our dirty dishes, which was really great. Understanding something like, ‘Just where is that cup, so that I can put it in the dish tray?’ is actually a really hard technical challenge. Our robots can navigate around the space without smashing those coffee mugs and figure out where they are in the local environment. All of these things require people doing perception research, navigation research, localization research – it takes lots and lots of technical chops just to do simple things like plugging yourself in. This towel-folding demo was done at UC Berkeley by some of our colleagues in Pieter Abbeel’s group, and what’s really cool about it is not that the robot is folding towels; it’s that this was a computer vision lab. They don’t know how to do manipulation, and they don’t really need to know how to do robotics, but because they’re part of that open-source community, they could do this really cool task just by picking up other people’s software packages and putting together this really cool demo.
One thing that I have learned from living with these robots for about 4 1/2 years is that robots are actually everywhere already. When you go to an ATM and you get cash out of that machine, you’re interacting with a robot. When you use the automatic braking system in your car, you may think that you’re the one stopping your car, but actually your car’s automatic braking system is figuring out, ‘The user wants me to stop. Okay, maybe I’m going to stop in this way so that the brakes don’t lock up.’ So you’re just suggesting to your car that maybe it should stop now.
And finally, we have these new machine learning algorithms that are doing things like helping us maintain the temperatures in our homes in green and friendly ways. And of course, there are lots of other ‘robot buddy’ sorts of products. But I would just like to say that some of you in this audience who think that you’re not working on robotics might actually be using bits and pieces from robotics, and I think there are some useful lessons that we can take from the robotics research area in order to understand physical product design.
Here’s one of those stories. In our lab, this robot was trying to learn how to open a door, and this is what it looks like when a robot’s opening a door: it sits there for very, very long periods of time, staring at the door [laughter]. Which is cool, it’s a hard problem, but the problem for me was that my office was over there and the coffee machine was over here, so I often had to run in front of the robot. If you’re a software engineer working on this problem and your robot has spent a while trying to see that door and find that door handle, it really sucks when a human runs in front and messes it all up, so they’d be like, “Get out of the way of the point cloud. You’ve messed up our data.” And I’d say, “How am I supposed to know when this thing is thinking? I can’t even tell when it’s on versus charging versus looking and thinking really hard, so how can we do this better?”
It turns out psychologists are not good at figuring that out, but the people who are good at this are character animators. So we enlisted the help of Pixar character animator Doug Dooley, and this is how he thinks we could do it better. If the robot just shows a little bit of forethought [laughter] before opening that door, it helps to acknowledge that a person is nearby, so that person can feel a little bit safer.
Another thing robots can do is tell whether they’ve succeeded or failed at a task. Most robots today fail at opening the door and just don’t care, but what if those robots showed a little bit of remorse about having failed at those tasks? What happens when there’s a little bit of shame? [laughter] It turns out that when your robots do little tiny things like this, showing a little bit of behavior and a little bit of what they’re thinking, people actually think that they’re much smarter, and they’re more willing to deal with them in their environments.
Another thing that’s really, really tempting to do with robots is to say, “Our robots are awesome. They are smart, they are going to be just like people!” And in reality, they are not. This is what happens when you tell people that robots are smarter than they really are: they believe you, and then they let expensive hardware walk off the edge of the table. This actually broke off that robot’s leg and ear. But instead, you can tell people, “It’s a robot. It can sense a few things and sort of interact with people, but it’s not so great.”
Video, Willow Garage
I definitely want to see if it can fall off the table.
You get people testing their hypotheses about what the robots can and can’t do. She actually puts her hand below the table…
Video, Willow Garage 07:04
Yep, it definitely can.
And she saved him. So instead of breaking that robot, this one survived. Because we set her expectations slightly lower, she was actually ready to help that robot complete its task.
Finally, one more thing that’s really interesting about robots, and different from our current graphical user interfaces, is that they’re physical, and you can actually take advantage of that and leverage it in your interaction design. So it’s nice that robots can fold laundry, but if you were really going to have them in your house, most people are very particular about the way their clothes are folded. If you have seen that YouTube video of Japanese T-shirt folding, where you do it in one smooth motion – this is Maya actually teaching the PR2 how to do that one smooth motion, just by moving the arms around and saving some waypoints. So she just taught the robot, and now he’s going to do it by himself, which is pretty cool, right? Programming by demonstration is a way for end-users to get robots to do the things they want, in their way, as opposed to clicking on a button that says ‘Fold T-shirt’ and just crossing your fingers and hoping it does it right, because chances are it won’t.
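The record-and-replay idea behind that demonstration can be sketched in a few lines. This is a minimal illustration only, not Willow Garage’s actual software: the `Arm` class and its method names are hypothetical stand-ins for a real robot interface (on the PR2 this would go through ROS controllers), and a real system would interpolate smoothly between waypoints rather than jumping.

```python
# Sketch of programming by demonstration via saved waypoints.
# The Arm class is a hypothetical stand-in for a real, compliant
# robot arm that a person can physically move around.

class Arm:
    def __init__(self):
        self.joint_angles = [0.0] * 7  # a 7-DOF arm, like each PR2 arm

    def move_to(self, angles):
        # A real controller would smoothly interpolate; here we just snap.
        self.joint_angles = list(angles)

def record_waypoint(arm, demo):
    """While the user guides the arm, snapshot its current pose."""
    demo.append(list(arm.joint_angles))

def replay(arm, demo):
    """Play the taught motion back by revisiting each saved waypoint."""
    for waypoint in demo:
        arm.move_to(waypoint)

# Teaching phase: the user moves the arm through a few poses and saves each one.
arm = Arm()
demo = []
for pose in ([0.1] * 7, [0.4] * 7, [0.2] * 7):
    arm.move_to(pose)          # stands in for the human guiding the arm
    record_waypoint(arm, demo)

# Playback phase: the robot repeats the motion on its own.
replay(arm, demo)
print(len(demo), arm.joint_angles[0])  # → 3 0.2
```

The point of the design is that the user never writes code: the demonstration itself is the program, which is what makes it accessible to end-users who just want their shirts folded their way.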
So, a few lessons from here, right? If we use little tricks like these and involve more people from the arts and from design to help us figure out how to make robots more expressive, I think we can go a really, really long way toward making them more human-friendly. Also, setting expectations is a tricky thing to do, but it’s particularly important with a loaded space like robotics, so we don’t want to set those expectations too high. And finally, I think there are some really interesting interaction techniques that we’re going to be able to use with these more physical products, as opposed to purely digital products, that we can leverage and steal from robotics for our physical product designs.
Finally, the last lesson I want to push here is that a lot of people are sort of waiting for this robot apocalypse. They’re just waiting for robots to suddenly become just like us, and I would argue that that’s probably not a good strategy. You shouldn’t hold your breath and wait for that, and here’s why. The robots that we build can do things by themselves, but they can also be tele-operated, meaning a human can operate them, and that can actually be powerful in and of itself.
This is a guy named Henry Evans. Henry had a brain stem stroke about 10 years ago that left him both quadriplegic and mute, but he actually saw the PR2 on CNN one day and decided, “I’m going to make that thing my body.” So this is Henry using a head tracker to operate the PR2 to do things that he can’t currently do with his own body. A lot of people have speculated about, ‘If I had a disability, this is what I would do with robots and technology,’ and we totally guessed wrong about what Henry would want to do. The first thing he wanted was to scratch his own face, because it turns out your face itches about 80 times a day, and most of us just scratch it and it goes away. But when you have to wait for someone else to come and help you with that itch, it takes away your sense of independence. So being able to use a robot to do it yourself – not to have a robot do it for you, but to do it yourself – can be really powerful.
The second thing he wanted to do was shave himself because he doesn’t like the way that Jane, his wife, shaves. She’s a little too gentle. He wanted a closer shave. So this is Charlie in the background, a professor at Georgia Tech who works with us, and he’s scared to death that Henry’s going to cut himself [laughter], which is why he’s making that face, but Henry was totally fine and he got that nice close shave.
So I would just say, this is just one example, but don’t wait around for robotics to become this awesome thing. I would say take advantage of the bits and pieces that work right now and see what we can do with them to get stuff done in the here and now. Don’t wait for that future. That future is today.
One example of how we’ve done this at Willow Garage is a really cool project that borrowed some bits of robotics on the weekend. This is a project done by Dallas Goecker and Curt Meyers at Willow. This is what Dallas used to look like. Dallas was our coworker in Indiana; we’re in an office in California, so he was a voice in a box on a table. And that’s kind of okay, except if you really want to be involved in that conversation, it’s hard: someone can mute you, they can hang up on you, they can just ignore you and walk into another room. So we put him on Skype on a laptop on a cart, and he pushed himself around, but that also wasn’t so awesome, because you could still just leave him out and he’d have to say, “Hey, turn my camera a little to the left, I can’t see the projector.”
So one day he stole some body parts from the PR2 and came to work looking like this [chuckles]. It may look familiar to any of you who watch The Big Bang Theory, because they actually use this robot on their show as Shelbot. What we discovered with these systems, which at first seemed kind of geeky, was that Dallas actually became a real person, a real coworker, when we worked with him using this system, because he could just come into our office and chit-chat like normal people do. He became part of the team. He could heckle us when we were playing pool in the pool room, and that was really powerful. So we fielded a bunch of these in a bunch of companies around the Bay Area, and it became such a powerful tool for so many other people, not just those of us who like robots, that it spun out as a separate company, now called Suitable Technologies.
Something that I’ve learned from working at a place like this is that you really need the community to make this happen. So right now, these are all the places that own PR2s, and they share their code across companies and universities in order to make this broader vision happen, of getting these sorts of systems to be a reality in the real commercial world. But you’ll notice a lot of these are universities and very large companies. The people we’re missing right now are the entrepreneurs, the designers, the people who understand products for the sake of being products, not robots for the sake of being robots. I’d say that robotics right now is in the age of the mainframes, right? The robots are way too expensive for a single individual to own, they need to be operated by a team of experts just to do something really simple that’s useful, and they need to be in very constrained environments. What we really want is to get past that, and to get to the place where these robotic technologies are out in the real world, we need real product designers, real entrepreneurs, people from the fuzzy side to make this happen. So I’d like to invite you all to join.
Something that Willow has done to try to seed that space is to spin out five start-ups and three nonprofit organizations. If you go out and hang out during the coffee break, you’ll actually see one of our spin-outs, called Unbounded Robotics, and they’ve just released their brand-new robot, so go over and say hi to them. With that, I’d like to say thanks. [applause]
What, one more thing?
Just the Google X thing.
Oh, okay. We can turn your mic back up.
Yeah, can I get my mic back up?
Can we get Leila’s mic back up?
Awesome, thanks. Just one more thing I wanted to mention while I’m here: I’m formerly from Willow Garage because I’m actually now at Google X, and our team from Google X is here recruiting. So if you see Mac, Simon, or Hayes around, please say hi if you’re interested in doing user experience research and design on technology moonshots. Thanks [applause].
Awesome, thank you. All right, we’re at the morning break, and true story: if you find Katie Fehrenbacher and ask her to do the robot, she will perform it for you [chuckles]. So during the break, please hit up the Honeywell workshop. It begins at 10:40, in the screening room on level 2. Stop by the GigaOM Research table and find out all the cool stuff they’re doing. Enjoy some refreshments and snacks. The general session will resume at 11:25. Thanks, everybody.