This is the AI Minute, brought to you by Gigaom. I'm Byron Reese.
One of the use cases for intelligent robots, for artificial intelligence, is to build caregiving units for the elderly. One can imagine that over time, these robots would learn to read the facial expressions of the people they're taking care of. They may learn to emote along with the stories they're being told, and to say, "That is a very sad story," or "What a funny story." They'll be programmed, no doubt, to listen to the same story repeatedly and to show interest in it each time. They will laugh at the jokes they are told, and in turn will learn what kind of humor to tell back.
Now, the people who interact with them will know, at one level, that it's just a robot. But on the other hand, are we on dangerous ground? Put another way: is the robot learning to work better for the person, or is the robot learning to manipulate the person? How we answer that question will go a long way toward determining how we build these systems.