Gigaom brings you our unique analysis and commentary on the present and future of AI.
Many of the ideas we have about artificial intelligence come to us from science fiction. One of them is the 1982 film Blade Runner, based on Philip K. Dick's novel "Do Androids Dream of Electric Sheep?" In that world there are replicants: robots with artificial intelligence. But they are always somewhat awkward, struggling to interact with humans in complex emotional situations, because the replicants are quite young; their life spans are in the single-digit years.
The breakthrough the Tyrell Corporation comes up with is to build replicants implanted with synthetic memories, memories drawn from other people's lives. Doing this gives the replicants a history to draw on and makes them increasingly indistinguishable from humans. To pull this trick off, it's important that the replicants themselves don't know they are replicants.
How realistic is this scenario? Will we train our own artificial intelligences on the past actions of others? If intelligence really does need to be embodied, if it needs to be experienced in order to grow, will this become a convenient shortcut that simplifies programming by putting the experiences directly into the machines? While it may sound like science fiction, this particular approach to training artificial intelligence is probably much closer than you might think.