One thing to be aware of is that when we build artificial intelligence, it's easy to accidentally encode our own biases, because in the end we are the ones training the system and telling it what's right and wrong, as it were, as it sorts through things. To illustrate, imagine somebody with red-green color blindness, who can't see the difference between red and green, training a system to recognize colors. They would feed it objects one by one and say: this one's green, this one's green, this one's green, this one's green, labeling both the red objects and the green ones. What they will end up with, of course, is a color-blind artificial intelligence, one with the exact same limitation as the human who trained it. In short, we're creating systems that are in essence better at certain tasks than we are, but we're creating them, as it were, in our own image.
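The point above can be made concrete with a toy sketch. Everything here is hypothetical: a made-up color-blind labeling function and a minimal nearest-neighbor classifier, just to show how a labeler's limitation passes straight through into the trained model.

```python
def colorblind_label(rgb):
    """A hypothetical red-green color-blind labeler: it can pick out
    blue, but calls both red and green objects 'green'."""
    r, g, b = rgb
    if b > r and b > g:
        return "blue"
    return "green"  # red objects get mislabeled as green

def nearest_neighbor(query, training):
    """Classify a color by the label of its closest training example (1-NN)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda ex: dist(query, ex[0]))[1]

# The color-blind trainer labels the training set.
samples = [(230, 40, 40), (40, 220, 40), (200, 60, 30),
           (30, 200, 80), (40, 40, 230)]
training = [(rgb, colorblind_label(rgb)) for rgb in samples]

# Ask the trained model about a pure red object:
print(nearest_neighbor((255, 0, 0), training))  # prints "green"
```

The model itself has no defect: it faithfully learned the labels it was given. The blindness lives entirely in the training data, which is exactly the danger the paragraph describes.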