One thing to be aware of is that when we build artificial intelligence, it's pretty easy for us to accidentally encode our own biases. In the end, we are the ones training the system, telling it right from wrong, as it were, as it tries to sort through things. To illustrate this, imagine somebody with blue-green color blindness who couldn't see the difference between blue and green. They would feed examples of objects with various colors through the AI and say: this one's green, this one's green, this one's green, this one's green, covering both the blue and the green objects. What they will end up doing, of course, is making a color-blind artificial intelligence, one with the same exact limitation as the human who trained it. In short, we're creating systems that are, in essence, better able to do things than we are, but we're creating those systems, as it were, in our own image.
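The dynamic described above can be sketched in a few lines of Python. This is a hypothetical illustration, not any particular system: a "color-blind" labeler tags both blue and green swatches as "green", and a trivial nearest-centroid classifier trained on those labels inherits the same blind spot.

```python
import random

random.seed(0)

def true_label(rgb):
    """Ground truth the labeler cannot perceive: the dominant color channel."""
    r, g, b = rgb
    if r >= g and r >= b:
        return "red"
    return "green" if g >= b else "blue"

def colorblind_label(rgb):
    """The blue-green color-blind trainer calls both hues 'green'."""
    lab = true_label(rgb)
    return "green" if lab == "blue" else lab

def random_swatch():
    return tuple(random.randint(0, 255) for _ in range(3))

# Training set labeled by the color-blind trainer: blue never appears as a label.
training = [(sw, colorblind_label(sw)) for sw in (random_swatch() for _ in range(300))]

# Nearest-centroid classifier: average the swatches under each label.
groups = {}
for sw, label in training:
    groups.setdefault(label, []).append(sw)
centroids = {
    label: tuple(sum(ch) / len(sws) for ch in zip(*sws))
    for label, sws in groups.items()
}

def classify(rgb):
    """Return the label whose centroid is closest to the given color."""
    return min(
        centroids,
        key=lambda lab: sum((a - b) ** 2 for a, b in zip(centroids[lab], rgb)),
    )

# The model distinguishes red from green, but a pure blue swatch is still
# called "green": it has inherited the trainer's blue-green blindness.
print(classify((255, 0, 0)))  # → red
print(classify((0, 0, 255)))  # → green
```

Nothing in the classifier itself is broken; the bias lives entirely in the training labels, which is exactly why it is so easy to pass our own limitations on without noticing.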