Yann LeCun, the deep learning expert and recently appointed director of artificial intelligence research at Facebook, held an Ask Me Anything session on Reddit last week. He went deep into the methodologies of AI and deep learning, discussed the best academic training for excelling in the field, and even touched on how to deal with the ethical issues that will arise from the advent of advanced AI. The most interesting exchange, however, might have been about the role of emotions in AI systems.
Essentially, LeCun argued that a system like the one popularized in the recent film Her is nowhere near the realm of possibility right now, precisely because of its deep understanding of human emotions, but also that understanding emotions is critical to truly useful systems. “Science fiction often depicts AI systems as devoid of emotions,” he wrote, “but I don’t think real AI is possible without emotions.”
“Emotions are often the result of predicting a likely outcome. For example, fear comes when we are predicting that something bad (or unknown) is going to happen to us. Love is an emotion that evolution built into us because we are social animals and we need to reproduce and take care of each other. Future AI systems that interact with humans will have to have these emotions too.”
Later on, in response to a follow-up question on this topic, LeCun elaborated:
“If emotions are anticipations of outcome (like fear is the anticipation of impending disasters or elation is the anticipation of pleasure), or if emotions are drives to satisfy basic ground rules for survival (like hunger, desire to reproduce….), then intelligent agent[s] will have to have emotions.
“If we want AI to be “social” with us, they will need to have a basic desire to like us, to interact with us, and to keep us happy. We won’t want to interact with sociopathic robots (they might be dangerous too).”
Ultimately, he concluded with a reference to Isaac Asimov’s I, Robot, suggesting that if we’re looking to avoid machines that will make irrational decisions, those capable of higher-level reasoning might be better than those operating based on hard-wired rules and behaviors:
“When your emotions conflict with your conscious mind and drive your decisions, you deem the decisions “irrational”.
“Similarly, when the “human values” encoded into our robots and AI agents will conflict with their reasoning, they may interpret their decision as irrational. But this apparently irrational decision would be the consequence of hard-wired behavior taking over high-level reasoning.”
The whole AMA session is good reading for anyone interested in artificial intelligence, whether technically, ethically, or just for the sheer sci-fi goodness of it.