The following is an excerpt from GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.
The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”
One of those deep questions of our time:
Is an artificial general intelligence, or AGI, even possible? Most people working in the field of AI are convinced that an AGI is possible, though they disagree about when it will happen. In this excerpt from The Fourth Age, Byron Reese considers it an open question and explores both sides of the question—is it possible, and are we on the right track to make it happen?
How would we go about building an artificial general intelligence? Simply put, we don’t know. AGI doesn’t exist; nor does anything close to it. Most people in the industry believe it is possible to build an AGI. Of the seventy or so guests I have hosted on my AI podcast, Voices in AI, I can only recall six or seven who believed that it is impossible to build one. However, predictions as to when we will see it vary widely.
And while we set a pretty low bar for narrow AI, to earn the title of “AGI,” the aspiring technology would have to exhibit the entire range of the various types of intelligence that humans have, such as social and emotional intelligence, the ability to ponder the past and the future, as well as creativity and true originality. The historian Jacques Barzun said we would know we had it “when a computer makes an ironic answer.” He might have added, “or is offended at being called artificial.”
Is an AGI really something different than just better narrow AI? Could we, for instance, get an AGI by bolting together more and more narrow AIs until we had covered the entire realm of human experience, thus creating an AI that is at least as smart and versatile as a person? Could we make, in effect, an AGI Frankenstein’s monster? Could we build a robot that vacuums rugs, another that picks stocks, yet another that drives a car, and ten thousand more, and then connect them all to solve the entire realm of human problems? Theoretically, you could code such an abomination, but unfortunately, this is not a path to an AGI, or even something like it. Being intelligent is not about being able to do 10,000 different things. Intelligence is about combining those 10,000 things in new configurations, or using the knowledge from one of them to do a new task, number 10,001.
At one level, the very idea of us building an AGI seems a bit preposterous compared with our current experiences with narrow AI. Narrow AI is still at a point where we are pleasantly surprised when it works. It has no volition. It can’t teach itself something that it hasn’t been programmed to do. But an AGI is a completely different thing. It’s like comparing a zombie with Einstein. Yeah, they are both bipeds, but the zombie’s skill set is quite narrow whereas Einstein can learn new things easily. A zombie isn’t going to enroll in night school or learn macramé. It just wanders around moaning “Brains! Brains!” all day. That’s what we have today, AI zombies. The question is whether we can build an AGI Einstein. If we did build one, how would we regard it? What would we think it is?
At this point in our narrative, the AGI isn’t conscious. Because it is not conscious, it cannot experience the world and it cannot suffer. So an AGI in and of itself would not cause an existential crisis, a deep reflection about what makes humans special. But it would prompt us to ask two questions: “Is the AGI alive?” and “What are humans for?”
With regard to the first question, whether an AGI is alive, the answer is not obvious. Consciousness is not a prerequisite for life. In fact, an incredibly low percentage of living things are conscious. A tree is alive, as is a cell in your body, but we don’t generally regard them as conscious.
So what makes something alive? What is life? We don’t have a consensus definition for what life is. Not even close. We don’t even have one for death. And although there isn’t agreement on what constitutes life, a wide range of properties have been offered. An AGI would likely exhibit many of them, including the capacity for growth and the ability to reproduce, pass traits on to offspring, respond to stimuli, maintain homeostasis, and exhibit continual change preceding death. However, there are two attributes of life the AGI would not have: being made of cells and breathing. One has to ask whether these latter two are simply “things all life on earth share” as opposed to “things definitionally required for life.” I suspect we would have no trouble recognizing a nonbreathing, non-cell-based alien who could converse with us as being alive, so why would we insist on those qualities for the AGI?
Those are just the scientific requirements for life—what about the metaphysical ones? Again, we can find little consensus here. Philosophical thought hasn’t invested an enormous amount of time in examining the edge cases of life, such as viruses, which even scientists can’t agree on. Were the bacteria recently revived after millions of years of stasis always alive? Or were they resurrected from death?
That the definition of life has been a controversial topic for literally thousands of years suggests that we will not arrive at a species-wide consensus any time soon. This being the case, we can safely conclude that there will be a variety of opinions on the question of whether the AGI is alive, and our interactions with the AGI may be made uncomfortable by this ambiguity. Those who answered our foundational question about what they are as “machine,” as well as those who see themselves as monists, may very well regard the AGI as alive, while others may not make that determination, or may waver, in good conscience, uncomfortably on the fence.
The second question, “What are humans for?” is concisely framed by Kevin Kelly, the founding editor of Wired:
We’ll spend the next decade—indeed, perhaps the next century—in a permanent identity crisis, constantly asking ourselves what humans are for. . . . The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.
For the last several thousand years, humans have maintained our preeminent place on this planet for only one reason: we’re the smartest thing around. We aren’t the biggest, fastest, strongest, longest-lived, or just about any other “-est.” But we are the smartest, and we have used those smarts to become the undisputed masters and rulers of the planet. What’s going to happen if we become the second-smartest thing on the planet? And not just second, but second by an embarrassingly large margin? If the machines can think better and the robots can manipulate the physical world better, what is our job? I suspect we will fall back on consciousness. We experience the world; machines can only measure it. We enjoy the world. Combine that with mortality and the preciousness of life, and you get something that is meaningfully human.

This idea is captured in the Zen story of the tigers and the strawberry. In that story, a man is chased by a tiger, and to save himself, he jumps over a cliff, grabs a vine, and hangs there. Above him, the tiger waits. Below him circles another tiger. At the same time, a mouse comes out and starts chewing on the vine he is holding on to. But at that exact moment, the man spies a strawberry plant growing on the side of the mountain. He picks the strawberry and eats it, and never has anything tasted so good to him in all his life. That moment, that combination of consciousness and mortality, might be what we use to define us. We are the tasters of the strawberry, able to appreciate it because we hang at the moment between life and death.
To read more of GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.