The following is an excerpt from GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.
The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”
One of those deep questions of our time:
If a computer is sentient, then it can feel pain. If it is conscious, then it is self-aware. Just as we have human rights and animal rights, as we explore building conscious computers, must we also consider the concept of robot rights? In this excerpt from The Fourth Age, Byron Reese considers the ethical implications of the development of conscious computers.
A conscious computer would be, by virtually any definition, alive. It is hard to imagine something that is conscious but not living. I can’t conceive that we could consider a blade of grass to be living, and still classify an entity that is self-aware and self-conscious as nonliving. The only exception would be a definition of life that required it to be organic, but this would be somewhat arbitrary in that it has nothing to do with the thing’s innate characteristics, but merely with its composition.
Of course, we might have difficulty relating to this alien life-form. A machine’s consciousness may be so ethereal as to just be a vague awareness that occasionally emerges for a second. Or it could be intense, operating at such speed that it is unfathomable to us. What if by accessing the Internet and all the devices attached to it, the conscious machine experiences everything constantly? Just imagine if it saw through every camera, all at once, and perceived the whole of our existence. How could we even relate to such an entity, or it to us? Or if it could relate to us, would it see us as fellow machines? If so, it follows that it may have no more moral qualm about turning us off than we have about scrapping an old laptop. Or, it might look on us with horror as we scrap our old laptops.
Would this new life-form have rights? Well, that is a complicated question that hinges on where you think rights come from. Let’s consider that.
Nietzsche is always a good place to start. He believed you have only the rights you can take. We humans have the rights we have because we can enforce them. Cows cannot be said to have the right to life because, well, humans eat them. Computers would have the rights they could seize, and they may be able to seize all they want. It may not be us deciding to give them rights, but them claiming a set of rights without any input from us.
A second theory of rights is that they are created by consensus. Americans have the right of free speech because we as a nation have collectively decided to grant that right and enforce it. In this view, rights can exist only to the extent that we can enforce them. What rights might we decide to give to computers that are within our ability to enforce? It could be life, liberty, and self-determination. One can easily imagine a computer bill of rights.
Another theory of rights holds that at least some of them are inalienable. They exist whether or not we acknowledge them, because they are based on neither force nor consensus. The American Declaration of Independence says that life, liberty, and the pursuit of happiness are inalienable. Incidentally, inalienable rights are so fundamental that you cannot renounce them. They are inseparable from you. You cannot sell or give someone the right to kill you, because life is an inalienable right. This view holds that rights’ inalienable character comes from an external source, whether God, nature, or something fundamental to being human. If this is the case, then we don’t decide whether the computer has rights; we discern whether it does. The answer is up to neither the computer nor us.
The computer rights movement will no doubt mirror the animal rights movement, which has adopted a strategy of incrementalism, a series of small advances towards a larger goal. If this is the case, then there may not be a watershed moment where suddenly computers are acknowledged to have fundamental rights—unless, of course, a conscious computer has the power to demand them.
Would a conscious computer be a moral agent? That is, would it have the capacity to know right from wrong, and therefore be held accountable for its actions? This question is difficult, because one can conceive of a self-aware entity that does not understand our concept of morality. We don’t believe that the dog that goes wild and starts biting everyone is acting immorally, because the dog is not a moral agent. Yet we might still put the dog down. A conscious computer doing something we regard as immoral is a difficult concept to start with, and one wonders if we would unplug or attempt to rehabilitate the conscious computer if it engages in moral turpitude. If the conscious computer is a moral agent, then we will begin changing the vocabulary we use when describing machines. Suddenly, they can be noble, coarse, enlightened, virtuous, spiritual, depraved, or evil.
Would a conscious machine be considered by some to have a soul? Certainly. Many people believe animals have souls, and some believe trees do as well.
In all of this, it is likely that we will not have collective consensus as a species on many of these issues, or if we do, it will be a long time in coming, far longer than it will take to create the technology itself. Which finally brings us to the question “can computers become conscious?”