Anyone having an in-depth conversation about artificial intelligence quickly realizes that the nature of the words we use to describe it is problematic. In fact, I would go so far as to say that our language in its rigid structure is an actual impediment to understanding the technology.
Right now, the words we use have no clear definitions. Consider a word like “reason”: it was always acceptable for it to carry a somewhat vague meaning, because the only things we ever contemplated applying it to were humans, or maybe animals.
But what is intelligence? What is understanding? What does it mean to see something? What does it mean to know something? What is thinking? What is perceiving? What is experiencing? Can a computer intuit? Can it be creative? Can it be insightful? Can it feel? Can a machine realize something, can it conclude something?
We hardly agree on how to describe consciousness, and when it comes to the idea of life and death, there’s even less agreement, nor an understanding of how we would apply these terms to machines. Because all of these terms evolved in a universe where they only had to apply to organic biological organisms, they are understandably ill-suited to grappling with questions like digital life or machine consciousness.
We find ourselves equipped with linguistic square pegs in a round-holed universe. And it is no wonder that things get muddled as we plow through. It is as if we are now grappling with questions like, “What does seven want?” or “How does blue taste?” In the case of computers, a question like “Can machines think?” would seem equally nonsensical, except for the simple fact that it no longer is.
There is no path out of this, at least not in the short term. Our view of reality is shaped by the words we have at our disposal. Machines can either think or they can’t. We don’t have a word like “flabe” or “vimvom” to convey some new mechanical form of thinking, an intermediate reality. And so encumbered, we must still push forward.