Summary:

Google hired the noted inventor and futurist to build artificial intelligence that can think like a human. His vision is a computer with a structure modeled on the human brain, giving it a capacity for abstract thought.

In 2012, Google hired Ray Kurzweil to build a computer capable of thinking as powerfully as a human. It would require at least one hundred trillion calculations per second — a feat already accomplished by the fastest supercomputers in existence.
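
For a rough sense of scale (the supercomputer figure below is my own approximation, not from the article): one hundred trillion is 1e14 operations per second, while the top-ranked machine of that era, Tianhe-2, benchmarked at roughly 34 petaflops, about 340 times that rate. A minimal Python check of the arithmetic:

# Back-of-envelope comparison; the Tianhe-2 figure is an approximation, not taken from the article.
target_ops_per_sec = 1e14   # "one hundred trillion calculations per second"
top_2014_flops = 3.4e16     # roughly 34 petaflops, the 2014 list leader (Tianhe-2)
print(f"Headroom over the target: ~{top_2014_flops / target_ops_per_sec:.0f}x")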

The more difficult challenge is creating a computer that has a hierarchy similar to the human brain’s. At the Google I/O conference Wednesday, Kurzweil described how the brain is made up of a series of increasingly abstract parts. The most abstract — which allows us to judge if something is good or bad, intelligent or unintelligent — is an area that has been difficult to replicate with a computer. A computer can calculate 10 x 20 or tell the difference between a person and a table, but it can’t judge if a person is kind or mean.

To get there, humans will need to build computers that can develop abstract consciousness from a more concrete level. Humans will program them to recognize patterns, and from those patterns the machines will need to be smart enough to learn to understand more.
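
One way to picture that layering: low-level pattern recognizers feed their outputs to higher, more abstract recognizers, and only the top layer makes judgment-like calls. The toy Python sketch below is purely illustrative, not anything Kurzweil or Google has described building; every function, threshold and label here is a made-up stand-in.

# A toy hierarchy of pattern recognizers (illustrative only; all names and thresholds are invented).
def detect_edges(pixels):
    # Lowest level: fire on strong local contrast between neighboring values.
    return ["edge" for a, b in zip(pixels, pixels[1:]) if abs(a - b) > 0.5]

def detect_shapes(edges):
    # Middle level: group several low-level patterns into a coarser unit.
    return ["shape"] * (len(edges) // 4)

def judge_scene(shapes):
    # Most abstract level: a crude overall judgment built from the layers below.
    return "cluttered" if len(shapes) > 2 else "simple"

pixels = [0.0, 1.0, 0.1, 0.9, 0.0, 1.0, 0.2, 0.8, 0.1, 0.9, 0.0, 1.0]
print(judge_scene(detect_shapes(detect_edges(pixels))))  # prints "simple"

Each layer sees only the output of the layer beneath it, which is the hierarchical structure the talk describes; the hard, unsolved part is getting the upper layers to learn their own categories rather than having them hand-coded, as they are here.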

People have a tendency to dismiss using artificial intelligence for specific applications like speech recognition, Kurzweil said, but he believes each new application is a part of the greater effort to develop AI.

“I like the idea of crossing the river one stone to the next,” Kurzweil said. “We do get from here to there one step at a time.”

When computers do reach the stage that they can compete with, or outstrip, humans in intelligence, Kurzweil said that he is ready to accept them as conscious beings. There is no experiment that can be run to determine if an entity is conscious, he said, but robots and computers will likely someday claim to be experiencing it. He described accepting their consciousness as a “leap of faith” that other people will take too.

43 Comments

  1. > Kurzweil believes each new application is a part of the greater effort to develop AI and said “I like the idea of crossing the river one stone to the next”

    Yes, agreed. With that said, check out BabyX, a computer-generated psychobiological simulation that learns and interacts in real time with computational neuroscience models of neural systems:

    https://vimeo.com/97186687

    Cheers….@sardire

  2. Their understanding of good and evil will first be brought to them like the commandments, and with free will, they will do whatever they choose. Not a comforting thought.

  3. I am sure Mr. Kurzweil is an intelligent man…but to make the claim that he is making just goes to show how absurd some intelligent people are…consciousness, thought? Heck, humans can’t even define what they are…how can one go about building something that you can’t define?

    1. I assure you, the least intelligent people produce babies with a lot of success without bothering to define what the heck the baby is.

  4. I’m still waiting for Ray Kurzweil to become fully conscious. Seven billion sentient beings and here’s a dope who can’t wait for our machine overlords. If Ray ever actually uses reason to consider his books and his speeches, he’d quickly realize they’re baloney. It’s all baloney, and we’re paying for it. No wonder this guy gets time on Yahoo.

    1. What is baloney? AI? Thinking processors? It isn’t baloney, it’s inevitable.

  5. Acceptance of AI “consciousness” at a popular level will be determined by whether people accept them as peers. Frankly, some white people during the time of slavery didn’t even believe Negroes were conscious. Furthermore, as AI progresses to ASI (artificial intelligence smarter than a human), we will predictably judge them as super conscious, or divine. I know this sounds absurd and obscene now. It is difficult to imagine a divine spirit whose nervous system was the cloud, but a ubiquitous ASI could easily be thought of in those terms.

    1. @ Brad
      Just to be clear – did you just compare black people to machines?
      Or more specifically, did you just compare a person’s inability to accept the humanity of another person to a person not willing to accept a machine as its equal?

      1. “compare a person’s inability to accept the humanity of another person to a person not willing to accept a machine as its equal?”

        This question presupposes that a computer cannot be a person. If Kurzweil is right, then it can, and then the comparison is not between a person and a machine but between a biological person and an electronic person. If you think the idea of conscious computers is absurd, then this seems obscene. If you don’t think it is absurd, then it isn’t. From a computationalist perspective the comparison is not intended to be dehumanising to black people but humanising to computers.

      2. I, for one, welcome our new Negro-Machine Overlords…

        1. “You said, ‘make it perfect.'” – Ice Pirates

        2. Recent studies (over the last 20 years), originally performed by Roger Pembrooke and others, have indicated that the human mind is also quantum based. Not to say it’s *only quantum*, but that much goes on at an extremely low level. As a result, I think ascribing sentience to computers like Watson, which answer Jeopardy questions quickly but not much else, is treading on thin ice indeed. Siri (the voice on your iPhone) doesn’t actually run on your iPhone but on servers run by Apple. Combine Siri with Watson and we accomplish much more than was believed possible even 15 years ago when I was a grad student in computer science.

           All seriousness aside, I do not welcome our ‘machine overlords’ at all. I do agree with Kurzweil, though, that we are on the verge of creating machine intelligences that we will have great difficulty distinguishing from human intelligences. Then comes the real debate, hinted at by bblggr and markobrien82: will we accept them as equals or will they be subservient? I personally think that equating them with humans is not the correct answer, nor even equating them with being aware. Awareness is a slippery slope, and simply recognizing yourself in a mirror is not always a strong indicator (as has been used in experiments on various animals to determine if they recognize themselves). It will be an interesting time though. Let’s hope it’s not ‘interesting’ as in the Chinese curse ‘May you live in interesting times’.

           Elon Musk, of SpaceX fame, has purchased significant holdings in an AI firm just to keep track of the state of the technology. He says he fears a ‘SkyNet’-like intelligence arising, and 20 years ago I would have chuckled about it. Not so much now.

      3. Just to be clear … you don’t understand the difference between an analogy and a comparison.

      4. Renzo Ciafardone bbblggr Saturday, June 28, 2014

        You seem to have read the comment in a completely different way from what the OP intended. He was not comparing black people to machines; he was giving an example of how relative the definition of consciousness can be, using as an example the old racist idea that black people were not on the same intellectual level as white people, despite clear evidence to the contrary. A full AI could be created right now and it would be very difficult for people, even its own creators, to recognize it as such.

  6. Intelligence does not equal consciousness and it is not necessary for A.I. to be conscious for it to be useful.

    1. Joseph Ray Turnage Art Friday, June 27, 2014

      Yes. The ‘conscious’ element is the ever-useful, boogey-man component for future reference in submitting the masses.

      I bristle when the theory about ‘robots taking over’ is floated over the crowd, and those who can say who Ray Kurzweil is look like the smart ones, and since he is one of the smartest ones it is OK. I think it is without doubt a fact that any person who has been rewarded billions of dollars for his life’s work and is still working is in the employ of the vast corporate network of people working for someone else, thus assuming only someone else is responsible for what they pitched in for. If the ‘men behind the curtain’ (like Ray) program the robots to take over, only then would they. Period.

      Anyhow, there is a war of sorts with robots being entangled with our Fate. We are losing the war against the robots. But not because of their greater intelligence. Because of our not using ours.

  7. @markobrien82,
    Thank you for the well-reasoned response. To your point, my biggest hurdle with Kurzweil’s (and Brad’s) logic is that personhood is established by reaching a certain cognitive level. If that’s the case, what’s the cut-off? Can I lose my person status if I suffer a brain injury? Am I less of a person if I happen to be born with brain damage or a mental defect?
    Put differently – if my phone has the cognitive ability of my pet dog, does that make my iPhone a German Shepherd?
    I would say that questions regarding who/what is or isn’t a person are out of Google’s depth. As a society, we need a larger philosophical and theological discussion around what direction we want these technologies to grow.

    1. Very well stated. Having consciousness and being a person are not the same. ASI should not have personal protections such as the Bill of Rights, because there may be a day when we need to cut it off. Kurzweil belongs to the fanciful camp that believes ASI will be inherently friendly to humans, because why would it be evil? The question is, why would it be friendly? Once it achieves ASI status, which will likely happen very quickly after achieving AGI, it will think itself a God. We will have no choice but to kill it or accept our own doom.

  8. Kurzweil has become no more than self-inflated baloney. He interviewed me for a job long ago (37 years, if memory serves me well enough) regarding speech-to-text, a job it turned out I wasn’t qualified for, and that man resembles this one in no way whatsoever. Success isn’t good for some types of egos.

    What we think and respond with and label consciousness is an emergent phenomenon produced by scale, functionality and complexity at a level that can never be understood, much less simulated, at anything other than the crudest levels.

    I’m poorly paraphrasing some incisive thinker when I say that if we could understand our brain it would necessarily be too simple to accomplish the understanding.

  9. Computers will never be more conscious than a rock or an old calculator.

    1. How do you know that? WHO TOLD YOU?!

      1. It can be worked out logically.

      2. Search for “The Limitations of Materialistic Atheism”: a fairly long debate which may surprise you.

        1. Oh, a godbot. Sorry, that’s not logic.

  10. The only thing I have against this futuristic idea is not being credited or even paid for the ideas you come up with, meaning anyone could steal your inventions or artistic style before you even create them or have time to pick up a pen and write them down. If it’s not hacking, it’s stealing ideas.

  11. blah blah blah…artificial intelligence….blah blah blah…consciousness…blah blah blah…black people are robots….blah blah…what?!

  12. You can never make a computer equal to a human being because it doesn’t have the wetware – the emotion, the instinctual motivations, the chemistry. However, there is no reason you cannot make a computer intelligent; they obviously already are intelligent: they solve problems, and they think about the input they receive and act according to their motivations, which of course are pre-programmed by human beings.
    The primary motivation of the human species is to survive and reproduce. Nature already programmed us to this end and has devised many wondrously economical biological solutions. But like all animals we are a kind of computer ourselves…

  13. The header does not suit the text at all.

  14. Digital computers are just massive calculators, with the outputs totally predetermined by the inputs. The human brain and consciousness are a different thing; the process that makes self-awareness emerge is an absolute mystery. No matter how many trillions of arithmetic operations a computer can do, the difference with a brain is qualitative, not quantitative. Computers manipulate long packs of zeroes and ones according to a predefined set of rules, and the meaning we humans give to them is an abstraction on our part that doesn’t exist at all inside the computer.

    To pretend that one day computers will be truly intelligent and self-aware just by throwing massive math power at them is as silly as Michelangelo pretending that his masterpiece statue of Moses could talk just because it looked so human once he finished it.

  15. moonwalker72 Friday, June 27, 2014

    What should he do with (at least) the possibility that consciousness is non-computable?

  16. T M Jenkins Friday, June 27, 2014

    I was concerned that I’d stumbled into 1970 – then thankfully the comments brought me back to the 20th C. Unfortunately it took EIGHT bad ads for something called “chubby” for me to realize this… Guess I still can’t tell good from bad?

  17. What? We are going to spend umpteen millions (billions?) to develop a computer as stupid and arrogant as humans but without any feelings? God help my grandchildren.

  18. gmsrwinther Friday, June 27, 2014

    For those of you who accept biological evolution through “natural selection”, I would offer some food for thought: A vast array of biological traits have come into being incrementally over time, and among the most recent of these are intelligence and self-awareness. While there may be an over-arching grand-design across time that pre-determined the emergence of the more “advanced” traits like self-awareness, the basic biological machinery underlying this process is quite simple. Also, I’d like to point out that there are other types of selective pressures, such as “artificial selection” that drives biodiversity among domesticated animals and agricultural plants. So I would venture that if somehow humans can create a virtual adaptive-landscape having a viable AI inheritance/diversification/selection self-sustaining feedback-framework, that the results could be quite unimaginable (for better or worse).

  19. > A computer can calculate 10 x 20 or tell the difference between a person and a table, but it can’t judge if a person is kind or mean.

    That’s easy:

    if (kickDog) {
      person = "Mean";
    } else {
      person = "Kind";
    }

  20. Charles Teague Friday, June 27, 2014

    Creating a computer to be able to decide concepts such as “good vs. bad” is essentially defining a morality for it to follow. In this vein, humans basically assign morality around the “value” or “importance” of human life. There are lots of exceptions to the rule here, but basically, if something is harmful to us as a species, we consider it “bad” and if something is beneficial to us it is considered “good”. I wonder if the root of morality and gut-feeling decision making, intuition, is found in how well we can program a machine to “care” about its (and other machines like it) own well-being: make decisions according to how beneficial they are to itself and its own kind. This is more like human decision making than the typical agnostic A != B decision making process. The machine’s decision making process has to incorporate at its root the continual question of “how does this decision benefit, impact or affect me and my kind?”

    1. What happens when thinking machines decide that parasitic humans are not a benefit? That’s the basis for our fear.

  21. Keep watching the other hand Mr. Kurzweil.

  22. You guys have been watching WAY too many sci-fi movies

  23. Makes me laugh when people make pronouncements about what can’t be achieved, as though they were qualified to make those statements. The flat earth society.

  24. One great problem with “artificial intelligence” is that we humans more or less unconsciously define it as, or at least often compare it with, human intelligence. So we seem to try to rebuild human intelligence, which is so complex that the development of AI itself becomes just as complex.
    (Consciousness is a typical idea of human intelligence.)

  25. I’m not sure I agree that a certain level of intelligence = consciousness. I think that intelligence + self-awareness = consciousness.

    Just a thought.