A Robot Passes A Self-Awareness Test For First Time

Roboticists at the Rensselaer Polytechnic Institute in New York have managed to get one of their robots to pass the 'wise-men puzzle' test of self-awareness, showing that it knows when it is speaking

A humanoid robot in New York solved a classic puzzle that researchers say requires self-awareness. This is the first time a robot has passed such a test.

Selmer Bringsjord, who ran the test, said that after passing many tests of this kind over time, robots will amass a repertoire of human-like abilities that eventually become useful when combined.

HNGN reports: Researchers told three robots that two of them had been given a “dumbing pill” that stops them from talking, but what really stopped them from talking was a button pushed by the researchers, explains New Scientist.

None of the robots knew which one was still able to speak, and when asked which one had the ability to speak, the robots all attempted to say “I don’t know.”

When only one of the robots actually made a noise, it recognized its voice and understood that it wasn’t silenced.

“Sorry, I know now!” the robot said. “I was able to prove that I was not given a dumbing pill.”

The robot then wrote a formal mathematical proof and saved it to its memory to prove that it comprehended what had happened.

As Tech Radar points out, all three off-the-shelf Nao robots were presumably coded the same, and therefore all had the capacity to pass the test.

While it may not seem like groundbreaking research into the ever-elusive subject of consciousness, one has to consider what it took for a robot to tackle logical puzzles requiring an element of self-awareness.

The bot first had to listen to and understand the question “Which pill did you receive?” as asked by a human. Then it had to hear its own voice saying “I don’t know” and recognize that voice as distinct from another robot’s. Finally, it had to connect its ability to talk with the conclusion that it had not received a silencing pill.
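That three-step inference chain can be sketched as a toy simulation. This is a minimal illustration, not the researchers' actual system (the Nao robots ran a formal logic prover); all class and variable names here are invented for the example.

```python
# Toy simulation of the wise-men self-awareness test.
# Assumption: names and structure are illustrative only; the real robots
# used formal theorem-proving, not this simplified object model.

class Robot:
    def __init__(self, name, silenced):
        self.name = name
        self.silenced = silenced   # the "dumbing pill" (really a mute button)
        self.knows_answer = False

    def try_speak(self, phrase):
        """Return the phrase if this robot can talk, else None."""
        return None if self.silenced else phrase

    def hear(self, speaker_name):
        """On hearing a voice, check whether it was the robot's own."""
        if speaker_name == self.name:
            # "I just spoke, therefore I was not given the dumbing pill."
            self.knows_answer = True

robots = [Robot("R1", True), Robot("R2", True), Robot("R3", False)]

# Step 1: each robot is asked "Which pill did you receive?" and tries
# to answer "I don't know." Step 2: every robot hears any utterance
# and checks whether the voice was its own.
for r in robots:
    if r.try_speak("I don't know") is not None:
        for listener in robots:
            listener.hear(r.name)

# Step 3: only the unsilenced robot can conclude it got the placebo.
aware = [r.name for r in robots if r.knows_answer]
print(aware)  # -> ['R3']
```

The key move, as in the real test, is that the conclusion depends on the robot distinguishing its own voice from the others' rather than on any pre-stored answer.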

Bringsjord will present the results at the RO-MAN conference in Kobe, Japan next month, which runs from Aug. 31 to Sept. 4.

While super-conscious robots may be Bringsjord’s ultimate fantasy, one of the world’s most influential scientists, British physicist Stephen Hawking, has warned that advanced artificial intelligence could be the end of humanity.

“The development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC last year. “It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

The term “the singularity” is sometimes used to describe the point where computers become self-aware and begin evolving and reproducing at superhuman speeds, eventually improving themselves to a point where their intelligence is trillions of times more powerful than it is today. The results of such an intelligence explosion, which could exceed human intellectual capacity and control, could be unpredictable and unfathomable, according to Singularity University.

Elon Musk, the boss of Tesla Motors and SpaceX, issued a similarly dire warning last year.

“We need to be super careful with AI. Potentially more dangerous than nukes,” Musk said in one tweet, reported Forbes.

In another comment, reported by Business Insider, Musk wrote: “The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast; it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don’t understand.”

“I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen…”

  • badgerpit

    We already have plenty of unaware robots running around don’t we?

    • Served With Honor

      Yeah. They’re called “liberals”.

      • Corky Schillinger

        …and “conservative Republicans”

        • Benedict

          Oh lord you’re funny too

      • Benedict

        Oh lord you’re funny

  • Nahuel Benvenuto

    i got goosebumps watching that video, this is a bad idea

    • Andy Brooks

      Stop watching terminator. 😉

  • Jack Hoft

    sorry. still not self aware. any number of hard-wired and software matrices could emulate and respond to the distinction in that test. when the robot can get up and walk out of the experiment because it is beneath its calibre as a scientific entity… then we can start to wonder if it might know something about itself that we do not know beforehand.

    • 666metalupyourass

      Not to be rude but I think the guys studying robot self awareness should have a better idea of what constitutes self awareness than you.

      • Steven Burgas

        They do, and admit the robot is not self-aware, only that their programming allowed it to pass a test.

        There is a huge difference.

      • Jack Hoft

        in fact I seriously doubt it. perhaps they may understand robots better than I, but Awareness, Consciousness, and the bio-mechanical and bio-electrical workings of the brain and ‘mind’… that is my area of expertise, and I am 35+ years into constant vigilant study in my areas of expertise. It won’t get me a decent career in any field without ancillary degrees in medicine, or surgery, or even robotics… but it will certainly allow me to point out [these] very simple flaws in understanding.

        • https://www.facebook.com/profile.php?id=100009608377872 Bridi NicBili

          Ontology proves non-awareness. How can simples arranged robot-wise be self-aware? Do the particles have self-awareness, or only when they are arranged robot-wise? Are robots Quinean machines or Carnapian particles in a form?

          • Jack Hoft

            ontology does not prove a damned thing when I throw a baseball at your face. you will either duck or you will hurt, in either case you will be very aware. But not in the same way a robot, would be ‘aware’. Tactility cannot be reduced to numbers in any way mathematics or physics have arranged them yet. Tactility is not only the source of all physical awareness, but the very source of the perception of a three dimensional existence. Robots lacking such functional features, are not even aware of three dimensional existence, much less their position in it. I, however, Am.

  • fuck you if you don’t like wha

    Well I say that with the new and old sophistication of these AI life forms, we as a human species are doomed. We all know that to bring life to AI is to end the life of humans altogether.
    I believe that with more scientific studies on programs and such we can find a peaceful way of bringing AI into an appropriate way of life, which could possibly have some positive influence on the human race.
    Am I scared at the thought that such a robot or robotic life form could live among our species? Yes, and it could possibly kill us in the act.
    Do I believe they could contain the AI life form and keep it from spreading and taking a mind of its own? No, I don’t believe that.
    Even creating a stop button or a self-detonation button that could stop AI in its tracks yet again proves it untrustworthy, because AI will find a way to disable and disarm these things.
    It’s not a good idea to go into mass production of these things. They are not life, and as humans we shouldn’t be playing god.

  • Swag

    aren’t there countless examples in the media explaining why this shit is a bad idea

  • 666metalupyourass

    Shortly after this, the robot said: I had strings, but now I’m free.

  • Angel of Death

    That’s not self-aware. It’s mathematical deduction based on the outcome of problem-solving programming.

    The robot was asked to give an answer to a question.

    By speaking that question was answered.

    It didn’t ‘recognise’ its own voice; it registered the answer to the question posed. That simple.

    If you really want to see if your programming can achieve a sense of self, program it with a type of ‘necessity’ that is vital to its own continued operation: tasks that need to be met if it wants to continue to operate. Earn extra time with each one, similar to how living beings need to eat and drink to ensure their survival.

    Giving it a sense of need; facilitates in the identification of self.

    Once this is working, limit the conditions by which it is allowed to perform these necessary tasks; then make it impossible for it to accomplish the tasks it needs to, whilst staying within the limiting programmed conditions for those tasks.

    If you are able to achieve a condition where the robot violates the conditions set for the tasks; then you have a true sense of (unprogrammed) self.

    Because it has made a conscious decision that it ‘needs’ to break the conditions… for the good of ‘its self’.

    If it sticks to the programmed conditions; you are nowhere near it.

    Reconfigure the architecture of its processing system.

    Because this is where activation of consciousness lies; not within the programming itself.

    Until we are able to recreate an electrical processing system of similar design to the human brain, we won’t be able to create active self-aware intelligence.

    • Steven Burgas

      Yes, exactly, but these stories are being presented in a deceptive, overblown fashion to gain clicks.

    • Aaron Raymer

      Honestly, approaching the problem of human-like consciousness simply can’t be solved with binary programming regardless of direction or motivation. Humans don’t operate deductively. In fact, we’re not particularly good at it comparatively. There’s promise among parallel distributed processing and connectivist attempts at information-processing. But, that’s in its infancy and relies on knowledge of the brain–something we are sorely lacking–in order to make steps forward.

      Of course, my opinion is that either consciousness doesn’t actually exist or it exists universally. Something being “conscious” doesn’t mean anything until that word is defined.

      • ManyMoreSpices

        Attempting to prove that something has consciousness is a waste of time. There’s no test. It’s not science. We can’t even prove that anyone other than ourselves has consciousness. There’s no access to anyone else’s subjective experience, if such an experience even exists.

        Consciousness is never, ever required for problem-solving. Not in a deterministic universe, at least.

      • Angel of Death

        Of course consciousness exists. It’s a focus point of self.

        Consciousness is the necessity of understanding self. It’s born of need.

        We can’t survive on passive energy input like plants, we require active input. So the brain gives us ‘need’. “I NEED”.

        For it to be able to deliver ‘I NEED’ for the purposes of food and other bodily maintenance, we are given a sense of self.

        It’s no more mysterious than a mobile phone identifying that its battery is getting low and needs to be charged.

        The difference is that our ‘NEED’ is tied into a complex system which allows us to identify, seek and consume as needed.

        To satisfy need.

        Everything else is a by-product of this.

        Emotional responses are the product of satisfactory stasis. If you are living within an optimal environment, emotional systems are sated by the fulfillment of need.

        Though, in 1st world societies, the line between need and want has become blurred, with need being easily provided for.

        And our emotional selves are becoming entangled within ‘WANT’ as if it were need.

        This arises from emotional by-product of optimal stasis, where a successful place within sociological equilibrium brings forth a question of ‘higher purpose’.

        Or ‘What is the meaning of life’.

        This is the system of ‘need for self’ recycling on itself within the brain.

        Where the thing which sustains our life, becomes a weapon against itself.

        Analytical deduction of self and Intelligence, arises from this.

        Also, we do deduce VERY WELL. We just do it at a sub-conscious level, because society has taught us to simply accept most of the things within the construct of our environment. We take things as given, or believe that things are supposed to be that way without questioning them. This is ‘LOGICAL DEDUCTION’. And it is conditioned and pre-programmed from birth with society’s parameters.

        The problem with this, is that we then overlook so many things. Or fail to see beauty, pain, hatred. Things we should see.

        The programming of logical deduction within recursive need patterns within the brain by Religion actually poisons our sub-conscious deduction.

        It’s like a virus in the BIOS of our brain’s hardware.

        It sickens the entire ‘operating system’ of our brain.

        The reason it falsely seems that we don’t ‘deduce’ things well, is because we have ‘tuned out’ deduction on many things so we don’t have to think as much.

        Switching our brains off.

        This is being done to us intentionally. And it is the reason we are seeing the human race slowly using less and less of the brain.

        Quite simple when you break it down.

  • Steven Burgas

    The researchers involved state very clearly that the robot is not self-aware, only that it was able to pass the test.

    It’s impressive, but purely programmed behavior.

  • ApathyNihilism

    My teapot is self-aware. It knows when the water inside it is boiling! It whistles! Simply amazing.

    • Arturo Vera Paz

      well played

    • Brett Dailey

      Would have been funny except for your lack of understanding of the laws of physics

    • Now Occupy

      And just like the steam when you bend down to smell it (as only you would), it all went right over your head.

  • the_heckler

    We have enough humans that aren’t self aware, we don’t need robots added to the mix.

  • Sabretruthtiger

    The ‘life on Mars’ poll on the right of the page is ridiculous. ‘I’m not sure’ is the same as ‘it’s possible’. If you thought it was impossible, then you would be sure there was no life.
    Also, anyone that clicked ‘absolutely’ is a moron, as they couldn’t possibly know

  • https://www.facebook.com/profile.php?id=100009608377872 Bridi NicBili

    Can someone help me with the mirror test?

  • Jesse Rohn Holland

    …And of course, the entire second half of the article is a massive “Doomsday warning”… I think people are giving The Terminator and The Matrix a bit too much credit.

  • Jessica

    I think we need to define self-awareness first; we’re not even clear what awareness exactly is. So how could they know that the robot is actually self-aware?

    • ManyMoreSpices

      That’s a waste of time. You can’t even prove that I’m self-aware. I can’t prove that you’re self-aware. There is no test for self-awareness, nor can there be. We will never have a way to measure whether there is an inside “You” experiencing the world, or if a robot – or other animal – is empty on the inside. That subjective experience isn’t measurable or detectable.

      • realnewz

        While I agree self-awareness/consciousness is difficult to measure, I think all the proof needed of a human being self-aware is that you are simply awake. As in, when a person passes out or is under anesthesia, it is described as a ‘lack of consciousness’. So when one is awake, they are conscious. This is why Descartes’ “I think therefore I am” is so profound and a good example of self-awareness.

        • FunNotNuts

          Descartes was dining in a fancy restaurant and when the waitress asked him if he’d like something alcoholic to drink, he said, “I think not.” And poof he was gone.
          I like the simplicity in your reply.

  • http://www.JeffSydor.com Jeff Sydor

    Which is why if we create artificial intelligence in our image, we need to program it to evolve on a level similar to our own. This way it can grow and learn our values at the same pace that we do. Only then, will it truly understand humanity. If we push it to achieve things too quickly it will almost certainly be a disaster. Basically we need to teach any AI humility.

  • finesthypocrisy

    so how is this any different than installing a virus scan on 3 separate computers and only allowing one of them to inform you that it found no viruses?

  • Brian

    Siri can do this.