AI may already be ‘slightly conscious,’ according to some AI scientists.

You’re trapped in a room, and inside with you is a computer connected to another user outside. You strike up a conversation, seeking directions out of the room. The user seems intelligent, yet something makes you doubt whether it’s a real person at all; perhaps it’s a computer instead. Human or android, how could you tell the difference?

This is a classic example of the Turing Test — a test of a machine’s ability to exhibit intelligent behaviour indistinguishable from that of a human. According to some AI scientists, artificial intelligence may be slightly conscious, and machines may soon pass the Turing Test.
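The imitation game described above can be sketched as a toy program. Everything here — the canned answers, the `run_round` helper, the judge functions — is illustrative only, not a real test of intelligence:

```python
import random

def human(question):
    # Stand-in for the person outside the room.
    return "I would look for the door and try the handle."

def machine(question):
    # Toy chatbot with a canned answer -- no understanding involved.
    return "I would look for the door and try the handle."

def run_round(judge_guess, seed=None):
    """One round of the imitation game: two hidden respondents answer a
    question, and the judge guesses which label ("A" or "B") hides the
    machine. Returns True if the machine fooled the judge."""
    rng = random.Random(seed)
    respondents = [human, machine]
    rng.shuffle(respondents)                 # hide who is behind each label
    hidden = dict(zip("AB", respondents))
    answers = {label: r("How would you get out of a locked room?")
               for label, r in hidden.items()}
    guess = judge_guess(answers)             # judge picks "A" or "B"
    return hidden[guess] is not machine      # fooled if the guess is wrong
```

Because both respondents give identical answers here, any judge can do no better than a coin flip — which is exactly the condition under which the machine is said to pass.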

A Peek Inside An Artificial Mind

In a February 2022 tweet, Ilya Sutskever, co-founder and chief scientist of OpenAI, stated: “it may be that today’s large neural networks are slightly conscious.”

There’s a growing debate about when and how AI will become comparable to human intelligence. If that scenario sounds more like science fiction than reality, consider that, according to leading computer scientists, modern AI may already be displaying glimmers of consciousness.

Several AI scientists — including Sutskever and Tamay Besiroglu of the Massachusetts Institute of Technology (MIT) — are suggesting that some machine-learning models may have achieved a limited form of sentience. This is what is sparking the debate among AI scientists and neuroscientists.

Not everyone agrees. Professor Murray Shanahan of Imperial College London dismissed the idea with this analogy: “In the same sense that it may be that a large field of wheat is slightly pasta.”

Besiroglu tweeted in defense of Sutskever’s idea, arguing that the possibility shouldn’t be dismissed out of hand.

Artificial Mindset: Fake Thinking

A recent study that attempted to track the frontiers of machine learning over the past decade found a clear distinction between major advances in vision and in language. But before going further, try a pop quiz: define consciousness.

Defining consciousness is still a matter of debate among philosophers and neuroscientists. One rough working notion is whatever our brains are doing at any given moment — say, right now. In a broader sense, it also covers what we’re currently thinking about, including what we merely imagine.

OpenAI’s sophisticated text generator GPT-3 — the kind of model described as “maybe slightly conscious” — was compared with AlphaGo Zero, developed by Google’s DeepMind division. Besiroglu, a co-author of the study, cautioned against the trend of trying to classify which AI algorithms are capable of some form of consciousness.

“I don’t actually think we can draw a clear line between models that are ‘not conscious’ vs. ‘maybe slightly conscious,’” Besiroglu told Futurism. “I’m also not sure any of these models are conscious. That said, I do think the question could be a meaningful one that shouldn’t just be neglected.”

Is AI Getting ‘Slightly Conscious?’

OpenAI CEO Sam Altman also offered his thoughts about his company’s AI on Twitter.

The prospect of artificial consciousness, rather than simply artificial intelligence, raises ethical and practical questions: if machines achieve sentience, would it be ethically wrong to destroy them or turn them off when they malfunction or are no longer useful?

Social scientist Jacy Reese Anthis weighed in on Twitter as well.

Do you think AI will achieve consciousness?


Written by: Nana Kwadwo, Mon, Feb 14, 2022.


  1. It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

    The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at

