You’re trapped in a room with a computer connected to another user outside. You strike up a conversation, seeking directions out of the room. The user seems so intelligent that you begin to doubt whether it’s a real person; you suspect it’s a computer instead. Human or android, how would you tell the difference? This is the classic setup of the Turing Test: a measure of a machine’s ability to exhibit intelligent behaviour indistinguishable from that of a human. According to some AI scientists, artificial intelligence may already be slightly conscious, and the Turing Test might soon be passed.
Related media: Do Robots Deserve Rights? What if Machines Become Conscious?
A Peek Inside An Artificial Mind
In a now-famous tweet, Ilya Sutskever, co-founder and chief scientist of OpenAI, stated: “it may be that today’s large neural networks are slightly conscious.”
There’s a growing debate about when and how AI will become comparable to human intelligence. If you think that scenario sounds more like sci-fi than reality, consider that, according to some leading computer scientists, modern AI may be displaying glimmers of consciousness. Several AI researchers — including Sutskever and Tamay Besiroglu of the Massachusetts Institute of Technology (MIT) — have suggested that some machine learning systems may have achieved a limited form of sentience. This is what is sparking the debate among AI scientists and neuroscientists.
Not everyone agrees. Professor Murray Shanahan of Imperial College London responded with a skeptical analogy: “In the same sense that it may be that a large field of wheat is slightly pasta.”
Besiroglu tweeted in defense of Sutskever’s idea, arguing that such a possibility shouldn’t be underestimated.
Artificial Mindset: Fake Thinking
A recent study that attempted to track the frontiers of machine learning over the past decade found a clear distinction between major advances in vision and in language. Pop quiz: define consciousness. (Read this article). How to define consciousness is still debated among philosophers and neuroscientists. Loosely, we think of it as whatever our brain is doing at any given moment — say, right now. In a broader sense, it covers whatever we’re currently aware of or thinking about. In short, it’s your inner, subjective experience.
OpenAI’s sophisticated text generator GPT-3 — one of the models labelled “maybe slightly conscious” — was compared with AlphaGo Zero, developed by Google’s DeepMind division. Besiroglu, a co-author of the study, pushed back on the trending attempt to classify which AI algorithms are capable of some form of consciousness.
“I don’t actually think we can draw a clear line between models that are ‘not conscious’ vs. ‘maybe slightly conscious,’” Besiroglu told Futurism. “I’m also not sure any of these models are conscious. That said, I do think the question could be a meaningful one that shouldn’t just be neglected.”
Is AI Getting ‘Slightly Conscious?’
Offering his thoughts on his company’s AI, OpenAI CEO Sam Altman tweeted:
The prospect of artificial consciousness, rather than simply artificial intelligence, raises ethical and practical questions: if machines achieve sentience, would it be ethically wrong to destroy them or turn them off when they malfunction or are no longer useful?
Jacy Reese Anthis, co-founder of the Sentience Institute, also tweeted:
Do you think AI will achieve consciousness?
Read more facts like this one in your inbox. Sign up for our daily email here.
The Factionary is ever ready to provide you with more interesting content for your reading pleasure. If you’re amazed by our work, you can support us on Patreon with a donation fee of your choice. Thank you!
Written by: Nana Kwadwo, Mon, Feb 14, 2022.