Nineteen Researchers Affirm That AI Lacks Sentience, at Least for the Time Being
There’s a humorous anecdote about a daughter asking why her father speaks so softly at home. The father quips, “Because there’s artificial intelligence all around, listening to our conversations.” The daughter laughs, the father laughs, and even Alexa joins in with a chuckle.
From Vast Knowledge Accumulation to Human-Like Responses
Artificial intelligence is increasingly integrating into various aspects of our lives. As AI systems accumulate knowledge equivalent to millions of doctoral degrees and process vast amounts of data, they generate responses that sound as natural and human-like as your favorite college professor. This raises the question: Are computers becoming sentient?
A skeptic might respond, “Certainly not. Computers can solve complex problems rapidly, but they can’t experience emotions like love and pain, appreciate the beauty of the natural world, or even smell spilled coffee on a keyboard.”
However, some argue that we need to refine our understanding of sentience. Could there be varying levels of consciousness, potentially overlapping between humans, animals, and intelligent machines?
Nineteen Neuroscientists Delve into the Matter
Nineteen neuroscientists from the United States, England, Israel, Canada, France, and Australia have explored this issue in a report published on the preprint server arXiv on August 22.
In the past, a lead scientist at OpenAI suggested that advanced AI networks might possess a limited form of consciousness. Similarly, a Google scientist faced controversy for claiming that LaMDA, a precursor to the chatbot Bard, exhibited sentience.
Yet, after thorough examination of various consciousness theories, the authors of the report, titled “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness,” concluded that AI systems are not conscious, at least not currently. Nevertheless, they proposed approaches that future researchers should consider.
Current AI Status: Not Conscious, but Potential for Development Exists
“Our analysis suggests that no current AI systems are conscious,” said Patrick Butlin, a lead author of the report. “But it also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.”
The report distilled consciousness theories into six compelling indicators of conscious entities. One example is the Recurrent Processing Theory, which explains how the brain processes information through feedback loops, adapting to changing circumstances and making informed decisions. Such iterative behavior is critical for memory formation and knowledge acquisition.
Another key concept is the Higher Order Theory, which is often summarized as “awareness of being aware.” It emphasizes that for a mental state to be conscious, the subject must be aware of being in that state.
A third example is the Global Workspace Theory, which proposes that awareness arises when information becomes globally accessible across the brain, rather than remaining confined to individual sensory channels.
These proposed indicators provide a means to assess the likelihood of sentience in AI systems, according to Butlin.
“We are publishing this report in part because we take seriously the possibility that conscious AI systems could be built in the relatively near term—within the next few decades,” Butlin added. “These prospects raise profound moral and social questions.”
Read the original article on: Tech Xplore