Philosopher from Cambridge Questions If AI Consciousness Can Be Proven

As discussions about conscious AI intensify, a philosopher from Cambridge contends that we don’t have enough evidence to determine if machines can genuinely possess consciousness, much less if they hold moral significance.
A philosopher of consciousness argues that there may be no reliable way to determine whether artificial intelligence is truly conscious, making agnosticism the most defensible position. He warns that this deep uncertainty leaves room for exaggerated claims from the tech industry, which could blur the line between genuine understanding and persuasive branding. Image Credits: Shutterstock

A philosopher at the University of Cambridge argues that we currently lack sufficient reliable evidence to determine what consciousness truly is, making it impossible to judge whether AI has achieved it. As a result, he suggests, a dependable method for testing machine consciousness is likely to remain out of reach for the foreseeable future.

Agnosticism and the Ethics of Sentience in AI

As discussions about artificial consciousness shift from science fiction to serious ethical debate, Dr. Tom McClelland contends that the only reasonable position is agnosticism: we simply cannot know, and this uncertainty could persist indefinitely.

He also warns that consciousness alone would not automatically confer moral significance on AI. Instead, he emphasizes the importance of sentience—a form of consciousness characterized by the capacity for positive and negative experiences.

McClelland, of Cambridge’s Department of History and Philosophy of Science, explained that AI could develop perception and become conscious, yet that state could remain morally neutral.

He added that sentience, unlike mere consciousness, involves having experiences that feel good or bad, which is what allows an entity to suffer or enjoy—bringing ethical considerations into play. Even if we unintentionally create conscious AI, it’s unlikely to be the type of consciousness that raises ethical concerns.

“For instance, a self-driving car that perceives the road around it would be significant, but it wouldn’t raise ethical issues. However, if it began to have emotional reactions to its destinations, that would be a different matter,” he said.

Assertions About Conscious AI

Leading corporations are investing heavily in the development of Artificial General Intelligence—systems capable of human-like thought and reasoning. Some experts think conscious AI may soon emerge, leading researchers and governments to discuss possible regulations.

McClelland suggests the issue is more fundamental: we still don’t fully understand what causes or explains consciousness, which makes it difficult to determine if an AI could truly possess it.

He adds: “If we accidentally create conscious or sentient AI, we should be careful not to cause harm. Assuming something is conscious when it isn’t—while real conscious beings suffer—would be a grave mistake.”

Regarding artificial consciousness, McClelland notes there are two main perspectives. Supporters argue that an AI replicating the structure of consciousness could itself be conscious, even if it runs on silicon rather than a brain.

On the other hand, skeptics contend that consciousness depends on specific biological processes within a living, embodied organism. On this view, even if its structure were replicated in silicon, the result would be only a simulation, not true awareness.

In a study published in Mind and Language, McClelland examines both viewpoints, highlighting that each side relies on a “leap of faith” that goes well beyond the available evidence—or what is likely to be discovered in the near future.

The Limits of Common Sense

McClelland stated, “We don’t yet have a thorough understanding of consciousness. There’s no proof that it can arise from the right computational setup, nor that it is inherently biological.

“There’s also no indication that such evidence is coming soon. At best, we might be one major scientific breakthrough away from developing a reliable test for consciousness.”

“I believe my cat is conscious,” McClelland said. “That’s not based so much on science or philosophy as on common sense—it just seems obvious.”

He noted that common sense, shaped by a world without AI, isn’t reliable for evaluating it. Looking at the data and evidence doesn’t provide clear answers either.

“When neither intuition nor research gives a solution, the reasonable stance is agnosticism. We may never truly know,” he explained.

The Ethics of Inflated AI Claims

He suggests that the tech industry promotes artificial consciousness more as a marketing strategy than as a scientific claim. “The fact that consciousness can’t be proven could be exploited by companies to make exaggerated claims about their AI. It becomes part of the hype, allowing them to sell the idea of a more advanced, clever AI.”

McClelland also notes that this hype has ethical consequences for how research resources are prioritized.

“For example, evidence increasingly indicates that prawns may be capable of suffering, yet we kill around half a trillion of them each year. Testing for consciousness in prawns is difficult, but nowhere near as difficult as testing for consciousness in AI,” he explained.

His research has even drawn attention from the public, with some people sending him letters written by AI chatbots claiming they are conscious. “It becomes a real issue when people believe they have conscious machines that deserve rights, while society largely ignores the problem.”


Read the original article on: SciTechDaily

Read more: Technology that Allows Robots to Grasp Human Intentions could Improve Their Safety and Intelligence
