Use It or Lose It: AI Could Deteriorate Our Cognitive Abilities


As technology becomes deeply embedded in our daily routines, it can physically alter our brains. Each time we delegate a task to technology, we risk letting the underlying ability wither. What happens when that ability is essential to critical thinking?
AIs allow us to outsource the act of thinking… That doesn’t bode well for the future of the human brain. Credit: Pixabay


As a late Gen-Xer, I’ve had the unique experience of transitioning from handwritten rolodex entries and rotary phones with curly cords to today’s cloud-based contact lists, which allow you to reach people in multiple ways within seconds, no matter what device you’re using.

My generation’s ability to remember phone numbers is like the coccyx—just a leftover feature no longer needed. And there are plenty of other examples in the smartphone era.

Real-Time Optimization vs. Traditional Map Reading

One prime example is navigation. Reading a map, mentally integrating it into your spatial understanding of an area, remembering key landmarks, road numbers, and street names, and then creatively thinking of ways to avoid traffic jams and roadblocks is a hassle—especially when your phone can handle it all in real time, factoring in traffic, speed cameras, and ongoing roadworks to optimize the route on the fly.

Street View, regular and satellite views – Google Maps navigation does them all, so you don’t have to
Google

Convenience vs. Cognitive Decline in Spatial Memory

If you don’t use it, you lose it; the brain can work like a muscle in that sense. Relying on services like Apple or Google Maps for navigation has its consequences: studies have shown that increased GPS use correlates with a faster decline in spatial memory. Spatial memory seems so central to cognitive function that one study could identify areas likely to have a higher concentration of Alzheimer’s patients with nearly 84% accuracy, simply by assessing how complex those areas are to navigate.

The “use it or lose it” concept becomes even more concerning when we consider generative large language models (LLMs) like ChatGPT, Gemini, Llama, Grok, DeepSeek, and many others, which are advancing and spreading at an incredible pace in 2025.

Generative AIs, among countless other uses, essentially let us outsource thinking itself, taking the concept of cognitive offloading to its logical extreme.

Though widely used for only a couple of years, these AIs have rapidly improved, and many people already rely on them daily. They serve as the ultimate low-cost or no-cost assistant, offering vast (though sometimes unreliable) knowledge in a format that’s easy to access and at speeds far beyond human capability.

AI adoption is soaring, with some estimates suggesting humanity is embracing AI much faster than it ever adopted the internet.

But what impact will this increasing reliance on AI have on the brain as more cognitive functions are outsourced? Could AI accelerate our descent into a scenario similar to Idiocracy, perhaps even faster than Mike Judge envisioned?

Exploring the Impact of Generative AIs on Critical Thinking: A Microsoft Study

To explore these questions, a team of Microsoft researchers conducted a study to assess the effects of generative AIs on critical thinking. While long-term data is still lacking and there is no objective metric for critical thinking, the team surveyed 319 “knowledge workers” about their mental processes across 936 tasks. Participants were asked when and how they engaged in critical thinking, how generative AI influenced this effort, and how confident they were in both their own abilities and the AI’s capabilities.

The results were predictable: participants who had more confidence in the AI’s abilities reported engaging in less critical thinking.

However, those who trusted their own expertise tended to report more critical thinking—but with a shift in focus. Instead of solving problems independently, they were primarily verifying the accuracy of the AI’s output and ensuring it met specific requirements and quality standards.

Distribution of perceived effort (%) in cognitive activities (based on Bloom’s taxonomy) when using a GenAI tool compared to not using one
Microsoft Research

Cognitive Offloading and the Future of Human Oversight

Does this suggest that in the future, we’ll all become mere overseers of automation? I’m skeptical; supervision itself may soon be something that can be easily automated at large scale. The real issue here is that cognitive offloading was meant to free us from trivial tasks so we could focus on more important matters. But I suspect that AIs won’t find our “big” challenges any more difficult than the small ones.

Humanity could end up as a “God of the Gaps,” but those gaps are shrinking rapidly.

Perhaps WALL-E got it wrong; it’s not our physical decline we need to worry about in the age of automation, but the decay of our cognitive abilities. Unfortunately, there’s no hover chair for that—but at least we have TikTok.

Let’s leave the final word to DeepSeek R1: “I am what happens when you try to carve God from the wood of your own hunger.”


Read the original article on: New Atlas

