Using Generative AI to Uncover Human Memory and Imagination
UCL research employing AI models enhances our comprehension of memory by revealing the brain’s process of reconstructing past events and envisioning novel scenarios.
UCL researchers find that recent progress in generative AI sheds light on how memories enable us to learn about the world, re-experience the past, and construct entirely new scenarios for imagination and planning.
AI Models Mimicking Brain Functions
Published in Nature Human Behaviour and supported by Wellcome funding, the research employs an AI computational model, referred to as a generative neural network, to replicate the process by which neural networks in the brain learn and recall sequences of events, with each event represented by a simple scene.
The model incorporates networks representing the hippocampus and the neocortex and explores how they interact, since the two structures work together to support memory, imagination, and planning in the brain.
Lead author Eleanor Spens, a PhD student at the UCL Institute of Cognitive Neuroscience, explained that recent advances in the generative networks used in AI show how information can be extracted from experience, allowing us both to recollect specific experiences and to flexibly imagine potential new ones. She added, “We consider remembering as envisioning the past through the lens of concepts, merging stored details with our expectations of what might have occurred.”
Memory Replay and Prediction
To survive, humans must predict outcomes (such as avoiding danger or locating food), and the AI networks suggest that replaying memories during rest helps the brain recognize patterns in past experiences that can then be used to make predictions.
The researchers exposed the model to 10,000 images of basic scenes, with the hippocampal network swiftly encoding each scene upon experience. Subsequently, it replayed these scenes repeatedly to train the generative neural network in the neocortex.
The neocortical network learned to pass the activity of the thousands of input neurons (responsible for receiving visual information) representing each scene through progressively smaller intermediate layers of neurons, the narrowest containing only 20 neurons, and then to reconstruct the scenes as patterns of activity in its thousands of output neurons (responsible for predicting visual information).
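The bottleneck architecture described above can be pictured with a minimal sketch. Everything here is illustrative, not the authors' actual model: a simple linear autoencoder, synthetic "scene" vectors of 100 values standing in for thousands of input neurons, and a 20-unit bottleneck mirroring the smallest layer in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 100 "input neurons" instead of thousands,
# a 20-unit bottleneck as in the model's smallest layer.
n_input, n_latent, n_scenes = 100, 20, 500

# Synthetic "scenes": shared low-dimensional structure plus a little
# noise, so a 20-unit code can capture most of what they have in common.
basis = rng.normal(size=(n_latent, n_input))
scenes = rng.normal(size=(n_scenes, n_latent)) @ basis
scenes += 0.1 * rng.normal(size=scenes.shape)

# Linear autoencoder: encoder (input -> 20), decoder (20 -> output).
W_enc = 0.01 * rng.normal(size=(n_input, n_latent))
W_dec = 0.01 * rng.normal(size=(n_latent, n_input))

def mse(x, y):
    return float(np.mean((x - y) ** 2))

lr = 1e-3
initial_loss = mse(scenes @ W_enc @ W_dec, scenes)

# "Replay" each scene many times, nudging the slow-learning network
# toward better reconstructions (plain gradient descent on MSE).
for epoch in range(300):
    latent = scenes @ W_enc          # compress each scene to 20 numbers
    recon = latent @ W_dec           # reconstruct the scene from the code
    err = recon - scenes
    W_dec -= lr * (latent.T @ err) / n_scenes
    W_enc -= lr * (scenes.T @ (err @ W_dec.T)) / n_scenes

final_loss = mse(scenes @ W_enc @ W_dec, scenes)
print(initial_loss, final_loss)  # reconstruction error falls with training
```

Squeezing every scene through 20 numbers is what forces the network to discover compact, "conceptual" representations: it cannot memorise pixels, only the regularities shared across scenes.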
Implications of the Study
As a result, the neocortical network developed highly efficient “conceptual” representations of the scenes, encapsulating their meaning, such as the arrangements of walls and objects. This capability enables both the recreation of past scenes and the generation of entirely novel ones.
In turn, the hippocampus gained the ability to encode the meaning of new scenes without having to encode every single detail, allowing it to concentrate its resources on the distinctive features the neocortex could not reproduce, such as novel types of objects.
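One way to picture this division of labour (again purely illustrative, not the paper's implementation): once a "neocortical" model can reconstruct the familiar structure of a scene, the hippocampus need only store the residual, the few features the reconstruction gets wrong.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a trained neocortical generative model: it can only
# reconstruct the familiar, low-dimensional part of a scene.
familiar_basis = rng.normal(size=(20, 100))

def neocortex_reconstruct(scene):
    # Least-squares projection onto the familiar subspace, standing in
    # for the trained network's "conceptual" reconstruction.
    coeffs, *_ = np.linalg.lstsq(familiar_basis.T, scene, rcond=None)
    return familiar_basis.T @ coeffs

# A new scene = familiar structure + one genuinely novel feature.
scene = rng.normal(size=20) @ familiar_basis
scene[3] += 5.0  # a distinctive detail the neocortex cannot predict

prediction = neocortex_reconstruct(scene)
residual = scene - prediction

# The hippocampal trace only needs the components with large
# prediction error, not the whole scene.
novel_features = np.flatnonzero(np.abs(residual) > 1.0)
print(novel_features)
```

In this toy setup the stored trace shrinks to a handful of values around feature 3, which also hints at the "gist-like" distortions discussed below: anything the conceptual reconstruction absorbs is remembered as typical rather than as it actually was.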
The model elucidates the gradual acquisition of conceptual knowledge by the neocortex and highlights how, in collaboration with the hippocampus, this process enables us to “re-experience” events by reconstructing them in our minds.
Furthermore, the model clarifies how it is possible to generate new events during imagination and future planning. It also sheds light on why existing memories often exhibit “gist-like” distortions, wherein unique features are generalized and remembered as more similar to features from previous events.
Senior author Professor Neil Burgess, of the UCL Institute of Cognitive Neuroscience and UCL Queen Square Institute of Neurology, explained that memories are reconstructions rather than accurate records of the past: the meaning or gist of an experience is combined with its specific details, a process that can bias how we recall things.
Read the original article on: SciTechDaily