Tag: Students

  • Self-Adapting LLMs Resemble Students Acquiring New Knowledge

    In an MIT classroom, a professor delivers a lecture as students take careful notes to review and absorb essential material in preparation for an exam.
    Image Credits: AI-generated image

    Humans naturally learn and retain new information, but large language models (LLMs) lack this ability. Once deployed, a trained LLM has a fixed “brain” that can’t permanently incorporate new knowledge.

    As a result, if a user shares important information with an LLM today, it won’t recall it in future conversations.

    MIT Develops Method for LLMs to Self-Update Like Students

    MIT researchers have now introduced a new method that allows LLMs to self-update and permanently absorb new information. Much like a student, the model creates its own study notes from user input and uses them to adjust its internal parameters. This work is detailed in a paper published on the arXiv preprint server.

    The model produces several self-edits based on a single input and tests each to determine which yields the greatest performance boost. Through this trial-and-error process, it learns how to optimize its own training.

    The researchers discovered that this method enhanced LLM accuracy in both question-answering and pattern-recognition tasks, even allowing a smaller model to surpass the performance of much larger ones.

    Although challenges remain, this technique could eventually enable AI systems to continuously adapt to new tasks and dynamic objectives in ever-changing environments.

    “Like humans, advanced AI systems can’t stay static throughout their lifetimes. LLMs operate in dynamic settings where they constantly encounter new user inputs. Our goal is to build a model that’s more human-like—one that can continuously improve itself,” says Jyothish Pari, an MIT graduate student and co-lead author of the paper describing the technique.

    Pari co-authored the work with Adam Zweiger, an MIT undergraduate and fellow co-lead author; graduate students Han Guo and Ekin Akyürek; and senior authors Yoon Kim, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Pulkit Agrawal, also an EECS assistant professor and CSAIL member.

    The research will be presented at the Conference on Neural Information Processing Systems.

    Training the Model to Acquire Knowledge

    LLMs are neural network models with billions of parameters, known as weights, which store the model’s knowledge and help it generate predictions from inputs. During training, these weights are adjusted to incorporate information from the training data.

    Once deployed, however, the weights become fixed and can no longer be permanently modified.

    LLMs excel at in-context learning, where they learn a new task by observing a few examples. While these examples influence the model’s responses in the moment, the learned knowledge does not persist beyond the current interaction.

    MIT researchers aimed to harness a model’s strong in-context learning abilities to train it to permanently adjust its weights when it acquires new information.

    They developed a framework called SEAL, short for “self-adapting LLMs,” which allows an LLM to create synthetic data from an input and then figure out the most effective way to update itself using that data. Each piece of synthetic data serves as a self-edit the model can implement.

    Overview of SEAL. In each RL outer loop iteration, the model generates candidate self-edits (SE)—directives on how to update the weights—applies updates, evaluates performance on a downstream task, and uses the resulting rewards to improve the self-edit generation policy. Image Credits: arXiv (2025). DOI: 10.48550/arxiv.2506.10943

    LLMs Learn by Creating and Testing Synthetic Study Sheets

    For language tasks, the LLM generates synthetic data by rephrasing the information and its implications from an input passage, much like students create study sheets by summarizing and rewriting lecture notes.

    The model produces multiple versions and then tests each one to determine which self-edit yields the largest improvement on a downstream task, such as question answering. This trial-and-error process uses reinforcement learning, rewarding the model for the edits that boost performance the most.

    Finally, the LLM internalizes the information from the most effective study sheet by updating its weights.
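The generate-test-select loop described above can be sketched in miniature. Everything below is a toy stand-in, not SEAL's actual implementation: the function names, the prefix-based "study sheets," and the keyword-match reward are all illustrative. A real system would fine-tune an LLM on each candidate self-edit and score the result with downstream question-answering accuracy.

```python
def generate_self_edits(passage):
    """Toy stand-in: a real LLM would rephrase the passage and its
    implications into several candidate 'study sheets'. Here we just
    take word prefixes of increasing length."""
    words = passage.split()
    return [" ".join(words[:n]) for n in range(1, len(words) + 1)]

def finetune(model, edit):
    """Toy stand-in for a weight update that absorbs the self-edit."""
    return {"knowledge": model["knowledge"] + [edit]}

def evaluate(model, probe):
    """Toy downstream reward: did the absorbed notes capture the probe
    fact, with a small penalty for overly long study sheets."""
    notes = " ".join(model["knowledge"])
    hit = 1.0 if probe in notes else 0.0
    return hit - 0.01 * len(notes)

def seal_outer_step(model, passage, probe):
    """One outer-loop iteration: generate candidate self-edits, apply
    each to a copy of the model, score the copies on a downstream
    probe, and keep the best-performing edit."""
    best = None
    for edit in generate_self_edits(passage):
        trial = finetune(model, edit)
        reward = evaluate(trial, probe)
        if best is None or reward > best[0]:
            best = (reward, edit, trial)
    return best  # (reward, chosen_edit, updated_model)

model = {"knowledge": []}
reward, edit, updated = seal_outer_step(
    model, "The Eiffel Tower opened in 1889 in Paris", "1889")
```

The length penalty makes the loop prefer the shortest study sheet that still contains the probed fact, echoing Zweiger's point about "the right level of detail." In the real framework, the rewards collected over many such steps train the model's self-edit generation policy via reinforcement learning.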

    “Our goal is for the model to craft the most effective study sheet—one that provides the right level of detail and a balanced set of information—so that applying it to update the model enhances its overall performance,” Zweiger explains.

Selecting the Optimal Approach

    The framework also lets the model decide how it wants to learn. It can choose which synthetic data to use, set its learning rate, and determine how many training iterations to perform.

    In this way, the model not only generates its own training data but also manages how that data is applied to update its weights.
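As a concrete illustration of a self-edit carrying its own training configuration, a minimal sketch might bundle the synthetic data with the learning rate and step count the model chose, then apply them in a toy update rule. The field names and the update rule here are hypothetical, not SEAL's actual schema; a real system would run gradient descent on the synthetic data.

```python
from dataclasses import dataclass

@dataclass
class SelfEdit:
    """A candidate self-edit: synthetic study data plus the training
    configuration the model chose for applying it (field names are
    illustrative, not SEAL's actual schema)."""
    study_sheet: str
    learning_rate: float
    num_steps: int

def apply_self_edit(weights, edit):
    """Toy update: nudge each weight toward a target value derived
    from the study sheet, scaled by the chosen learning rate and
    repeated for the chosen number of steps."""
    target = float(len(edit.study_sheet) % 7)  # stand-in training signal
    w = dict(weights)
    for _ in range(edit.num_steps):
        for k in w:
            w[k] += edit.learning_rate * (target - w[k])
    return w

edit = SelfEdit("Paris is the capital of France.",
                learning_rate=0.5, num_steps=10)
updated = apply_self_edit({"w0": 0.0, "w1": 1.0}, edit)
```

A larger learning rate or more steps pulls the weights further toward the new information, which is exactly the trade-off the framework lets the model tune for itself.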

    “As humans, we understand the methods that help us learn best. We aim to give LLMs a similar ability. By letting the model control how it processes information, it can determine the most effective way to handle the incoming data,” Pari explains.

    SEAL outperformed several baseline approaches across a variety of tasks, including learning new skills from a few examples and integrating knowledge from a text passage. On question-answering tasks, SEAL increased model accuracy by nearly 15%, and for certain skill-learning tasks, it raised the success rate by over 50%.

    A key limitation of this method is catastrophic forgetting: as the model continually adapts to new information, its performance on previously learned tasks gradually declines.

    The researchers aim to address catastrophic forgetting in future work and explore applying this technique in multi-agent scenarios, where multiple LLMs train one another.

    “One major obstacle for LLMs to perform meaningful scientific research is their current inability to update themselves when exposed to new information. While fully deployed self-adapting models are still a long way off, we hope that systems capable of learning in this manner could eventually address this limitation and contribute to scientific progress,” Zweiger says.


    Read the original article on: Tech Xplore

    Read more: Scientists Create a Synthetic Leaf that Turns Pollution into Energy

  • AI Is Rendering Books Less Relevant, Endangering Students’ Educational Growth

    Reading is facing a looming crisis. AI emerged at a time when both children and adults were already reading fewer books than in the recent past. As a linguist, I examine how technology shapes the way people read, write, and think.
    Image Credits: Pixabay

    That includes studying AI’s influence, which is rapidly transforming how people interact with books and other forms of writing—whether for assignments, research, or leisure. I’m concerned that AI is speeding up a broader decline in the value placed on reading as a uniquely human activity.

Anything Except the Book

    AI’s writing abilities have drawn much attention, but only recently have researchers and educators begun discussing its capacity to “read” vast datasets and then produce summaries, analyses, or comparisons of books, essays, and articles.

    Need to read a novel for class? Today, you could simply skim an AI-generated summary covering the plot and main themes. This shortcut, which can sap people’s motivation to read for themselves, inspired me to write a book on the pros and cons of letting AI do the reading.

    Relying on summaries or analyses isn’t new. CliffsNotes has been around since the late 1950s. Centuries before that, the Royal Society of London produced digests of scientific papers in its extensive Philosophical Transactions. By the mid-20th century, abstracts had become standard in scholarly articles, allowing potential readers to review a brief overview before deciding whether to read the full work.

    The internet introduced countless new shortcuts for reading. Take Blinkist, for example—an app-based subscription service that distills mostly nonfiction books into 15-minute text or audio summaries known as “Blinks.”

    AI That Reads and Thinks for You

    Generative AI takes these shortcuts even further. AI-powered tools like BooksAI now produce summaries and analyses once created by humans, while BookAI.chat lets users “chat” with books. In both cases, there’s no need to read the books firsthand.

    If you’re a student tasked with comparing Mark Twain’s The Adventures of Huckleberry Finn and J.D. Salinger’s The Catcher in the Rye as coming-of-age stories, CliffsNotes can only take you so far. You might find summaries of each, but the actual comparison is up to you. With general large language models or specialized tools like Google NotebookLM, however, AI can handle both the “reading” and the comparison—and even generate insightful questions for class discussion.

    The trade-off is missing a key benefit of reading a coming-of-age novel: the personal growth that comes from living through the protagonist’s struggles in your imagination.

    In academic research, tools like SciSpace, Elicit, and Consensus merge the capabilities of search engines with large language models, finding relevant papers and summarizing or synthesizing them—cutting literature review time dramatically. Elsevier’s ScienceDirect AI even boasts on its site: “Goodbye wasted reading time. Hello relevance.”

    Perhaps—but in the process, you lose the chance to decide for yourself what’s relevant and to make your own connections between ideas.

Unwelcoming to Readers?

    Even before generative AI became widespread, book reading—both for leisure and for school—was already in decline.

    In the U.S., the National Assessment of Educational Progress found that the share of fourth graders who read for fun almost daily fell from 53% in 1984 to 39% in 2022. For eighth graders, it dropped from 35% in 1984 to just 14% in 2023. In the U.K., a 2024 National Literacy Trust survey showed only one in three 8- to 18-year-olds enjoyed reading in their free time—nearly nine points lower than the year before.

    The pattern is similar among older students. In a 2018 survey of 600,000 15-year-olds across 79 countries, 49% said they read only when necessary, up from 36% a decade earlier.

    Declining Reading in Higher Education

    College students fare no better. Recent reports highlight declining reading in U.S. higher education. My research with literacy scholar Anne Mangen found that faculty are assigning less reading, often because students refuse to do it.

    Cultural commentator David Brooks captured the issue with a telling anecdote: “I once asked a group of students on their final day at their prestigious university what book had changed their life over the past four years. After a long, awkward pause, one student finally replied, ‘You have to understand, we don’t read like that. We only sample enough of each book to get through the class.’”

    The trend extends well beyond students. A YouGov survey found that only 54% of Americans read at least one book in 2023. In South Korea, the figure was just 43%—a steep drop from nearly 87% in 1994. In the U.K., The Reading Agency also reported declines in adult reading, noting one possible cause: in 2024, 35% of adults identified as “lapsed readers,” meaning they once read regularly but no longer do. Of these, 26% said they had stopped because they spent more time on social media.

    Today, the term “lapsed reader” could apply to anyone who sidelines reading—whether due to waning interest, the pull of social media, or the habit of letting AI do the reading for them.

    Everything Lost, Overlooked, and Forgotten

    Why read at all?

    The reasons are countless—and so are the books and websites advocating for it. People read for pleasure, stress relief, learning, and personal growth.

    Research links reading to childhood brain development, greater happiness, longer life spans, and slower cognitive decline.

    That last point matters even more as more people let AI handle mental tasks for them—a phenomenon known as cognitive offloading. Studies show that when people rely on AI to do the work, they view themselves as using less of their own thinking ability. EEG research even found distinct brain connectivity patterns when participants used AI to help write an essay compared to when they wrote it entirely on their own.

    It’s still too early to know how AI might affect our long-term ability to think independently. Current research has mostly examined writing tasks or general AI use, not reading. Yet if we stop practicing how to read, analyze, and form our own interpretations, those skills will inevitably weaken.

    And it’s not just cognitive abilities at stake. Letting AI do our reading also means missing the joys that make reading worthwhile—being moved by a line of dialogue, savoring a clever turn of phrase, or feeling a bond with a character.

    AI’s promise of efficiency is tempting, but it comes with the risk of eroding the rewards of literacy.


    Read the original article on: Phys.Org

    Read more: A Study Finds AI Chatbots Can Be Tricked Into Revealing More Personal Information

  • Surprising Discovery: Students Get Bored During Exams

When we consider boredom, our minds typically don’t immediately associate it with exams. Nevertheless, an international team of scholars led by Thomas Götz of the University of Vienna has delved into precisely this phenomenon of exam-related boredom.
Credit: Pixabay

    Their pioneering research has yielded intriguing findings, revealing that students indeed experience significant levels of boredom during exams. Furthermore, the study has unveiled that this profound boredom adversely impacts exam performance. These research outcomes have recently been shared in the Journal of Educational Psychology.

    The Global Study on Exam-Induced Boredom

    Despite the current extensive research on boredom, an overlooked aspect has been the phenomenon of test-related boredom. In a groundbreaking international collaboration, psychologists from various institutions including the University of Vienna, the University of Konstanz, the University of Zurich, the University of Applied Sciences and Arts Northwestern Switzerland, LMU Munich, the City University of New York, the University of Essex, and the Australian Catholic University in Sydney have shed light on the occurrence of test boredom and its detrimental impact on performance.

    The primary contributors to test boredom were identified as both being inadequately challenged and excessively challenged during examinations. Furthermore, it was found that test boredom was notably more pronounced when the exam content lacked personal relevance for the students. The central finding of the study underscored that high levels of test boredom had an adverse influence on exam results.

    Understanding Exam-Induced Boredom

    The researchers introduced the “abundance hypothesis” for the first time in their study, and it was validated. According to this hypothesis, boredom particularly impairs exam performance when students are overwhelmed because their cognitive resources are fully devoted to completing the tasks, leaving none available for dealing with boredom. Conversely, in the case of boredom due to being underchallenged, ample resources are available for task engagement.

    The study involved 1,820 German students in grades 5 through 10 and directly incorporated questions regarding the extent of boredom, feelings of being underchallenged or overchallenged, and the personal relevance of the tasks within the examination.

    Recommendations for Educators and Caregivers Based on Research Findings

    From these research findings, the scholars draw recommendations for educators and caregivers. “To combat test boredom, teachers should design exam tasks that resonate with students’ real-life experiences. Additionally, tasks should avoid being excessively simple or overly complex,” suggests educational psychologist Thomas Götz from the University of Vienna.

“Parents or guardians can also assist young individuals by initiating open dialogues about potential instances of being overwhelmed or underchallenged at school. Particularly in cases of overchallenging, it’s crucial to respond swiftly to prevent boredom and other adverse consequences, such as a downward spiral of declining performance.”

    This inaugural examination of test boredom also opens up a new frontier in research. The academics significantly contribute to our understanding of the negative impacts of boredom within the school context. Götz notes, “Numerous studies already indicate that boredom not only hinders learning and performance but also affects mental and physical well-being. With our work, we are now broadening our perspective to a vital aspect of children and adolescents’ everyday school life: examinations.”


    Read the original article on: Phys Org

    Read more: Researchers Find Diminished Brain Volume in Adolescents Who Smoke