The compact GridEdge Analyzer developed by the University of Tennessee and ORNL can be embedded into power electronics or even plugged into a wall outlet to measure the smallest changes in electrical voltage and current. Image Credits: Amy Smotherman Burgess/ORNL, U.S. Dept. of Energy
Scientists at the Department of Energy’s Oak Ridge National Laboratory, working with the University of Tennessee, created a low-cost, secure sensing device that provides unprecedented real-time visibility into how the electric grid operates. Known as the Universal GridEdge Analyzer, the technology recently earned an R&D 100 Award, recognizing it as one of the world’s leading inventions.
The small-form analyzer captures minute fluctuations in voltage and current as waveform data, then rapidly compresses, encrypts, and transmits the information to centralized servers. Capable of handling 60,000 measurements per second—about 500 times the rate of earlier systems—it can detect split-second responses from the power electronics that are essential to operating the modern grid.
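To make the data path concrete, here is a minimal sketch in Python of a capture, compress, encrypt, and transmit loop of the kind described above. It is not the analyzer's actual firmware: the 60,000-sample-per-second figure comes from the article, but the batch size, Fernet key handling, and collector address are illustrative assumptions.

```python
# Minimal sketch of a capture -> compress -> encrypt -> transmit loop.
# The 60 kHz rate comes from the article; everything else is illustrative.
import socket
import struct
import zlib

from cryptography.fernet import Fernet

SAMPLE_RATE = 60_000            # waveform samples per second
BATCH = 6_000                   # send roughly 0.1 s of data per packet
cipher = Fernet(Fernet.generate_key())   # in practice, a provisioned device key


def pack_batch(samples):
    """Serialize one batch of float measurements, then compress and encrypt it."""
    raw = struct.pack(f"{len(samples)}f", *samples)
    return cipher.encrypt(zlib.compress(raw))


def stream(samples, host="collector.example.org", port=9000):
    """Send length-prefixed, encrypted batches of waveform data to a server."""
    with socket.create_connection((host, port)) as sock:
        batch = []
        for s in samples:
            batch.append(s)
            if len(batch) == BATCH:
                payload = pack_batch(batch)
                sock.sendall(struct.pack("!I", len(payload)) + payload)
                batch.clear()
```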
“Unlike conventional centralized power plants, data centers and distributed energy resources with batteries rely on power electronics to connect to the grid,” explained Yilu Liu, lead researcher and UT–ORNL Governor’s Chair for Power Electronics. “These components switch extremely fast, and their rapid behavior can affect overall grid stability. By monitoring these dynamics, we can strengthen future grid operations and ensure reliable electricity for everyone.”
Advancing Grid Monitoring
The technology builds on UT’s long-established grid frequency monitoring system, FNET/GridEye. This network, with 200 sensors across the U.S. and roughly 100 more worldwide, gathers and transmits aggregated data to provide a broad view of grid activity. The new device, however, delivers more detailed information at higher speeds, capturing events that previous technology might have missed. Designed for versatility, it can be integrated into power electronics, mounted on distribution lines, or even plugged into a standard wall outlet.
Utilities in states like Hawaii and Texas are using the analyzer to study how clusters of power electronics interact with the grid. For instance, at AI data centers, small voltage fluctuations can trigger switches to backup power, requiring quick responses to manage the energy load. The device helps operators anticipate these situations and maintain stable grid operations.
Other ORNL contributors to the project include Bruce Warmack and Ori Wu, as well as former team members Ben LaRiviere and Lingwei Zahn.
New research shows that intelligence significantly influences how people understand speech in noisy settings. By comparing neurotypical and neurodivergent individuals, the study found that cognitive ability predicted performance in every group. The findings challenge the idea that listening problems stem solely from hearing loss, emphasizing the brain’s key role.
Picture chatting in a noisy café—what seems like a hearing issue may actually stem from how your brain processes sound.
Cognitive Ability Affects Speech Understanding in Noise
A study found that cognitive ability strongly influenced speech understanding in noisy settings. Even though everyone had normal hearing, their performance differed according to their intellectual capacity.
“The link between cognitive ability and speech perception appeared across all groups,” said lead researcher Bonnie Lau, an assistant professor of otolaryngology–head and neck surgery at the University of Washington (UW).
The study’s results were published in PLOS One.
Small Study Links Intelligence to Noisy Listening
Lau noted that with fewer than 50 participants, the study should be replicated with larger groups, but the results suggest intelligence influences how well people listen in noisy environments.
To test their hypothesis, researchers included participants with autism and fetal alcohol syndrome—groups with normal hearing but known listening challenges—to widen the IQ range and enable broader comparison.
The study included 12 autistic participants, 10 participants with fetal alcohol syndrome, and 27 neurotypical participants, matched by age and sex. Participants ranged in age from 13 to 47. Each underwent an audiology screening to confirm normal hearing before completing a computer-based listening task.
Participants Tasked with Focusing on One Voice Amid Competing Speakers
In the experiment, participants listened to a main speaker’s voice while two other voices spoke at the same time in the background. Their task was to focus on the main speaker—always male—and ignore the competing voices. Each voice gave a brief command containing a call sign, color, and number, such as “Ready, Eagle, go to green five now.”
Participants then selected the box with the correct color and number that matched the main speaker’s command as the background voices gradually increased in volume.
Afterward, they completed standardized intelligence tests assessing verbal and nonverbal skills as well as perceptual reasoning. The researchers compared these cognitive scores with participants’ performance on the multitalker listening test.
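As a rough illustration of that comparison step, the sketch below computes a Pearson correlation between cognitive scores and speech-reception thresholds within each group using SciPy. The numbers and group labels are placeholders, not the study's data.

```python
# Illustrative analysis step: correlate cognitive scores with listening thresholds
# within each group. All values and group labels below are placeholders.
from scipy.stats import pearsonr

groups = {
    "autistic":     {"iq": [95, 104, 112, 120], "threshold_db": [-2.0, -4.1, -5.5, -7.2]},
    "fas":          {"iq": [78, 85, 92, 101],   "threshold_db": [1.5, 0.2, -1.0, -3.1]},
    "neurotypical": {"iq": [99, 108, 117, 126], "threshold_db": [-3.8, -5.0, -6.6, -8.4]},
}

for name, scores in groups.items():
    r, p = pearsonr(scores["iq"], scores["threshold_db"])
    # Lower (more negative) thresholds mean better listening, so r comes out negative.
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```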
The data revealed a strong link between intelligence and listening performance.
Study Finds Strong Link Between Intelligence and Speech Perception
“We observed a highly significant association between directly measured intellectual ability and multitalker speech perception,” the researchers wrote. “Intellectual ability was significantly correlated with speech perception thresholds across all three groups.”
Lau noted that effective listening in noisy environments relies on extensive brain processing.
“You must separate voices, focus on one speaker, and filter out noise,” Lau said. “Then, you need to process language—identifying phonemes, syllables, and words—while also engaging socially by smiling or nodding. All of this adds to the cognitive effort required to communicate in noisy environments.”
Lau said the study challenges a common misconception that listening difficulties always indicate peripheral hearing loss.
“You don’t need to have hearing loss to struggle with listening in a restaurant or other noisy real-world setting,” she noted.
Researchers said neurodivergent or lower-IQ individuals may benefit from better listening setups, like front seating or assistive devices.
Lau carries out her research at the UW Virginia Merrill Bloedel Hearing Research Center. Her coauthors represent multiple UW departments and centers, as well as the University of Michigan’s Department of Otolaryngology–Head and Neck Surgery.
Intelligence is a powerful asset in both life and work—and science has identified several ways to “boost” it, according to Inc.com.
When asked about the key factor behind success, most high achievers point to intelligence (though research shows luck also plays a role—Bill Gates being a prime example).
Science-Backed Ways to Boost Intelligence
Fortunately, there are science-backed strategies to enhance intelligence: studying different subjects in sequence to leverage interleaving, changing up study methods, self-testing, getting more—and better—sleep, and, perhaps unexpectedly, exercising.
A review published in Translational Sports Medicine found that just two minutes of moderate- to high-intensity aerobic exercise can enhance attention, focus, and memory for up to two hours.
Even brief activities—like climbing stairs, doing push-ups or squats, or running in place—can give memory a boost.
The challenge, however, is practicality: it takes some planning (or even a bit of creativity). After all, who’s realistically going to drop to the floor for push-ups or start running in place in the middle of an office?
Luckily, there’s an alternative. A recent study in Psychology of Sport and Exercise showed that exercising after learning can significantly enhance memory, recall, and retention.
Exercising After Learning Strengthens Memory
Participants who worked out after learning displayed much stronger recognition memory than the control group, an effect not seen in those who exercised before learning: intense cycling boosted recognition memory only when it followed studying, not when it preceded it.
It’s easy to understand why memory and retention improve compared to a control group. What’s less intuitive is the advantage over pre-learning exercise, since earlier research suggested that working out beforehand also benefits memory.
The key factor is duration. Even two minutes of movement can sharpen memory, but longer sessions have a bigger impact. For instance, participants who exercised 40 minutes, three times a week, experienced a more than 2% increase in hippocampal volume.
In the post-exercise study, participants cycled for 20 minutes at a “hard” intensity level—a demanding task that proved more effective than exercising before learning.
Want to sharpen your memory, learn faster, and retain more? This is yet another reason to make exercise part of your daily routine.
Short Bursts of Exercise Boost Cognitive Performance Anytime
Although that study emphasized exercise immediately after learning, research in the Journal of Epidemiology and Community Health showed that even six to ten minutes of moderate to vigorous activity can boost working memory and significantly improve higher-level cognitive functions—such as organization, prioritization, and planning—no matter when you work out.
So, while exercising at the “optimal” time may give the best results, timing isn’t everything. Any exercise is better than none—for both your health and your memory. In the end, what matters most is consistency: when, how, and what you do has an impact, but showing up regularly makes the real difference.
As automation rapidly progresses, robot collaboration has moved beyond science fiction. Picture a warehouse where dozens of machines move goods without crashing, a restaurant where robots deliver meals to the right tables, or a factory where robot teams instantly adapt their tasks to meet changing demand.
Open-Source ROS2 Framework Brings Collaborative Robotics to Life
This vision is becoming reality through an open-source framework built on ROS2, which enables multiple robots to collaborate intelligently, flexibly, and safely. The research was recently published in IEEE Access.
Turning theory into practice requires studying how robots learn to navigate collectively. Successful collaboration depends on their capacity to communicate and make real-time decisions. The system incorporates three key elements:
Autonomous navigation: Each robot computes the best routes using GPS-like algorithms adapted for dynamic environments. With simulation tools such as Gazebo, they first train in virtual settings before operating in the real world. When faced with unexpected obstacles—like a fallen box—they immediately recalculate their route.
Adaptable behavior: The system relies on “behavior trees,” which act like a dynamic set of instructions. For instance, if a robot gets stuck, it will first attempt to turn, then back up, and if the issue continues, it asks the central system for assistance. This method not only avoids collisions but also makes the system scalable—from just two robots in a lab to dozens in a factory.
Computer vision and task allocation: Acting as the eyes and brain of the collaborative setup, this component ensures robots know both their position and their assigned tasks. It combines two key technologies: ArUco markers—similar to QR codes, small printed symbols placed in the environment that serve as reference points—and distributed cameras that track these markers, calculating each robot’s location with an accuracy of under 3 cm.
It’s as though the robots maintain a continuously refreshed internal map. The second technology is smart task assignment: the system dispatches the nearest available robot, much like a courier selecting the quickest route. If one robot breaks down, another seamlessly takes over, ensuring operations continue without interruption.
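The dispatch-and-failover behavior described here can be sketched in a few lines of plain Python. This is not the published framework's code; the robot names, coordinates, and health flags are invented purely for illustration.

```python
# Illustrative dispatch logic: send the nearest healthy, available robot to a task,
# and re-assign if the chosen robot breaks down. Names and coordinates are made up.
import math

robots = {
    "r1": {"pos": (0.0, 0.0), "available": True, "healthy": True},
    "r2": {"pos": (4.0, 1.0), "available": True, "healthy": True},
    "r3": {"pos": (1.0, 5.0), "available": False, "healthy": True},
}


def assign_task(task_pos):
    """Pick the closest robot that is both available and healthy."""
    candidates = [
        (math.dist(info["pos"], task_pos), name)
        for name, info in robots.items()
        if info["available"] and info["healthy"]
    ]
    if not candidates:
        return None
    _, chosen = min(candidates)
    robots[chosen]["available"] = False
    return chosen


def handle_failure(name, task_pos):
    """Mark a broken robot as unhealthy and hand its task to another robot."""
    robots[name]["healthy"] = False
    robots[name]["available"] = False
    return assign_task(task_pos)


first = assign_task((3.0, 2.0))              # nearest available robot takes the job
backup = handle_failure(first, (3.0, 2.0))   # if it fails, another one steps in
print(first, "->", backup)
```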
Scalable, intelligent, and ready for any industry. Image Credits: Francisco Yumbla/ESPOL
Simulated Warehouses, Restaurants, and Labs Put Collaborative Robots to the Test
To test the system, researchers simulated a variety of complex scenarios. In industrial warehouses, robots transported packages between ArUco-marked stations while avoiding traffic jams. In restaurants, machines delivered meals to specific tables, coordinating to prevent collisions in tight hallways. In laboratories, diverse teams—including small robots and robotic arms—collaborated to carry out experiments.
The results were impressive: robots located themselves with an average error margin of just 2.5 cm. The system also proved highly resilient—when one robot failed, another seamlessly took over its task within seconds.
Scalability, often a challenge in robotics, was also demonstrated, as the framework functioned equally well with five robots as with 15, adapting smoothly to different environments.
Because it is open-source and built on ROS2, a widely adopted platform, the system is accessible to any organization. Hospitals could program robots to deliver medications, logistics hubs could optimize package flow, and museums could deploy autonomous tour guides. At the same time, it reduces reliance on humans for repetitive duties, freeing staff for more strategic tasks.
Shenzhen-based Manifold Tech raised a seven-figure RMB pre-Series A round, backed by Hony Capital. The funds will be used to support custom production of core components, scale up product deployment, and broaden market reach. Earlier, the company raised seed funding from ZhenFund and received angel investment from Junsan Capital.
Deep Tech Roots: Manifold Emerges from University Lab to Advance 3D Perception for Robots and Drones
Founded in 2022, Manifold specializes in developing 3D perception and reconstruction algorithms for robots and drones. These systems let machines perceive and recall their surroundings for real-time interaction. Manifold’s founders come from the MaRS Lab at the University of Hong Kong (HKU), experts in drone navigation and LiDAR SLAM. Lab director Professor Zhang Fu, a former DJI advisor and Livox co-founder, now guides Manifold’s technical strategy.
With advancements in AI and machine learning, 3D sensing systems have become faster and more precise. High-performance platforms now process large image and point cloud data efficiently, enabling real-time environment reconstruction. These technologies are rapidly gaining traction across multiple industries.
High-resolution 3D models support cultural preservation through virtual exhibits and education, while in emergencies, 3D data helps map fire scenes, trace ignition points, and assess structural damage for faster, informed responses.
Spatial awareness and memory are vital for robotics, enabling safe, efficient navigation in complex environments. Yet many current systems have significant limitations. GPS is unreliable indoors, and technologies like UWB and Bluetooth depend on fixed infrastructure. Meanwhile, visual and LiDAR-based navigation can struggle in unfamiliar or constantly changing surroundings, often resulting in navigation errors.
Hardware limitations also pose problems. Manual calibration is labor-intensive, and any change to the setup or environment usually demands expert-level parameter adjustments. Teams must also manage large volumes of data and continually optimize algorithms for different applications, which adds cost and delays.
To overcome these challenges, Manifold developed MindSLAM—a solid-state, multi-sensor fusion system built on proprietary algorithms. At its core is Odin 1, the first module to combine spatial perception and memory for advanced robotic navigation.
Image Credits: The Odin 1 module. Image and header photo source: Manifold Tech via 36Kr.
Odin 1 features a SPAD dToF depth sensor, a high-resolution color camera, and an IMU. It synchronizes spatial and temporal data across all sensors to produce highly accurate and stable results, and it generates 700,000 point cloud data points per second, significantly enhancing the detail and completeness of spatial data.
Odin 1 as a Robotic Hippocampus
Functionally, Odin 1 mimics the hippocampus in biological organisms—responsible for processing spatial memory. Odin 1 merges spatial and temporal data to create detailed 3D maps, allowing real-time object detection and mapping even in low-light or sparse environments.
These features boost autonomous navigation, enable more effective route planning, and improve robotic performance in complex or dynamic environments.
In addition, MindSLAM is linked with Manifold’s MindCloud platform, which enables users to instantly convert real-world spaces into photorealistic 3D simulations. The platform supports the creation of digital twins, manages spatial data, and facilitates simulation-based training for robotics algorithms—forming a comprehensive foundation for both development and operational planning.
Image Credits: The Odin 1 module installed on a quadruped robot intended for industrial deployment. Photo source: Manifold Tech via 36Kr.
According to 36Kr, Manifold’s real-time, true-color 3D reconstruction technology is already being applied across industries such as construction digitization, renovation surveying, fire scene modeling, traffic accident analysis, and industrial manufacturing.
In real-world scenarios, drones and robots using Odin 1 can enter disaster areas and produce live 3D maps, giving emergency teams detailed structural insights. On construction sites, the device can monitor spatial developments, track project progress, and evaluate construction quality—streamlining workflows and reducing manual labor.
Manifold is also working with multiple robotics companies to make intelligent sensing modules more affordable and provide end-to-end solutions for navigation, mapping, and localization. Mass production of Odin 1 is planned for July, followed by a global launch.
The first robot I can recall is Rosie from The Jetsons, soon followed by the sophisticated C-3PO and his loyal companion R2-D2 in The Empire Strikes Back. But the first AI I encountered without a physical form was Joshua, the computer from WarGames—a system that nearly triggered nuclear war until it grasped the concept of mutually assured destruction and opted to play chess instead.
That moment, when I was seven, left a lasting impression. Could a machine grasp ethics? Feel emotions? Understand what it means to be human? These questions only grew more compelling as portrayals of artificial intelligence became more nuanced—whether through the android Bishop in Aliens, Data in Star Trek: The Next Generation, or more recent figures like Samantha in Her and Ava in Ex Machina.
But these questions are no longer purely theoretical. Today, roboticists are actively exploring whether artificial intelligence requires a physical form—and if it does, what kind of embodiment is most suitable.
Then there’s the question of how to achieve it: if embodiment is essential for developing true artificial general intelligence (AGI), could soft robotics hold the key to unlocking that next breakthrough?
The Boundaries Of Bodiless AI
Recent research is beginning to expose the shortcomings of today’s most advanced—yet still disembodied—AI systems. A new study from Apple looked at so-called “Large Reasoning Models” (LRMs), a type of language model designed to generate reasoning steps before delivering answers. While these models outperform traditional LLMs on many tasks, the study found they tend to break down when faced with more complex problems. And rather than merely hitting a ceiling, their performance sharply deteriorates—even when supplied with ample computational resources.
More troubling is their inconsistency in reasoning. Their “reasoning traces,” or the steps they take to solve problems, often lack coherent logic. As tasks grow more difficult, the models appear to exert even less effort. The researchers conclude that these systems don’t actually “think” in a way that resembles human cognition.
“What we’re creating today are systems that process words and predict the most likely next word … which is quite different from how humans think,” said Nick Frosst, a former Google researcher and co-founder of Cohere, in an interview with The New York Times.
Thinking Goes Beyond Mere Computation
How did we arrive at this point? Throughout much of the 20th century, researchers developed artificial intelligence using a framework called GOFAI—’Good Old-Fashioned Artificial Intelligence’—which approached cognition through symbolic logic. Early AI pioneers aimed to build intelligence by actively manipulating symbols, similar to how computers execute code. Under this model, abstract reasoning didn’t require a physical body.
However, this view began to unravel when early robotic systems struggled to operate effectively in the unpredictable, messy conditions of the real world. Researchers from psychology, neuroscience, and philosophy began to reconsider the foundations of intelligence—especially in light of insights from studying animals and even plants, which learn and adapt through physical interaction with their environments rather than through abstract reasoning alone.
In humans, for example, the enteric nervous system—often called the “second brain”—regulates digestion using the same kinds of neurons and chemicals as the brain. Interestingly, octopus tentacles use similar components to sense and react independently, right within the limb.
All of this raises a compelling question: what if adaptable intelligence emerges by spreading throughout the body and staying deeply connected to the physical world, rather than concentrating in a centralized brain?
This is the core principle behind embodied cognition: thinking, sensing, and acting are not distinct functions—they form a single, integrated process. As Rolf Pfeifer, Director of the Artificial Intelligence Laboratory at the University of Zurich, explained to EMBO Reports, “Brains have always evolved alongside bodies that must engage with the world to survive. There’s no abstract, algorithmic void where brains simply emerge.”
Embodied Intelligence
To build truly intelligent systems, we may need to develop smarter bodies alongside smarter AI—and according to Cecilia Laschi, a leading figure in soft robotics, “smarter” often means “softer.” After years of working with rigid humanoid robots in Japan, she turned her focus to soft-bodied machines, drawing inspiration from the octopus—an animal with no skeleton whose limbs can act independently.
“In a humanoid robot, every movement must be precisely controlled,” she explained in an interview with New Atlas. “If the terrain changes, even slightly, you have to adjust the programming.”
In contrast, animals don’t need to consciously plan each step. “Our knees, for example, are naturally compliant,” she noted. “We adapt to uneven surfaces mechanically, without involving the brain.” This concept—where the body itself handles part of the cognitive load—is known as embodied intelligence.
Designing Smarter Bodies
From an engineering standpoint, embodied intelligence offers clear benefits. By shifting perception, control, and decision-making to the robot’s physical design, engineers can reduce the demands on its central processor. This makes robots more adaptable and efficient in unpredictable, real-world environments.
In a May special edition of Science Robotics, Laschi explains it this way: “Motor control isn’t handled solely by the computing system … physical behavior is also shaped mechanically by external forces acting on the body.” In other words, behavior emerges from interactions with the environment, and intelligence is acquired through experience—not hardcoded into a program.
From this perspective, intelligence isn’t simply a matter of faster processors or larger AI models—it’s rooted in interaction. A major driver of progress in this area is soft robotics, which employs materials like silicone or advanced fabrics to create more adaptable, flexible robot bodies. These soft systems can adjust to their surroundings, move fluidly, and learn in real time. Much like an octopus tentacle, a soft robotic arm can grasp, sense, and adapt on the fly—without needing to compute every action in advance.
Living Materials and Feedback: How To Build Self-Thinking Systems
To achieve soft robotics that function as seamlessly as something like an octopus tentacle, engineers are shifting away from programming every potential outcome. Instead, they’re exploring new approaches that enable machines to sense and respond dynamically. Researchers in this field are developing a concept called autonomous physical intelligence (API).
Ximin He, an Associate Professor of Materials Science and Engineering at UCLA, is at the forefront of this research. Her work involves developing soft, responsive materials—such as gels and polymers—that do more than just react to external stimuli. These materials are capable of self-regulating their movements through built-in feedback mechanisms.
“We’re trying to embed more decision-making capabilities directly into the material,” He explained in an interview with New Atlas. “If a material changes shape in response to a trigger, it can also determine how to adapt that trigger based on its deformation—essentially correcting or fine-tuning its next action.”
Embedding Intelligence in Matter
In 2018, He’s team showcased a gel that could control its own movement. Since then, they’ve demonstrated that this concept also works with other soft materials, like liquid crystal elastomers, which function effectively even in open air.
The core principle behind autonomous physical intelligence (API) is nonlinear time-lagged feedback. While conventional robots rely on external control systems to interpret sensory input and direct their actions, Ximin He’s method embeds that decision-making logic directly into the material itself.
“In robotics, it’s not enough to just sense and actuate—you also need decision-making in between,” He tells New Atlas. “We’re building that into the material structure through internal feedback mechanisms.”
She likens this approach to how living organisms function. Biological systems often use negative feedback—like how the body regulates blood sugar or how a thermostat maintains temperature—to correct imbalances. Positive feedback, in contrast, intensifies changes. Nonlinear feedback blends these two, producing stable, rhythmic patterns of behavior, such as those seen in walking or pendulum motion.
“Natural movement—like walking or swimming—is often repetitive and steady,” He explains. “With nonlinear, time-delayed feedback, soft robots can be designed to move forward, reverse, and then move forward again, all without step-by-step external commands.”
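A toy numerical example helps show how delayed, nonlinear feedback alone can produce a steady rhythm. The sketch below integrates a simple delay equation, x'(t) = -k * tanh(x(t - tau)), with Euler steps; the equation and parameters are illustrative stand-ins, not He's actual material model.

```python
# Toy delay-feedback model: x'(t) = -k * tanh(x(t - tau)), integrated with Euler steps.
# With k * tau large enough, the state settles into a steady oscillation instead of a
# fixed point, loosely mirroring the rhythmic motion described above. Parameters are
# illustrative, not drawn from any real material.
import math

dt, tau, k = 0.01, 1.0, 2.0
delay_steps = int(tau / dt)
x = [0.1] * (delay_steps + 1)        # constant initial history

for _ in range(5000):
    delayed = x[-1 - delay_steps]    # the state one delay interval in the past
    x.append(x[-1] + dt * (-k * math.tanh(delayed)))

# The tail of the trajectory keeps swinging between roughly constant extremes,
# a self-sustained rhythm produced purely by delayed, saturating feedback.
print([round(v, 2) for v in x[-5:]])
```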
This marks a significant evolution from earlier soft robots that depended entirely on outside cues to function. As He and her collaborators outlined in a recent review paper, by integrating sensing, control, and actuation directly into the material, they’re building robots capable of not just responding to their environment—but of making decisions, adapting, and acting independently.
The Future Lies In Intelligent Softness
Soft robotics is still an emerging field, but its potential is immense. Laschi highlights early, clear applications such as endoscopic surgical tools capable of simultaneously inspecting and responding to delicate human tissue, as well as rehabilitation devices that can bend or adjust in real time to meet a patient’s needs.
To progress from AI to AGI, machines may need physical forms—especially ones that are soft and adaptable. Most living beings, including humans, gain knowledge through movement, touch, trial and error, and adaptation. We navigate a messy, unpredictable world with ease—something current AIs still find challenging. Our understanding of something as simple as an apple comes not from reading a definition, but from physically engaging with it: holding, tasting, dropping, bruising, slicing, squeezing, and watching it decay.
This kind of embodied, sensory, and context-rich knowledge is difficult to instill in models that rely solely on text or images. By linking AI more directly to the real world through sensory feedback, we can bypass the constraints of language that large language models face. This opens the door for AI to form its own kind of understanding—distinct from a human one. For instance, a soft robot equipped with alternative sensory inputs—like infrared vision, low-frequency hearing, or the ability to detect diseases such as cancer through smell—could develop a unique and potentially valuable perspective on life on Earth.
“If you want to develop something like human intelligence in a machine, the machine has to be able to acquire its own experiences,” explains Giulio Sandini, Professor of Bioengineering at the University of Genoa. Like children, it must learn through interaction with the world—and that almost certainly means it needs a body.
If you’ve had the opportunity to interact with Australia’s renowned magpies, you’re aware of their remarkable intelligence. With their distinctive black and white feathers, melodious calls, and intricate social interactions, magpies exhibit a level of avian cleverness that captivates both bird enthusiasts and researchers.
However, what factors contribute to the success of these intelligent birds? Are their keen cognitive abilities inherent, predetermined by their genetic composition? Or are the smarts of magpies primarily shaped by their surroundings and social interactions?
A recent study published in Royal Society Open Science delves into the ongoing “nature versus nurture” debate, particularly concerning avian intelligence.
Larger social gatherings result in more intelligent birds
Our research centered on Western Australian magpies, which differ from their eastern counterparts by residing in large, cooperative social communities year-round. We conducted a learning ability test on young fledglings, as well as their mothers.
We crafted wooden “puzzle boards” featuring holes covered by lids of various colors. Underneath one lid per board, we concealed a delectable food reward. Each bird was individually tested to prevent them from simply mimicking their peers.
Through trial and error, the magpies had to discern which color corresponded to the food prize. Mastery of the puzzle was achieved when the birds consistently selected the rewarded color in 10 out of 12 consecutive attempts.
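For concreteness, that learning criterion can be expressed as a small helper that scans for the first trial at which a bird has made at least 10 correct choices within the last 12 trials. The example sequence below is invented, not study data.

```python
# Illustrative helper: first trial at which a bird meets the criterion of at least
# 10 correct choices within 12 consecutive trials. The example sequence is invented.
def trials_to_criterion(outcomes, window=12, required=10):
    """outcomes: list of booleans, True if the rewarded colour was chosen."""
    for end in range(window, len(outcomes) + 1):
        if sum(outcomes[end - window:end]) >= required:
            return end               # trials taken to reach criterion
    return None                      # criterion never reached


example = [False, True, False, True, True, True, False, True, True,
           True, True, True, True, True, False, True, True, True]
print(trials_to_criterion(example))  # -> 13 for this made-up sequence
```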
We evaluated fledglings at 100, 200, and 300 days after leaving the nest. While their puzzle-solving abilities improved with age, the cognitive performance of young magpies exhibited minimal correlation with the problem-solving skills of their mothers.
Associative learning array with color combinations presented to fledglings at (a) 100, (b) 200 and (c) 300 days post-fledging. Each fledgling is randomly assigned a color shade as the rewarded well at each testing period. Credit: Royal Society Open Science (2024). DOI: 10.1098/rsos.231399
Rather than genetics or maternal influence, the primary determinant of fledglings’ learning speed in selecting the correct color was the size of their social circle. Those raised in larger groups demonstrated significantly quicker mastery of the test compared to those from smaller social groups.
Fledglings residing in groups of ten or more birds required approximately a dozen attempts to consistently identify the rewarded color. Conversely, those raised in groups of three needed over 30 attempts to establish the connection between color and food.
Why Living in Larger Social Groups Enhances Cognitive Abilities
The mental demands faced by social animals, such as recognizing group members and managing relationships within a complex social structure, likely contribute to the cognitive benefits observed in larger social groups.
Magpies demonstrate the ability to recognize and remember humans, indicating their capacity for social cognition even in the wild.
Young magpies in larger groups receive more mental exercise by navigating complex social dynamics, which may enhance their problem-solving abilities.
These findings challenge the notion that intelligence is solely determined by genetic inheritance, emphasizing the role of environmental factors, particularly during early development.
While our study focused on Australian magpies, its implications could apply to other socially adept and intelligent species.
Researchers from Duke University’s biomedical engineering department have showcased a novel approach that significantly enhances the performance of machine learning models in the search for new molecular therapeutics, even when utilizing only a small portion of the available data. By employing an algorithm that actively detects gaps in datasets, the accuracy of the models can be more than doubled in certain instances.
This innovative approach has the potential to simplify the identification and classification of molecules with valuable characteristics for the development of new drugs and materials. The research was published in the journal Digital Discovery by the Royal Society of Chemistry on June 23.
Challenges of Machine Learning Algorithms in Predicting Molecular Properties
Machine learning algorithms play an increasingly crucial role in predicting the properties of small molecules, including drug candidates and compounds. However, their effectiveness is currently limited by imperfect datasets used for training, particularly due to data bias.
This bias arises when certain properties of molecules are overrepresented compared to others in the dataset, leading the algorithm to prioritize the overrepresented property and overlook other important features.
Daniel Reker, an assistant professor of biomedical engineering at Duke University, compared this bias issue to training an algorithm to differentiate between pictures of dogs and cats but providing it with an overwhelming number of dog pictures and only a few cat pictures. As a result, the algorithm becomes excessively proficient at identifying dogs and ignores other important distinctions.
Data Bias and Its Impact on Drug Discovery
This bias poses significant challenges in drug discovery, where datasets often consist of a vast majority of “ineffective” compounds, with only a small fraction showing potential usefulness. To address this, researchers resort to data subsampling, where the algorithm learns from a smaller but hopefully representative subset of the data. However, this process can lead to the loss of crucial information, impacting the accuracy of the algorithm.
The new method proposed by the Duke University biomedical engineers addresses this limitation by employing an algorithm that actively identifies gaps in datasets. By doing so, the researchers can enhance the accuracy of machine learning models, sometimes achieving more than double their original accuracy when using only a fraction of the available data. This breakthrough could greatly facilitate the identification and classification of molecules with desirable properties for drug development and other material applications.
Reker and his team set out to investigate whether active machine learning could address the longstanding issue mentioned earlier.
An Interactive Approach
In active machine learning, the algorithm can ask questions or request more information when it encounters confusion or detects data gaps, making it a highly efficient learner. While active learning is usually used to generate new data, the team wanted to explore its application to existing datasets in molecular biology and drug development.
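As a rough sketch of that idea, the snippet below runs an uncertainty-driven subsampling loop over an existing labeled dataset using scikit-learn: the model repeatedly retrains on a small subset and adds the pool points it is least sure about. This illustrates the general technique, not the Duke team's implementation; the dataset, model, number of rounds, and acquisition size are arbitrary choices.

```python
# Sketch of active subsampling over an existing labeled dataset: start from a tiny
# random subset, then repeatedly add the pool points the model is least certain about.
# Dataset, model, and budget are arbitrary illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)              # imbalanced, like many screens
rng = np.random.default_rng(0)

selected = list(rng.choice(len(X), size=20, replace=False))
pool = [i for i in range(len(X)) if i not in set(selected)]
model = RandomForestClassifier(n_estimators=100, random_state=0)

for _ in range(10):                                      # ten acquisition rounds
    model.fit(X[selected], y[selected])
    proba = model.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)                # low confidence = informative
    picks = np.argsort(-uncertainty)[:20]                # take the 20 most ambiguous
    selected += [pool[i] for i in picks]
    chosen = set(selected)
    pool = [i for i in pool if i not in chosen]

print(f"trained on {len(selected)} of {len(X)} examples")
```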
To assess the effectiveness of their active subsampling approach, the team compiled datasets containing molecules with various characteristics, such as those crossing the blood-brain barrier, inhibiting a protein linked to Alzheimer’s disease, and compounds inhibiting HIV replication. They compared their active-learning algorithm with models that learned from the complete dataset and 16 state-of-the-art subsampling strategies.
The results showed that active subsampling outperformed each of the standard subsampling strategies in identifying and predicting molecular characteristics. Moreover, it was up to 139 percent more effective than the algorithm trained on the full dataset in some cases. The model also demonstrated its ability to adapt to mistakes in the data, proving especially valuable for low-quality datasets.
Surprising Discoveries
Interestingly, the team found that the ideal amount of data needed was much lower than expected, sometimes requiring only 10% of the available data. The active-subsampling model reached a point where additional data became detrimental to performance, even within the subsample.
While the team intends to explore this inflection point further in future research, they also plan to utilize this new approach to identify potential therapeutic target molecules. They believe their work will enhance understanding of active machine learning and its resilience to data errors in various research fields.
Besides boosting machine learning performance, this approach can reduce data storage needs and costs since it works with a more refined dataset, making machine learning more accessible, reproducible, and powerful for all researchers.