Tag: Human

  • With Adaptive Motion, Robots Achieve Human-Like Dexterity from Minimal Data

    Although robotic automation is advancing quickly, most systems have difficulty adjusting their pre-trained movements to environments with objects that vary in stiffness or weight. To address this, a team of researchers in Japan has created an adaptive motion reproduction system based on Gaussian process regression.
    This image depicts the real-time transfer of a human’s motion to a robotic avatar, enabling the latter to perform a dexterous task. Image Credits: Keio University Global Research Institute (KGRI)

    Their approach models the relationship between human motion and object characteristics, allowing robots to accurately mimic human grasping actions. It can achieve this with minimal training data and handle unfamiliar objects with impressive precision and efficiency.

    Obstacles to Robotic Flexibility

    “Rapid advancements in robotic automation have the potential to transform industries and enhance our lives by taking over tasks that are dangerous, physically strenuous, or monotonous for humans.”

    Although current robots perform exceptionally well in structured settings like assembly lines, the true challenge of automation is operating in unpredictable, dynamic environments, such as cooking, elderly care, or exploration.

    Achieving this requires overcoming a major obstacle: enabling robots to sense and adapt through touch. Unlike human hands, which naturally adjust their grip to objects of varying weight, texture, or stiffness, most robotic systems still lack this essential adaptability.

    Progress in Motion Replication Technologies

    To equip machines with advanced human-like dexterity, researchers have created a variety of motion reproduction systems (MRSs). These systems focus on precisely capturing human movements and replicating them in robots through teleoperation.

    Nevertheless, MRSs often struggle when the characteristics of the object being manipulated differ from those used during the original motion recording. This reduces the flexibility of MRSs and, consequently, limits the broader usability of robots.

    To tackle this core challenge, a research team in Japan has created an innovative system capable of adaptively modeling and replicating intricate human movements.

    The study was spearheaded by Master’s student Akira Takakura from the Graduate School of Science and Technology at Keio University, with contributions from Associate Professor Takahiro Nozaki of the Department of System Design Engineering, Doctoral student Kazuki Yane, Professor Emeritus Shuichi Adachi of Keio University, and Assistant Professor Tomoya Kitamura from Tokyo University of Science, Japan.

    Enhancing Adaptability Through Gaussian Process Regression

    The team’s key innovation was moving beyond linear modeling approaches and adopting Gaussian process regression (GPR), a method capable of capturing complex nonlinear relationships even from limited training data.

    By recording human grasping motions across a variety of objects, the GPR model learned how an object’s “environmental stiffness” relates to the position and force commands applied by humans. This process effectively uncovers the underlying human motion intent, or “human stiffness,” enabling the robot to perform suitable actions with objects it has never handled before.
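    As a rough illustration of the idea (not the team’s actual model, and with entirely made-up numbers), Gaussian process regression can learn a nonlinear stiffness-to-force mapping from only a handful of demonstrations:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical training data: environmental stiffness (N/m) of a few
# demonstration objects, and the peak grasp force (N) a human applied.
# All values are illustrative, not from the paper.
stiffness = np.array([[100.0], [300.0], [600.0], [1000.0], [1500.0]])
grasp_force = np.array([1.2, 2.0, 3.1, 4.5, 5.8])

# GPR captures the nonlinear stiffness -> force relationship from few samples.
gpr = GaussianProcessRegressor(
    kernel=RBF(length_scale=500.0) + WhiteKernel(noise_level=1e-4),
    normalize_y=True,
)
gpr.fit(stiffness, grasp_force)

# Predict a force command for an unseen object, with an uncertainty estimate.
mean, std = gpr.predict(np.array([[450.0]]), return_std=True)
print(f"predicted force: {mean[0]:.2f} N (+/- {std[0]:.2f})")
```

    The uncertainty estimate is one reason GPR suits small datasets: the robot can tell how far a new object lies from anything it has seen demonstrated.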

    “Equipping robots with the ability to manipulate everyday objects is crucial for allowing them to interact naturally with their environment and respond appropriately to encountered forces,” says Dr. Nozaki.

    System Testing and Performance Results

    To validate their method, the researchers compared it with traditional MRSs, linear interpolation, and a standard imitation learning model.

    The proposed GPR-based system showed substantially improved performance in generating accurate motion commands for both interpolation and extrapolation tasks.

    For interpolation—predicting motions for objects with stiffness values within the training range—the method reduced the average root-mean-square error (RMSE) by at least 40% for position and 34% for force.

    For extrapolation—handling objects stiffer or softer than those in the training set—the approach remained highly effective, achieving a 74% reduction in position RMSE. Overall, the GPR-based method significantly outperformed all other tested approaches.
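    The comparison metric itself is simple. A minimal sketch of how RMSE and the reported percentage reductions are computed (with invented trajectories, not the paper’s data):

```python
import numpy as np

def rmse(predicted, actual):
    """Root-mean-square error between two command trajectories."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    return np.sqrt(np.mean((predicted - actual) ** 2))

# Illustrative position trajectories (mm); the numbers are made up.
human = [0.0, 1.0, 2.0, 3.0]
baseline = [0.0, 1.5, 2.8, 3.9]   # e.g. a linear-interpolation baseline
gpr_out = [0.0, 1.1, 2.2, 3.2]    # e.g. the GPR-based reproduction

e_base, e_gpr = rmse(baseline, human), rmse(gpr_out, human)
reduction = 100 * (1 - e_gpr / e_base)
print(f"RMSE reduced by {reduction:.0f}%")  # prints: RMSE reduced by 77%
```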

    Applications in Industry and the Evolution of Robotics

    By effectively modeling human–object interactions using minimal training data, this novel approach to MRSs enables the generation of precise and dexterous motion commands for a diverse array of objects. Its capacity to capture and replicate complex human skills allows robots to operate beyond rigid, pre-defined scenarios, paving the way for more sophisticated and adaptable services.

    “Because this technology requires only a small dataset and reduces the costs associated with machine learning, it has broad potential across many industries,” explains Mr. Takakura. “For example, life-support robots, which must adjust their movements to different targets each time, could greatly benefit, and companies that previously struggled to implement machine learning due to large data requirements may now find it more accessible.”


    Read the original article on: Tech Xplore

    Read more: China Unveils a Humanoid Robot with Smooth, Human-like Balance

  • China Unveils a Humanoid Robot with Smooth, Human-like Balance

    Chinese startup Matrix Robotics has officially introduced MATRIX-3, its third-generation humanoid robot, representing a significant advance in physical AI.
    MATRIX-3 moves humanoid robots beyond pre-set tasks toward adapting to and understanding the real world, ready for everyday life.

    The platform is a complete from-scratch overhaul of algorithms, hardware, and applications, moving humanoid robots beyond rigid task performance toward flexible, real-world interaction.

    Built to be safe, autonomous, and highly adaptable, MATRIX-3 integrates biomimetic perception, precise manipulation, natural human-like motion, and a new cognitive core that supports zero-shot learning.

    The company says the robot is designed to operate beyond factories, extending into commercial, healthcare, and household environments.

    Advancing Adaptive, Human-Like AI

    MATRIX-3 is framed as a significant step forward in physical artificial intelligence. Designed as a safe, autonomous, and adaptable platform, it handles complex, human-like tasks in real-world conditions.

    “Our vision with MATRIX-3 is to bring machine intelligence into human environments in the most natural and secure way possible,” said Allen Zhang, CEO of Matrix Robotics, in a statement.

    MATRIX-3 features a biomimetic interface with “skin” and touch, covered in flexible 3D fabric that houses an underlying sensor network. This design absorbs physical contact and monitors impact forces in real time, enhancing safety during close interactions with people.

    Advanced Visual–Tactile Perception for Precise and Safe Manipulation

    A multimodal perception system combines high-sensitivity fingertip sensors with advanced vision, creating a visual–tactile loop that lets MATRIX-3 safely handle fragile and flexible objects.

    MATRIX-3 also marks a major advance in mobility and manipulation. Its dexterous 27-DOF hand mimics human anatomy with lightweight, cable-driven actuation for fast, precise motion. This allows the robot to handle everyday tools, operate delicate equipment, and manipulate soft materials like fabrics.

    Whole-body movement uses a natural gait generated by a motion control model trained on human motion-capture data. Built-in linear actuators deliver high power density with minimal noise, allowing for stable, agile, and well-coordinated full-body motion.

    Matrix’s intelligence division built a new cognitive core that underpins these abilities. Its neural network enables zero-shot learning, letting MATRIX-3 perform new tasks from natural-language instructions without task-specific training.

    Using universal intelligent manipulation, the robot can autonomously plan its grasps, modulate force in real time, and navigate around obstacles through smooth hand–eye coordination.

    Real-World Performance Yet to Be Verified

    So far, videos show MATRIX-3’s capabilities, but researchers have not yet verified its real-world performance; consistently replicating its hand dexterity would mark a major robotics breakthrough.

    Matrix Robotics has launched an early access program for select industry partners, with pilot deployments of MATRIX-3 expected to begin in mid-2026.


    Read the original article on: Interesting Engineering

    Read more: Rainbow Around Nearby Dead Star Puzzles Scientists

  • Chinese Robotics Firm Develops Robot with a Lifelike Human Face

    A video from a Chinese company featuring a robot with a human-like face is going viral online, evoking strong Westworld comparisons.
    Image Credits: AheadForm / YouTube

    AheadForm, a robotics startup founded in 2024, is developing realistic humanoid robots integrated with AI to interact seamlessly with people.

    Lifelike Expressions and Emotional Interaction

    According to AheadForm, the robot’s expressive face, moving eyes, and synchronized speech allow it to display emotions and interpret human non-verbal signals, creating more natural and engaging interactions.

    The circulating video focuses solely on the robot’s face, showing it blinking, moving its eyes, and making human-like expressions.

    AheadForm says the Origin M1 uses 25 micro motors for realistic facial gestures.

    Exploring AheadForm’s Robot Lineup

    The company’s website showcases other humanoid robots in its Lan Series, as well as an Elven-themed model called ELF V1.

    “Within ten years, we might interact with robots and feel they are almost human; in 20 years, they could walk and perform tasks just like humans,” AheadForm founder Hu Yuhang told the South China Morning Post last year.


    Read the original article on: Mashable

    Read more: Hormones Travel To The Brain by “Piggybacking” On Extracellular Vesicles

  • An AI Robotic Dog With Human-Like Precision in Search-and-Rescue Missions

    Meet the robotic dog that remembers like an elephant and reacts with the instincts of a seasoned first responder.
    Like humans, the robot uses reactive and deliberative behaviors and thoughtful decision-making. It quickly responds to avoid a collision and handles high-level planning by using the custom MLLM to analyze its current view and plan how best to proceed. Image Credits: Logan Jinks/Texas A&M Engineering

    Developed by Texas A&M engineering students, this AI robotic dog can see, remember, and reason. Designed for chaotic environments, the robot could revolutionize search-and-rescue and disaster response.

    The project was led by Sandun Vitharana, a master’s student in engineering technology, and Sanjaya Mallikarachchi, a doctoral student in interdisciplinary engineering. Together, they developed a robotic dog that retains memory of where it has been and what it has observed, responds to voice commands, and uses AI and camera data to plan paths and recognize objects.

    How The Robot’s Memory System Functions

    A roboticist might characterize it as a ground-based robot equipped with a memory-centered navigation system driven by a multimodal large language model (MLLM). The system uses visual data to guide navigation, combining imaging, reasoning, and path optimization for both strategic planning and real-time response.

    Credit: Logan Jinks/Texas A&M Engineering

    Robot navigation has progressed from basic landmark-based approaches to advanced computational systems that fuse data from multiple sensors. Still, operating autonomously in unpredictable, unstructured settings—such as disaster zones or remote locations—remains a major challenge, where adaptability and efficiency are essential.

    Although robot dogs and navigation systems powered by large language models exist separately, combining a custom multimodal large language model with a visual, memory-based navigation system in a general-purpose, modular framework is a novel approach.

    “Some academic and commercial platforms have incorporated language or vision models into robotics,” Vitharana said. “But no approach has used MLLM-driven memory navigation in the structured way we propose with custom pseudocode guiding decisions.”

    Creation and Possible Uses

    Mallikarachchi and Vitharana began by exploring how an MLLM could interpret a robot camera’s visual data. Supported by the National Science Foundation, they combined this with voice commands to create an intuitive system blending vision, memory, and language.

    This AI-powered robotic dog doesn’t just follow commands — it sees, remembers and thinks. Designed to navigate chaos with precision, the robot could revolutionize search-and-rescue missions, disaster response and many other emergency operations. Image Credits: Logan Jinks/Texas A&M Engineering

    Similar to humans, the robot combines reactive and deliberative behaviors with careful decision-making. It can swiftly react to avoid obstacles while also performing high-level planning, using the custom MLLM to assess its surroundings and determine the optimal path forward.

    “Looking ahead, this type of control architecture is likely to become a standard for human-like robots,” Mallikarachchi noted.

    Its memory-driven system enables the robot to remember and reuse previously traveled routes, improving navigation efficiency by minimizing redundant exploration. This capability is especially valuable in search-and-rescue operations, particularly in unmapped regions or areas where GPS is unavailable.
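    A minimal sketch of the route-reuse idea (illustrative only, not the team’s MLLM-driven implementation): previously traveled routes are cached so known start–goal pairs can be replayed instead of re-explored.

```python
# Toy memory-driven navigation cache. Names and structure are invented
# for illustration; the actual system couples memory to an MLLM planner.

class RouteMemory:
    def __init__(self):
        self._routes = {}  # (start, goal) -> list of waypoints

    def remember(self, start, goal, waypoints):
        """Store a successfully traveled route for later reuse."""
        self._routes[(start, goal)] = waypoints

    def recall(self, start, goal):
        """Return a known route, or None if this pair must be explored."""
        return self._routes.get((start, goal))

memory = RouteMemory()
memory.remember("entrance", "room_b", ["hall", "junction", "room_b"])

route = memory.recall("entrance", "room_b")    # reused, no re-exploration
unknown = memory.recall("entrance", "room_c")  # None -> fall back to planning
print(route, unknown)
```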

    Expanding Applications Beyond Emergency Response

    The potential uses of the robot extend far beyond emergency response. Hospitals, warehouses, and other large facilities could employ it to enhance operational efficiency. Its sophisticated navigation system could also aid people with visual impairments, explore minefields, or conduct reconnaissance in dangerous environments.

    Dr. Isuru Godage, assistant professor in the Department of Engineering Technology and Industrial Distribution, provided guidance for the project.

    “The heart of our vision is deploying MLLM at the edge, giving our robotic dog immediate, high-level situational awareness and a form of emotional intelligence that was previously unattainable,” Godage said. “This enables the system to bridge the gap between humans and machines seamlessly. Our aim is to make this technology not just a tool, but a truly empathetic partner, creating the most advanced, first-responder-ready system for any unmapped environment.”

    Nuralem Abizov, Amanzhol Bektemessov, and Aidos Ibrayev from Kazakhstan’s International Engineering and Technological University contributed to developing the ROS2 infrastructure for the project. HG Chamika Wijayagrahi from Coventry University in the UK assisted with map design and the analysis of experimental results.

    Vitharana and Mallikarachchi showcased the robot and its capabilities at the recent 22nd International Conference on Ubiquitous Robots. Their research was published in the conference proceedings for the 2025 22nd International Conference on Ubiquitous Robots (UR).


    Read the original article on: Tech Xplore

    Read more: Robotic Dogs Handle Bomb Detection, Neutralization, and Disposal

  • Human Survives with Engineered Pig Liver

    Image Credits: Shutterstock

    A recent Journal of Hepatology study reports the first successful pig-to-human auxiliary liver transplant. The patient survived for 171 days, demonstrating that modified pig livers can perform key metabolic and synthetic functions in humans. The case also highlights the ongoing technical and medical challenges that limit long-term survival in such procedures.

    According to the World Health Organization, thousands die each year waiting for donor organs due to shortages. In China alone, hundreds of thousands of people develop liver failure each year, but surgeons performed only about 6,000 liver transplants in 2022. This experimental success points to a potential future solution for the critical gap between organ demand and availability.

    Genetically Modified Pig Liver Transplanted into High-Risk Human Patient

    The 71-year-old patient with hepatitis B–related cirrhosis and liver cancer was ineligible for surgery or a human liver transplant. Surgeons implanted an auxiliary liver from a genetically modified Diannan miniature pig, which had 10 specific gene edits. These modifications removed xenoantigens and added human genes to improve compatibility with the patient’s immune and coagulation systems.

    During the first month, the pig liver functioned well, producing bile and coagulation factors, but surgeons removed it on day 38 due to xenotransplantation-associated thrombotic microangiopathy (xTMA). Treatment with the complement inhibitor eculizumab and plasma exchange successfully addressed the xTMA. The patient later suffered multiple episodes of upper gastrointestinal bleeding and passed away on day 171.

    Pioneering Pig-to-Human Liver Transplant Shows Promise and Highlights Remaining Challenges

    “This case shows a genetically engineered pig liver can function in a human long-term,” said Beicheng Sun, MD, PhD, noting ongoing coagulation and immune challenges.

    Heiner Wedemeyer, MD, called the report a milestone, showing a genetically modified pig liver can function in a human while highlighting ongoing challenges. Xenotransplantation could offer new treatment options for patients with acute liver failure, acute-on-chronic liver failure, and liver cancer. A new era in transplant hepatology has begun.

    The release of this case further cements the Journal of Hepatology as the premier liver journal worldwide. “We are committed to publishing cutting-edge hepatology research,” said Vlad Ratziu, MD, PhD, Editor-in-Chief of the Journal of Hepatology, Sorbonne Université, Paris.


    Read the original article on: ScienceDaily

    Read more: Scientists Watch Flu Viruses Enter Human Cells Live

  • A Speech-Restoring Brain Implant has Won FDA Approval for Human Trials

    Paradromics, a U.S. BCI startup, is emerging as a key neural tech contender after FDA approval for a human trial of its speech-restoration implant for people with paralysis.
    The trial will investigate the Paradromics BCI for speech restoration. Image Credits: Paradromics

    The Austin-based company, with multiple FDA Breakthrough Device designations, received IDE approval for its Connexus BCI Connect-One Early Feasibility Study. It is the first company to obtain IDE clearance for a fully implantable BCI intended for speech restoration.

    Assessing Connexus BCI’s Potential to Restore Communication Abilities

    The study will test Connexus BCI’s safety and performance, aiming to help people with paralysis communicate via text or voice.

    The company says Connexus is designed for long-term clinical use and is the first high-data-rate BCI built for top performance.

    The device has a titanium-alloy casing with 400+ electrodes and onboard processing to capture brain activity. Each electrode measures under 40 microns—thinner than a human hair.

    How the Fully Implantable BCI System Operates

    The full BCI system implants under the skin, captures motor signals, and wirelessly transmits them via a chest transceiver to an AI-powered computer that converts them into text, speech, or device controls.

    “We’re thrilled to introduce this new hardware into a clinical study,” says Matt Angle, CEO of Paradromics.

    The initial trial will involve two participants receiving 7.5-mm-wide implants in the motor cortex to capture neuron activity. They will imagine speaking sentences, with signals sent to an external computer. Over time, the system will learn which neural patterns correspond to specific speech sounds, tailoring the interface to each user.
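    The pattern-to-sound learning step can be sketched with a toy nearest-centroid decoder (a stand-in for illustration only; Paradromics’ actual AI decoder is far more sophisticated, and the feature vectors below are invented):

```python
import numpy as np

# Hypothetical 2-D "neural feature" vectors recorded while a participant
# imagines two speech sounds. Each sound is learned as the centroid of
# its examples; new activity is matched to the nearest centroid.
training = {
    "ah": np.array([[1.0, 0.1], [0.9, 0.2]]),
    "ee": np.array([[0.1, 1.0], [0.2, 0.8]]),
}
centroids = {sound: vecs.mean(axis=0) for sound, vecs in training.items()}

def decode(activity):
    """Return the learned sound whose centroid is closest to the activity."""
    return min(centroids, key=lambda s: np.linalg.norm(activity - centroids[s]))

print(decode(np.array([0.95, 0.15])))  # matches the "ah" pattern
```

    Over many sessions, adding each user’s own recordings to the training set is what tailors the interface to that individual.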

    First BCI Trial Aiming for Real-Time Personalized Synthetic Speech

    This marks the first BCI study focused on generating a synthesized voice in real time, using past recordings of the participants’ speech as a basis.

    Researchers will also test whether the implant can pick up neural signals linked to imagined hand movements, which could enable cursor control.

    If early results are promising, the trial may expand to include 10 participants, with two of them receiving dual implants for stronger signal acquisition.

    “It’s an exciting step,” says Mariska Vansteensel, a BCI expert at the University Medical Center Utrecht. “A fully implantable system is essential for the technology to advance toward real-world clinical use.”


    Read the original article on: New Atlas

    Read more: Chinese Firm Unveils Highly Agile Life-sized Robotic Hand

  • A Multi-Function Mimicking Neuron Moves Robots Closer to Human-Like Abilities

    Scientists have developed an artificial neuron that can imitate multiple brain regions, bringing us closer to robots that perceive and react to their surroundings much like humans.
    An electronic chip used to create an artificial transneuron – a tiny electronic circuit that replicates how brain cells pass signals between one another by generating small electrical pulses. Image Credits: Loughborough University

    The Power and Limits of Neuromorphic Neurons

    Artificial neurons—small electronic circuits that mimic how brain cells interact—are central to neuromorphic computing, which seeks to give machines human-like intelligence.

    However, current artificial neurons are limited to specific tasks, requiring thousands to perform even simple brain functions. This makes the process expensive and energy-intensive compared with the brain’s natural efficiency.

    Now, brain-like intelligence might be within reach, thanks to an international team led by Loughborough University, collaborating with researchers from the Salk Institute and the University of Southern California.

    In a recent paper, the researchers report that their single artificial neuron, called a “transneuron,” can take on the roles of brain cells involved in vision, planning, and movement—demonstrating a flexibility once considered unique to the human brain.

    Recreating the Human Brain with Transneurons

    The study, titled “Artificial transneurons emulate neuronal activity in different areas of brain cortex,” was published in Nature Communications.

    “Is the human brain an elusive device beyond our reach, or could we one day recreate it with electronics—and perhaps even surpass it?” asks Professor Sergey Saveliev, a theoretical physics expert at Loughborough University and the study’s corresponding author.

    “Our work moves us closer to answering this question. We’ve demonstrated that a single artificial neuron can be adjusted to mimic the behavior of visual, motor, and pre-motor neurons.

    “This breakthrough could lead to electronic chips capable of executing complex, brain-like tasks—such as processing visual data and controlling movement—using only a few artificial neurons. In the long run, this brings us nearer to creating more human-like robots.”

    Electronic chips used to create artificial transneurons – tiny electronic circuits that replicate how brain cells pass signals between one another by generating small electrical pulses. They are pictured in front of the experimental setup used to capture how they responded to electrical input. Image Credits: Loughborough University

    Study Outcomes

    The researchers evaluated how closely their device replicates brain activity by sending electrical signals into the transneuron and measuring its output pulses. These were then compared to the electrical signals used by real brain cells, recorded from macaque monkeys.

    They concentrated on three brain regions: one responsible for vision, another for movement control, and a third involved in preparing actions. Each region generates a distinct pulse pattern—sometimes steady, sometimes irregular, and sometimes rapid bursts.

    Impressively, by fine-tuning the device’s electrical settings, a single transneuron was able to mimic all three pulse patterns with 70–100% accuracy.

    “Our brains are extremely efficient, capable of handling complex tasks like face recognition or movement control while consuming very little energy,” says Professor Alexander Balanov, Professor of Physics at Loughborough University.

    “By adjusting the electric circuit settings of our devices, such as altering the voltage, a single unit can mimic different types of brain neurons. Our artificial neurons also respond effectively to environmental changes, like pressure and temperature, which could enable artificial sensory systems.

    “This technology could pave the way for future computers that are faster and more energy-efficient than today’s, as well as robots that can adapt their behavior in real time, much like living organisms.”

    Transneurons Compute Like Neurons

    Importantly, the researchers showed that the transneuron does more than mimic neuron behavior—it actually performs computations like real neurons.

    By altering the electrical signals fed into the device, the transneuron adjusted its pulse frequency, similar to how brain cells change their activity in response to incoming signals.

    When given two signals simultaneously, the transneuron reacted differently depending on whether the signals were synchronized or not, indicating it can distinguish between inputs—something that typically requires several artificial neurons working in concert.
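    A toy leaky integrate-and-fire model (an illustrative assumption, not the transneuron’s actual memristive physics) shows why coincident inputs can be told apart from staggered ones: only pulses that arrive together push the unit over threshold.

```python
# Toy leaky integrate-and-fire neuron, for illustration only. Two weak
# pulse trains drive output spikes only when their pulses coincide.

def count_spikes(input_a, input_b, threshold=1.5, leak=0.5):
    v, spikes = 0.0, 0
    for a, b in zip(input_a, input_b):
        v = v * leak + a + b      # leaky integration of both inputs
        if v >= threshold:        # threshold crossing -> output pulse
            spikes += 1
            v = 0.0               # reset after firing
    return spikes

pulse = [1.0, 0.0, 0.0, 0.0] * 5      # weak periodic pulse train
shifted = [0.0, 0.0, 1.0, 0.0] * 5    # the same train, delayed

sync_spikes = count_spikes(pulse, pulse)      # coincident inputs
async_spikes = count_spikes(pulse, shifted)   # staggered inputs
print(sync_spikes, async_spikes)  # -> 5 0
```

    Synchronized pulses sum to 2.0 and fire on every cycle; staggered pulses decay before the partner pulse arrives and never reach threshold.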

    Mechanism of Artificial Transneurons

    Like other artificial neurons, the transneuron is a tiny electronic chip that imitates how brain cells communicate by generating small electrical pulses.

    Its brain-like adaptability comes from a newly identified component called a memristor—a nanoscale device that physically changes when electricity passes through it, allowing it to “remember” past signals and adjust its responses, similar to how neurons learn.

    As electricity flows through the transneuron, silver atoms within the memristor shift to form and break microscopic bridges, creating the electrical pulses.

    Environmental factors—such as temperature, voltage, and resistance—affect the memristor, which in turn alters the pulse behavior.
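    The “remembering” behavior can be caricatured with the textbook linear-drift memristor model (a deliberate simplification: the device’s silver-filament dynamics are richer, and every constant below is invented):

```python
# Textbook linear-drift memristor sketch, for illustration only.
# Resistance depends on an internal state w that integrates past current,
# so the device "remembers" signals it has carried.

R_ON, R_OFF = 100.0, 16000.0   # ohms: fully bridged vs. broken filament
MU = 1e-2                      # state mobility (arbitrary units)

def step(w, current, dt=1e-3):
    """Advance the internal state w (clamped to [0, 1]) and return it
    with the resulting resistance."""
    w = min(1.0, max(0.0, w + MU * current * dt))
    resistance = R_ON * w + R_OFF * (1.0 - w)
    return w, resistance

w = 0.1
_, r_before = step(w, 0.0)       # no current: resistance unchanged
for _ in range(1000):            # sustained positive current...
    w, r_after = step(w, 50.0)   # ...grows the bridge, lowering resistance

print(r_before, r_after)         # resistance drops as the state drifts
```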

    From left to right, Professor Alexander Balanov, Professor Sergey Saveliev, and Dr Pavel Borisov, of the Loughborough University Department of Physics. The scientists are part of a team of international researchers that have created a new artificial neuron that can mimic different parts of the brain – which could be the key to more human-like robotics. Image Credits: Loughborough University

    This is how the researchers can adjust the transneuron to mimic different brain regions without relying on software.

    “Most of today’s AI runs on computers that process information very differently from the brain,” explains Dr. Sergei Gepshtein, an expert in visual perception and visually guided behavior at the Salk Institute.

    “Laptops and phones handle data with rigid, step-by-step logic, whereas the brain operates through vast networks of neurons firing in irregular, often unpredictable patterns.

    “Our transneuron brings us closer to hardware that doesn’t just simulate brain-like activity in software—it functions in a genuinely brain-like manner.”

    Designing a Robotic Nervous System

    The researchers’ next goal is to develop a “brain cortex on a chip” by linking multiple transneurons into networks capable of perception, learning, and control.

    They believe this approach could transform robotics, laying the groundwork for a robotic nervous system that allows machines to sense, adapt, and respond to their environment like living organisms.

    “This represents a small but important step toward robots with artificial nervous systems,” says Professor Joshua Yang, an expert in electrical and computer engineering at the University of Southern California.

    “Such systems could enable robots to learn more efficiently, using less energy, time, and data. They could also support continuous, lifelong learning, adapting seamlessly to new experiences—capabilities that remain challenging for today’s AI systems.”

    Potential Uses of Transneurons in the Brain

    Dr. Pavel Borisov, an experimental physicist at Loughborough University, suggests the research could also enhance our understanding of the human brain.

    “This brings us a step closer to recreating at least a small part of the brain in electronic form,” he said.

    “Devices like those described in this study could one day interact with the human central nervous system, potentially replacing or supplementing certain brain regions.

    “Additionally, these artificial neurons provide a sandbox for neuroscientists to explore how different brain areas communicate and to gain deeper insights into the formation of consciousness.”


    Read the original article on: Tech Xplore

    Read more: A Microrobot Moves Through the Bloodstream to Deliver Medication Precisely

  • Researchers Found that Human Emotions may Influence Water Crystal Formation

    Researchers suggest that emotions such as joy, love, and anger may extend beyond the human body, potentially leaving traces on the structure of water itself. According to specialists, water exposed to different emotional inputs appears to show distinct crystal patterns, displaying orderly or chaotic shapes depending on the type of “vibration” it receives.
    Image Credits: Jetss

    The idea gained attention when scientists compared water samples influenced by positive expressions like “gratitude” and “hope” with others exposed to negative words. The contrast was striking: crystals linked to uplifting messages formed balanced, aesthetically pleasing shapes, while those associated with negative emotions broke into irregular and messy patterns.

    Emotional States and Their Potential Influence on the Body and Environment

    These findings have sparked debate about whether our emotional states could affect not only our surroundings but also our own bodies, given that humans are largely made of water. The research hints that nurturing positive feelings might produce physical effects that are both real and unexpected.

    Although many remain doubtful of these claims, the experiments have encouraged new avenues of inquiry. If emotions can alter microscopic structures, some argue, what broader influence might they have on the way we experience and shape the world around us?


    Read the original article on: Jetss

    Read more: A South Korean Innovation is Literally Reinventing the Wheel

  • Scientists Were Amazed to Find a Marine Species that Can Destroy a Major Human Threat

    For decades, plastic has posed one of the greatest environmental threats to the planet. It builds up in oceans, harms marine life, and can take hundreds of years to break down—particularly durable varieties like polyurethane. Now, however, scientists have been stunned by a new finding: a marine species capable of degrading this very type of plastic, offering fresh hope in the fight against ocean pollution.
    Image Credits: Xataka

    Deep-Sea Discovery Offers Hope Against Plastic Pollution

    The United Nations Environment Programme (UNEP) reports that people produce about 400 million tons of plastic worldwide each year, and a large share of it flows into the oceans, threatening marine life and entire ecosystems. Yet, a remarkable deep-sea discovery may offer a way to ease this seemingly unsolvable environmental crisis.

    A team from the University of Hawaii has discovered marine fungi that can break down polyurethane. Their study centered on fungi collected from sand, seaweed, coral, and sponges along Hawaii’s coast. The researchers placed these fungi in dishes containing polyurethane to track how quickly they consumed the plastic. The outcome was striking: over 60% of the ocean-derived fungi were able to digest it.


    Read the original article on: Terra

    Read more: Human Skin Cells Have Been Converted into Fertilizable Eggs for The First Time

  • Harvard Reverses Aging in Monkeys; Human Trials Coming Soon

    In a Moonshots podcast interview, Harvard Medical School genetics professor David Sinclair discussed a once-unthinkable breakthrough — restoring youth to animal cells and tissues. He said human clinical trials are set to start soon.
    Image Credits: Freepik

    In the discussion, the scientist noted that experiments on mice and green monkeys showed strong potential for substantially reversing aging.

    “We’ve successfully reversed aging in mice and monkeys, and human trials will start next year,” Sinclair stated.

    AI and Gene Therapy

    He said AI and gene therapies drive this breakthrough, promising major gains in health and longevity. Sinclair aims to make these treatments widely available, calling them “a turning point in preventative and regenerative medicine.”

    In his conversation with podcast host Peter Diamandis, Sinclair acknowledged that the concept of “reprogramming” adult cells to regain youthful traits was initially met with doubt. However, he and his team succeeded in selectively activating specific genes known as Yamanaka factors, effectively rejuvenating tissues.

    Additionally, a 2020 study demonstrated that gene therapy could reactivate genes typically found only in embryos, enabling the treatment of conditions like blindness caused by optic nerve damage.

    “This isn’t science fiction—we do this regularly in my lab,” Sinclair remarked.

    Rejuvenation Results

    Animals involved in the research showed clear reductions in biological age and significant physical recovery. In mice, just four weeks of treatment with a molecular cocktail produced signs of rejuvenation, while monkeys exhibited noticeable optic nerve regeneration.

    “We can actually track optic nerve rejuvenation, and the data shows it aging in reverse,” he said.

    The team also discovered that aging is largely driven by changes in the epigenome, rather than just cellular deterioration:

    “The epigenome is the real issue because aging stems from the loss of instructions that tell cells how to function,” Sinclair explained.

    Their work demonstrated that these instructions could be restored—without cloning. “We found a safe way to reset the epigenome without needing to be reborn,” he said.

    Following the animal studies, the team is preparing to move on to human trials. Sinclair stated that testing will begin next year, initially targeting individuals with eye conditions like glaucoma and ischemic optic neuropathy, as the eye is easily accessible and allows for clear measurement of outcomes.


    Read the original article on: O Globo

    Read more: Ice Shown to Break Down Iron More Quickly Than Water