Author: Marcílio Santos

  • Primitive Cell Structures May Have Formed in the Lakes on Saturn’s Moon Titan

    Image Credit: Pixabay

    A new study suggests that when NASA’s upcoming Dragonfly mission glides over the lakes of Saturn’s moon Titan, it might encounter a frothy substance resembling the earliest traces of life on Earth.

    Titan’s Methane Cycle Mirrors Earth’s Water Cycle, Fueling Speculation About Life

    Titan shares some surprising similarities with our planet. Its surface features vast lakes and seas, not of water, but of liquid hydrocarbons like methane and ethane. These liquids follow a cycle much like Earth’s water cycle—evaporating into clouds and returning as rain.

    Since Earth’s water cycle plays a vital role in sustaining life, scientists believe that Titan’s own version of this cycle could similarly support the formation of life.

    Titan’s methane has a meteorological cycle of evaporation, cloud formation, and rainfall, similar to Earth’s water cycle. (NASA/ESA)

    Study Explores Formation of Cell-Like Vesicles on Titan, Hinting at Building Blocks of Life

    A recent study published in the International Journal of Astrobiology investigates the potential for primitive cell-like structures, known as vesicles, to form on Saturn’s moon Titan. These vesicles—tiny bubbles made of fatty molecules—encapsulate a gooey interior within a membrane, resembling the basic architecture of a living cell.

    “The presence of vesicles on Titan would signify a step toward greater complexity and organization, which are key ingredients for life to emerge,” says Conor Nixon, a planetary scientist at NASA’s Goddard Space Flight Center.

    Nixon, along with physical chemist Christian Mayer from the University of Duisburg-Essen in Germany, expanded on earlier theories about how life on Earth may have originated from inorganic material interacting with turbulent environments like splashes and storms.

    Methane Rains on Titan May Deliver Amphiphilic Molecules That Spark Vesicle Formation

    According to their hypothesis, vesicle formation on Titan could result from a unique chain of events driven by its active liquid cycle. It would begin with methane rainfall, delivering atmospheric molecules—called amphiphiles—to the surface of Titan’s lakes. These amphiphilic molecules have dual characteristics: one end that bonds with liquids and another that bonds with fats, making them ideal for forming membrane-like structures.

    (1) Methane lakes and seas on Titan’s surface become coated with a film of amphiphiles. (2) Methane raindrops splash the lake surface. (3) Splashes create a mist of droplets coated in the same film. (4) Droplets settle back onto the lake and sink, becoming coated in a bilayer which becomes a vesicle. (Mayer & Nixon, Int. J. Astrobio., 2025)

    Nixon and Mayer explain that “over time, stable vesicles will accumulate, along with the stabilizing amphiphilic molecules that are temporarily shielded from breaking down.”

    Selective Vesicle Survival on Titan May Drive Evolution Toward Greater Complexity

    They propose that, through a gradual process of compositional selection, the most resilient vesicles will thrive, while the less stable ones will fade away—essentially creating an evolutionary path toward greater complexity and function.

    If such a process is occurring on Titan, it could offer valuable insight into how life can emerge from non-living chemistry.

    To test this idea, researchers could search for amphiphilic compounds floating in Titan’s atmosphere using techniques like laser analysis, light scattering, and surface-enhanced Raman spectroscopy—potentially uncovering signs that the building blocks of life are present.

    Unfortunately, NASA’s Dragonfly mission—scheduled to reach Titan in 2034—won’t be equipped with the tools needed to directly detect vesicles. However, it will perform chemical analyses to investigate whether complex chemistry is currently taking place or has occurred in the past. These findings could help answer a profound question: is life a common outcome in suitable environments, or is Earth an exceptional case?


    Read the original article on: Science Alert

    Read more: Private Japanese Moon Lander Sets Course for Landing in the Moon’s Remote Northern Region

  • SS Innovations Completes Intercontinental Robotic Heart Surgery with SSi Mantra

    Image Credits: SS Innovations’ CEO Dr. Srivastava performed the surgery 4,000 miles away from the patient in Strasbourg, France. | Source: SS Innovations International

    SS Innovations announced that CEO Dr. Sudhir Srivastava performed an intercontinental robotic cardiac telesurgery with the SSi Mantra 3 on July 19, 2025.

    Remote Atrial Septal Defect Closure Performed Between India and France During SRS Meeting

    The atrial septal defect closure was done at SAIMS, Indore, with Dr. Srivastava remotely operating the robot from IRCAD, Strasbourg, over 4,000 miles away. The surgery took place during the Society of Robotic Surgery (SRS) Annual Meeting.

    Dr. Srivastava thanked SRS Chairman Dr. Vipul Patel, the IRCAD India team, and SS Innovations, noting the achievement showcases the SSi Mantra 3’s capabilities and expands cardiac care access worldwide.

    On-site support in Indore was provided by:

    • Dr. Lalit Malik, Chief of Cardiac Surgery at Manipal Hospital, Jaipur
    • Dr. Ram Krishna Shukla, Cardiologist at SAIMS, Indore
    • Dr. Bipin Arya, Anesthesiologist at SAIMS, Indore
    • The teams from SS Innovations and SAIMS

    SS Innovations Reports Flawless Performance in Robotic Telesurgery

    SS Innovations reported the surgery went smoothly, with no technical issues, minimal latency, and precise robotic control.

    SS Innovations in Fort Lauderdale aims to make robotic surgery more affordable and accessible worldwide. Its product lineup includes the proprietary SSi Mantra surgical system and a full set of SSi Mudra instruments.

    Over the past year, the company has made significant strides in enhancing its teleoperation technology. Surgeons have performed 35 telesurgeries, including 10 cardiac, and 250 total cardiac surgeries with the SSi Mantra system.

    SS Innovations Achieves Bariatric Telesurgery Milestone Across 560 Miles

    SS Innovations recently completed a robotic bariatric telesurgery linking its Gurugram HQ with a surgery center 560 miles away in Indore.

    The operation included two One-Anastomosis Gastric Bypass (OAGB) procedures, a bariatric surgery that reduces stomach size and reroutes digestion for lasting weight loss and better metabolic health.

    Image Credits: A breakdown of the telesurgery demo at the Society of Robotic Surgery. | Credit: Intuitive

    Last week at the SRS conference, Intuitive Surgical showcased transatlantic telesurgery, connecting surgeons in Georgia and France via the da Vinci 5 system.

    Intuitive’s Iman Jeddi to Highlight da Vinci 5 at RoboBusiness Keynote

    Iman Jeddi, Intuitive’s Senior VP, will deliver the closing keynote at RoboBusiness, held October 15–16, highlighting the da Vinci 5 platform.

    In October 2024, MicroPort performed six robot-assisted prostatectomies in Angola, with two done remotely. These surgeries marked the first telesurgical procedures in sub-Saharan Africa using MicroPort’s Toumai system.

    Jim Hirsch, QNX Vice President, told The Robot Report that robotic telesurgery is a key focus for the company.


    Read the original article on: The Robot Report

    Read more: Report Shows That Close to 75% of American Teens Have Engaged with AI Companions

  • Why Cartken Shifted its Focus from Last-mile Delivery to Industrial Automation

    Image Credits: Techcrunch

    Autonomous robotics startup Cartken, best known for its four-wheeled delivery robots used on college campuses and in Tokyo’s busy streets, is now turning its attention to industrial applications.

    CEO Christian Bersch told TechCrunch that industrial use was always considered, and growing business interest pushed the team to explore it further.

    Growing Demand for Industrial Applications Drives New Focus

    “We found strong demand for industrial applications,” said co-founder Bersch. “In some cases, there’s even more immediate value in helping companies improve how they move materials or manage production workflows.”

    In 2023, Cartken secured its first major industrial client: German manufacturer ZF Lifetec. Initially, ZF used the company’s existing delivery model—the Cartken Courier, a cooler-sized robot capable of carrying up to 44 pounds.

    “Our food delivery robot started transporting production samples, and it quickly became our most active unit,” said CEO Christian Bersch. “That’s when we saw real demand and shifted focus to the industrial market.”

    Expanding Sidewalk Delivery with Major Partners

    At the time, Cartken was still actively growing its sidewalk delivery business, with partnerships in place with Uber Eats and Grubhub for last-mile services on U.S. college campuses and in Japan.

    But the strong early results with ZF inspired co-founders Jake Stelman, Jonas Witt, and Anjali Naik to broaden their approach. According to Bersch, adapting their robots from food delivery to industrial tasks wasn’t a significant hurdle. The robots were trained on delivery data and built for varied environments and weather.

    This allows the robots to operate seamlessly between indoor and outdoor environments. Their ability to navigate around obstacles comes from data gathered during food deliveries on the busy streets of Tokyo.

    Image Credits: Cartken

    Cartken, backed by over $20 million from investors like 468 Capital and Vela Partners, is expanding its robot lineup with the Cartken Hauler, a larger model carrying up to 660 pounds. It also launched the Cartken Runner for indoor deliveries and is developing a robotic system similar to a forklift.

    “We designed a navigation system that adapts to different robot sizes,” said CEO Christian Bersch. “All the AI, machine learning, and training we’ve done transfers directly to the new models.”

    Strengthening Ties with Mitsubishi for Expanded Deployment

    Cartken also recently expanded its four-year partnership with Mitsubishi. The automaker helped the startup secure Tokyo street deployment certifications.

    Melco Mobility Solutions, a Mitsubishi subsidiary, recently announced plans to purchase nearly 100 Cartken Hauler robots for deployment in industrial sites across Japan.

    “We’re seeing strong interest from various industries,” said CEO Christian Bersch. “Many still move materials manually or with small equipment—that’s the need we’re addressing.”

    Cartken will keep its food delivery operations but won’t expand them, Bersch said. However, those existing routes are still used to test and refine new features.


    Read the original article on: Techcrunch

    Read more: SS Innovations Has Deployed More Than 100 Surgical Robots

  • Now Anyone Can Train a Robot

    Image Credits: Techxplore

    Teaching robots new skills once demanded coding expertise, but a new wave of robots may soon learn from nearly anyone. Engineers are creating robotic assistants that can “learn from demonstration,” a more intuitive training method where a person guides the robot through a task in one of three ways: by using a remote control like a joystick, physically moving the robot, or performing the task while the robot observes and imitates.

    Typically, learning-by-demonstration robots rely on just one of these training modes. However, MIT engineers have introduced a versatile, three-in-one training interface that enables robots to learn using any of the three methods.

    A Flexible, Hands-On Tool for Teaching Robots Any Task

    This innovative tool is a handheld, sensor-equipped device that attaches to standard collaborative robotic arms. It lets users teach a robot by remotely controlling it, guiding it manually, or demonstrating the task themselves—whichever approach is most convenient or effective for the situation.

    The MIT team evaluated their new device, dubbed a “versatile demonstration interface,” using a standard collaborative robotic arm. They enlisted volunteers with manufacturing experience to use the interface to carry out two hands-on tasks commonly performed in industrial settings.

    Researchers say the new interface offers greater training flexibility, allowing more people to teach robots. It could also allow robots to acquire a more diverse set of skills. For example, one person could remotely train a robot to handle hazardous materials, another could guide it through packing by hand, and a third could demonstrate logo drawing for the robot to learn.

    “Our aim is to build smart, skilled robotic teammates that work seamlessly with humans on complex tasks,” says Mike Hagenow, a postdoctoral researcher at MIT. “Flexible training tools like this could be just as useful in homes or caregiving settings—not just on factory floors.”

    Hagenow is set to present a paper on the new interface at the IEEE Intelligent Robots and Systems (IROS) conference this October. The paper is also available on the arXiv preprint server.

    Collaborative Effort by Experts Across MIT Departments

    The research was co-authored by MIT colleagues Dimosthenis Kontogiorgos, a postdoc at the Computer Science and Artificial Intelligence Lab (CSAIL); Yanwei Wang, who recently completed a Ph.D. in electrical engineering and computer science; and Julie Shah, professor and head of the Department of Aeronautics and Astronautics.

    At MIT, Julie Shah’s research group focuses on designing robots that can collaborate with humans in various environments—including workplaces, hospitals, and homes. A core aspect of her work is creating systems that allow people to teach robots new tasks or skills in real time, directly within their working environments.

    For example, such systems could let a factory worker make quick, intuitive adjustments to a robot’s movements on the spot—without needing to halt operations and reprogram its software, a task that may require expertise the worker doesn’t have.

    This latest project builds on a growing robot learning approach known as “learning from demonstration” (LfD), which emphasizes training robots through more natural and human-friendly interactions.

    Three Core Approaches to Robot Training Identified

    As they reviewed existing LfD research, Shah and postdoc Mike Hagenow identified three primary training approaches: teleoperation, kinesthetic teaching, and natural demonstration. Each method has strengths that may suit different users or tasks.

    This led them to ask whether a single tool could combine all three approaches, making it easier for a broader range of people to teach robots a wider array of tasks.

    If we can integrate these three ways of interacting with a robot, we might unlock new possibilities for both people and applications,” Hagenow explains.

    With this goal in mind, the team developed a new tool called the Versatile Demonstration Interface (VDI). This handheld device mounts to a standard collaborative robot arm and includes a camera, tracking markers, and force sensors to monitor movement and applied pressure.

    Once attached to the robot, the VDI allows for full remote control of the robot, with the onboard camera capturing its movements as training data the robot can later use to learn the task independently. Alternatively, a person can physically guide the robot through a task with the VDI in place.

    Detachable Design Enables Robots to Learn by Watching Human Demonstrations

    The VDI can also be removed from the robot and used independently by a person to perform the task manually. As the user performs the task, the camera records the motion, and once the VDI is reattached, the robot can replicate the actions based on what it observed.

    To evaluate the device’s ease of use, the researchers took the VDI and a robotic arm to a local innovation center where manufacturing professionals explore technologies that can enhance factory operations.

    They conducted an experiment where volunteers used the VDI in all three training modes to teach the robot two standard manufacturing tasks: press-fitting and molding. In the press-fitting task, participants trained the robot to press pegs into holes—representing a common fastening procedure on factory floors.

    In the molding task, a volunteer trained the robot to press and roll a soft, dough-like material evenly around a central rod—a process similar to certain thermomolding techniques.

    Testing All Three Training Modes on Real-World Manufacturing Tasks

    Each participant performed both the press-fitting and molding tasks using all three training methods: joystick teleoperation, physical guidance, and direct demonstration with the detached VDI. During the final method, the robot recorded the interface’s force and movement data for learning purposes.

    The researchers observed that volunteers generally favored the natural demonstration method over teleoperation and kinesthetic guidance. However, the manufacturing professionals also identified use cases where each approach could be particularly effective. For example, teleoperation may be ideal when training a robot to manage dangerous or toxic materials.

    Kinesthetic teaching could be advantageous when guiding robots in tasks involving large or heavy items that require precise repositioning. Natural demonstration, on the other hand, is especially useful for showing tasks that demand precision and delicate handling.

    “We envision this demonstration interface being used in dynamic manufacturing settings, where a single robot might assist with a variety of tasks—each better taught through a different method,” says Hagenow. He plans to refine the interface based on user feedback and use the updated version in future robot training experiments.

    “This study shows that we can make collaborative robots more adaptable by designing interfaces that expand how end-users interact with them during the teaching process,” he adds.


    Read the original article on: Techxplore

    Read more: Can AI Think Like Humans? New Research Reveals Behavior-Predicting Model

  • Samsung Is Focused on Regaining Its Competitive Edge — Tri-Fold Phone Expected by Year’s End

    Image Credits: Gizmodo

    Despite rumors and a supposed leak, Samsung didn’t unveil a tri-fold phone at last week’s Unpacked event, where it showcased the sleek Galaxy Z Fold 7, Z Flip 7 with a full cover screen, and the stylish Galaxy Watch 8 lineup. Still, the tri-fold device is real — and it’s on the way.

    Samsung Confirms Tri-Fold Phone Is in the Works, Aiming for Year-End Launch

    TM Roh, head of Samsung’s Device Experience Division, told the Korea Times they’re “working hard” on a tri-fold phone set to launch by year’s end. While the name hasn’t been finalized—though rumors suggest “Galaxy Fold G”—Roh said the device is nearly complete. “We’re now focused on refining the product and its usability,” he added.

    Android Authority released a video along with multiple images and animations, uncovered from the latest version of Samsung’s One UI 8 software, which it believes reveal Samsung’s upcoming tri-fold phone.

    My Galaxy Z Fold 7 review drops this week on Gizmodo, but it’s clear Samsung has finally nailed the book-style foldable it aimed for in 2019. It’s nearly as slim as the S25 Ultra, with the performance and cameras you’d expect from a $2,000 phone. With the Fold series now close to perfected, Samsung is ready to explore more ambitious designs.

    Samsung Eyes Larger Leap with Tri-Fold Design to Rival Huawei’s Flagship

    Enter the tri-fold phone — featuring two folds for an even larger, tablet-like display. This would be Samsung’s first tri-fold phone, set to compete with Huawei’s Mate XT Ultimate—which isn’t available in the U.S. due to trade restrictions. Tri-folds could restore Samsung’s innovation edge, but expect a higher price than the Z Fold 7. Huawei’s model starts at €3,499 (around $4,000) and offers three modes: phone, tablet, and near-laptop display.

    Launching a tri-fold phone won’t come without hurdles. Multiple folds offer a bigger screen but require larger batteries and add bulk. Durability could be another issue—more screen surface means more potential for damage, particularly around the folding edges. Still, it’s encouraging to see Samsung rekindle its innovation after years of iPhone-style designs. A bold move could finally set it apart from rivals.


    Read the original article on: Gizmodo

    Read more: Automated System Swiftly Evaluates New Material Features

  • NASA Selects Equipment For The Artemis Moon Exploration Rover

    Image Credits: Pixabay

    NASA has selected three instruments for a lunar mission: engineers will mount two on a Lunar Terrain Vehicle (LTV) and reserve the third for a future orbital mission.

    NASA’s Lunar Terrain Vehicle (LTV) is a key element of the Artemis program, marking the return of crewed surface mobility to the Moon for the first time in over five decades. Built to carry two astronauts or operate remotely without a crew, the LTV will support a wide range of scientific and exploratory missions across expansive lunar regions.

    “The Artemis Lunar Terrain Vehicle will carry humanity farther across the Moon than ever before, ushering in a new era of scientific discovery and exploration,” said Nicky Fox, associate administrator of NASA’s Science Mission Directorate.

    “By integrating the strengths of both human and robotic exploration, the selected science instruments aboard the LTV will uncover insights about the Moon that not only deepen our understanding of Earth’s closest celestial neighbor but also enhance astronaut safety and spacecraft performance on the lunar surface.”

    AIRES to Unveil Volatile Distributions at the Lunar South Pole

    One of these instruments, the Artemis Infrared Reflectance and Emission Spectrometer (AIRES), will analyze, measure, and map lunar minerals and volatile substances (materials that easily vaporize, such as water, ammonia, or carbon dioxide). AIRES will gather spectral data superimposed on visible imagery, capturing both detailed targets and wide-area views to reveal how these materials are distributed across the Moon’s south polar region. The instrument is led by principal investigator Phil Christensen of Arizona State University in Tempe.

    The Lunar Microwave Active-Passive Spectrometer (L-MAPS) is designed to probe beneath the Moon’s surface and identify potential ice deposits. Equipped with both a spectrometer and ground-penetrating radar, the instrument will assess temperature, density, and underground structures at depths of over 131 feet (40 meters). The L-MAPS team is led by Matthew Siegler of the University of Hawaii at Manoa.

    Together, data from L-MAPS and AIRES will provide a comprehensive view of the Moon’s surface and subsurface composition, offering vital insights for future human exploration. These instruments will also shed light on the Moon’s geological history and help identify its resources such as mineral content, possible ice locations, and long-term surface changes.

    UCIS-Moon to Deliver Orbital Insights on Lunar Geology and Volatiles

    Alongside the instruments chosen for the Lunar Terrain Vehicle, NASA has also selected the Ultra-Compact Imaging Spectrometer for the Moon (UCIS-Moon) for a future orbital mission. This instrument will offer a broader perspective to complement findings from the LTV. From its vantage point in orbit, UCIS-Moon will map the Moon’s geology and volatile compounds, as well as monitor how these volatiles are influenced by human activity. Additionally, the spectrometer will aid in pinpointing scientifically significant sites for astronaut sample collection, while its wide-field imagery will provide essential context for understanding the locations of those samples.

    The UCIS-Moon instrument will deliver the highest spatial resolution data yet on the Moon’s surface water, mineral composition, and thermophysical characteristics. Led by Abigail Fraeman from NASA’s Jet Propulsion Laboratory in Southern California, the UCIS-Moon team aims to provide detailed insights into the Moon’s composition.

    “These three instruments together will make major strides in uncovering which minerals and volatile substances exist on and beneath the lunar surface,” said Joel Kearns, deputy associate administrator for Exploration in NASA’s Science Mission Directorate.

    “With UCIS-Moon in orbit and the other instruments aboard the LTV, we’ll be able to study the lunar surface not only where astronauts land and work, but across the entire south polar region, opening up exciting possibilities for long-term science and exploration.”

    NASA Completes Design Reviews with LTV Vendors to Validate Rover Concepts

    In preparation for selecting these instruments, NASA collaborated with all three Lunar Terrain Vehicle (LTV) vendors, Intuitive Machines, Lunar Outpost, and Venturi Astrolab, to complete preliminary design reviews. These reviews confirm that each company’s initial rover design meets NASA’s technical requirements, incorporates the right design choices, identifies key system interfaces, and outlines appropriate verification strategies.

    NASA will assess the task order proposals submitted by the LTV vendors and expects to select a provider for the demonstration mission by the end of 2025.

    As part of the Artemis program, NASA aims to tackle top-priority science objectives—particularly those best achieved by astronauts working directly on and around the Moon. By integrating robotic systems both on the lunar surface and in orbit, Artemis will drive scientific exploration, promote economic opportunities, and lay the groundwork for future crewed missions to Mars.


    Read the original article on: Phys.Org

    Read more: Netflix Partners with NASA to Enhance its live Television Content

  • Trump Appoints Transportation Secretary Sean Duffy as Interim Head of NASA

    Image Credits: Stefani Reynolds / AFP / Getty Images

    Amid historic budget reductions and the threat of widespread layoffs, U.S. President Donald Trump has named Transportation Secretary Sean Duffy as the acting Administrator of NASA.

    The appointment is temporary, with Duffy set to maintain his role at the Department of Transportation while also taking on leadership at NASA, Trump announced on his social media platform, Truth Social.

    Trump Praises Duffy’s Leadership in Transportation as He Steps In as Acting NASA Chief

    “Sean is doing a FANTASTIC job managing our nation’s transportation systems, from developing cutting-edge air traffic control to revitalizing our roads and bridges—making them efficient and beautiful once more,” Trump stated.

    The appointment is highly unconventional, as there is no known precedent for a sitting Secretary of Transportation simultaneously serving as NASA’s acting chief. Traditionally, NASA Administrators have been former astronauts, veteran agency officials, ex-members of Congress, or military leaders.

    Duffy lacks a formal background in science or space exploration, though the Department of Transportation does oversee the Federal Aviation Administration, which regulates commercial spaceflight and air traffic. Given that he will hold both leadership positions at once, his role at NASA will likely center on executing President Trump’s immediate policy agenda.

    That agenda includes deep budget cuts to NASA, expected to significantly impact science programs and reduce staffing levels. According to the White House’s “One Big Beautiful Bill,” the agency’s budget would be slashed by 25%, with an estimated 5,000 job losses.

    After Withdrawing Isaacman Nomination, Trump Taps Duffy Amid Concerns Over Ties to Musk and Democratic Donations

    Duffy’s appointment follows President Trump’s sudden decision to withdraw his nomination of billionaire entrepreneur Jared Isaacman for NASA Administrator just weeks earlier. Isaacman, the founder of Shift4 Payments, has traveled to space twice on private SpaceX missions. Trump said he pulled his support after a “thorough review of prior associations,” pointing to Isaacman’s past donations to Democratic candidates and his close relationship with SpaceX CEO Elon Musk.

    Trump’s decision to backtrack on Isaacman’s nomination reportedly strained his relationship with Elon Musk.

    “I also felt it was inappropriate for someone so close to Elon, and involved in the space industry, to lead NASA—given how much of NASA’s work is tied to Elon’s business interests,” Trump explained in a separate post on Truth Social.

    Duffy takes over from Janet Petro, director of the Kennedy Space Center and a long-serving NASA official. It remains uncertain how long Duffy will serve in the role.


    Read the original article on: TechCrunch

    Read more: Netflix Partners with NASA to Enhance its live Television Content

  • Diligent Robotics Hires Two Ex-Cruise Leaders

    Image Credits: Techcrunch

    Diligent Robotics is strengthening its leadership team as it prepares to scale its fleet of hospital and pharmacy-assisting humanoid robots.

    The Austin-based startup announced Thursday that it has appointed Rashed Haq as CTO and Todd Brugger as COO—both formerly of Cruise, GM’s autonomous vehicle unit that shut down earlier this year. Haq led AI and robotics at Cruise, while Brugger served as COO.

    CEO and co-founder Andrea Thomaz told TechCrunch the timing was right for the hires, as the company has deployed around 100 Moxi robots and is now shifting its focus to expansion.

    Strategic Slowdown Paves the Way for Scalable Growth

    “Over the past two to three years, we’ve intentionally grown at a slower pace to refine our operations and prepare for large-scale growth,” said Thomaz. “Now, we’re positioning ourselves to scale significantly by the end of this year and into next.”

    Thomaz first connected with Haq and was impressed by his strong AI background and his hands-on experience applying advanced algorithms in real-world settings at Cruise.

    During those early discussions, a mutual contact introduced her to Brugger. His track record of helping Cruise grow from zero to hundreds of vehicles made him a natural fit for Diligent’s next phase, she explained.

    “Todd and Rashed were a great team at Cruise,” Thomaz said. “Things really began to fall into place. We needed someone to take the lead on operations, and Todd’s background was exactly what we were looking for. The timing couldn’t have been better.”

    A Natural Next Step for Haq and Brugger

    Both Haq and Brugger told TechCrunch that joining Diligent felt like a natural progression. The robotics company was already deploying technology similar to what they built at Cruise. Haq noted that autonomous vehicles are essentially mobile robots, just under a different label.

    “Some companies generate what I call ‘vibe revenue’—initial interest that quickly fades,” Haq said. “But with Diligent, the robots are used daily and have become vital to their customers. That makes the product stickier and more sustainable. There’s a lot that excites us about this company.”

    Brugger shared that he was attracted to Diligent in part because it faced many of the same operational challenges and priorities as Cruise.

    Familiar Operational Framework Draws Brugger to Diligent

    “There’s a kind of priority structure—a pyramid—that we used, and I believe it applies here as well,” Brugger said. “At the foundation is safety, which is absolutely essential. From there, you focus on improving reliability. After that, you work on refining product-market fit, which often means expanding what the robots can do or how useful they are. That framework feels very familiar. The approach to deployments is also strikingly similar.”

    Diligent was launched in 2017 by Thomaz and Vivian Chu. Its Moxi robots are now in use across more than 25 healthcare systems. The company has raised over $90 million in venture capital from investors including Tiger Global, True Ventures, and Canaan Partners.


    Read the original article on: TechCrunch

    Read more: Hugging Face Opens Orders for Reachy Mini Robots

  • Samsung Debuts Z Fold7, Z Flip7, and Affordable Z Flip7 FE

    Samsung Debuts Z Fold7, Z Flip7, and Affordable Z Flip7 FE

    In recent years, Samsung has consistently launched two foldable phones at its Unpacked event. This time, Samsung adds the budget-friendly Z Flip7 FE alongside the Z Fold7 and Z Flip7, expanding its foldable lineup.
    Image Credits: TechCrunch

    Samsung has focused on making its latest models thinner than their predecessors. In terms of pricing, the Z Fold7 now starts at $1,999, $100 more than the previous version. Meanwhile, the Z Flip7 maintains its $1,100 starting price. The newly introduced Z Flip7 FE is priced at $899, aiming to attract consumers interested in foldables under $1,000.

    Z Fold7 Gets Lighter and Slimmer Design Upgrades

    Samsung’s latest foldable flagship, the Z Fold7, is slightly lighter than its predecessor, weighing in at 218 grams compared to the Fold6’s 239 grams. It’s also noticeably slimmer, with a folded thickness of 8.9 mm, down from 12.1 mm.

    The Fold7 features a larger cover screen—a 6.5-inch Dynamic AMOLED 2X display—while the main display expands to 8 inches when unfolded. Under the hood, it runs on Qualcomm’s top-tier Snapdragon 8 Elite processor.

    Image Credits: Samsung

    Samsung says the phone is now more durable, thanks to a redesigned hinge and hinge housing. The display is also shielded by Corning Gorilla Glass Ceramic 2 for added protection.

    The new model features a 200-megapixel main camera with an f/1.7 aperture, a significant upgrade from the 50-megapixel camera used in last year’s version.

    Image Credits: Samsung

    Samsung has enhanced its photo editing software with AI features. The new Photo Assist tool can automatically move, erase, or resize objects, and adjust angles. It also uses generative AI to fill in gaps in images. Additionally, users can view both the original and edited versions side by side on the unfolded screen.

    The Z Flip7 is slimmer this year, with a larger 4.1-inch cover display and a 6.9-inch main screen. For durability, Samsung has equipped it with Corning Gorilla Glass Victus 2 on both the front and back.

    Z Flip7 Packs Largest Battery Yet, Powered by Samsung’s Exynos Chip

    The Z Flip7 also includes a 4,300 mAh battery—the biggest yet in the Flip lineup. Unlike the Fold7, which runs on Qualcomm’s 3nm chip, the Flip7 is powered by Samsung’s own 3nm Exynos 2500 processor.

    The Z Flip7 FE, on the other hand, closely resembles the previous Z Flip6, with a 4,000 mAh battery, a 3.4-inch cover screen, a 6.7-inch main display, and the Exynos 2400 chip.

    Samsung is introducing DeX support to the Flip series for the first time, allowing users to connect the phone to a monitor and use a Bluetooth keyboard and mouse for a desktop-like experience.

    New ‘Now Bar’ Brings Real-Time Updates to Foldable Cover Screens

    The new foldables also include a feature called the Now Bar, which appears on the cover screen and functions similarly to iOS’s Live Activities—displaying real-time updates such as podcast progress or delivery status.

    Another addition, Now Brief, provides a snapshot of traffic, calendar reminders, fitness data, and events, along with personalized music and video suggestions based on your subscriptions.

    Samsung has introduced support for Gemini Live on the Z Flip7’s cover screen, allowing users to access the assistant without unfolding the device. It also works seamlessly with Samsung Notes.

    Gemini Live Brings AI-Powered Camera and Video Features to New Foldables

    Additionally, all the new Z Flip and Z Fold models will support Gemini Live’s camera and video AI capabilities, enabling users to capture photos or videos and ask the AI questions about them.

    These devices will also feature an enhanced version of Google’s Gemini assistant, which now includes an AI-powered mode for more conversational, Q&A-style interactions.

    Pre-orders for the Galaxy Z Flip7 and Galaxy Z Fold7 begin today, with general availability starting July 25. The Z Flip7 offers 256GB or 512GB storage, 12GB RAM, and comes in Jetblack, Blue Shadow, and Coralred.

    The more affordable Z Flip7 FE is available in 128GB and 256GB versions with 8GB of RAM, and is offered in white and black.

    As for the Z Fold7, it comes in three storage configurations: 256GB and 512GB (both with 12GB RAM), and a 1TB model with 16GB RAM. Color choices include Jetblack, Blue Shadow, and Silver Shadow.


    Read the original article on: TechCrunch

    Read more: China Launches a Wild New Robot Soccer League

  • AI Robots Replace Weed Killers and Farm Workers

    AI Robots Replace Weed Killers and Farm Workers

    Unaffected by the scorching midday sun, a solar-powered, AI-driven wheeled robot methodically navigates a California cotton field, removing weeds with precision.
    Image Credits: Techxplore

    Amid U.S. farm labor shortages and the spread of herbicide-resistant weeds, startup Aigen’s AI robot, Element, offers a cost-effective, eco-friendly alternative that reduces the use of harmful chemicals in food production.

    A Healthier Future Through AI Farming

    “This is the best way to improve human health,” said Aigen CTO Richard Wurden at Bowles Farm.

    “Everyone is eating food treated with chemicals,” he added.

    Wurden, a former Tesla mechanical engineer, was inspired to create the robot after hearing from farming relatives in Minnesota about the high cost of manual weeding.

    As herbicide resistance spreads and worker shortages persist, chemicals often become the only option, Wurden explained.

    “No farmer we’ve spoken to loves chemicals,” said Kenny Lee, Aigen’s co-founder and CEO, who has a background in software. “They’re just tools—we want to offer a better one.”

    Image Credits: Techxplore

    Element, the robot, looks like a large, wheeled table topped with solar panels. Beneath it, metal arms fitted with small blades reach down to weed between crop rows.

    “It actually works like a human,” said CEO Kenny Lee, as the temperature climbed to 90°F (32°C) under a cloudless sky. “When the sun sets, it powers down to rest, and when the sun rises, it starts up again.”

    Smart Navigation: AI and Cameras Guide Precision Weeding

    The robot uses AI and onboard cameras to navigate crop rows and detect weeds.

    “If you think humans should be doing this job, just spend two hours weeding in the field,” added CTO Richard Wurden.

    Aigen aims to shift farm laborers from grueling fieldwork to higher-skilled roles managing and troubleshooting robots. The machines also stay connected via wireless links to local control hubs, reporting issues as they arise.

    Image Credits: Techxplore

    Aigen currently has its robots operating in tomato, cotton, and sugar beet fields, promoting a precision weeding technology that avoids harming crops.

    According to CEO Kenny Lee, about five robots can cover 160 acres (65 hectares) of farmland. Each robot, built by the 25-person startup based in Redmond, Washington, costs $50,000.

    The company aims to appeal to traditionally conservative farmers by offering a solar-powered alternative to diesel-fueled machinery—providing both economic and environmental benefits.

    “‘Climate’ has become a politicized term, but at the end of the day, farmers care deeply about their land,” Lee said.

    AWS Backs Aigen in Climate-Focused Tech Fellowship

    Aigen’s innovation drew the attention of Amazon Web Services (AWS), which selected the company for its “Compute for Climate” fellowship. The program provides startups with AI tools, cloud resources, and technical support to address environmental challenges.

    “Aigen is set to become a major industry player,” said Lisbeth Kaufman of AWS. “It’s like watching the early days of Ford or Edison—that’s what Kenny and Rich are building with Aigen.”


    Read the original article on: Techxplore

    Read more: Natural Compound Replicates Exercise’s Anti-Aging Effects – Without the Need for a Workout