Tag: Tech

  • Boom Repurposes Supersonic Engine Tech to Power AI Data Centers

    Boom Supersonic says it has repurposed the core technology from its Mach-1-plus Symphony jet engine to serve as a new source of revenue—using it to power energy-intensive AI data centers, bringing together the rising trends of civilian supersonic travel and artificial intelligence.
    The Boom Superpower gas turbine generator
    Image Credits: Boom Supersonic

    Supersonic Startup Seeks Additional Revenue as Overture Advances

    Boom Supersonic has made significant progress in its effort to bring the Overture supersonic passenger jet into service. The challenge, however, is that building aircraft capable of exceeding the speed of sound is enormously expensive, and investor funding can only go so far. As a result, the company’s backup strategy is to repurpose its aerospace technology to generate some down-to-earth revenue in the meantime.

    Conveniently, another cutting-edge sector is hungry for new solutions—this time for sheer power. AI data centers are cropping up everywhere, as abundantly as weeds in an untended yard. But unlike weeds, these facilities demand massive amounts of electricity for both operation and cooling. Data centers are set to at least double their energy use in the coming years and will likely become the largest single electricity consumer in the United States by 2035.

    A Superpower installation
    Image Credits: Boom Supersonic

    It’s no surprise, then, that numerous tech companies are racing to lock in steady, around-the-clock power supplies—so vital that some are even restarting retired nuclear plants or investing in building new reactors to keep their operations running without interruption.

    To help meet this soaring demand for energy, Boom is reengineering its Symphony engine into a natural-gas–powered turbogenerator that can also operate on diesel in a pinch.

    Symphony Engine Reengineered to Generate Electricity

    The redesigned system, called Superpower, retains about 80% of Symphony’s original parts. The key change replaces the turbofan used for propulsion with extra compressor stages and adds a free power turbine to generate electricity.

    Superpower can fit in a standard shipping container
    Image Credits: Boom Supersonic

    Superpower also requires no external cooling and can function reliably in ambient temperatures up to 110 °F (43 °C). Despite occupying no more space than a standard shipping container, it can produce 42 MW of power, and Boom claims the unit can be installed within about two weeks once its foundation is prepared.

    Superpower Orders Boost Boom’s Finances and Supersonic Ambitions

    According to the company, AI infrastructure provider Crusoe has already ordered 29 Superpower units, totaling 1.21 GW of capacity. Boom aims to scale up production to deliver 4 GW annually by 2030.
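
    As a quick sanity check on those figures (our arithmetic, not the article’s): 29 units × 42 MW ≈ 1.22 GW, consistent with the roughly 1.21 GW quoted for the Crusoe order, and the 4 GW annual target would correspond to just under 100 units per year.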

    The company believes this new revenue stream will help solidify its plans for a supersonic passenger aircraft.

    “Supersonic technology is a catalyst—not just for faster travel, but now for AI as well,” said Blake Scholl, Boom Supersonic’s founder and CEO. “With this funding and our first Superpower order, Boom is financially positioned to deliver both our engine and our airliner.”


    Read the original article on: New Atlas

    Read more: China Develops The First Robot that Can Run Autonomously Indefinitely

  • Chinese Robot Breaks 106 km World Record in Tech Race

    Once the stuff of science fiction, long-distance walking robots are now making headlines. The humanoid robot A2, created by Chinese company AgiBot, walked 106.286 km over three days, earning a Guinness World Record for the longest continuous walk by a bipedal robot. The feat showcases impressive technological progress but also raises questions about autonomy, transparency, and the hype surrounding robotics.
    Image Credits: © Photo by Shi Bufa/VCG via Getty Images

    A2 Walks 66 Miles Nonstop from Jiangsu to Shanghai

    From November 10 to 13, 2025, A2 journeyed 66 miles from Jinji Lake in Jiangsu province to Shanghai’s Bund district. Guinness confirmed that the robot remained powered on throughout, with only its batteries swapped while it kept moving.
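
    For scale (our arithmetic, assuming the walk spanned roughly the full 72 hours between start and finish): 106.286 km ÷ 72 h ≈ 1.5 km/h, a figure that underscores the point of the record, which is sustained operation rather than speed.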

    Videos show A2 navigating sidewalks, ramps, and various flooring under different lighting conditions. According to AgiBot, the robot used two GPS modules, lidar sensors, infrared depth cameras, and navigation systems capable of handling traffic lights, urban traffic, and environmental changes. Guinness recognized the walk as autonomous.

    However, a gray area remains: the video footage is heavily edited and does not clearly show how much human supervision was involved. Even if operators were present to monitor the robot, it wouldn’t necessarily invalidate the test—but there is still no independent verification that A2 acted entirely on its own from start to finish.

    China-U.S. Robotics Rivalry Faces Scrutiny

    This milestone comes amid a growing rivalry in robotics between China and the United States. Yet history advises caution. Elon Musk, for instance, has released videos of Tesla robots performing complex tasks like folding clothes or serving drinks, which later turned out to be teleoperated—controlled off-screen by humans.

    Such demonstrations are common in robotics: they generate excitement but often blur the true level of machine autonomy.

    It is possible that A2 completed its journey without direct human control. Technology is advancing quickly, especially in Asia, where significant investment is going into bipedal robots capable of operating outdoors beyond controlled labs. The current challenge isn’t walking itself—it’s sustaining long-distance walking with durable batteries and reliable autonomous systems.

    While the record is remarkable, science reporting demands caution: major achievements need verification and transparency.

    For more than a century, humans have dreamed of robots serving them. Yet many celebrated milestones were illusions—including well-known “robot” demonstrations that were simply actors in disguise. The takeaway is clear: genuine progress exists, but it must be examined with a critical eye.

    Whether under supervision or fully autonomous, A2 has elevated the China–U.S. robotics competition, and this achievement may mark just the beginning of the humanoid era’s long-distance milestones.


    Read the original article on: Gizmodo

    Read more: In Japan, a New Appliance that Washes People has Hit the Market

  • Odor Tech Nears a Breakthrough

    A new Nature review highlights olfactory chips—tiny devices that can sniff like a human nose, or even better. The researchers say the real key to cracking this tech is neuromorphic architecture, which mimics how our brains process smells. If it pans out, we could soon have machines that match our own scent-detecting superpowers.
    Image Credits: Artpartner-images/Getty Images

    Crazy to think about, right? The human nose is kind of an unsung hero—it can pick up roughly a trillion different odors, even at super low concentrations, all while using next to no energy. Compare that to the power-hungry lab sensors we’ve got now, and, yeah… our noses are seriously next-level.

    The Challenge of Replicating Nature’s Nose

    Dogs and bees are accomplished sniffers, but while advanced camera light sensors have largely caught up with the eye, replicating the human nose in artificial sensors remains a major challenge.

    To develop a true “electronic nose,” researchers are increasingly exploring neuromorphic computing. This approach mimics the nose, using sensor networks to identify odors via activity patterns rather than single signals.
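
    To make the pattern-versus-single-signal idea concrete, here is a minimal sketch (our illustration, not from the review; the sensor values and odor labels are hypothetical) of identifying an odor from the combined response of a small sensor array:

    ```python
    import numpy as np

    # Hypothetical calibration patterns for a 4-sensor array (normalized responses).
    # No single sensor identifies an odor; the pattern across all four does.
    KNOWN_ODORS = {
        "coffee":  np.array([0.9, 0.2, 0.4, 0.1]),
        "banana":  np.array([0.3, 0.8, 0.1, 0.5]),
        "ammonia": np.array([0.1, 0.1, 0.9, 0.7]),
    }

    def identify_odor(reading: np.ndarray) -> str:
        """Match a new sensor-array reading to the closest known activity pattern."""
        def cosine(a: np.ndarray, b: np.ndarray) -> float:
            # Cosine similarity compares the shape of the pattern, not its strength.
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        return max(KNOWN_ODORS, key=lambda name: cosine(reading, KNOWN_ODORS[name]))

    print(identify_odor(np.array([0.8, 0.3, 0.3, 0.2])))  # -> coffee
    ```

    A neuromorphic chip would carry out this kind of matching with brain-inspired circuitry rather than explicit arithmetic, but the principle is the same: the odor’s identity lives in the collective pattern.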

    This wonderfully silly graphic really lets you know what an e-nose is all about. Image Credits: Genia Brodsky and Noam Sobel (The Weizmann Institute)

    Earlier this year, a Korean team revealed a prototype olfactory neuron, citing demand for mobile gas sensors—likely more for detecting garlic breath than metabolic issues.

    The potential applications extend far beyond personal hygiene. In addition to gas detection, an e-nose could monitor food freshness and aid medical diagnostics by detecting infection-related odors.

    The Next Frontier of Scent Technology

    The Nature review also mentions “emotional communication” as a possible future application, suggesting devices that can detect or convey mood through scent.

    Ultimately, a functional e-nose approaching the sensitivity of the human nose would transform both industry and consumer technology. Beyond matching humans, it could exceed them—detecting toxic gases, spotting hidden biological threats, and boosting medical diagnostics.

    The researchers conclude that future progress will depend on biohybrid materials and brain-inspired architectures and algorithms—areas of active and promising research.


    Read the original article on: Extreme Tech

    Read more: New Blood Type Discovered After 50 Years

  • Japanese Scientists Developed Tech to Record and View Dreams

    Japanese scientists are developing an algorithm to record and display dreams. Here’s how the device works, though it’s still being refined.
    Image Credits: http://tempo.com/

    Kyoto Scientists Use AI and Brain Imaging to Decode Dreams

    This pioneering technology, developed by researchers at ATR Computational Neuroscience Laboratories in Kyoto, merges brain imaging and AI. Using functional magnetic resonance imaging (fMRI), they recorded neural activity linked to volunteers’ dreams.

    Researchers tracked participants’ brain activity as they fell asleep, waking them during REM sleep to describe their dreams.

    Using the brain data and dream reports, scientists built an AI that predicts dream content with 70% accuracy, training it to recognize neural patterns and match them with images related to participants’ descriptions.
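
    The article doesn’t detail the model, but the general approach, pattern classification on fMRI voxel activity, can be sketched as follows (illustrative only; the data shapes and labels are assumptions, and random data stands in for real recordings):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Assumed setup: one voxel-activation vector per awakening, labeled with an
    # object category drawn from the participant's verbal dream report.
    n_awakenings, n_voxels = 200, 500
    X = rng.normal(size=(n_awakenings, n_voxels))  # stand-in fMRI patterns
    y = rng.integers(0, 2, size=n_awakenings)      # e.g. category present / absent

    # A linear classifier is a common baseline for this kind of fMRI decoding.
    clf = LogisticRegression(max_iter=1000)
    print(f"decoding accuracy: {cross_val_score(clf, X, y, cv=5).mean():.0%}")
    # Prints ~50% here because the data are random; the study reports about 70%
    # on real recordings, well above chance for a binary choice.
    ```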

    Researchers used functional magnetic resonance imaging (fMRI) to record brain activity during sleep.

    “We succeeded in decoding dream content from brain activity during sleep, matching participants’ verbal descriptions,” said Professor Yukiyasu Kamitani, part of the research team.

    The algorithm remains in its early development phase, with scientists working to enhance the precision and clarity of the reconstructed dream images.

    Advancing Mental Health Through Dream Decoding

    This cutting-edge technology could deepen our understanding of mental health, allowing for more precise personality analysis and better diagnosis of psychological disorders.

    Scientists aim to reconstruct approximate visual representations of dreams: small fragments of what people dreamed. Although the generated images are still rudimentary and imprecise, these advances point to a promising future for decoding the human subconscious.


    Read the original article on: Tempo

    Read more: Smelling a Partner’s Clothes Reduces Stress and Loneliness

  • Revealing the Planck Time Limit Unlocks New Quantum Tech

    A Japanese team observed “heavy fermions”—massive electrons—exhibiting quantum entanglement governed by Planckian time. This breakthrough, published in npj Quantum Materials, could lead to a new class of quantum computers using solid-state materials.
    Image Credits: Pixabay

    Heavy fermions form when conduction electrons in a solid strongly interact with localized magnetic electrons, significantly increasing their effective mass. This interaction leads to unique properties like unconventional superconductivity, making heavy fermions central to condensed matter physics. The studied material, CeRhSn, belongs to a heavy fermion class with a quasi-kagome lattice known for geometric frustration.

    CeRhSn Shows Persistent Non-Fermi Liquid Behavior and Signs of Quantum Entanglement

    In this study, researchers explored CeRhSn’s electronic state, which shows non-Fermi liquid behavior even at relatively high temperatures. Detailed reflectance measurements revealed that this behavior persists up to near room temperature, with heavy electron lifetimes nearing the Planckian limit. The spectral response followed a single functional form, strongly suggesting quantum entanglement among the heavy electrons.
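
    For reference, the Planckian time invoked here has a standard definition that the article doesn’t spell out: τ_Pl = ħ/(k_B T), where ħ is the reduced Planck constant, k_B is Boltzmann’s constant, and T is the temperature. Near room temperature (T ≈ 300 K) this works out to roughly 25 femtoseconds, so “lifetimes nearing the Planckian limit” means electron lifetimes approaching this temperature-set floor.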

    Dr. Shin-ichi Kimura from the University of Osaka, who led the study, stated, “Our results show that heavy fermions in this quantum critical state are entangled, with the entanglement governed by the Planckian time. This is a crucial step toward unraveling the complex link between quantum entanglement and heavy fermion systems.”

    Image Credits: (a) Crystal structure of CeRhSn. (b) Inverse lifetime divided by the temperature and the Planckian time.

    Quantum entanglement is essential for quantum computing, and the ability to harness it in solid-state materials like CeRhSn could lead to innovative quantum computing designs. The observed Planckian time limit offers valuable insight for building such systems.

    Entangled States Could Drive the Future of Quantum Information and Technology

    Continued exploration of these entangled states could transform quantum information processing and open up new avenues in quantum technology. This discovery not only deepens our understanding of strongly correlated electron systems but also sets the stage for future breakthroughs in next-generation quantum applications.


    Read the original article on: Phys Org

    Read more: Will Artificial Intelligence Need a Physical Body to Achieve Human-Like Intelligence?

  • LinkedIn Cuts 281 California Jobs Amid Tech Layoffs

    LinkedIn, the job-focused social network owned by Microsoft, is reducing its workforce. A recent filing with the California Employment Development Department shows the company is laying off 281 employees in the state.
    Image Credits: Pixabay

    Earlier this month, Microsoft announced 6,000 job cuts, about 3% of its workforce, affecting both LinkedIn and its California-based staff.

    LinkedIn Follows Tech Industry Trend of Layoffs Amid Restructuring and AI Investment

    LinkedIn joins other major tech firms like Meta, Google, and Autodesk in making staff reductions this year, citing reasons such as restructuring, increased investment in AI, and employee performance.

    LinkedIn, based in Sunnyvale and Mountain View, informed employees of the layoffs on May 13. Many affected workers took to the platform to announce their job status and signal availability to recruiters.

    The company has not issued a public statement. According to its website, LinkedIn employs around 18,400 people and operates in over 30 cities worldwide.

    Layoffs hit California staff in San Francisco, Mountain View, Carpinteria, and Sunnyvale, with Mountain View hardest hit.

    Software Engineers Most Affected, State Data Shows

    Software engineers were among the most impacted, though roles such as talent account directors and senior product managers were also affected, based on state data.

    The layoffs coincide with a wave of AI-driven tools from tech companies, many of which can now generate code—raising concerns about the future of engineering roles.

    In April, Microsoft CEO Satya Nadella revealed during a conversation with Meta CEO Mark Zuckerberg that AI is already responsible for writing up to 30% of Microsoft’s code.

    AI Drive Prompts Microsoft to Reduce Redundancies

    As Microsoft accelerates its push into AI, the company has stated it’s aiming to boost efficiency by reducing managerial layers and eliminating redundancies.

    This marks LinkedIn’s latest round of cost-cutting. In 2023, the company cut nearly 700 jobs to boost agility and accountability.

    Microsoft acquired LinkedIn in 2016 for $26 billion. In April, it reported $4.3 billion in Q3 revenue, up 7% year-over-year.


    Read the original article on: Techxplore

    Read more: A Complete Guide to the AI Chatbot App

  • Visa, Mastercard Launch AI Shopping Tech

    Artificial intelligence is expanding beyond startups, with major credit card companies like Visa and Mastercard joining the movement. On Wednesday, Visa introduced “Intelligent Commerce,” a system that allows AI to assist with shopping and purchases based on users’ preset preferences.
    Image Credits: Pixabay

    Consumer-Controlled, Visa-Managed Shopping Experience

    According to Visa’s Chief Product and Strategy Officer Jack Forestell, consumers control the boundaries while Visa manages the rest.

    Visa is partnering with a range of tech leaders and startups—including Anthropic, IBM, Microsoft, Mistral AI, OpenAI, Perplexity, Samsung, and Stripe—to create AI-powered shopping experiences that are more personalized, secure, and convenient.

    Mastercard’s Agent Pay Brings Payments to AI-Powered Conversations

    This follows Mastercard’s Tuesday announcement that it will empower AI agents to make online purchases for users. The company’s new “Agent Pay” feature aims to enhance generative AI interactions by seamlessly integrating payments into personalized recommendations and insights delivered via conversational platforms.

    In an example, Mastercard illustrated how a soon-to-be 30-year-old planning a birthday party could use an AI agent to curate outfits and accessories tailored to her style, event setting, and the weather. The AI could then complete the purchase and suggest the best payment method, such as Mastercard One Credential.

    Mastercard announced it is partnering with Microsoft to develop new applications for scaling “agentic commerce,” and is also collaborating with IBM, Braintree, and Checkout.com on other aspects of AI-driven shopping.

    Visa and Mastercard aren’t alone in embracing AI for commerce. Earlier this month, Amazon began testing a new AI shopping assistant called “Buy for Me” with select users. OpenAI, Google, and Perplexity have also introduced similar tools that browse websites and assist users in making purchases. On Monday, OpenAI revealed it was enhancing its ChatGPT search feature to improve the online shopping experience.


    Read the original article on: Techcrunch

    Read more: Traditional Card Networks Distance from Binance

  • Meta’s Inaugural LlamaCon Reveals the Tech Giant Is Still Trying to Catch Up

    If you were hoping, like I was, that Meta’s LlamaCon keynote would unveil the reasoning model it hinted at earlier this month or its teacher model, Behemoth, get ready for disappointment. At its first AI developer conference today, Meta didn’t release any new models. While there were a few updates that help it close the gap in the fast-moving generative AI race, none of them seemed poised to give Meta a real edge.
    Image Credits: Pixabay

    Every major tech company is in a race to create AI models that can handle complex tasks efficiently, without demanding excessive computing power—and cost. Meta’s strategy has centered on open-source development, giving developers insight into how its models are built and trained. During LlamaCon, Chief Product Officer Chris Cox shared that Llama models have been downloaded 1.2 billion times. Combined with Meta AI integrations across Facebook, Instagram, and WhatsApp, the company remains a significant force in AI—even if it often arrives late or takes a different route.

    Here’s a breakdown of today’s Meta announcements and what they mean for its future in AI.

    Meta View Rebrands as AI App Before LlamaCon

    Meta is rebranding its Meta View smart glasses app into a standalone app for its AI, a move CEO Mark Zuckerberg confirmed on Instagram just hours before the LlamaCon keynote.

    The app is available for download now. If searching “Meta AI” doesn’t work—as was the case for me—try looking up “Meta View” instead.

    This new app builds on Meta’s chatbot, adding voice interaction features and a “social discovery feed.” Unlike Instagram or Facebook, it doesn’t let you follow friends. Instead, it showcases user-generated content featuring interactions with Meta AI, such as created images, prompts, and responses.

    CNBC had hinted at a standalone Meta AI app back in February, but transforming the Meta View app raises broader questions about Meta’s direction in AI and virtual reality. As my colleague and smart glasses expert Scott Stein put it, “Meta making a play for another compelling phone app looks like a way to try to draw more people into the ecosystem faster than making a pitch to get glasses.”

    Meta at LlamaCon: No New Llama 4 Models, Behemoth and Reasoning Model Updates Awaited

    Meta didn’t unveil the full range of Llama 4 models at LlamaCon; instead, Cox mostly reiterated details we already knew about Scout and Maverick. CNET reached out to Meta for the latest updates on the release of Behemoth and the Llama 4 reasoning model that Zuckerberg introduced earlier this month, but Meta chose not to comment.

    Currently, the available models in the Llama 4 family are Scout and Maverick. Scout is a smaller model built to run on a single Nvidia H100 GPU, with a 10-million-token context window, while Maverick offers more power as the next tier up.

    There was some confusion when Meta first released the benchmarking scores for Llama 4. The company initially claimed that Maverick outperformed OpenAI’s GPT-4o. Sharp-eyed experts noticed—and the benchmarking organization confirmed—that the tested Maverick model wasn’t the same as the one available; it had been “optimized for conversationality.” Meta denied it had been trained on post-testing data, which could unfairly skew benchmarking results.

    Meta’s AI policy states that it does train its models on data shared on Meta Platforms and through content users share with the chatbot. The company also recently removed the opt-out option for European users, so this now applies to them as well. For more details, you can refer to Meta’s full privacy policy.

    Meta Previews Llama API for Developer Access to Llama 4

    On Tuesday, Meta announced that it will begin previewing its Llama API, a new platform for developers to build Llama applications. Developers can now request early, experimental access to Llama 4 fast inference.

    “You’ll be able to take these custom models with you, no lock-in, ever,” said Manohar Paluri, Meta’s VP of AI. He emphasized that the Llama API will focus on speed, ease of use, and customization. The new Llama 4 models, Scout and Maverick, will be part of the API.

    Angela Fan, a research scientist in generative AI at Meta, also pointed out that the API’s privacy policy differs from Meta’s standard AI policy. When using the API, Meta will not train on your inputs (prompts or uploads) or outputs (the generated results), which is beneficial for developers building models for businesses that need to keep data secure.

    LlamaCon Helps Meta Catch Up, Lacks Competitive Edge

    The announcements at LlamaCon allow Meta to catch up with its competitors, but they don’t give it a competitive edge, which could pose challenges down the line. There’s still no update on when Meta will release Behemoth or the reasoning model it promised with Llama 4.

    The Meta View app is a decent effort, but it primarily helps Meta stay on par with other major AI players, such as OpenAI, Claude, and Perplexity, which already have mobile apps. For users of Meta smart glasses, the app’s development could hint at how AI will play a key role in future products.

    After the keynote, I felt that Meta is often late to the AI game—OpenAI, Google, and DeepSeek have already released reasoning models. As I mentioned in my review of Meta AI last year, being behind isn’t necessarily a problem if the company makes a strong impact, but so far, that doesn’t seem to be happening.

    The most surprising feature was the social discovery feed in the Meta AI app. Given Meta’s expertise in social platforms, the discover/explore page could potentially become a unique (though unlikely) alternative for users to fill their feed with AI content instead of Facebook or Instagram posts. It’s definitely something to keep an eye on as Meta continues to update the app and advance its AI initiatives.


    Read the original article on: CNET

    Read more: Meta Rolls Out Limited Teen Accounts on Facebook and Messenger

  • Particle Accelerator Tech from CERN now Treats Brain Tumors

    Transitioning from colossal 26-km (16-mile) particle accelerators to operating rooms for brain surgeries, a particle detector initially engineered by physicists at CERN is now employed by researchers in Germany to enhance the precision and safety of brain tumor treatments.
    Timepix3 was originally designed for particle detection in giant accelerators like the one at CERN
    CERN

    Eliminating tumors in the head and neck region may seem straightforward: administer appropriate chemicals or deliver sufficiently potent radiation. However, the challenge lies in eradicating cancer cells while preserving the patient’s well-being.

    Leveraging Ion Beams for Tumor Treatment

    An efficient method for treating such tumors involves utilizing ion beams. By accelerating charged particles to speeds reaching three quarters of the speed of light, they can penetrate living tissue up to a foot deep. To safeguard healthy cells, the conventional approach entails moving the ion projector along a curved path with the tumor positioned at the focal point. Consequently, the tumor receives continuous bombardment while minimizing exposure to healthy tissue.

    Preparing a patient for ion beam therapy
    CERN

    The conventional method is effective but not flawless, especially in brain tumor cases. There’s a risk of exposing nearby healthy cells to secondary radiation from the ion beam, leading to potential memory loss, optic nerve damage, and other complications.

    To mitigate this risk, X-ray computed tomography (CT) scans are employed to precisely pinpoint the tumor’s location for treatment planning. However, pre-operative scans may be inaccurate due to brain movement within the skull.

    Utilizing Advanced Imaging Technology to Enhance Treatment Accuracy

    To address this challenge, researchers from the German National Center for Tumor Diseases (NCT), the German Cancer Research Center (DKFZ), and the Heidelberg Ion Beam Therapy Center (HIT) at Heidelberg University Hospital have employed a new imaging device developed by Czech company ADVACAM. This device incorporates the Timepix3 pixel detector technology originally developed at CERN.

    The Timepix3 chip
    CERN

    Crafted to function with both semiconductor and gas-filled detectors, the Timepix3 is a versatile integrated circuit capable of processing sparse detection data and delivering high-resolution outputs swiftly. This enables ADVACAM to utilize secondary radiation from the ion beam to update tissue maps, employing the radiation as a tracking signal.

    “Our cameras can capture every charged particle emitted from the patient’s body,” explained Lukáš Marek from ADVACAM. “It’s akin to observing billiard balls scatter after a shot. If the ball trajectory aligns with the CT image, we confirm accurate targeting. Otherwise, it indicates a deviation from the ‘map,’ prompting the need for treatment reevaluation.”

    Enhancing Tumor Targeting Precision while Minimizing Patient Radiation Exposure

    The objective is to refine tumor targeting while minimizing unintended radiation exposure to the patient by delivering elevated radiation levels precisely to the tumor.

    Currently, the detector necessitates treatment interruption for re-planning. However, future phases of the program will enable real-time beam path corrections.

    “When we initiated the development of pixel detectors for the LHC, our primary goal was to detect and image each particle interaction, aiding physicists in unraveling Nature’s mysteries at high energies,” remarked Michael Campbell, Spokesperson of the Medipix Collaborations.

    “The Timepix detectors, developed by the multidisciplinary Medipix Collaborations, aim to extend this technology to new domains. This application exemplifies the unforeseen potential of the technology.”


    Read the original article on: New Atlas

    Read more: Science Made Simple: What Is Exascale Computing?

  • Video-to-Sound Tech Helps Visually Impaired Recognize Faces

    Neuroscientists have demonstrated that blind individuals use the same brain regions as sighted people to recognize basic faces, even when the facial information is presented through sound rather than sight. This offers an intriguing insight into neuroplasticity.
    Sensory substitution devices translated basic faces and other shapes into auditory waveforms that blind and sighted subjects were trained to recognize
    Generated by DALL-E

    The capacity to identify faces is a fundamental trait shared by humans and some distant, socially inclined primate relatives. Notably, certain regions in the brain, such as the fusiform face area (FFA) located at the lower back of the brain in the inferior temporal cortex, become active specifically when faces are observed.

    Insights from FFA Activation

    Interestingly, the fusiform face area (FFA), as discovered in a 2009 study, is activated not only when people see actual faces but also when they perceive things that somewhat resemble faces, contributing to the phenomenon of pareidolia where faces are perceived in non-living objects. Moreover, this same region becomes active when individuals gain expertise in a specific domain, aiding, for instance, car enthusiasts in visually distinguishing between different models or assisting chess experts in recognizing familiar board configurations.

    In a remarkable finding, the FFA also responds in individuals blind from birth. A 2020 MIT study used fMRI scans on blind participants who explored 3D-printed shapes, including faces, hands, chairs, and mazes. Surprisingly, touching these miniature faces activated the FFA in a manner similar to visual stimulation.

    Visual activation of the fusiform face area in subjects viewing schematic faces
    Georgetown University

    So, in a way, the fusiform face area (FFA) appears indifferent to the sensory system providing facial information, and recent research from a neuroscience team at Georgetown University Medical Center supports this notion.

    The team enlisted six blind and ten sighted participants and initiated training with a “sensory substitution device.” This device included a head-mounted video camera, blindfold eyepieces, a set of headphones, and a processing computer. The system would receive input from the video camera, translating it into audio. It segmented the field of view into a 64-pixel grid, assigning each pixel a distinct auditory pitch.

    Translating Visual Data into Stereo Soundscapes

    These pitches were also presented in a stereo soundstage, as detailed in the research paper. For instance, if the image appeared as a dot in the upper right corner of the camera’s field of view, the corresponding sound would be a high-frequency tone mainly delivered through the right headphone. If the dot was in the top middle of the field of view, the sound would be a high-frequency tone distributed equally through both headphones. In the case of a line at the bottom left corner, the sound would be a blend of low frequencies primarily delivered through the left headphone.
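
    As an illustration of this mapping (our sketch, not the study’s actual software; the 8 × 8 grid follows from the 64-pixel figure, while the frequency range and tone parameters are assumptions), each active pixel’s row sets the pitch and its column sets the stereo balance:

    ```python
    import numpy as np

    GRID = 8                                   # 64 pixels = an 8 x 8 field of view
    FREQS = np.geomspace(2000.0, 250.0, GRID)  # top row -> highest pitch (assumed range)

    def pixel_to_tone(row: int, col: int, duration: float = 0.5, rate: int = 44100):
        """Map one active pixel to a stereo tone: row -> frequency, column -> pan."""
        t = np.linspace(0.0, duration, int(rate * duration), endpoint=False)
        tone = np.sin(2 * np.pi * FREQS[row] * t)
        pan = col / (GRID - 1)                 # 0.0 = far left ear, 1.0 = far right ear
        return np.stack([(1.0 - pan) * tone, pan * tone], axis=1)  # (samples, 2)

    # A dot in the upper right corner: a high-frequency tone mostly in the right ear.
    audio = pixel_to_tone(row=0, col=GRID - 1)
    ```

    A full frame would simply sum the stereo tones of all active pixels, which is why the bottom-left line described above comes through as a blend of low frequencies weighted toward the left ear.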

    Over ten one-hour sessions, the participants trained with these devices, adapting to “see” with their ears while moving their heads. They were presented with cards featuring simple shapes, including horizontal and vertical lines, various houses, geometric shapes, and basic emoji-style happy and sad faces. Although the training was challenging, by the end of it, all subjects were able to recognize simple shapes with an accuracy exceeding 85%.

    The sensory substitution devices had a resolution of just 64 pixels. At the lower right are some of the shapes shown to the subjects
    Georgetown University

    During shape recognition testing in an fMRI machine, both the sighted and visually impaired participants exhibited activation of the fusiform face area (FFA) when presented with a basic face shape. Some blind individuals were even able to accurately discern whether the face displayed a happy or sad expression, as demonstrated in a 45-second audio clip from the study, providing an auditory representation of the device.

    Geometric Exposure and Fusiform Face Area Development in the Blind

    Josef Rauschecker, PhD, DSc, professor of Neuroscience and senior author of the study, remarked in a press release, “Our findings with individuals who are blind suggest that the development of the fusiform face area does not rely on exposure to actual visual faces but rather on exposure to the geometric configurations of faces, which can be conveyed through other sensory modalities.”

    Furthermore, the research team observed that sighted subjects predominantly exhibited activation in the right fusiform face area, whereas blind subjects showed activation in the left FFA.


    Read the original article on: New Atlas

    Read more: China’s Ambitious Mars Sample Return Mission and its Potential Impact on the Space Race