Author: cumbonguala

  • Sleep Boosts Memory: Study Finds Neural Reorganization During Rest

    A full night’s sleep strengthens memory by reinforcing newly learned information. This process is crucial for animals as well, as recalling the location of food sources is vital for survival. Researchers study this function of sleep in the lab by training mice and rats with different memory tasks to explore their environment.
    NREM periods accelerated the reactivation drift, whereas REM periods countered it. Credit: Pixabay

    In spatial learning experiments, animals must locate and remember food rewards within mazes. While extensive research has explored the neuronal mechanisms behind learning, memory formation, and recall, many fundamental questions remain unanswered.

    Now, a team led by Professor Jozsef Csicsvari at the Institute of Science and Technology Austria (ISTA) has investigated how different sleep stages optimize memory recall. Using wireless recordings, they monitored neuronal activity in rat brains for up to 20 hours—significantly longer than previous studies.

    “We found that in the early stages of sleep, neuronal activity reflects recently learned spatial memories. As sleep continues, these patterns gradually shift, resembling those observed when the rats wake and recall the locations of their food rewards,” Csicsvari explains. The findings are published in Neuron.

    The Hippocampus and Cognitive Mapping

    Previous research has shown that the hippocampus, a cortical brain region, is essential for both spatial navigation—exploring and maintaining routes—and spatial learning. Hippocampal neurons track an animal’s position by firing at specific locations, creating a cognitive map of the environment. Animals rely on this map to navigate while continuously updating it as they learn. Notably, reward locations become disproportionately represented on these cognitive maps, playing a crucial role in the learning process.

    After spatial learning, the hippocampus strengthens memories during sleep by reactivating recently acquired memory traces. The Csicsvari group previously demonstrated that the more frequently a reward location is reactivated during sleep, the better the animal recalls it upon waking. Conversely, when researchers blocked the reactivation of a specific reward memory, the animals were unable to remember that location.

    Scientists previously studied spatial memory reactivation only in short sleep periods, but this study extended observations to 20 hours using wireless recordings.

    “Our findings were unexpected. Neuronal activity linked to reward locations reorganized during long sleep,” said ISTA Ph.D. graduate Lars Bollmann.

    Neuronal Shift During Sleep

    Some neurons remained active, forming a “stable subgroup,” while others stopped firing, and new ones gradually took over. Early sleep patterns mirrored learning-phase activity but later resembled wakeful recall.

    The study linked this shift in neuronal activity to memory reactivation, showing that non-REM sleep supports reorganization, while REM sleep counteracts it.


    Read the original article on: Medical Xpress

    Read more: Feeling Sleepy? Research Identifies the Best Dosage and Timing for Melatonin

  • Boston Dynamics Unveils a Significant Advancement in Humanoid Robot Mobility

    Boston Dynamics shows again that it’s at the bleeding edge of smooth humanoid movement
    Boston Dynamics

    Chinese humanoid robots are advancing rapidly with remarkable agility, but Boston Dynamics remains a pioneer in the field. A new video of its swivel-jointed Atlas robot showcases its ability to run, cartwheel, and even breakdance, reaffirming its position at the cutting edge of humanoid mobility.

    That said, it’s important to note that many companies—such as Tesla, Figure, Sanctuary, and Agility—are less concerned with acrobatics. Their primary focus is on developing robots that can efficiently handle practical tasks like picking up and placing objects, prioritizing functionality over fluid human-like movement.

    While not as entertaining to watch, these practical applications of humanoid robots have the potential to reshape the world far more than athletic feats ever could.

    Even so, witnessing AI-driven robots evolve from unsteady, toddler-like movements into fluid, confident navigation of human spaces is nothing short of astonishing. Just as dance and gymnastics showcase human mastery of movement, the rapid progress of these machines is equally mesmerizing.

    Unitree’s G1 Humanoid

    Chinese robotics company Unitree has been making impressive strides with its compact, lightweight G1 humanoid. You might recognize it for its remarkable $16,000 starting price, its synchronized dance routines with humans, or its predecessor, the H1—the first humanoid of its kind to perform a backflip using electric motors instead of hydraulics.

    Now, Unitree has taken things a step further: the G1 can execute side flips.

    World’s First Side-Flipping Humanoid Robot: Unitree G1

    It’s also among the first humanoid robots to walk with a natural, confident stride rather than the stiff, awkward gait typical of many early models. As demonstrated in the video below, a recent “agile upgrade” has even enabled it to jog.

    Unitree G1 Bionic: Agile Upgrade

    It’s certainly impressive, but let’s not forget the pioneer in humanoid robotics—Boston Dynamics. The company has just unveiled new footage of its remarkable Atlas robot, pushing natural movement to an entirely new level. Take a look:

    Walk, Run, Crawl, RL Fun | Boston Dynamics | Atlas

    Let’s clear this up—that’s not crawling, my friend. But just look at that walk! A bit stiff-armed, perhaps, but it genuinely appears to be walking rather than just taking a series of steps.

    Notice how it initiates a run, leaning forward to accelerate and shifting its torso back to slow down. The level of stability and control on display is something the Boston Dynamics team must take great pride in.

    Atlas Redefines Motion with 360-Degree Rotational Agility

    The rolls and tumbles are also looking more natural, and it’s fascinating to see how Atlas uses its swiveling hips to turn a handstand into a roundoff and even stand up with its head facing backward. This is one of the most intriguing aspects of Atlas—it features 360-degree rotation at the hips, waist, arms, and neck, allowing it to reorient itself without needing to turn its entire body at once.

    The running motion is by far the smoothest and most natural we’ve seen
    Boston Dynamics

    The breakdancing move and cartwheel are just the cherry on top—they’re undeniably impressive to watch. But what truly fascinates me is how confidently the AI is learning to control these robotic bodies in the real world.

    As humans learn to walk, run, and navigate their surroundings, we instinctively anticipate balance shifts and adjust dynamically in real time. That’s exactly what we’re witnessing these AI systems begin to master.

    Humanoid robotics is still in its infancy, but seeing Atlas and its counterparts interact with the physical world in the same way GPT models process information—it feels like science fiction coming to life. Atlas is already moving more fluidly than Kryten.

    While these robots will primarily be deployed in factories as they enter the workforce, it’s becoming increasingly clear that human-android interactions in everyday life are on the horizon—sooner than most of us ever expected.


    Read the original article on: New Atlas

    Read more: Tesla Scores California Ride-Hail Permit For Its Robotaxi Service

  • OpenAI Introduces New Tools for Businesses to Develop AI Agents

    On Tuesday, OpenAI unveiled new tools aimed at helping developers and businesses create AI agents—automated systems capable of independently performing tasks—using the company’s AI models and frameworks.
    Credit: Depositphotos

    These tools are part of OpenAI’s new Responses API, which enables businesses to develop custom AI agents that can conduct web searches, scan internal files, and navigate websites, similar to OpenAI’s Operator product. The Responses API replaces the Assistants API, which OpenAI plans to phase out by mid-2026.

    The Hurdles of AI Autonomy

    Despite growing excitement around AI agents, the industry has struggled to clearly define or demonstrate their practical value. A recent example is Chinese startup Butterfly Effect’s Manus platform, which went viral but failed to meet many user expectations, highlighting the challenges of delivering truly autonomous AI.

    OpenAI aims to overcome these hurdles. “It’s pretty easy to demo your agent,” said Olivier Godement, OpenAI’s API product head, in an interview with TechCrunch. “To scale an agent is pretty hard, and to get people to use it often is very hard.”

    Earlier this year, OpenAI introduced two AI agents in ChatGPT: Operator, which navigates websites, and Deep Research, which compiles research reports. While these tools showcased agentic capabilities, they lacked full autonomy.

    With the Responses API, OpenAI now offers businesses access to the core components behind its AI agents, allowing developers to build their own applications that could surpass current solutions in autonomy and usability.

    With the Responses API, developers can access the same AI models powering OpenAI’s ChatGPT Search tool: GPT-4o search and GPT-4o mini search. These models can browse the web for answers, citing sources as they generate responses.
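    As a rough illustration, a request to the Responses API combines a model, an input prompt, and a list of built-in tools. The sketch below is a hypothetical request body based on OpenAI’s announcement; the exact field names and tool-type identifiers are assumptions and may differ in the shipped SDK.

```python
# Hypothetical sketch of a Responses API request enabling web search.
# Field names ("model", "input", "tools") and the tool type are assumptions
# drawn from OpenAI's public announcement, not a verified SDK call.
request = {
    "model": "gpt-4o-search-preview",   # assumed identifier for GPT-4o search
    "input": "Summarize today's top AI news, citing sources.",
    "tools": [{"type": "web_search"}],  # built-in tool: browse the web, cite sources
}

def validate(req: dict) -> bool:
    """Minimal client-side sanity check before sending the request."""
    return bool(req.get("model")) and bool(req.get("input")) and isinstance(req.get("tools"), list)

print(validate(request))  # True
```

    In a real integration, this payload would be sent through OpenAI’s official client library, which also returns the model’s cited sources alongside the generated text.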

    OpenAI claims these models are highly accurate. On its SimpleQA benchmark, which evaluates fact-based question answering, GPT-4o search scores 90%, while GPT-4o mini search scores 88%—outperforming the recently released GPT-4.5, which scores only 63%.

    Limitations of AI-Powered Search

    AI-powered search tools generally surpass traditional AI models in accuracy since they can look up information directly. However, they still struggle with certain challenges, including hallucinations and difficulties with short, navigational queries like “Lakers score today.” Reports also suggest that ChatGPT’s citations are not always reliable.

    The Responses API also features a file search utility that quickly retrieves information from a company’s databases. OpenAI assures that these files won’t be used for model training. Additionally, developers can integrate OpenAI’s Computer-Using Agent (CUA) model, which powers the Operator tool. This model generates mouse and keyboard actions, enabling automation for tasks like data entry and workflow management.

    Enterprises can choose to run the CUA model locally on their systems, as it is launching in a research preview. However, the consumer version available in Operator is limited to web-based actions.

    Despite these advancements, the Responses API doesn’t eliminate all technical hurdles in AI agents. GPT-4o search still provides incorrect answers 10% of the time, and OpenAI acknowledges that its CUA model is not yet fully reliable for automating operating system tasks, as it can make unintended errors.

    To support developers, OpenAI is also launching the Agents SDK, an open-source toolkit that helps integrate AI models with internal systems, implement safeguards, and monitor agent behavior for debugging and optimization. The SDK builds on OpenAI’s Swarm framework, released last year for multi-agent orchestration.

    OpenAI’s API product lead, Olivier Godement, believes this year will be crucial in turning AI agents from demos into practical tools. CEO Sam Altman has similarly predicted that 2025 will be the year AI agents enter the workforce. Whether that vision materializes remains to be seen, but OpenAI’s latest releases signal a shift toward making AI agents more functional and impactful.


    Read the original article on: TechCrunch

    Read more: DeepSeek: A Complete Guide to the AI Chatbot App

  • The DOJ’s Latest Proposal Still Demands that Google Divest Chrome But Permits AI Investments

    The U.S. Department of Justice (DOJ) remains firm in its demand that Google sell its Chrome web browser, according to a court filing on Friday.
    Credit: Pixabay

    Initially proposed under President Biden’s administration, the plan has been upheld by the DOJ under the second Trump administration. However, the department has softened its stance on Google’s artificial intelligence investments, no longer insisting that the company divest its AI holdings, including its significant stake in Anthropic.

    DOJ Criticizes Google’s Market Dominance in Antitrust Filing

    “Google’s illegal conduct has created an economic goliath, one that wreaks havoc over the marketplace to ensure that—no matter what occurs—Google always wins,” the DOJ stated in the filing, which was signed by Omeed Assefi, the department’s acting assistant attorney general for antitrust. Trump’s nominee to lead the DOJ’s antitrust division is still awaiting confirmation.

    Despite modifications to its AI-related demands, the DOJ has kept the “core components” of its initial proposal, which include requiring Google to divest Chrome and banning search-related payments to distribution partners.

    Regarding AI, the DOJ now seeks “prior notification for future investments” rather than forcing Google to sell off its AI holdings. As for Android, rather than an immediate divestiture, the DOJ has left the decision to the court, depending on future market conditions.

    Google Appeals Antitrust Ruling Amid Ongoing Legal Battle

    This proposal follows antitrust lawsuits from the DOJ and 38 state attorneys general, which led Judge Amit P. Mehta to rule that Google had illegally maintained a monopoly in online search. While Google intends to appeal, it has proposed an alternative solution that it claims would address the court’s concerns while maintaining flexibility for its business partners.

    A Google spokesperson told Reuters that the DOJ’s proposal “goes miles beyond the Court’s decision and would harm America’s consumers, economy, and national security.”

    Mehta is set to hear arguments from Google and the DOJ in April.


    Read the original article on: TechCrunch

    Read more: Google Shifts to Nuclear Reactors to Fuel Its Artificial Intelligence

  • ChatGPT Doubled Weekly Users in Under Six Months Due to Updates

    A new report from VC firm Andreessen Horowitz (a16z) highlights ChatGPT’s strong growth in late 2024. After taking nine months to grow from 100 million to 200 million weekly active users, ChatGPT doubled that figure again in under six months.
    Image Credits: SEBASTIEN BOZON/AFP / Getty Images

    Rapid Growth and Milestone Achievements

    Initially launched as a research preview in November 2022, ChatGPT quickly became the fastest app to reach 100 million monthly users in just two months. By November 2023, it hit 100 million weekly users, rising to 300 million by December 2024 and 400 million by February 2025.

    According to a16z, early adoption was driven by novelty, as users explored ChatGPT without an immediate understanding of its long-term utility.

    Image Credits: Similarweb data via a16z

    Expansion Fueled by New Models and Features

    ChatGPT’s recent growth has been driven by new model releases and features, including GPT-4o, which introduced multimodal capabilities. Following its launch, usage surged between April and May 2024.

    The rollout of Advanced Voice Mode fueled another spike from July to August, while the o1 model series boosted engagement from September to October 2024.

    On mobile, however, growth has been steadier, according to the firm.

    ChatGPT’s mobile user base has grown steadily, increasing by 5% to 15% each month over the past year. Of its 400 million weekly active users, 175 million now access the app on mobile.

    The report also explores competition from rivals like DeepSeek, which climbed to No. 2 globally in just 10 days and captured 15% of ChatGPT’s mobile user base by February.

    Sensor Tower data shows that DeepSeek users on mobile are slightly more engaged than those of Perplexity and Claude, though the app still trails behind ChatGPT.

    Image Credits: Similarweb data via a16z

    The market analysis also highlights recommended tools for AI developers and coders, along with rankings of the top AI apps by category, revenue, and performance across mobile and web.

    In the GenAI app rankings, ChatGPT holds the No. 1 spot for unique monthly web visits and mobile active users, according to data from Similarweb.

    Image Credits: Similarweb data via a16z

    Read the original article on: TechCrunch

    Read more: Apple Introduces AI-Generated App Review Summaries in iOS 18.4

  • Self-driving Maserati breaks autonomous speed record

    This Maserati features a 630 hp 3.0-liter V6 under the hood – and an AI driver behind the wheel
    Indy Autonomous Challenge

    Italy has long been known for producing some of the fastest race car drivers in the world, and now it is also home to the fastest car-driving AI. Self-driving software developed by a team at the country’s largest science and tech university has set a new record for the fastest speed achieved by an autonomous car, reaching an impressive 197.7 mph (318 km/h).

    AI Takes the Wheel of a Maserati MC20

    Researchers from Politecnico di Milano collaborated with the Indy Autonomous Challenge (IAC) to put a “robo-driver” behind the wheel of a customized Maserati MC20 Coupe. This $243,000 beast delivers 630 hp and 538 lb-ft (729 Nm) of torque from its 3.0-liter twin-turbo V6 engine. It has a top speed of 202 mph (325 km/h), meaning the AI came just slightly short of the MC20’s top limit.

    This achievement took place during the 1000 Miglia Experience Florida at the Kennedy Space Center on February 23. The car sped down a 2.8-mile-long (4.5 km) runway, surpassing the previous record of 177 mph (285 km/h) set by the same car last November.

    The self-driving Maserati MC20 hit 197.7 mph on a Kennedy Space Center runway, smashing previous autonomous speed records
    Indy Autonomous Challenge

    Additionally, this new record beats the previous one of 192.2 mph (309.3 km/h), set by the PoliMOVE team (a joint project between Politecnico di Milano and the University of Alabama) with an IAC AV-21 race car in April 2022.

    The AI driving system was developed by researchers at Italy’s Politecnico di Milano in collaboration with Indy Autonomous Challenge
    Indy Autonomous Challenge

    Watch the AI Maserati in Action

    Watch the driverless Maserati zoom down the runway in the video below. In addition to the impressive visuals, the video displays live telemetry data. Two GPS units precisely record the car’s speed, which is slightly lower than what appears on screen, ensuring an accurate and reliable measurement of the car’s performance.

    Paul Mitchell, CEO of the Indy Autonomous Challenge, explained that the program is about more than putting on a show: it tests the capabilities of self-driving technology in extreme conditions, pushing the boundaries of what autonomous systems can achieve in high-pressure environments and refining the technology for real-world applications. “We are pushing AI-driving software and robotics hardware to the absolute limit,” he said. “Doing this with a street car helps transition the lessons learned from autonomous racing to enable safe, secure, sustainable, high-speed autonomous mobility on highways.”


    Read the original article on: New Atlas

    Read more: Consumer Reports Declares Toyota and Lexus No Longer the Most Reliable Car Brands

  • The Reliability Puzzle: Keeping Data Systems Running

    Credit: Canva

    What Makes a System Reliable?

    A reliable data system continues to function even when hardware fails or humans make mistakes. Think of it like a car—if a tire blows, the vehicle should still be able to move safely. Similarly, software systems like databases employ redundancy, replication, and failover mechanisms to maintain stability.

    For example, cloud-based databases use data replication across multiple servers. If one server crashes, another can take over without service disruption. This ensures uptime and prevents data loss, a crucial aspect for mission-critical applications like banking systems and e-commerce platforms.
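    The replication-and-failover pattern described above can be sketched in a few lines. In this minimal illustration the server names and the simulated crash are hypothetical stand-ins for real database connections:

```python
# Minimal failover sketch: try each replica in turn until one answers.
REPLICAS = ["db-primary", "db-replica-1", "db-replica-2"]

def read_record(server: str, key: str) -> str:
    """Pretend read: the primary is 'down'; replicas serve the same data."""
    if server == "db-primary":          # simulate a crashed primary
        raise ConnectionError(f"{server} is unreachable")
    return f"value-of-{key}@{server}"

def reliable_read(key: str) -> str:
    last_error = None
    for server in REPLICAS:             # failover: walk the replica list in order
        try:
            return read_record(server, key)
        except ConnectionError as err:
            last_error = err            # remember the fault, try the next server
    raise RuntimeError("all replicas failed") from last_error

print(reliable_read("order:42"))  # served by db-replica-1 despite the crash
```

    The caller never sees the fault: the read succeeds as long as at least one replica is up, which is exactly the uptime guarantee replication is meant to provide.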

    Faults vs. Failures

    Understanding the difference between faults and failures is key to designing resilient systems. A fault is a localized issue, such as a disk crash, network latency, or a temporary software bug. A failure, on the other hand, is when the entire system becomes inoperable.

    Netflix’s Chaos Monkey is a well-known tool that deliberately introduces faults into their infrastructure. By randomly shutting down services and hardware components, Netflix ensures its platform remains reliable even under adverse conditions. This proactive testing helps identify vulnerabilities before they cause real failures, preventing service downtime for millions of users.
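    A toy version of this idea, in the spirit of Chaos Monkey rather than its actual implementation, randomly kills a service instance and then checks whether the cluster still has enough replicas to serve traffic:

```python
import random

def run_chaos_round(instances: set[str], rng: random.Random) -> set[str]:
    """Kill one randomly chosen instance and return the survivors."""
    victim = rng.choice(sorted(instances))
    return instances - {victim}

def cluster_is_healthy(instances: set[str], min_replicas: int = 2) -> bool:
    """The cluster can keep serving traffic while enough replicas remain."""
    return len(instances) >= min_replicas

rng = random.Random(0)                   # seeded so the experiment is reproducible
cluster = {"api-1", "api-2", "api-3"}
cluster = run_chaos_round(cluster, rng)
print(cluster_is_healthy(cluster))       # True: a 3-node cluster survives one kill
```

    Running such rounds continuously, as Netflix does in production, surfaces single points of failure long before a real outage would.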

    Human Error and Beyond

    Surprisingly, most system outages result from human error rather than hardware failures. Misconfigurations, untested deployments, and unintended database modifications can all lead to service disruptions. To mitigate these risks, companies implement best practices such as:

    • Automated Testing: Running code in a sandboxed environment before deployment to catch errors early.
    • Version Control: Using tools like Git to track changes and revert to previous stable versions when necessary.
    • Observability and Monitoring: Employing real-time monitoring systems to detect anomalies and trigger alerts before a failure occurs.
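    The monitoring practice above can be sketched as a tiny rolling-window alarm; the window size and threshold here are illustrative, not recommendations:

```python
from collections import deque

class ErrorRateMonitor:
    """Track recent request outcomes and alert when the error rate spikes."""

    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.outcomes = deque(maxlen=window)   # True means the request failed
        self.threshold = threshold

    def record(self, failed: bool) -> None:
        self.outcomes.append(failed)

    def alert(self) -> bool:
        if not self.outcomes:
            return False
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate > self.threshold           # fire before a full outage develops

monitor = ErrorRateMonitor(window=10, threshold=0.2)
for failed in [False, False, True, True, True, False, False, False, False, False]:
    monitor.record(failed)
print(monitor.alert())  # True: 3/10 failures exceeds the 20% threshold
```

    Real systems feed such signals into alerting pipelines (pagers, dashboards) so that humans or automated remediation can react before users notice a failure.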

    Reliability isn’t just about technology; it’s also about fostering a culture of resilience. Teams that emphasize blameless post-mortems and continuous learning create environments where failures are seen as opportunities for improvement rather than just costly mistakes.

    Conclusion

    Building reliable data systems requires a combination of fault tolerance, proactive testing, and human-aware design. By learning from industry leaders like Netflix and adopting best practices from resources such as Designing Data-Intensive Applications, organizations can create systems that withstand failures while maintaining seamless user experiences.


    Read more: Why Your Smart Home Needs Big Data

  • Humanoid Robots Join Assembly Line to Build More of Themselves

    Apptronik’s Apollo robot will have to successfully carry out simple repetitive tasks before it can help manufacture humanoid bots like itself
    Apptronik

    The self-replicating robot era may soon be upon us: Apptronik’s humanoid Apollo robot is preparing to help produce more copies of itself, thanks to a new partnership with global engineering firm Jabil. Jabil, which manufactures components for major brands like Apple, Dell, and HP, will integrate Apollo robots into its assembly lines, including those dedicated to building more Apollo robots.

    Apollo’s Initial Tasks in Manufacturing and its Role in Future Production

    Before it can start mass production, Apollo will need to prove its capabilities. Initially, it will handle a variety of simple, repetitive tasks, including inspection, sorting, kitting, lineside delivery, fixture placement, and sub-assembly. The ultimate goal is for Apollo to be deployed in active manufacturing environments to assist human workers.

    Apollo stands 5 feet 8 inches tall and can haul payloads of up to 55 lb
    Apptronik

    Jabil also plans to scale up the production of Apollo robots, aiming to make them more affordable for Apptronik’s customers. First revealed in 2023, Apollo is expected to be commercially available next year.

    Standing at 5 feet 8 inches (173 cm) and capable of carrying up to 55 pounds (25 kg), Apollo can operate for up to four hours on a single charge. Currently, it is able to perform basic tasks like loading cargo and moving items around warehouses, but the addition of product assembly tasks will mark a significant step forward for the bipedal robot.

    Apollo is currently capable of stacking cases in warehouses and moving cargo around – so manufacturing copies of itself will be quite the step up
    Apptronik

    Apptronik’s Vision for Apollo: Expanding Beyond Manufacturing

    Apptronik envisions bigger things for Apollo. Earlier this year, the company sent Apollo robots to Mercedes-Benz to assist human workers in car production, although this project is still in the pilot phase. Additionally, Apptronik raised $350 million in a Series A funding round to scale up production and formed a partnership with Google DeepMind to integrate AI into Apollo.

    Rafael Renno, Senior VP of Global Business Units at Jabil, emphasized the importance of this project for the future of manufacturing: “Not only will we gain insights into how general-purpose robots can impact our operations, but as we begin producing Apollo units, we can help shape the future of manufacturing.”

    Price Predictions for Apollo and Industry Comparisons

    Apptronik has not yet disclosed the price of Apollo when it hits the market, but we have some price references: Unitree prices its G1 robot at $16,000, and Tesla expects Optimus to cost between $20,000 and $30,000.

    While Apptronik is still testing Apollo’s manufacturing capabilities, the company believes humanoid robots like Apollo will soon become widespread, entering new markets such as retail, elder care, and eventually home use.


    Read the original article on: New Atlas

    Read more: A New Learning Framework Enables Humanoid Robots to Quickly Recover and Stand Up After Falling

  • FAA Evaluates Starlink Terminals as Musk Criticizes Verizon’s Technology

    The Federal Aviation Administration (FAA) has begun testing SpaceX Starlink satellite internet terminals within the national airspace system, nearly two years after awarding Verizon a $2 billion contract for similar work.
    Image Credits:Starlink

    SpaceX CEO Elon Musk criticized Verizon’s system on his social media platform X, claiming it is “not working” and poses a serious risk to air travelers. Verizon has yet to respond to requests for comment.

    Bloomberg Report: Musk’s Team Involved in Air Traffic Control Modernization

    Bloomberg first reported the news, which follows recent remarks by U.S. Transportation Secretary Sean Duffy about Musk and his so-called “Department of Government Efficiency” assisting in modernizing air traffic control systems.

    In a statement Monday, the FAA highlighted ongoing challenges with reliable weather data in Alaska’s aviation sector. The agency noted that the 2024 FAA Reauthorization request calls for improved telecommunications to address these issues, prompting consideration of Starlink since the previous administration.

    Currently, the FAA is testing one Starlink terminal at its Atlantic City facility and two others at non-safety critical sites in Alaska.

    Reshaping Federal Operations with DOGE Team

    Over the past month, Musk has been deeply involved in reshaping federal operations, aided by a team largely composed of employees from his own companies, such as SpaceX and Tesla.

    This group, operating under the so-called Department of Government Efficiency (DOGE), has secured access to multiple federal agencies, including those responsible for regulating Musk’s businesses—some of which are reportedly still investigating them.

    President Donald Trump has stated that Musk will oversee any potential conflicts of interest himself, leaving no independent agency or official monitoring whether the billionaire is personally profiting from this level of influence and access.


    Read the original article on: TechCrunch

    Read more: Elon Musk Confirms the Launch of Grok 3 as the Most Advanced AI in the World

  • Instagram’s Latest ad Format Allows Creators to Earn Money by Sharing Testimonials in Comments

    Instagram is rolling out a new way for creators to earn money by promoting products through brand collaborations. On Thursday, Meta introduced “Testimonials” as part of its Partnership Ads, enabling creators to receive payment for written endorsements in the form of comments on a brand’s posts and ads, including Feed posts and Reels.
    Credit: Pixabay

    This update formalizes a practice that already happens across social media. Many brands incentivize users to leave positive reviews, making it difficult for potential buyers to distinguish genuine customer feedback from paid promotions.

    Enhancing Transparency in Paid Endorsements

    By allowing creators to disclose their paid endorsements, this feature adds a layer of transparency while leveraging their influence. Although these testimonials are sponsored rather than organic, creators put their reputation on the line when endorsing products. If they promote items indiscriminately, they risk losing their audience’s trust.

    Image Credits: Instagram

    According to Meta, 40% of users now rely on creator recommendations when shopping on Instagram.

    Expanding Partnership Ads for Creator Endorsements

    Through Partnership Ads, these endorsements have typically appeared as branded content with a paid partnership label or as collab posts, where both the creator and brand are listed as co-authors.

    The new Testimonials feature allows creators to write brief, under-125-character messages linked to a brand’s campaign or product. To participate, creators submit their endorsements to the brand, which then attaches them to the relevant ad. Brands can collaborate with any creator who meets Meta’s eligibility requirements.

    Meta informed TechCrunch that brands and creators will handle negotiations, including pricing, independently, with payments occurring outside the app.

    Once posted, testimonials will be pinned at the top of the comments section on Instagram, increasing their visibility along with the creators behind them.

    Creators can also monitor their testimonials and other Partnership Ads through their Creator Settings within the app. This allows them to track how their content is being used and even withdraw it if necessary.


    Read the original article on: TechCrunch

    Read more: Instagram Reels’ New TikTok-like Feature