
  • Meet the New AI Startup from Former Meta Engineers


    A smart ring aims to turn voice into an interface and thoughts into actions, marking the dawn of conversational wearables and reshaping how we engage with AI.
    Image Credits: Reproduction


    For decades, our interaction with technology has revolved around screens, keyboards, and touch. Now, a new generation of devices is reimagining that relationship—where speech, gesture, and presence emerge as the next interfaces.

    Following pendants, bracelets, and smart cards, a minimalist concept takes shape: a ring that listens, processes, and understands.

    Stream: A Mouse for Your Voice by Sandbar

    Developed by Sandbar, a startup founded by two former Meta interface designers, the Stream is described as “a mouse for your voice.” Compact, lightweight, and intuitive, it lets users record thoughts, chat with an AI assistant, and control music through subtle gestures.

    Sandbar CEO Mina Fahmi told TechCrunch that the idea grew out of personal frustration: while testing large language models, Fahmi found that reaching for an always-connected smartphone got in the way of capturing spontaneous ideas.

    The outcome is a piece of conversational hardware: the Stream’s microphone activates only when the user touches the ring’s pad, putting the wearer in control of when it listens. It’s sensitive enough to detect even a whisper, and all recordings are automatically transcribed and organized in an AI-powered app.
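    As a purely illustrative sketch (Sandbar’s actual software is not public, and every name below is invented), the touch-gated behavior described here amounts to a simple push-to-talk capture loop:

```python
# Purely illustrative sketch: a touch-gated ("push-to-talk") capture loop like
# the behavior described above. Every name here is invented; it does not
# reflect Sandbar's real software.

def capture_notes(touch_events, transcribe):
    """Record audio only while the ring's pad is touched, then transcribe.

    touch_events: iterable of (state, audio_chunk) pairs with state in
    {"down", "held", "up"}; transcribe: any callable mapping audio -> text.
    """
    notes, buffer = [], []
    for state, chunk in touch_events:
        if state in ("down", "held"):   # mic active only while the pad is touched
            buffer.append(chunk)
        elif state == "up" and buffer:  # releasing the pad finalizes one note
            notes.append(transcribe(b"".join(buffer)))
            buffer = []
    return notes

# Toy usage with a stub transcriber:
events = [("down", b"he"), ("held", b"llo"), ("up", b""),
          ("down", b"hi"), ("up", b"")]
notes = capture_notes(events, transcribe=lambda audio: audio.decode())
# notes == ["hello", "hi"]
```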

    Users can revisit past conversations and ideas through an interactive timeline and even personalize the assistant’s voice to mirror their own.

    The Minds Behind Sandbar

    Sandbar, founded by Mina Fahmi and Kirak Hong—a former Google engineer and CTRL-Labs expert—draws on their human-computer interaction experience to make Stream seamlessly bridge thought and action.

    The device also supports media controls, recognizing gestures like swipes and taps to play, pause, or adjust volume—handy for on-the-go use without ever pulling out your phone.

    The Stream enters a competitive market with rivals like Friend, Limitless, Bee, and Taya, all exploring AI wearables. What sets Sandbar apart is its minimalist vision and emphasis on idea capture rather than building a “digital companion.”

    Pricing and Availability

    The device is now available for pre-order, priced at US$249 for the silver version and US$299 for the gold, with deliveries planned for next summer in the Northern Hemisphere. No launch date has been announced for Brazil yet.

    Stream represents an exploration into the future of human-AI interaction. If smartphones are the interface of the internet and smartwatches of the body, voice wearables could become the interface of the mind, linking thought and action in a single gesture.

    While the spotlight remains on the race for chips and AI models, the next frontier may unfold quietly. The real question: will whoever perfects the design of the human-machine experience define the next era of AI?


    Read the original article on: Startse

    Read more: Study Shows Horror Movies May Reduce Stress and Enhance Well-Being

  • Meta Unveils the Smart Glasses of Your Dreams


    Meta introduced several products at yesterday’s Connect event, but the spotlight was on two: the Ray-Ban Meta Display glasses and a surprise accessory called the Neural Band. The glasses mark the official debut of the long-rumored “Hypernova” project—Meta’s attempt at mainstream, easy-to-use smart glasses with a built-in display.
    Meta’s long-awaited smart glasses finally bring the goods, but may take a backseat to a far less expected reveal. Image Credits: Meta/Ray-Ban


    More precisely, a display. The Display glasses project a 600×600-pixel, 5,000-nit image into the right lens only, positioned near the lower edge of your field of view.

    Smartphone Features First, AR Second

    The glasses are technically designed for augmented reality, though that wasn’t strongly highlighted during the debut. Most of the features resembled “a smartphone on your face,” such as reading messages without pulling out your device. The most compelling demo, in my view, was the live transcription and translation tool, which overlays subtitles onto the real world. The glasses also include a 12-megapixel camera that records up to 1440p video at 30 frames per second.

    While the right arm houses a button and touch controls, Meta also unveiled a more innovative way to interact with them.

    The Meta Neural Band relies on surface electromyography (sEMG) to pick up tiny muscle signals in the wrist, translating them into hand and finger movements. This allows users to operate the glasses without touching the frames and provides a far more precise level of control.

    Gesture Controls at Your Wrist

    Early demos—better described as “wrists-on” than hands-on—highlighted gestures such as “clicking” (tapping the index finger against the thumb), “swiping” (running the thumb along the index finger), and “zooming” (pinching in the air). These are interactions that would typically require a smartphone.
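    For illustration only, the gesture vocabulary above can be pictured as a small event-to-action mapping; the gesture names and actions below are assumptions for the sketch, not Meta’s actual interface:

```python
# Hypothetical sketch: translating recognized Neural Band gestures into UI
# actions. Nothing here reflects Meta's real software.

def dispatch(gesture, value=None):
    """Map a recognized wrist gesture to a UI action string."""
    if gesture == "click":        # index finger taps the thumb
        return "select"
    if gesture == "swipe":        # thumb runs along the index finger
        return f"scroll {value}"
    if gesture == "zoom":         # pinch in the air
        return "zoom in" if value > 0 else "zoom out"
    return "ignore"               # unrecognized signal: do nothing

# Toy usage:
dispatch("click")          # "select"
dispatch("swipe", "up")    # "scroll up"
dispatch("zoom", -1)       # "zoom out"
```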

    Meta also touched on updates like fresh designs for the Ray-Ban Meta (Gen 2) glasses, broader AI enhancements, and new roadmaps for its software platforms—but it was the glasses and the Neural Band that clearly dominated the spotlight.

    Meta’s sEMG wristband could revolutionize all user input, not just for glasses. Image Credits: Meta

    The battery life, however, poses a real concern. It’s rated at just six hours, which—as someone who relies on prescription lenses—I can say falls far short, even with the included charging case. Meta notes that the Display glasses support prescriptions from -4.00 to +4.00, but without the option of hot-swappable batteries, the practicality is questionable. As it stands, this first-generation model seems best suited for people with good vision or those willing to wear contacts.

    There’s long been speculation over which company would be first to integrate a true display into glasses. For a time, Google seemed poised to take the lead—but in the end, Meta crossed the finish line. The real question now is whether the Display glasses’ fairly straightforward design will leave less of a mark than the far more novel Neural Band.

    The Ray-Ban Meta Display glasses launch on September 30 for $799, available through Best Buy, LensCrafters, Sunglass Hut, and Ray-Ban retail stores.


    Read the original article on: Extreme Tech

    Read more: Cocoa Flavanols Reduce Age-Related Heart Inflammation in Older Adults

  • Meta Shares Soar as Q2 Results Beat Forecasts, Overcoming High AI-Related Expenses


    Meta's aggressive investment in artificial intelligence seems to be winning over investors, as the company's stock jumped in after-hours trading Wednesday following a stellar quarterly earnings report.
    The Facebook logo is seen on a cell phone in Boston, USA, Oct. 14, 2022. Image Credits: AP Photo/Michael Dwyer, File


    Meta, headquartered in Menlo Park, California, surpassed Wall Street’s second-quarter expectations with ease, thanks to a boost in advertising revenue and a growing user base across its core social media platforms. These gains are fueling the company’s significant investments in AI and the recruitment of top-tier talent at strikingly high salaries.

    Forrester research director Mike Proulx said Meta is actively advancing in AI and positioning itself for long-term growth, even as antitrust challenges and changing attitudes toward social media threaten its app ecosystem.

    Antitrust Ruling Could Force Meta to Spin Off WhatsApp and Instagram

    Meta is currently awaiting a ruling in an antitrust case that could potentially require it to divest WhatsApp and Instagram—two platforms it acquired over a decade ago that have since become major players in the social media landscape.

    In the April–June quarter, Meta reported earnings of $18.34 billion, or $7.14 per share—a 36% increase from $13.47 billion, or $5.16 per share, during the same period last year.

    Meta’s revenue surged 22% to $47.52 billion, up from $39.07 billion.

    According to a FactSet survey, analysts had projected earnings of $5.88 per share on $44.81 billion in revenue—figures Meta handily exceeded.
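    The growth figures above are easy to verify from the quoted numbers (a quick check using only values from the article):

```python
# A quick check of the quarter's growth math, using only figures quoted above.
def yoy_growth_pct(current, prior):
    """Year-over-year growth, in percent."""
    return 100 * (current / prior - 1)

earnings_growth = yoy_growth_pct(18.34, 13.47)   # ~36%, as reported
revenue_growth = yoy_growth_pct(47.52, 39.07)    # ~22%, as reported

beat_eps = 7.14 > 5.88         # reported EPS vs. FactSet consensus
beat_revenue = 47.52 > 44.81   # reported revenue vs. consensus, in $B
```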

    The company’s suite of apps—Facebook, Messenger, WhatsApp, Instagram, and Threads—saw daily active users climb to 3.48 billion, marking a 6% increase from the previous year.

    Meta Forecasts Sharp Rise in 2025 Spending as AI Push Intensifies

    Meta announced that its expenses are expected to rise as the company pours billions into infrastructure and attracts top-tier talent with high salaries to advance its AI goals. It projects total spending in 2025 to reach between $114 billion and $118 billion, representing a year-over-year increase of 20% to 24%.

    In a fresh display of his commitment to AI, CEO Mark Zuckerberg shared a post on Wednesday outlining his vision for “personal superintelligence,” which he believes will help accelerate human progress. While he claimed this level of intelligence is now “within reach,” he didn’t provide specifics on how it would be achieved or clearly define what he means by “superintelligence.”

    The concept Zuckerberg refers to aligns with what other tech firms call artificial general intelligence (AGI)—a new focus for the CEO who, in 2021, rebranded the company to emphasize the metaverse and committed billions to developing virtual and augmented reality.

    “Our goal at Meta is to make personal superintelligence available to everyone,” Zuckerberg wrote. “We want individuals to be able to shape it according to their own values and needs. This differs from others in the field who envision superintelligence as a centralized force designed to automate all useful work, leaving people to live off its output.”

    Zuckerberg Bets on AI Glasses as Gateway to Superintelligence

    During a conference call, Zuckerberg said he envisions AI-powered glasses as “the primary way we’ll interface with superintelligence” in the future.

    In June, Meta invested $14.3 billion in the AI firm Scale and brought on its CEO, Alexandr Wang, to join a team focused on developing superintelligence. The company also signed a 20-year agreement earlier that month to secure nuclear energy, aiming to support the growing power demands of AI and other computing infrastructure.

    At the end of the quarter, Meta had 75,945 employees—a 7% increase from the same period last year.

    Following its strong earnings report, Meta’s stock jumped $81.87, or 11.8%, in after-hours trading to $777.08, setting the stage for a potential all-time high when markets open Thursday.
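    The quoted move is internally consistent; working backward from the article’s numbers:

```python
# Sanity-checking the after-hours move quoted above (figures from the article).
close_after_hours = 777.08   # reported after-hours price
jump = 81.87                 # reported dollar gain

prior_price = close_after_hours - jump       # implied prior price, ~695.21
percent_change = 100 * jump / prior_price    # ~11.8%, matching the report
```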


    Read the original article on: Tech Xplore

    Read more: TikTok Introduces Updated Parental Controls and New Features For Creators

  • Meta Plans to Replace your Mouse and Keyboard with a Bracelet


    Meta researchers have created a wristband that converts hand gestures into computer commands—like moving a cursor or turning air handwriting into text. This tech could improve device accessibility for people with limited mobility and offer easier, more intuitive control for everyone.
    Image Credits: New Atlas


    A Wrist-Worn Device That Translates Nerve Signals Into Digital Commands

    In a recent Nature paper, Meta’s Reality Labs detailed its sEMG-RD (surface electromyography research device), which uses sensors to convert electrical nerve signals from the wrist to the hand into digital commands for controlling connected devices.

    These signals are basically your brain sending instructions to your hand to perform chosen actions—so they can be viewed as deliberate commands. The demo video below shows the device in action.

    Meta started developing this technology several years ago. In 2021, Thomas Reardon’s team at Reality Labs developed a prototype gesture control device using electromyography. At the time, Meta focused on improving augmented reality interactions, with early goals like replicating a basic mouse click. Reardon also headed the research featured in the current paper.

    A 2021 prototype of Meta’s wristband gesture control device. Image Credits: Meta

    Many other projects have aimed to create similar systems—for example, a 2023 design used barometric-pressure sensors to detect 10 hand gestures, while the Mudra Band claims to control an Apple Watch through Surface Nerve Conductance and simple movements.

    Meta’s sEMG-RD Enables Gesture Control and Air Writing at Near-Typing Speeds

    Meta’s sEMG-RD tech takes things further. It enables full interface control with gestures like pinches, swipes, and taps—not just basic cursor movement. Users can even write in the air at 20.9 words per minute, close to the 36 WPM smartphone average.

    Unlike previous gesture detection systems, Meta’s sEMG-RD doesn’t need individual calibration to accurately translate signals into commands. Image Credits: Courtesy of the researchers

    Meta’s Neural Network Adapts sEMG-RD to Any User Instantly

    The system works out of the box without calibration, though it can be customized for accuracy. A neural network, trained on large-scale user data, reliably turns raw signals into commands for any wearer.

    A render of the sEMG-RD bracelet. Image Credits: Courtesy of the researchers

    The researchers trained their system on data from thousands of participants to create generalized models that accurately interpret input across users. This eliminates the need to individually adjust the sEMG-RD for each person. As a result, the wearable can be quickly adopted, much as a computer mouse requires no hand calibration.
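    To see why pooled training can stand in for per-user calibration, here is a toy numerical illustration. The data is synthetic and a nearest-centroid classifier stands in for the real neural network; nothing here reflects Meta’s actual model:

```python
# Toy illustration (NOT Meta's method): gesture "signatures" shared across
# users, plus a per-user offset that a model trained on many users must absorb.
# A nearest-centroid classifier fit on pooled data is then tested on a user it
# has never seen, with no calibration step.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_classes, dim = 50, 3, 4
class_centers = rng.normal(scale=5.0, size=(n_classes, dim))

def simulate_user(rng, samples_per_class=20):
    """Fake sEMG-like features: shared class signature + user offset + noise."""
    offset = rng.normal(scale=1.0, size=dim)
    X, y = [], []
    for c in range(n_classes):
        X.append(class_centers[c] + offset
                 + rng.normal(scale=0.5, size=(samples_per_class, dim)))
        y.extend([c] * samples_per_class)
    return np.vstack(X), np.array(y)

# Pool training data across many simulated users.
Xs, ys = zip(*(simulate_user(rng) for _ in range(n_users)))
X_train, y_train = np.vstack(Xs), np.concatenate(ys)
centroids = np.stack([X_train[y_train == c].mean(axis=0)
                      for c in range(n_classes)])

# Evaluate on a brand-new user: no per-user adjustment is made.
X_new, y_new = simulate_user(rng)
dists = ((X_new[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
accuracy = (dists.argmin(axis=1) == y_new).mean()
```

    Because the per-user offset is small relative to the separation between gesture signatures, the pooled model classifies the unseen user’s gestures accurately out of the box, which is the intuition behind calibration-free operation.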

    An illustration of the setup used to capture training data from a participant using the sEMG-RD wearable. Image Credits: Courtesy of the researchers

    The team aims to advance the tech to detect gesture force, allowing finer control of devices like cameras and joysticks. It could also make using phones and other digital tools even less physically demanding. More intriguingly, it enables entirely new interactions—using unique muscle patterns or signals the wristband could learn to interpret, including gestures we haven’t yet imagined.


    Read the original article on: New Atlas

    Read more: A New Cleaning Robot May Help Automate Household Chores

  • Meta Is Reportedly In Discussions To Acquire Voice Cloning Startup Play AI


    Image Credits: Jonathan Raa/NurPhoto / Getty Images

    In addition to strengthening its AI research team, Meta appears focused on expanding its consumer AI offerings. According to Bloomberg, the company is in talks to acquire voice cloning startup Play AI, with plans to integrate its technology and hire some of its employees.

    The report states that the tech giant plans to acquire the startup’s core technology and onboard a portion of its team, likely to accelerate development of its own AI-powered voice tools. By absorbing both the intellectual property and key talent behind Play AI, Meta could more quickly integrate advanced voice cloning capabilities into its growing suite of AI products, enhancing user experiences across its platforms.

    Play AI Offers Voice Cloning Tech Backed by Major Investors

    According to its website, Play AI allows users to clone various types of voices for AI-driven applications like customer service. Crunchbase reports that the startup has raised $23.5 million from investors including 500 Startups, Kindred Ventures, Race Capital, 500 Global, and Soma Capital.

    Meta already enables creators across its platforms to build custom chatbots, and it recently added video editing capabilities to its Meta AI assistant, signaling a broader push into generative media. By acquiring a voice cloning startup, the company could round out its creative toolkit with audio generation capabilities—allowing users to create lifelike voiceovers, virtual assistants, or interactive characters.

    This move would further position Meta as a one-stop platform for creators, blending text, video, and voice to produce immersive AI-driven content across Facebook, Instagram, and future metaverse experiences.


    Read the original article on: TechCrunch

    Read more: Your Smartwatch Could Detect Illness Early and Aid Pandemic Prevention

  • Meta Recruits Leading OpenAI Scientist to Advance AI Reasoning Models


    Meta has brought on Trapit Bansal, a prominent researcher from OpenAI, to join its new AI superintelligence division focused on developing reasoning models, according to a source speaking to TechCrunch.
    Image Credits: David Paul Morris/Bloomberg / Getty Images


    OpenAI spokesperson Kayla Wood confirmed Bansal’s departure, and his LinkedIn profile indicates he left the company in June.

    Trapit Bansal, who joined OpenAI in 2022, played a pivotal role in launching the company’s reinforcement learning efforts alongside co-founder Ilya Sutskever. He’s also credited as a core contributor to OpenAI’s first AI reasoning model, o1.

    Bansal Joins Meta’s Elite AI Team to Boost Reasoning Model Efforts

    His move to Meta is expected to be a major asset for the company’s new AI superintelligence unit, which already includes high-profile names like former Scale AI CEO Alexandr Wang. Meta is also reportedly in talks with ex-GitHub CEO Nat Friedman and Safe Superintelligence co-founder Daniel Gross. Bansal’s expertise could help Meta develop a next-generation reasoning model to rival top-tier systems like OpenAI’s o3 and DeepSeek’s R1. As of now, Meta hasn’t released a reasoning model of its own.

    CEO Mark Zuckerberg has been aggressively expanding Meta’s AI team, reportedly offering compensation packages of up to $100 million to attract top talent. While it’s not known what Bansal was offered, his recruitment is part of a broader trend.

    According to The Wall Street Journal, three other former OpenAI researchers—Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai—have also joined Meta’s AI superintelligence group. Bansal will be joining them, along with former Google DeepMind scientist Jack Rae and Johan Schalkwyk, previously a machine learning lead at the startup Sesame, as reported by Bloomberg.

    Zuckerberg Explored Acquiring Top AI Startups to Bolster Superintelligence Unit

    To expand its AI superintelligence division, Mark Zuckerberg reportedly pursued acquisitions of several high-profile AI startups, including Ilya Sutskever’s Safe Superintelligence, Mira Murati’s Thinking Machines Labs, and Perplexity. However, none of these discussions advanced to a finalized deal.

    During a recent podcast appearance, OpenAI CEO Sam Altman remarked that Meta has made efforts to lure top talent away from his company, but stated that “none of our best people have decided to take him up on that.”

    Meta declined to provide a comment on the matter.

    AI Reasoning Models Emerge as a Top Priority for Meta’s Superintelligence Team

    AI reasoning models are a critical focus for Meta’s new superintelligence team. Over the past year, companies like OpenAI, Google, and DeepSeek have released advanced models that demonstrate strong reasoning abilities, pushing the boundaries of what AI can achieve. These models improve performance by taking extra time and computing power to work through problems before delivering answers—an approach that’s shown success in both benchmark tests and real-world tasks.

    Meta’s superintelligence lab has the potential to become a core driver of AI innovation across the company, similar to how DeepMind supports Google’s broader ecosystem. Meta is also pursuing a major initiative to build AI business agents, led by former Salesforce AI chief Clara Shih. To make these agents truly competitive, Meta must first develop state-of-the-art reasoning models to power them.

    With hires like Bansal and other top AI experts, Meta aims to gain ground in the AI race. However, that goal may be challenged by OpenAI’s upcoming launch of an open AI reasoning model—an announcement that could raise the stakes for Meta’s own public AI tools.


    Read the original article on: TechCrunch

    Read more: Facebook Admins Report Widespread Bans, Meta Says It’s Working On a Fix

  • Facebook Admins Report Widespread Bans, Meta Says It’s Working On a Fix


    After mass bans affecting Instagram and Facebook, users report Facebook Groups are now facing widespread suspensions. Reddit posts suggest thousands of groups across various categories, in the U.S. and abroad, have been affected.
    Image Credits: Pixabay


    Meta spokesperson Andy Stone acknowledged the issue and said the company is actively working on a fix.

    “We’re aware of a technical issue affecting some Facebook Groups and are currently working on a fix,” Meta spokesperson Andy Stone said in a statement to TechCrunch.

    The exact cause of the mass bans remains unclear, though many believe faulty AI-driven moderation may be responsible.

    Innocuous Groups Targeted

    According to reports from impacted users, many of the suspended groups typically don’t post controversial content. Instead, they focus on everyday topics like savings and deals, parenting advice, pet ownership, gaming, Pokémon, mechanical keyboards, and similar interests.

    Group admins say they received vague violation notices citing issues like “terrorism-related” content or nudity — claims they strongly deny.

    While some affected groups are relatively small, many have large followings — with tens of thousands to millions of members.

    Users sharing advice on the issue are recommending that admins avoid appealing the bans for now and instead wait a few days to see if the suspensions are lifted once Meta resolves the bug.

    Reddit Flooded With Reports of Bizarre Group Bans

    On Reddit’s r/facebook community, frustrated posts from group admins and members have surged. Some say every group they manage was removed at once. Others are baffled by the flagged violations — such as a bird photography group with nearly a million members being cited for nudity.

    Some groups reportedly had strict moderation in place, like a family-friendly Pokémon group with close to 200,000 members that was flagged for referencing “dangerous organizations,” or a massive interior design group that received the same notice.

    Mixed Results for Verified Users as Group Ban Issues Spread Across Platforms

    Some Facebook Group admins subscribed to Meta’s Verified service—which offers priority customer support—have managed to receive assistance. Others, however, say their groups were either suspended or permanently deleted.

    It’s still uncertain if this issue is directly connected to the recent surge of individual bans across Meta platforms, but the pattern appears to be spreading across multiple social networks.

    In addition to Facebook and Instagram, platforms like Pinterest and Tumblr have also been hit with user complaints over mass suspensions in recent weeks, fueling speculation that AI-driven moderation may be the root cause.

    Pinterest acknowledged the mass bans were caused by an internal error but denied AI was responsible. Tumblr attributed its suspensions to testing a new content filtering system, though it didn’t confirm whether AI was involved.

    When questioned last week about the Instagram bans, Meta chose not to comment. In response, users have launched a petition — now with over 12,380 signatures — urging Meta to take action. Some affected individuals, including business owners, are even exploring legal options.

    So far, Meta has not provided any explanation for the issues impacting both personal accounts and Facebook Groups.


    Read the original article on: TechCrunch

    Read more: Facebook Says It Will Soon Share All Videos On Its Platform as Reels

  • Meta Files Suit Against ‘Nudify’ App Crush AI Over Platform Ads


    Meta has filed a lawsuit against the creator of Crush AI, a widely used AI “nudify” app, which allegedly ran thousands of advertisements on Meta’s platforms. Alongside the legal action, Meta announced new efforts to curb similar apps.
    Image Credits: Bryce Durbin / TechCrunch


    The lawsuit, filed in Hong Kong, accuses Joy Timeline HK—the company behind Crush AI—of trying to bypass Meta’s ad review system to promote its AI nudification services. According to a Meta blog post, the company repeatedly removed the ads for violating its policies, but Joy Timeline HK allegedly continued submitting new ads despite the removals.

    Meta Ads Fueled Rise of AI-Powered Deepfake App ‘Crush AI,’ Report Reveals

    Crush AI, an app that uses generative AI to create fake, sexually explicit images of real individuals without their consent, allegedly ran over 8,000 ads promoting its “AI undresser” services on Meta’s platforms in the first two weeks of 2025, according to Alexios Mantzarlis, author of the Faked Up newsletter. In a January report, Mantzarlis noted that around 90% of traffic to Crush AI’s websites originated from Facebook or Instagram, and that he had reported several of these sites to Meta.

    To bypass Meta’s ad review system, Crush AI reportedly created dozens of advertiser accounts and frequently rotated domain names. Many of these accounts, according to Mantzarlis, were named “Eraser Annyone’s Clothes” with varying numbers. At one point, the company even operated a Facebook page to promote its services.

    Facebook and Instagram aren’t alone in facing these issues. As platforms like X and Meta rush to integrate generative AI into their services, they’re also grappling with the difficulty of moderating how these tools may contribute to unsafe environments—especially for minors.

    Platforms Struggle to Contain Surge of AI Undressing Apps Despite New Detection Efforts

    Researchers observed a surge in links to AI undressing apps across platforms such as X, Reddit, and YouTube in 2024, with millions of users reportedly exposed to ads for these apps. In response, Meta and TikTok have blocked keyword searches related to AI nudify tools, though completely removing such content remains a major challenge.

    Meta noted in a blog post that it has developed new detection technology to identify AI nudify or undressing ads—even those without explicit content. The company also said it’s employing matching technology to more efficiently locate and eliminate similar ads, and has broadened the range of keywords, phrases, and emojis flagged by its systems.
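    The keyword-and-phrase side of that screening can be pictured as a simple matcher. The term list below is entirely hypothetical, and the real systems described above are far more sophisticated:

```python
# Illustrative only: a minimal flagged-term matcher for ad copy. The term list
# is hypothetical; production systems combine many more signals.
FLAGGED_TERMS = {"undress", "nudify", "remove clothes"}

def flags_ad(ad_text):
    """Return True if the ad copy contains any flagged term (case-insensitive)."""
    text = ad_text.lower()
    return any(term in text for term in FLAGGED_TERMS)

flags_ad("Try our AI Undresser today")   # True: contains "undress"
flags_ad("Spring clothing sale")         # False
```

    Substring matching like this is also why advertisers rotate spellings and account names, and why Meta says it keeps broadening the flagged keywords, phrases, and emojis.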

    Meta stated that it is now using the same strategies it has historically employed to dismantle coordinated networks of malicious actors to target groups running ads for AI nudify services. Since the beginning of 2025, the company claims to have disrupted four such networks.

    Beyond its own platforms, Meta also announced plans to share information about AI nudify apps through the Tech Coalition’s Lantern program—a joint initiative involving Google, Meta, Snap, and others aimed at combating child sexual exploitation online. Since March, Meta says it has contributed over 3,800 unique URLs to this collaborative effort.

    On the policy side, Meta stated it will “continue to support legislation that gives parents the ability to manage and approve the apps their teens download.” The company previously backed the U.S. Take It Down Act and is currently collaborating with lawmakers on its implementation.


    Read the original article on: TechCrunch

    Read more: Meta’s V-JEPA 2 Model Trains AI To Understand Its Surroundings

  • Meta’s V-JEPA 2 Model Trains AI To Understand Its Surroundings


    On Wednesday, Meta introduced its latest AI model, V-JEPA 2 — a "world model" aimed at enabling AI agents to better interpret and navigate their surroundings.
    Image Credits: Pixabay


    V-JEPA 2 builds on the original V-JEPA model released last year, which learned from over 1 million hours of video. This extensive training helps robots and other AI systems operate in the physical world by enabling them to understand and predict how forces like gravity shape future events.

    These are the types of intuitive understandings that young children and animals naturally develop as their brains mature — for instance, when playing fetch with a dog, the dog will (ideally) grasp that a ball bouncing on the ground will spring upward, or that it should run toward where it expects the ball to land, rather than chasing its current position.
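    The intuition in that example is ordinary projectile physics; a world model is described as learning such expectations from video rather than from explicit equations like the one below:

```python
# The dog's intuition, written as explicit physics: where will the ball land?
# A world model like V-JEPA 2 is said to learn this kind of expectation from
# video, not from equations.
g = 9.81  # gravitational acceleration, m/s^2

def landing_x(x0, y0, vx, vy):
    """Horizontal position where a projectile launched from (x0, y0) with
    velocity (vx, vy) returns to ground level (y = 0)."""
    # Positive root of y0 + vy*t - 0.5*g*t^2 = 0
    t = (vy + (vy ** 2 + 2 * g * y0) ** 0.5) / g
    return x0 + vx * t

# A ball thrown from 1 m up at 5 m/s forward and 3 m/s upward lands ~4.26 m away.
x_land = landing_x(x0=0.0, y0=1.0, vx=5.0, vy=3.0)
```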

    Teaching AI to Understand and Act in the Physical World

    Meta illustrates scenarios in which a robot might face situations like seeing from a first-person perspective that it’s holding a plate and a spatula while approaching a stove with cooked eggs. The AI can then infer that a logical next step would be to use the spatula to transfer the eggs onto the plate.

    Meta claims that V-JEPA 2 is 30 times faster than Nvidia’s Cosmos model, which also focuses on improving physical-world intelligence. However, it’s possible that Meta is using different evaluation criteria than Nvidia to measure performance.

    “We believe that world models will mark the beginning of a new era in robotics, allowing AI agents to assist with everyday chores and physical tasks in the real world—without requiring massive amounts of robotic training data,” said Meta’s chief AI scientist Yann LeCun in a video.


    Read the original article on: TechCrunch

    Read more: The Release of OpenAI’s Open Model has Been Postponed

  • Law Professors Support Authors in Copyright Lawsuit Against Meta Over AI


    Image Credits: David Paul Morris/Bloomberg / Getty Images

    A group of copyright law professors has submitted an amicus brief supporting the authors suing Meta for allegedly using their e-books without permission to train its Llama AI models.

    Filed Friday in the U.S. District Court for the Northern District of California (San Francisco Division), the brief criticizes Meta’s fair use argument as an unprecedented overreach.

    The professors argue that using copyrighted material to train generative AI isn’t “transformative,” since it serves the same purpose as using those works to educate human authors, which is a core purpose of the original works. They also stress that because Meta aims to generate outputs that could compete in the same markets, and does so for profit, the use is clearly commercial in nature.

    Industry and creator groups back authors in Meta AI copyright suit with amicus briefs

    On Friday, several organizations filed amicus briefs backing the authors in their lawsuit against Meta. These include the International Association of Scientific, Technical, and Medical Publishers—a global trade group for academic and professional publishers—the Copyright Alliance, which advocates for creators across various copyright fields, and the Association of American Publishers.

    Following publication, a Meta spokesperson pointed TechCrunch to amicus briefs submitted earlier in the week by a smaller group of law professors and the Electronic Frontier Foundation, which support Meta’s stance.

    In Kadrey v. Meta, authors allege unauthorized AI training on their works; Meta claims fair use and challenges their standing

    The case, Kadrey v. Meta, involves authors such as Richard Kadrey, Sarah Silverman, and Ta-Nehisi Coates, who accuse Meta of using their e-books without permission to train AI models and stripping them of copyright notices to conceal the infringement. Meta argues its use falls under fair use and that the authors lack the legal standing to bring the case.

    Earlier this month, U.S. District Judge Vince Chhabria ruled that the lawsuit could proceed, though he did dismiss certain portions of it. In his decision, Chhabria stated that the authors’ copyright infringement claims represent “a concrete injury sufficient for standing.” He also found that the authors had “adequately alleged that Meta intentionally removed copyright management information (CMI) to hide the infringement.”

    This case is one of several ongoing legal battles concerning AI and copyright, including The New York Times’ lawsuit against OpenAI.


    Read the original article on: TechCrunch

    Read more: YouTube Expands its AI Fake Detection to Top Creators