Tag: App

  • Meta Files Suit Against ‘Nudify’ App Crush AI Over Platform Ads

    Meta has filed a lawsuit against the creator of Crush AI, a widely used AI “nudify” app, which allegedly ran thousands of advertisements on Meta’s platforms. Alongside the legal action, Meta announced new efforts to curb similar apps.
    Image Credits: Bryce Durbin / TechCrunch

    The lawsuit, filed in Hong Kong, accuses Joy Timeline HK—the company behind Crush AI—of trying to bypass Meta’s ad review system to promote its AI nudification services. According to a Meta blog post, the company repeatedly removed the ads for violating its policies, but Joy Timeline HK allegedly continued submitting new ads despite the removals.

    Meta Ads Fueled Rise of AI-Powered Deepfake App ‘Crush AI,’ Report Reveals

    Crush AI, an app that uses generative AI to create fake, sexually explicit images of real individuals without their consent, allegedly ran over 8,000 ads promoting its “AI undresser” services on Meta’s platforms in the first two weeks of 2025, according to Alexios Mantzarlis, author of the Faked Up newsletter. In a January report, Mantzarlis noted that around 90% of traffic to Crush AI’s websites originated from Facebook or Instagram, and that he had reported several of these sites to Meta.

    To bypass Meta’s ad review system, Crush AI reportedly created dozens of advertiser accounts and frequently rotated domain names. Many of these accounts, according to Mantzarlis, were named “Eraser Annyone’s Clothes” with varying numbers. At one point, the company even operated a Facebook page to promote its services.

    Facebook and Instagram aren’t alone in facing these issues. As platforms like X rush to integrate generative AI into their services, they’re also grappling with the difficulty of moderating how these tools may contribute to unsafe environments—especially for minors.

    Platforms Struggle to Contain Surge of AI Undressing Apps Despite New Detection Efforts

    Researchers observed a surge in links to AI undressing apps across platforms such as X, Reddit, and YouTube in 2024, with millions of users reportedly exposed to ads for these apps. In response, Meta and TikTok have blocked keyword searches related to AI nudify tools, though completely removing such content remains a major challenge.

    Meta noted in a blog post that it has developed new detection technology to identify AI nudify or undressing ads—even those without explicit content. The company also said it’s employing matching technology to more efficiently locate and eliminate similar ads, and has broadened the range of keywords, phrases, and emojis flagged by its systems.
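    Meta has not published implementation details, but the approach it describes (expanded term lists plus matching that tolerates obfuscated ad copy) can be sketched in a few lines of Python. The blocklist, normalization rules, and example below are hypothetical illustrations, not Meta’s actual system:

    ```python
    import re
    import unicodedata

    # Hypothetical blocklist; Meta's actual term, phrase, and emoji lists are not public.
    FLAGGED_TERMS = {"undress", "nudify", "remove clothes", "ai eraser"}
    FLAGGED_EMOJI = {"\U0001F51E"}  # U+1F51E "no one under eighteen", as an example

    def normalize(text: str) -> str:
        """Unicode-normalize, lowercase, and collapse character repeats so a
        simple obfuscation like "undresss" still contains the base term."""
        text = unicodedata.normalize("NFKD", text).lower()
        text = re.sub(r"(.)\1{2,}", r"\1\1", text)  # "sss" -> "ss"
        return re.sub(r"\s+", " ", text).strip()

    def flag_ad_text(text: str) -> bool:
        """Return True if the ad copy matches any flagged term or emoji."""
        norm = normalize(text)
        squashed = norm.replace(" ", "")  # catches spaced-out letters
        if any(t in norm or t.replace(" ", "") in squashed for t in FLAGGED_TERMS):
            return True
        return any(e in text for e in FLAGGED_EMOJI)

    print(flag_ad_text("Eraser Annyone's Clothes 12: AI undresser"))  # True
    ```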

    Meta stated that it is now using the same strategies it has historically employed to dismantle coordinated networks of malicious actors to target groups running ads for AI nudify services. Since the beginning of 2025, the company claims to have disrupted four such networks.

    Beyond its own platforms, Meta also announced plans to share information about AI nudify apps through the Tech Coalition’s Lantern program—a joint initiative involving Google, Meta, Snap, and others aimed at combating child sexual exploitation online. Since March, Meta says it has contributed over 3,800 unique URLs to this collaborative effort.

    On the policy side, Meta stated it will “continue to support legislation that gives parents the ability to manage and approve the apps their teens download.” The company previously backed the U.S. Take It Down Act and is currently collaborating with lawmakers on its implementation.


    Read the original article on: TechCrunch

    Read more: Meta’s V-JEPA 2 Model Trains AI To Understand Its Surroundings

  • Adobe Releases Beta Version of Photoshop App for Android

    Android users can now access Photoshop on their mobile devices. On Tuesday, Adobe announced the release of the beta version of its Photoshop app for Android, arriving four months after the iPhone version debuted.
    Image Credits: TechCrunch

    The app includes many of the same editing tools found on the desktop version—such as layering and masking—adapted for mobile screens. During the beta phase, users can try out the app’s features at no cost.

    Powerful Editing Tools and AI Features Optimized for Mobile Use

    The app’s features include tools for combining and blending images using selections, layers, and masks. Users can also take advantage of AI-powered tools like “Generative Fill” to add or modify elements. The “Tap Select” tool allows for quick removal or replacement of image parts, and users can access a library of free Adobe Stock assets, along with advanced layer and effect controls using blend modes and adjustment layers.

    Additional tools like the “Spot Healing Brush” help remove unwanted elements, while the “Remove” and “Clone Stamp” tools allow for more detailed refinements. “Object Select” and “Magic Wand” features enable precise selections.

    Image Credits: TechCrunch

    Adobe stated that it plans to add more features to the app in the near future.

    Adobe Targets New and Existing Users with Mobile-Friendly Photoshop Experience

    With the mobile launch of Photoshop, Adobe aims to reach new audiences—particularly younger users who rely on their phones for creative work. For existing Photoshop users, the app offers a more convenient way to work directly from their mobile devices.

    The beta version of Photoshop for Android is now available for devices running Android 11 or higher, with at least 6GB of RAM. Adobe recommends 8GB or more for the best experience.

    In contrast to the Android version, the iPhone app launched with a mix of free and premium features. It also offers integration with Photoshop on the web, available through a paid plan. Adobe has not yet announced when or if this feature will be added to the Android app.


    Read the original article on: TechCrunch

    Read more: Stellantis Moves to Android after ending Amazon Partnership

  • Google Quietly Rolls Out an App That Lets Users Download and Run AI Models on Their Own Devices

    Last week, Google quietly launched an app that enables users to run various open-source AI models from the Hugging Face platform directly on their smartphones.
    Image Credits: Pixabay

    Named Google AI Edge Gallery, the app is currently available on Android, with an iOS version on the way. It lets users browse, download, and run supported models that can generate images, answer questions, write or edit code, and more — all offline, using the device’s built-in processor.

    Cloud-based AI models are typically more powerful than those running locally, but they come with trade-offs. Some users may be concerned about sharing personal or sensitive data with remote servers, or prefer access to AI models without relying on Wi-Fi or mobile networks.

    Image Credits: Google

    Early Access to On-Device AI Tools and Customizable Prompts

    Google AI Edge Gallery—described by the company as an “experimental Alpha release”—is available for download on GitHub, with setup instructions provided. The app’s main screen features shortcuts to AI functions like “Ask Image” and “AI Chat.” Selecting a function displays a list of compatible models, including Google’s Gemma 3n.
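    The Gallery wraps the download step in its UI, but under the hood a model fetch from Hugging Face is a routine Hub call. Below is a minimal sketch using the huggingface_hub client; the repo and file names are unverified placeholders for a Gemma 3n variant, not confirmed listings:

    ```python
    from huggingface_hub import hf_hub_download

    # Placeholder identifiers: treat the repo_id and filename as assumptions
    # to be checked against Hugging Face, not confirmed listings.
    model_path = hf_hub_download(
        repo_id="google/gemma-3n-E2B-it-litert-preview",
        filename="gemma-3n-E2B-it-int4.task",
    )
    print(f"Model cached locally at: {model_path}")
    ```

    Once the file is cached, the app hands it to an on-device runtime for inference, so no network access is needed at query time.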

    The app also includes a “Prompt Lab,” which allows users to perform single-step tasks such as text summarization and rewriting. This feature offers preset task templates and adjustable settings to customize model behavior.

    Google cautions that performance may differ depending on your device. Newer phones with stronger hardware will naturally handle models more efficiently, but model size also plays a role — larger models typically take longer to complete tasks like image-based question answering compared to smaller ones.

    The company is encouraging developers to share feedback on the Google AI Edge Gallery experience. The app is released under the Apache 2.0 license, allowing for broad usage, including commercial applications, with minimal restrictions.


    Read the original article on: TechCrunch

    Read more: YouTube Will Soon let Viewers use Google Lens to Search Items in Shorts

  • DeepSeek: A Complete Guide to the AI Chatbot App

    Chinese AI lab DeepSeek gained widespread attention this week as its chatbot app surged to the top of both the Apple App Store and Google Play charts. The company’s AI models, developed with compute-efficient methods, have prompted Wall Street analysts and tech experts to question the U.S.’s ability to maintain its leadership in AI and whether the demand for AI chips will remain strong.
    Image Credits: Depositphotos

    So, what are DeepSeek’s origins, and how did it achieve global recognition so rapidly?

    DeepSeek is supported by High-Flyer Capital Management, a Chinese quantitative hedge fund that leverages AI for its trading strategies.

    From Student Trader to AI-Driven Hedge Fund Founder

    AI enthusiast Liang Wenfeng co-founded High-Flyer in 2015. Having begun exploring trading as a student at Zhejiang University, Liang established High-Flyer Capital Management as a hedge fund in 2019, focusing on creating and implementing AI algorithms.

    In 2023, High-Flyer launched DeepSeek as a separate lab dedicated to AI research, distinct from its financial operations. With High-Flyer as an investor, DeepSeek eventually became an independent company under the same name.

    Building Infrastructure Amid U.S. Hardware Export Restrictions

    From the beginning, DeepSeek developed its own data center clusters for model training. However, like many Chinese AI firms, it has faced challenges due to U.S. export restrictions on hardware. For training one of its latest models, DeepSeek had to rely on Nvidia H800 chips—a less powerful alternative to the H100 chips available to U.S. companies.

    DeepSeek’s technical team is reportedly quite young. The company is known for actively recruiting PhD-level AI researchers from leading Chinese universities. Additionally, DeepSeek hires individuals without computer science backgrounds to help its technology gain a broader understanding of various topics, according to The New York Times.

    DeepSeek introduced its initial models—DeepSeek Coder, DeepSeek LLM, and DeepSeek Chat—in November 2023. However, it wasn’t until last spring, with the release of its next-generation DeepSeek-V2 models, that the AI community began to take serious notice.

    DeepSeek-V2, a versatile system for analyzing text and images, performed strongly across various AI benchmarks and was significantly more cost-effective to operate than comparable models at the time. This pressured domestic competitors like ByteDance and Alibaba to lower prices on some of their models and offer others for free.

    The launch of DeepSeek-V3 in December 2024 further boosted the company’s reputation.

    DeepSeek V3 Outperforms Leading Open and Closed AI Models

    According to DeepSeek’s internal tests, DeepSeek V3 surpasses both downloadable open models like Meta’s Llama and closed, API-only models such as OpenAI’s GPT-4o.

    Another standout is DeepSeek’s R1 “reasoning” model, released in January, which DeepSeek claims matches the performance of OpenAI’s o1 model on key benchmarks.

    As a reasoning model, R1 can effectively fact-check itself, helping it avoid common errors that typically challenge AI models. Although reasoning models take longer—usually seconds to minutes more—to reach conclusions compared to standard models, they offer greater reliability in fields like physics, science, and math.

    Regulatory Restrictions Limit DeepSeek’s AI Responses

    There is a drawback to R1, DeepSeek V3, and the company’s other models. As Chinese-developed AI, they undergo evaluation by China’s internet regulator to ensure their responses align with “core socialist values.” For instance, DeepSeek’s chatbot won’t address questions about Tiananmen Square or Taiwan’s autonomy.

    In March, DeepSeek recorded over 16.5 million visits. “[F]or March, DeepSeek ranks second, despite a 25% drop in traffic compared to February, based on daily visits,” David Carr, editor at Similarweb, told TechCrunch. However, this is still far behind ChatGPT, which surpassed 500 million weekly active users in March.

    In May, DeepSeek released an updated version of its R1 reasoning AI model on the developer platform Hugging Face.

    If DeepSeek has a business model, it’s not entirely clear what it is. The company offers its products and services at prices well below market rates—and even provides some for free. Despite significant interest from venture capitalists, DeepSeek is not currently accepting investor funding.

    Efficiency Claims Drive Low Costs, but Experts Remain Skeptical

    DeepSeek claims that breakthroughs in efficiency allow it to keep costs extremely low, though some experts question the accuracy of these claims.

    Regardless, developers have embraced DeepSeek’s models. While not open source in the traditional sense, they are available under permissive licenses that permit commercial use. Clem Delangue, CEO of Hugging Face—a platform hosting DeepSeek’s models—reported that developers have created over 500 “derivative” models based on R1, collectively downloaded 2.5 million times.
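    Those permissive licenses are what make the derivative-model ecosystem possible: loading a DeepSeek checkpoint takes only a few lines with Hugging Face’s transformers library. Here is a minimal sketch; the model ID is assumed to be one of the small distilled R1 variants published under DeepSeek’s Hugging Face organization, so verify the exact repo name, license, and hardware requirements before relying on it:

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Assumed repo name for a small distilled R1 variant; check the
    # exact ID on Hugging Face before use.
    model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    messages = [{"role": "user", "content": "What is 17 * 23? Think step by step."}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    output = model.generate(input_ids, max_new_tokens=256)
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
    ```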

    DeepSeek’s rapid rise against larger, established competitors has been described as “upending AI” by some and “overhyped” by others. Its success contributed to an 18% drop in Nvidia’s stock price in January and prompted a public response from OpenAI CEO Sam Altman. According to Reuters, U.S. Commerce Department agencies announced in March that DeepSeek would be banned on government devices.

    Integration, Investment, and Controversy Surrounding DeepSeek

    Microsoft has integrated DeepSeek into its Azure AI Foundry service, which consolidates AI services for enterprises. When asked about DeepSeek’s effect on Meta’s AI investments during a first-quarter earnings call, CEO Mark Zuckerberg affirmed that AI infrastructure spending remains a “strategic advantage” for Meta. Meanwhile, in March, OpenAI labeled DeepSeek as “state-subsidized” and “state-controlled,” recommending that the U.S. government consider banning its models.

    During Nvidia’s fourth-quarter earnings call, CEO Jensen Huang praised DeepSeek’s “excellent innovation,” noting that reasoning models like DeepSeek’s require significant computing power, benefiting Nvidia.

    At the same time, some organizations, countries, and governments—including South Korea and New York State—have banned DeepSeek on official devices. In May, Microsoft Vice Chairman and President Brad Smith testified before the Senate that Microsoft employees are prohibited from using DeepSeek due to concerns over data security and propaganda.

    As for DeepSeek’s future, it remains uncertain. Improved models are expected, but the U.S. government appears increasingly cautious about potential foreign influence. The Wall Street Journal reported in March that the U.S. will likely ban DeepSeek on government devices.


    Read the original article on: TechCrunch

    Read more: Google and Duolingo think AI can transform language learning. Do they?

  • WhatsApp Finally Rolls Out App for iPad Users

    Meta announced on Tuesday that WhatsApp is now officially available on iPad. The newly released app lets users make audio and video calls with up to 32 participants, share their screen, and switch between the front and rear cameras. Until now, iPad users had to access WhatsApp through a web browser.
    Image Credits: Pixabay

    Enhanced Multitasking with iPadOS Features

    According to the company, users can make use of iPadOS multitasking tools like Stage Manager, Split View, and Slide Over. These features enable activities such as running multiple apps simultaneously, chatting while browsing, or planning a group trip during a call. On a phone, doing these tasks would typically require switching out of WhatsApp.

    WhatsApp says the new iPad app is also compatible with the Magic Keyboard and Apple Pencil.

    Seamless Syncing Across Devices with End-to-End Encryption

    The new app lets users sync their chats, calls, and media across iPhone, Mac, and other devices, while still ensuring end-to-end encryption for privacy and security, WhatsApp says.

    The release of the standalone iPad app wasn’t entirely unexpected, as WhatsApp’s official account on X (formerly Twitter) hinted at the news with a subtle post the day before.

    It’s also worth mentioning that Instagram, another app owned by Meta, is reportedly developing its own iPad version.

    The standalone iPad app is now ready for download on the App Store.


    Read the original article on: TechCrunch

    Read more: Samsung May Invest $100M in Medical Imaging Startup Exo

  • OpenCap App Captures Advanced Motion Data at Just 1% of the Regular Cost

    The OpenCap app allows clinicians to gain the ‘superpower’ of seeing below the surface, without expensive equipment
    Image Credits: OpenCap

    Using synchronized video captured with a pair of smartphones, scientists have developed an open-source motion-capture application. The app collects data on human movement and employs artificial intelligence for swift analysis, making it suitable for clinical applications like rehabilitation, pre-surgery planning, and disease diagnostics. Remarkably, it accomplishes this at a mere 1% of the cost of conventional technology.

    Stanford University researchers, supported by funding from the US National Institutes of Health, introduced OpenCap, an innovative system that relies on two precisely calibrated iPhones working in tandem to measure human motion and the intricate musculoskeletal processes that underlie movement.

    It also outpaces traditional technologies in data-gathering speed, at a small fraction of the cost incurred by specialized clinics, whose elaborate setups run approximately $150,000 and typically involve around eight advanced cameras.

    Making Human Movement Analysis Inclusive with OpenCap

    Senior author Scott Delp, a professor of bioengineering and mechanical engineering at Stanford, said, “OpenCap makes human movement analysis accessible to all. Our aspiration is to make these formerly inaccessible tools available to a broader audience.”

    The data obtained from this analysis can inform the treatment of individuals dealing with movement-related concerns, aid healthcare professionals in surgical planning, and assess the effectiveness of different therapies. It also holds potential for disease screening, particularly where alterations in gait or balance might not be readily apparent during routine medical examinations.

    This explainer shows the relative simplicity of the capture and analysis process
    Image Credits: Uhlrich, S et al (CC BY 4.0)

    The researchers conducted trials using OpenCap with 100 participants, capturing videos that were subsequently analyzed by web-based artificial intelligence to evaluate muscle activation, joint load, and joint movement.

    The entire data collection process for all 100 participants was completed in under 10 hours, and the analysis results were returned within 31 hours. Each individual’s data collection took approximately 10 minutes, with processing being automatically initiated within the freely accessible cloud platform for researchers.

    Co-first author Scott Uhlrich, the director of research in Stanford’s Human Performance Lab, remarked, “What OpenCap accomplishes in minutes would take a skilled engineer days to collect and analyze in terms of biomechanical data. We managed to gather data from 100 individuals in under 10 hours, a task that would have previously taken us a year to complete.”

    Exploring Body Landmarks and Forces with OpenCap

    The data examines crucial anatomical points on the body, including the knees, hips, shoulders, and other joints, observing their movement within a three-dimensional space. It then utilizes intricate models based on the principles of physics and biology related to the musculoskeletal system to evaluate the body’s motion and the forces involved. This analysis yields significant information about joint angles and the forces exerted on them.
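    OpenCap’s published pipeline couples two calibrated camera views with physics-based musculoskeletal models, which goes well beyond a short example, but its first stage, extracting joint landmarks frame by frame from video, can be illustrated with an off-the-shelf pose estimator. The sketch below uses MediaPipe’s pose solution and a hypothetical input file; it is not OpenCap’s actual code:

    ```python
    import cv2
    import mediapipe as mp

    # Off-the-shelf pose landmarks from a single video; OpenCap additionally
    # triangulates two calibrated phone views and fits musculoskeletal models.
    mp_pose = mp.solutions.pose

    cap = cv2.VideoCapture("subject_walking.mp4")  # hypothetical input clip
    with mp_pose.Pose(static_image_mode=False) as pose:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                # Landmark 26 is the right knee in MediaPipe's 33-point model.
                knee = results.pose_landmarks.landmark[mp_pose.PoseLandmark.RIGHT_KNEE]
                print(f"right knee (x, y, z): {knee.x:.3f}, {knee.y:.3f}, {knee.z:.3f}")
    cap.release()
    ```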

    As Delp explained, this system can even identify the specific muscles that are engaged in the process.

    The researchers anticipate that this type of data collection, combined with deep-learning analysis, represents a groundbreaking development in biomechanics research.

    The Quantitative ‘Motion-Genome’ of Human Movement

    Delp commented, “We have the human genome, but this is essentially going to be the comprehensive ‘motion-genome’ of human movement, captured in a quantitative manner.”

    He added, “Our aspiration is that by making human movement analysis more accessible through OpenCap, it will expedite the integration of vital biomechanical metrics into an increasing number of research studies, clinical trials, and medical practices, ultimately enhancing outcomes for patients worldwide.”

    The study has been published in PLOS Computational Biology. For more details, you can watch the video below, in which the Stanford team demonstrates the capabilities of OpenCap.

    Sophisticated human biomechanics from smartphone video

    Read the original article on: New Atlas

    Read more: Millisign Tech Guides Drones with Battery-Less Ground Tags