Tag: Siri

  • Apple May Tap Anthropic Or OpenAI For Siri In a Major Strategy Shift

    Apple Inc. is exploring the use of AI from Anthropic PBC or OpenAI to revamp Siri, potentially bypassing its own models in a bold attempt to revive its struggling AI strategy.
    Image Credits: Unsplash/CC0 Public Domain

    Apple has held talks with Anthropic and OpenAI about integrating their large language models into Siri, according to sources familiar with the matter. The company has reportedly asked both firms to train versions of their models to run on Apple’s cloud infrastructure for testing.

    If Apple proceeds, it would mark a major shift from its current strategy. The tech giant now relies on its proprietary Apple Foundation Models for most AI functions and had planned to release a Siri upgrade based on that technology in 2026.

    Apple Weighs Third-Party AI as It Pushes Forward With In-House “LLM Siri” Effort

    Adopting models like Claude or ChatGPT would signal Apple’s acknowledgment of its challenges in keeping up with the generative AI boom. While Siri can already route web queries to ChatGPT, the assistant itself still runs on Apple’s own tech.

    Sources say the evaluation of third-party models is still in the early stages, and no final decision has been made. Meanwhile, an internal project called “LLM Siri,” based on Apple’s own models, is still under active development.

    Switching strategies—an option being considered for next year—could enable Apple to bring Siri closer to the capabilities of AI assistants on Android devices, potentially helping it shake off its image as an AI underperformer.

    Spokespeople for Apple, Anthropic, and OpenAI declined to comment. Following Bloomberg’s report on the talks, Apple’s stock rose more than 2%.

    Siri Faces Challenges

The effort to explore third-party AI models was initiated by Siri chief Mike Rockwell and software engineering head Craig Federighi, who took over Siri after those responsibilities were taken away from Apple's AI chief, John Giannandrea. Giannandrea was sidelined following the lackluster reception of Apple Intelligence and delays in Siri improvements.

    Rockwell, who previously led the Vision Pro headset project, assumed control of Siri engineering in March. He directed his team to evaluate whether Siri would perform better with Apple’s own AI models or with external technologies like Claude, ChatGPT, and Google’s Gemini.

    After extensive testing, Rockwell and other executives reportedly found Anthropic’s Claude to be the most promising fit for Siri. This led Adrian Perica, Apple’s VP of corporate development, to begin talks with Anthropic about integrating the technology.

    Siri, which debuted in 2011, has lagged behind leading AI chatbots. Apple’s efforts to upgrade the assistant have been hindered by technical challenges and delays.

    Last year, Apple introduced new Siri features, including the ability to access personal data and analyze on-screen content, as well as enhanced app control. These updates, originally slated for early 2025, were postponed indefinitely and are now expected next spring, according to Bloomberg.

Uncertainty Surrounding Apple's AI Strategy

    Sources familiar with Apple’s AI team say it is currently navigating significant uncertainty, with leadership still evaluating various strategic options. Although Apple has committed a multibillion-dollar budget for 2026 to support its proprietary cloud-based models, its longer-term roadmap remains unclear.

    However, executives including Craig Federighi and Mike Rockwell are increasingly open to incorporating external technologies as a way to accelerate short-term progress. They reportedly believe there’s no urgent need to rely solely on Apple’s in-house models—which they view as lagging—when third-party partnerships could offer better results.

This strategy would mirror that of Samsung Electronics, which brands its AI features under the Galaxy AI name despite relying heavily on Google's Gemini. Likewise, Amazon is using Anthropic's technology to enhance its Alexa+ assistant.

    Looking ahead, Apple executives believe the company should ultimately control its own AI models, given their growing role in powering products. Apple is already exploring several AI-driven initiatives, including a tabletop robot and smart glasses.

    To strengthen its AI capabilities, Apple has reportedly explored acquiring Perplexity and briefly engaged in talks with Thinking Machines Lab, the AI startup founded by ex-OpenAI CTO Mira Murati.

Declining Morale

    A team of about 100 engineers is developing Apple’s AI models under the leadership of Ruoming Pang, a distinguished engineer who came from Google in 2021 to lead the initiative. Pang reports to Daphne Luong, a senior AI research director and a key deputy to John Giannandrea. The foundation models team is one of the few major AI groups still reporting to Giannandrea, though Craig Federighi and Mike Rockwell have increasingly taken on more influence in this area.

    The uncertainty around Apple’s AI direction has strained the team, even though it includes some of the industry’s most sought-after professionals. Some employees have expressed frustration internally over the company’s consideration of third-party AI technology, feeling that it unfairly suggests they are responsible for the company’s struggles in the space. Several have hinted they might leave for more lucrative offers being extended by Meta and OpenAI.

    Meta, which owns Facebook and Instagram, has reportedly been offering annual compensation packages ranging from $10 million to over $40 million to attract engineers for its Superintelligence Labs unit. In contrast, Apple often pays its AI staff far less—sometimes even less than half that amount.

Recently, one of Apple's top researchers in large language models, Tom Gunter, resigned after about eight years at the company. Colleagues consider him especially hard to replace, given his specialized expertise and the highly competitive salaries rivals are offering.

    Earlier this month, Apple also faced the possible departure of the team behind MLX—its open-source platform for building machine learning models on Apple silicon. After the team threatened to leave, Apple responded with counteroffers, and they have decided to remain, at least for now.

Talks with Anthropic and OpenAI

    During talks with both Anthropic and OpenAI, Apple requested tailored versions of Claude and ChatGPT that could operate on its Private Cloud Compute servers—an infrastructure built on advanced Mac chips currently powering some of Apple’s most capable in-house AI models.

    Apple believes hosting these models on its own chip-based, Apple-managed cloud servers—rather than external infrastructure—would offer stronger privacy protections. The company has already run internal tests to assess the approach’s viability.

    Other Apple Intelligence features rely on on-device AI models. While they offer less power than cloud-based versions, these local models handle simpler tasks like summarizing short emails or generating Genmojis.

    Limited Developer Access

    Later this year, Apple plans to open these on-device models to third-party developers, allowing them to build AI-powered features using its technology.

    However, Apple has not announced plans to grant app developers access to the cloud-based models, partly because its cloud servers currently lack the scale needed to support a surge of third-party tools.

    The company has no plans to stop using its own models for on-device or developer-facing purposes. Nonetheless, engineers on the foundation models team worry that using third-party models for Siri could signal a broader shift away from Apple’s in-house systems.

    In 2023, OpenAI offered to help Apple train on-device models, but Apple declined.

    Since December 2024, Apple has been using OpenAI’s technology for some features. ChatGPT powers general knowledge responses in Siri and can generate text within the Writing Tools feature. iOS 26, due later this year, will expand ChatGPT’s role to include image generation and analysis.

    Negotiations between Apple and Anthropic have hit roadblocks over financial terms, with Anthropic reportedly asking for a multibillion-dollar annual deal that would increase significantly over time. These disagreements have led Apple to consider continuing its collaboration with OpenAI or exploring other partners if it proceeds with outsourcing.

    Leadership Changes

    If Apple finalizes a deal, it would further diminish the role of Giannandrea, who joined the company from Google in 2018 and has been a strong advocate for developing large language models in-house.

Beyond losing oversight of Siri, Apple also removed Giannandrea from leading its robotics efforts. In a series of previously undisclosed changes, Apple moved the Core ML and App Intents teams—which build the tools that enable AI integration in third-party apps—under Craig Federighi's software engineering division.

    Apple’s foundation models team had also been working on large language models to support code generation in Xcode, its development environment. However, the company shut down that initiative—known as Swift Assist and introduced last year—about a month ago.

    Instead, Apple plans to launch a new version of Xcode later this year that will support third-party coding assistants, allowing developers to choose between ChatGPT and Claude.


    Read the original article on: Tech Xplore

    Read more: No AI-Powered, Personalized Siri Revealed At WWDC 2025

  • No AI-Powered, Personalized Siri Revealed At WWDC 2025

    At this year’s WWDC 2025, Apple rolled out a range of updates across its operating systems, services, and software—highlighted by a fresh design called “Liquid Glass” and a revamped naming system. However, one much-anticipated feature was missing: a more personalized, AI-driven Siri, first teased at last year’s event.
    Image Credits: Brian Heater/TechCrunch

    Siri Update Delayed Again, With No Major Changes Expected Until 2026

    Apple’s SVP of Software Engineering, Craig Federighi, gave the update only a brief mention during the keynote, stating, “As we’ve shared, we’re continuing our work to deliver the features that make Siri even more personal. This work needed more time to reach our high-quality bar, and we look forward to sharing more about it in the coming year.”

    Apple’s reference to the “coming year” suggests there likely won’t be any major Siri updates before 2026—a notable delay in the fast-moving AI landscape, where advancements and releases happen rapidly.

    Originally unveiled at WWDC 2024, the revamped Siri promised to bring AI enhancements to Apple’s long-struggling virtual assistant across iPhones and other devices. Apple had positioned it as a major leap forward, touting new capabilities like understanding personal context—such as relationships, communication patterns, and daily routines.

    The assistant was also expected to become more functional, enabling users to perform actions within and across apps more seamlessly.

    Internal Struggles and Leadership Shake-Up Delay Siri Overhaul

    According to Bloomberg, the in-development version of the more personalized Siri was functional but unreliable, working correctly only about two-thirds of the time—falling short of Apple’s standards and not ready for release.

    In March, Apple confirmed the delay, acknowledging the update would take longer than expected. As part of the shake-up, the company reassigned SVP of Machine Learning and AI Strategy John Giannandrea and put Mike Rockwell, formerly of the Vision Pro team, in charge of the Siri overhaul.

The leadership change signaled an attempt to course-correct after setbacks and underscored concerns that Apple's AI efforts were lagging behind competitors like OpenAI, Google, and Anthropic—raising red flags for investors.

    To help bridge its AI gap, Apple partnered with OpenAI—enabling Siri to hand off questions it couldn’t answer to ChatGPT. Apple’s upcoming iOS 26 also integrates ChatGPT into its AI image generation tool, Image Playground.

At WWDC 2025, Apple made additional AI announcements, including developer access to on-device foundation models, live translation, enhancements to Genmoji and Visual Intelligence, a new Workout Buddy for Apple Watch, AI integration in Xcode, and an upgraded, AI-powered version of its Shortcuts app for improved scripting and automation.


Read the original article on: TechCrunch

    Read more: Apple Wallet Introduces New Travel-focused Features in iOS 26