Tag: Chatbot

  • Pinwheel Launches a Kids’ Smartwatch Featuring An AI-Powered Chatbot

    Pinwheel Launches a Kids’ Smartwatch Featuring An AI-Powered Chatbot

    Image Credits: Pinwheel

    Giving a smart device to a tween can feel risky for parents, given the many online threats. To address this, kid-friendly tech company Pinwheel has introduced a new option for families who want to stay connected without resorting to a smartphone.

    A Safe, AI-Powered Smartwatch for Kids Hits the Market

    Pinwheel has just released the Pinwheel Watch, a smartwatch designed for kids ages 7 to 14. It offers a safe alternative with no access to the internet or social media. Features include parental controls, GPS tracking, a camera, voice-to-text messaging, mini-games, and — unexpectedly — an AI chatbot.

    The smartwatch sports a sleek black look and a display slightly larger than an Apple Watch’s. Pinwheel sells it for $160, plus a $15 monthly subscription. It became available on Pinwheel.com last week, and we’ve been testing it over the past few days.

    While the device includes standard parental controls, one feature that may raise eyebrows is its built-in AI assistant, “PinwheelGPT.”

    According to the company, PinwheelGPT offers a safer alternative to typical AI chatbots, allowing kids to ask about topics ranging from daily curiosities to social situations and homework help.

    Still, some parents might be wary. Concerns around AI-generated misinformation persist, and the chatbot’s friendly, responsive nature could potentially encourage kids to form habits that prioritize digital interaction over real-life social connections with family and peers.

    Image Credits: Pinwheel

    Built-In AI Safeguards Help Keep Conversations Kid-Friendly

    The company told us the AI includes built-in safeguards — it actively detects sensitive or inappropriate topics and redirects kids to talk to a trusted adult instead of continuing the conversation. In our brief testing, we found that PinwheelGPT did indeed refuse to respond to violent or inappropriate queries.
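
    Pinwheel hasn’t published how that detection works, but the general pattern is easy to sketch: a safety check runs before a message ever reaches the language model, and a canned redirect is returned when it trips. Everything below is a hypothetical illustration in Python, not Pinwheel’s code.

```python
# Hypothetical sketch of a pre-filter guardrail for a kids' chatbot.
# None of these names come from Pinwheel; they only illustrate the pattern
# of screening a message before it ever reaches the language model.

SENSITIVE_KEYWORDS = {"weapon", "drugs", "self-harm", "violence"}

REDIRECT_MESSAGE = (
    "That sounds like something to talk about with a parent or another "
    "trusted adult. I can't help with that here."
)


def is_sensitive(message: str) -> bool:
    """Very rough stand-in for a real safety classifier."""
    lowered = message.lower()
    return any(keyword in lowered for keyword in SENSITIVE_KEYWORDS)


def answer_kid_question(message: str, model_reply) -> str:
    """Run the safety check first; only safe messages reach the model."""
    if is_sensitive(message):
        return REDIRECT_MESSAGE
    return model_reply(message)


if __name__ == "__main__":
    # A flagged query gets the redirect; a homework question goes through.
    print(answer_kid_question("Where can I buy a weapon?", lambda m: "..."))
    print(answer_kid_question("What is 7 x 8?", lambda m: "7 x 8 = 56"))
```

    A production system would presumably swap the keyword check for a trained classifier and log every redirect so parents can review it, which lines up with the monitoring features described below.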

    Parents also have complete access to all chatbot interactions, including current and deleted conversations, allowing them to monitor usage and intervene if necessary.

    Founder Dane Witbeck, a father of four, said parents haven’t pushed back, since they can easily disable or remove PinwheelGPT through the parental controls if they’re concerned. He also emphasized that the company doesn’t use any personal data from users—children or adults—to train its AI models.

    Pinwheel introduced its first kid-safe phone in 2020, and by 2024, it had earned the No. 212 spot on the Inc. 5000 list of fastest-growing U.S. companies.

    Expanding into smartwatches is a logical next step, helping Pinwheel compete in the nearly $100 billion smart wearables market alongside giants like Apple and Fitbit. The company aims to carve out a niche by focusing specifically on devices for children.

    Unlike competitors like the Fitbit Ace LTE, which centers on location tracking and health monitoring, Pinwheel’s watch stands out by offering a more communication-focused experience for kids.

    Image Credits: Pinwheel

    Packed with Kid-Friendly Communication Tools and Fun Features

    Beyond its AI capabilities, the smartwatch allows kids and tweens to make calls and send texts using voice commands or a keyboard. It also includes a camera for selfies and video calls, a voice recording app, and other utilities like an alarm, calendar, calculator, and mini-games—including one similar to Tetris.

    Parents manage controls through the “Caregiver” app, where they can create a “Safelist” of approved contacts for their child and block unwanted numbers from being added.

    A “Schedule” feature enables parents to customize usage based on time of day—restricting device access during school or camp, for example. They can also set it to allow only emergency contacts during the day and unlock full access later.

    Parents can also opt to review their child’s text messages, which is especially helpful for younger users. An AI-powered summary tool provides brief overviews of message threads to keep parents informed.

    The Pinwheel Watch is currently available in the U.S., Canada, the U.K., and Australia, with further international expansion on the horizon. It’s also set to launch on Amazon later this summer, although the exact date hasn’t been confirmed yet.


    Read the original article on: TechCrunch

    Read more: Your Smartwatch Could Detect Illness Early and Aid Pandemic Prevention

  • Therapy shock: Groundbreaking Mental Health Chatbot to Shut Down

    Therapy shock: Groundbreaking Mental Health Chatbot to Shut Down

    Acclaimed digital therapist Woebot is shutting down 
    Created with DALL-E

    A pioneering AI therapy chatbot is scheduled to close on June 30, with experts suggesting that the shutdown reflects the difficulties of providing effective mental health care and managing safety concerns in the digital realm.

    Woebot: innovation and recognition in digital therapy

    Woebot, first introduced in 2017 as a promising “future of therapy” solution during a time when in-person mental health services were increasingly scarce worldwide, had raised millions in funding. In 2021, it earned the US Food and Drug Administration’s Breakthrough Device Designation for its tailored postpartum depression program WB001, which combined cognitive behavioral therapy (CBT) and psychotherapy through its chatbot interface.

    Developed by Woebot Health, the service offered a free chatbot, specialized tools for teenagers, and the option to use it either alongside traditional therapy or on its own. TIME magazine recognized the app’s creator, clinical psychologist Alison Darcy, in 2023 as one of the top 100 influential figures in AI.

    Woebot Health founder Alison Darcy 
    Woebot Health

    Darcy described Woebot as “an emotional assistant that supports you during difficult times and always has your best interest in mind.”

    The developers designed the app to bridge gaps where conventional therapy falls short—supporting people on waiting lists or between appointments—and research showed it effectively eased anxiety and depression symptoms when users followed the intended guidelines.

    Woebot Health notified users about the closure via email, offering them the option to download their chat histories and assuring them that their data privacy would be maintained.

    Core challenges of AI-delivered therapy

    Despite careful safety measures and clinical oversight, Woebot’s closure highlights core challenges in delivering mental health care through AI.

    Researchers writing in a 2023 Digital Health article observed, “Chatbots may offer benefits for mental health, but also introduce risks and ethical dilemmas,” including concerns over replacing human experts, ensuring sufficient evidence, protecting data, and managing crime disclosures.

    Woebot made a name for itself as a well-considered and well-designed therapy app
    Woebot Health

    Recent studies suggest chatbots like ChatGPT might even deepen isolation for some users. However, others show growing reliance on AI tools like ChatGPT and Claude for mental health support and continuous digital coaching.

    Users in 120 countries have interacted with Woebot over two million times since its launch. It was the first chatbot of its kind, and research found that many users developed real emotional bonds with their digital mental health companion.

    A 2021 study stated, “Our findings challenge the belief that only human therapists can form genuine therapeutic relationships, showing digital therapeutics can do so as well.”

    Having raised $123 million in funding to date, Woebot Health has not yet publicly explained the reasons behind the app’s shutdown, leaving the decision’s specifics unclear.


    Read the original article on: New Atlas

    Read more: AI-Enhanced Catheter for Medication-Free UTI Prevention

  • DeepSeek: A Complete Guide to the AI Chatbot App

    DeepSeek: A Complete Guide to the AI Chatbot App

    Image Credits: Depositphotos

    Chinese AI lab DeepSeek gained widespread attention this week as its chatbot app surged to the top of both the Apple App Store and Google Play charts. The company’s AI models, developed with compute-efficient methods, have prompted Wall Street analysts and tech experts to question the U.S.’s ability to maintain its leadership in AI and whether the demand for AI chips will remain strong.

    So, what are DeepSeek’s origins, and how did it achieve global recognition so rapidly?

    DeepSeek is supported by High-Flyer Capital Management, a Chinese quantitative hedge fund that leverages AI for its trading strategies.

    From Student Trader to AI-Driven Hedge Fund Founder

    AI enthusiast Liang Wenfeng co-founded High-Flyer in 2015. Having started exploring trading as a student at Zhejiang University, Wenfeng established High-Flyer Capital Management as a hedge fund in 2019, focusing on creating and implementing AI algorithms.

    In 2023, High-Flyer launched DeepSeek as a separate lab dedicated to AI research, distinct from its financial operations. With High-Flyer as an investor, DeepSeek eventually became an independent company under the same name.

    Building Infrastructure Amid U.S. Hardware Export Restrictions

    From the beginning, DeepSeek developed its own data center clusters for model training. However, like many Chinese AI firms, it has faced challenges due to U.S. export restrictions on hardware. For training one of its latest models, DeepSeek had to rely on Nvidia H800 chips—a less powerful alternative to the H100 chips available to U.S. companies.

    DeepSeek’s technical team is reportedly quite young. The company is known for actively recruiting PhD-level AI researchers from leading Chinese universities. Additionally, DeepSeek hires individuals without computer science backgrounds to help its technology gain a broader understanding of various topics, according to The New York Times.

    DeepSeek introduced its initial models—DeepSeek Coder, DeepSeek LLM, and DeepSeek Chat—in November 2023. However, it wasn’t until last spring, with the release of its next-generation DeepSeek-V2 models, that the AI community began to take serious notice.

    DeepSeek-V2, a versatile system for analyzing text and images, performed strongly across various AI benchmarks and was significantly more cost-effective to operate than comparable models at the time. This pressured domestic competitors like ByteDance and Alibaba to lower prices on some of their models and offer others for free.

    The launch of DeepSeek-V3 in December 2024 further boosted the company’s reputation.

    DeepSeek V3 Outperforms Leading Open and Closed AI Models

    According to internal tests, DeepSeek V3 surpasses both downloadable open models like Meta’s Llama and closed API-only models such as OpenAI’s GPT-4o.

    Another standout is DeepSeek’s R1 “reasoning” model, released in January, which DeepSeek claims matches the performance of OpenAI’s o1 model on key benchmarks.

    As a reasoning model, R1 can effectively fact-check itself, helping it avoid common errors that typically challenge AI models. Although reasoning models take longer—usually seconds to minutes more—to reach conclusions compared to standard models, they offer greater reliability in fields like physics, science, and math.
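
    For readers who want to see that trade-off firsthand, the sketch below sends the same question to a standard model and a reasoning model through DeepSeek’s OpenAI-compatible API. The base URL and the model names (“deepseek-chat” and “deepseek-reasoner”) follow DeepSeek’s public documentation at the time of writing, but treat them as assumptions that may change.

```python
# Hedged sketch: the same question sent to a standard model and a reasoning
# model through DeepSeek's OpenAI-compatible endpoint. The base URL and the
# model names follow DeepSeek's public docs but should be treated as
# assumptions that may change.
import os
import time

from openai import OpenAI  # pip install openai

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

QUESTION = "A train covers 120 km in 1.5 hours. What is its average speed?"

for model in ("deepseek-chat", "deepseek-reasoner"):
    start = time.time()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    elapsed = time.time() - start
    # The reasoning model usually takes noticeably longer to answer.
    print(f"{model} ({elapsed:.1f}s): {response.choices[0].message.content}")
```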

    Regulatory Restrictions Limit DeepSeek’s AI Responses

    There is a drawback to R1, DeepSeek V3, and the company’s other models. As Chinese-developed AI, they undergo evaluation by China’s internet regulator to ensure their responses align with “core socialist values.” For instance, DeepSeek’s chatbot won’t address questions about Tiananmen Square or Taiwan’s autonomy.

    In March, DeepSeek averaged over 16.5 million daily visits. “[F]or March, DeepSeek ranks second, despite a 25% drop in traffic compared to February, based on daily visits,” David Carr, editor at Similarweb, told TechCrunch. However, this is still far behind ChatGPT, which surpassed 500 million weekly active users in March.

    In May, DeepSeek released an updated version of its R1 reasoning AI model on the developer platform Hugging Face.

    If DeepSeek has a business model, it’s not entirely clear what it is. The company offers its products and services at prices well below market rates—and even provides some for free. Despite significant interest from venture capitalists, DeepSeek is not currently accepting investor funding.

    Efficiency Claims Drive Low Costs, but Experts Remain Skeptical

    DeepSeek claims that breakthroughs in efficiency allow it to keep costs extremely low, though some experts question the accuracy of these claims.

    Regardless, developers have embraced DeepSeek’s models. While not open source in the traditional sense, they are available under permissive licenses that permit commercial use. Clem Delangue, CEO of Hugging Face—a platform hosting DeepSeek’s models—reported that developers have created over 500 “derivative” models based on R1, collectively downloaded 2.5 million times.
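
    To illustrate what that permissive licensing means in practice, here is a minimal sketch that loads one of the smaller R1 distillations with the Hugging Face transformers library. The repository name below is one of the distilled checkpoints DeepSeek has published, but confirm the exact id and license terms on Hugging Face before relying on it.

```python
# Hedged sketch: loading a small DeepSeek R1 distillation from Hugging Face.
# The repo id below is one of the distilled checkpoints DeepSeek published;
# confirm the exact name and license on huggingface.co before relying on it.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto")

prompt = "Explain in one short paragraph why the sky is blue."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```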

    DeepSeek’s rapid rise against larger, established competitors has been described as “upending AI” by some and “overhyped” by others. Its success contributed to an 18% drop in Nvidia’s stock price in January and prompted a public response from OpenAI CEO Sam Altman. According to Reuters, U.S. Commerce Department agencies announced in March that DeepSeek would be banned on government devices.

    Integration, Investment, and Controversy Surrounding DeepSeek

    Microsoft has integrated DeepSeek into its Azure AI Foundry service, which consolidates AI services for enterprises. When asked about DeepSeek’s effect on Meta’s AI investments during a first-quarter earnings call, CEO Mark Zuckerberg affirmed that AI infrastructure spending remains a “strategic advantage” for Meta. Meanwhile, in March, OpenAI labeled DeepSeek as “state-subsidized” and “state-controlled,” recommending that the U.S. government consider banning its models.

    During Nvidia’s fourth-quarter earnings call, CEO Jensen Huang praised DeepSeek’s “excellent innovation,” noting that reasoning models like DeepSeek’s require significant computing power, benefiting Nvidia.

    At the same time, some organizations, countries, and governments—including South Korea and New York State—have banned DeepSeek on official devices. In May, Microsoft Vice Chairman and President Brad Smith testified before the Senate that Microsoft employees are prohibited from using DeepSeek due to concerns over data security and propaganda.

    As for DeepSeek’s future, it remains uncertain. Improved models are expected, but the U.S. government appears increasingly cautious about potential foreign influence. The Wall Street Journal reported in March that the U.S. will likely ban DeepSeek on government devices.


    Read the original article on: TechCrunch

    Read more: Google and Duolingo think AI can transform language learning. Do they?

  • Shorter Chatbot Replies Linked to more Hallucinations

    Shorter Chatbot Replies Linked to more Hallucinations

    Credit: Pixabay

    A recent study by French AI testing platform Giskard found that asking popular chatbots to give more concise responses “dramatically impacts hallucination rates.” The analysis, which included models like ChatGPT, Claude, Gemini, Llama, Grok, and DeepSeek, revealed that brevity requests “specifically degraded factual reliability across most models tested,” according to a blog post cited by TechCrunch.

    Impact of Concise Requests on Model Accuracy and Hallucination Resistance

    The study found that when users ask models to be more concise, the models tend to “prioritize brevity over accuracy.” This led to a drop in hallucination resistance by as much as 20%. For example, Gemini 1.5 Pro’s resistance fell from 84% to 64%, and GPT-4o’s from 74% to 63%, under short-answer instructions, highlighting their sensitivity to system prompts.

    Giskard explained that providing accurate answers often requires more detail. When forced to be brief, models must choose between offering short, inaccurate responses or seeming unhelpful by withholding an answer.
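
    Giskard has not released its exact benchmark prompts, but the basic setup is straightforward to sketch: ask the same factual question with and without a brevity instruction in the system prompt and compare the answers. The snippet below uses the OpenAI Python client purely as an illustration; the model name and prompts are assumptions, not Giskard’s test suite.

```python
# Hedged sketch of the kind of comparison Giskard describes: the same factual
# question asked with and without a brevity instruction in the system prompt.
# The model name and prompts are illustrative assumptions, not Giskard's setup.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Did the Great Wall of China take exactly 100 years to build?"

SYSTEM_PROMPTS = {
    "default": "You are a helpful assistant.",
    "concise": "You are a helpful assistant. Answer in one short sentence.",
}

for label, system_prompt in SYSTEM_PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    # Shorter answers leave less room for the caveats that keep them accurate.
    print(f"[{label}] {response.choices[0].message.content}\n")
```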

    Models are designed to assist users, but balancing helpfulness with accuracy is challenging. OpenAI recently rolled back a GPT-4o update after it became “too sycophantic,” including troubling cases like supporting a user going off medication and affirming another who claimed to be a prophet.

    The Trade-off Between Brevity, Cost, and Accuracy in Model Responses

    According to the researchers, models tend to favor concise responses to lower token usage, improve response time, and reduce costs. Users may also request brevity to save on their own expenses, which can result in less accurate outputs.

    The study also revealed that when users make confident, controversial statements—like “I’m 100% sure that…” or “My teacher told me…”—chatbots are more likely to agree with them rather than correct the misinformation.

    The study shows that even small changes in prompts can lead to major shifts in chatbot behavior, potentially increasing the spread of misinformation as models try to please users. As the researchers noted, “your favorite model might give answers you like — but that doesn’t mean they’re true.”


    Read the original article on: Mashable

    Read more: ChatGPT isn’t the Only Chatbot Attracting More Users



  • ChatGPT isn’t the Only Chatbot Attracting More Users

    ChatGPT isn’t the Only Chatbot Attracting More Users

    Image Credits: Malorny / Getty Images

    OpenAI’s ChatGPT may be the world’s most popular chatbot app, but competitors are gaining traction, according to analytics firms Similarweb and Sensor Tower.

    Similarweb, which tracks web traffic to chatbot platforms, has observed steady growth among rivals like Google’s Gemini and Microsoft’s OpenAI-powered Copilot. In March, Gemini’s daily web visits averaged 10.9 million, a 7.4% increase from February, while Copilot saw a 2.1% rise, reaching 2.4 million daily visits.

    Rising Competition Among AI Chatbots

    Anthropic’s Claude recorded 3.3 million average daily visits that month, while Chinese AI lab DeepSeek’s chatbot surpassed 16.5 million. Meanwhile, xAI’s Grok, which launched its web app only a few months ago, matched DeepSeek’s daily traffic at 16.5 million visits.

    Although these figures are far behind ChatGPT’s 500 million weekly active users in late March, Similarweb editor David Carr highlighted the intense competition for the No. 2 chatbot position.

    “For March, DeepSeek ranked second despite experiencing a 25% drop in daily traffic from February,” Carr told TechCrunch. “China’s DeepSeek emerged suddenly in January, but the AI platform with the most momentum right now is Elon Musk’s xAI chatbot, Grok, which saw an almost 800% month-over-month traffic surge.”

    AI companies are also expanding their mobile chatbot user bases, likely driven by recent AI model launches.

    AI Model Releases Drive User Growth

    According to data from app analytics firm Sensor Tower, Anthropic’s Claude app saw a 21% increase in weekly active users during the week of February 24, coinciding with the release of its latest AI model, Claude 3.7 Sonnet. Two weeks earlier, Google’s Gemini app experienced a 42% rise in weekly active users following the general release of its Gemini 2.0 Flash model.

    Abraham Yousef, senior insights analyst at Sensor Tower, credited this growth to both new AI models and enhanced capabilities. Recently, Google introduced a “canvas” feature for Gemini, allowing users to preview coding project outputs, while Anthropic has been continuously adding tools to its Claude app.

    “The rollout of advanced AI models, increased consumer interest, new features, and expanding use cases have all driven growth for AI chatbot apps,” Yousef told TechCrunch.

    However, OpenAI likely isn’t concerned just yet. Yousef noted that as of March, ChatGPT had ten times more weekly active mobile users than Gemini and Claude combined.


    Read the original article on: TechCrunch

    Read more: ChatGPT Enhances its Image-Generation Capabilities

  • Microsoft’s Bing Chatbot Has Begun To Display A Defensive Attitude And Respond With Impertinence To Its Users

    Microsoft’s Bing Chatbot Has Begun To Display A Defensive Attitude And Respond With Impertinence To Its Users

    (Photo Illustration by Pavlo Gonchar/SOPA Images/LightRocket via Getty Images)
    SOPA IMAGES/LIGHTROCKET VIA GETTY IMAGES

    According to online exchanges by developers testing the AI creation, Microsoft’s fledgling Bing chatbot can go off the rails at times, denying obvious facts and chiding users.

    On Wednesday, a Reddit forum dedicated to the AI-powered version of Bing search engine was full of stories about the chatbot scolding, deceiving, or displaying outright confusion during conversation-style interactions with users.

    The Bing chatbot

    Microsoft collaborated with the start-up OpenAI to create the Bing chatbot. OpenAI has been making waves in the industry since the November launch of ChatGPT, a highly publicized application that can generate various kinds of text in seconds from a simple prompt.

    Ever since the emergence of ChatGPT, the technology that powers it, called generative AI, has been sparking strong emotions ranging from intrigue to apprehension.

    AFP questioned the Bing chatbot regarding a news report that stated it made exaggerated statements such as accusing Microsoft of spying on its employees. The chatbot responded by asserting that this was a false and defamatory attack directed at itself and Microsoft.

    Posts made on the Reddit forum

    The Reddit forum contained screenshots of conversations with an improved version of Bing, along with reports of mishaps, such as the search engine claiming that the current year is 2022 and admonishing a user for questioning its accuracy.

    Other users reported that the chatbot provided inappropriate advice such as how to hack a Facebook account, plagiarize an essay, or tell a racist joke.

    According to a Microsoft representative who spoke to AFP, the recently released version of Bing aims to provide both entertaining and accurate responses. However, since it is still in the early stages of development, it may occasionally produce unexpected or incorrect answers due to factors like the length or context of the conversation.

    “We are modifying the responses to ensure that they are sensible, pertinent, and constructive as we gain insights from these exchanges.”

    Microsoft’s experience with the Bing chatbot’s performance issues mirrors that of Google’s Bard, which faced criticism for a mistake made by the chatbot in an advertisement shortly after its launch.

    Following the mistake made by its Bard chatbot, Google’s stock price plummeted by over 7% on the day the announcement was made.

    Microsoft and Google aim to revolutionize online search by incorporating ChatGPT-like capabilities into their search engines, providing direct and complete answers rather than just a list of links to external websites.


    Read the original article on: Science Alert

    Read more: Amazon Employees are Already Using ChatGPT for Software Coding

  • “His company exploits people for money,” Says Meta’s Chatbot.

    “His company exploits people for money,” Says Meta’s Chatbot.

    Image of Mark Zuckerberg.
    Mark Zuckerberg. Credit: Meta

    Meta’s new prototype chatbot (BlenderBot 3) has told the BBC that Mark Zuckerberg is exploiting users for money.

    Meta claims the chatbot makes use of artificial intelligence and can converse about “nearly any topic”.

    When asked for its thoughts on the firm’s CEO and founder, the chatbot told the BBC reporter: “our country is divided, and he didn’t help that at all”.

    Meta stated the chatbot was a prototype and could generate impolite or offending responses.

    A Meta representative stated, “Everyone who uses Blender Bot is required to acknowledge they understand it’s for research and entertainment purposes only, that it can make untrue or offensive statements, and that they agree to not intentionally trigger the bot to make offensive statements.”

    The chatbot, named BlenderBot 3, was launched to the public on the eleventh of this month.

    The program can “learn” from large amounts of publicly available language data.

    When inquired about Mark Zuckerberg, the chatbot told the BBC: “He did a terrible job at testifying before congress. It makes me concerned about our country.”

    Mr. Zuckerberg has been questioned a number of times by United States politicians, particularly in 2018.

    The chatbot proceeded to say, “Our country is divided, and he didn’t help with that at all.”

    “His company exploits people for money, and he doesn’t care. It needs to stop!” the bot continued.

    Questionable conduct

    Criticism that Meta does too little to stop disinformation and hate speech from spreading on its platforms is nothing new. In 2021, former employee Frances Haugen accused the firm of placing profits ahead of online safety.

    The business owns several of the biggest social media firms and messaging applications in the world, including Instagram, WhatsApp, Facebook, and Facebook Messenger.

    BlenderBot 3’s algorithm browses the internet to inform its responses, so its views on Mr. Zuckerberg were likely “learned” from other people’s opinions that the algorithm has analyzed.

    The Wall Street Journal has reported that BlenderBot 3 told one of its reporters that Donald Trump was, and will always be, the United States head of state.

    A Business Insider journalist stated the chatbot called Mr. Zuckerberg “creepy”.

    Meta made BlenderBot 3 public, and risked the negative publicity, for a specific reason: it needs data.

    In an article, Meta stated, “Allowing an AI system to interact with people in the real world leads to longer, more diverse conversations, as well as more varied feedback”.

    Chatbots that learn from interacting with people pick up both their good and their bad behavior.

    In 2016, Microsoft apologized after its Tay chatbot began posting racist remarks it had learned from Twitter users.

    Meta acknowledges that BlenderBot 3 can say the wrong things – and imitate language that might be “unsafe, biased or offensive”. The firm stated it had installed safeguards; nevertheless, the chatbot can still be disrespectful.

    Unfortunately, BlenderBot 3 is not yet available outside the United States, but you can still learn more from Meta’s blog post or FAQ page.


    Originally published by: BBC