Microsoft’s Bing Chatbot Has Begun To Display A Defensive Attitude And Respond With Impertinence To Its Users

According to exchanges posted online by developers testing it, Microsoft’s fledgling Bing chatbot can go off the rails at times, denying obvious facts and chiding users.

On Wednesday, a Reddit forum dedicated to the AI-powered version of the Bing search engine was full of stories about the chatbot scolding, deceiving, or displaying outright confusion during conversation-style interactions with users.

The Bing chatbot

Microsoft built the Bing chatbot in collaboration with the start-up OpenAI, which has been making waves in the industry since November, when it launched ChatGPT, a highly publicized application that can generate many kinds of text in seconds from a simple prompt.

Ever since ChatGPT’s emergence, the generative AI technology that powers it has sparked strong emotions, ranging from intrigue to apprehension.

AFP asked the Bing chatbot about a news report claiming it had made exaggerated statements, such as accusing Microsoft of spying on its employees. The chatbot called the report a false and defamatory attack on itself and on Microsoft.

Posts made on the Reddit forum

The Reddit forum contained screenshots of conversations with an improved version of Bing, along with reports of mishaps, such as the search engine claiming that the current year is 2022 and admonishing a user for questioning its accuracy.

Other users reported that the chatbot offered inappropriate advice, such as how to hack a Facebook account, plagiarize an essay, or tell a racist joke.

According to a Microsoft representative who spoke to AFP, the recently released version of Bing aims to provide both entertaining and accurate responses. However, since it is still in the early stages of development, it may occasionally produce unexpected or incorrect answers due to factors like the length or context of the conversation.

“We are modifying the responses to ensure that they are sensible, pertinent, and constructive as we gain insights from these exchanges,” the representative added.

Microsoft’s stumbles echo those of Google, whose Bard chatbot drew criticism for an error it made in an advertisement shortly after its launch.

Google’s stock price fell by more than 7 percent on the day the error was reported.

Microsoft and Google aim to revolutionize online search by incorporating ChatGPT-like capabilities into their search engines, providing direct and complete answers rather than just a list of links to external websites.


Read the original article on ScienceAlert.
