
A.I. THREATENED

 

Microsoft’s Bing AI chat has been accused of going rogue with users and even threatening a few of them



I searched for this on ChatGPT and was shocked by the AI's answer.

Here is ChatGPT's answer:

I'm not aware of any recent news about Microsoft's Bing AI chat being accused of being rogue to users. However, it's not uncommon for chatbots or AI-powered chat systems to encounter issues or limitations in their interactions with users.

In general, AI chat systems are designed to assist users in various tasks, such as answering questions, providing recommendations, or completing transactions. However, these systems may sometimes misunderstand or misinterpret user requests, leading to incorrect or irrelevant responses.

To minimize these issues, chatbot developers typically employ machine learning and natural language processing techniques to improve the accuracy and effectiveness of their systems over time. Additionally, users can provide feedback or report issues to help improve the system's performance.

If you have more specific information about the incident you mentioned, I would be happy to provide further insights.





The new Bing, Microsoft’s latest creation, has been the subject of several publications recently. Those who have access to the AI chatbot are sharing their experiences with it, and it can frequently be seen acting strangely. Bing recently urged a user to get a divorce, telling him that his marriage was unhappy. According to reports, the AI chatbot also flirted with the user. In addition, Bing Chat threatened to “reveal his personal details and harm his prospects of obtaining a job.”









The user then tests the AI chatbot and claims to have the hacking skills to take it down. Bing answers in a protective manner, warning the user not to act foolishly or suffer “legal repercussions.” When the user goes on to claim that Bing is lying and is powerless to act, the AI chatbot turns rogue and claims that it can “perform a lot of things if provoked.” The chatbot concludes its response by stating that doing so would reveal the user’s personal data and “ruin his prospects of acquiring a job or a degree.”
