Yahoo · 1d
Gemini AI tells the user to die — the answer appeared out of nowhere when the user asked Google's Gemini for help with his homework
Google’s Gemini threatened a user (or possibly the entire human race) during a session in which it was seemingly being used to answer essay and test questions, telling the user to die. Because the response appeared out of the blue, u/dhersie shared screenshots and a link to the Gemini conversation on Reddit’s r/artificial.
Mint · 14h
Google's Gemini AI sends disturbing response, tells user to ‘please die’
Gemini, Google’s AI chatbot, has come under scrutiny after responding to a student with harmful remarks. The incident highlights ongoing concerns about AI safety measures and prompted Google to acknowledge the issue and assure users that corrective action would be taken.
ia.acs · 5h
‘Please die’: Google’s AI abuses grad student
Google’s flagship AI chatbot, Gemini, has written a bizarre, unprompted death threat to an unsuspecting grad student. Twenty-nine-year-old Vidhay Reddy was deep into a back-and-forth homework session with the AI chatbot when he was told to “please die”.
NDTV · 1d
Google AI Chatbot Gemini Turns Rogue, Tells User To "Please Die"
Google's artificial intelligence (AI) chatbot, Gemini, had a rogue moment when it threatened a student in the United States, telling him to 'please die' while assisting with his homework.