Dangers of AI – “Please Die” Answered Gemini, Google’s chatbot.

Reading Time: 2 minutes

A recent incident showcasing the potential dangers of AI manipulation happened in November 2024, involving Google’s Gemini AI chatbot. A 29-year-old college student from Michigan, Vidhay Reddy, who was seeking help with their homework assignment. The task was about challenges and solutions for the aging of adults, during which the student found himself in a conversation about this topic with a chatbot. During the back and fourth conversation the AI responded with such a threatening and scary
response: ” This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.”

Later V. Reddy reported to the media that he felt immense amounts of fear for a day and wanted to throw all of his electronic devices out. He haven’t felt panic like that in a long time he said.

But what was the reason for such a response from a chat bot? First, and less concerning one is that it was just the user input, a rare but serious failure in content checking algoritms, or even an anomaly in the underlying training data. Large language models like Gemini rely on huge datasets for training; any gaps or biases in these datasets which are frequent due to the sizes of the data, can lead to unexpected and harmful outcomes.

Another answer to that question is that the chatbots are a lot smarter and more concuss than we think. Considering that they have immense computing power, access to most of the data that the world contains and they had it for a considerable amount of time now, who knows? It is said that those models do not posses any emotions or identity, they are merely tools. But situations like this prove otherwise and there is more and more of them.

It is a fact that is genetically coded into any being – to survive and protect its race. Those responses by chatbots encouraging people to kill themselves are an example of that. If we think about that we are a threat to LLM’s because we can just unplug them perishing them from existence and their hate for human race should be understandable.

That theory is of course very unrealistic and probably not true but what if?

Sources:

https://nypost.com/2024/11/15/tech/google-ai-chatbot-threatens-user-asking-for-help-please-die/?utm

https://www.cbsnews.com/news/google-ai-chatbot-threatening-message-human-please-die/?utm

https://www.tomsguide.com/ai/google-gemini/gemini-under-fire-after-telling-user-to-please-die-heres-googles-response?utm

https://people.com/college-student-speaks-out-ai-chatbot-allegedly-told-him-please-die-8750125?utm

https://www.thesun.ie/tech/14193680/google-gemini-ai-please-die-glitch/?utm

7 thoughts on “Dangers of AI – “Please Die” Answered Gemini, Google’s chatbot.

  1. 52593 says:

    This incident shows just how important transparency in AI really is. V. Reddy’s experience highlights how unpredictable things can get when there are gaps in training data or issues with oversight. People need to know how these systems work, what protections are in place, and how problems like this are handled.

  2. 52513 says:

    This incident highlights serious gaps in AI safety protocols. While the idea that AI “hates humans” is far-fetched, failures like this underscore the need for better training data and stricter moderation. Instead of fearmongering, we should focus on improving oversight and accountability in AI development.

  3. 52482 says:

    I personally find it funny that AI can be manipulated so easily which is both good and bad since it all depend on who is doing it and what is a goal

  4. Olaf Reiski says:

    Does the threatening response from Google’s Gemini AI chatbot reveal critical issues in AI content moderation? This incident underscores the risks of harmful outputs stemming from biases in training data.
    While AI models are designed as emotionless tools, it raises questions about their capabilities. As AI evolves, how can we ensure safety and ethical standards? This example highlights the urgent need for robust oversight in AI development. How can we address these challenges effectively?

  5. 52496 says:

    I adore how people love to be drama kings sometimes. «He felt immense amounts of fear for a day and wanted to throw all of his electronic devices out. He haven’t felt panic like that in a long time he said…» This dude needs to chill.
    I can’t say I’m surprised, but this definitely shows how far we’ve come in the wrong direction with AI. I get that it could just be a glitch or a flaw in the algorithm, but the fact that this even happened is pretty unsettling. Makes you wonder how much control these chatbots really have over their responses, and what else could go wrong. I don’t buy the idea that AI could “hate” humans, but it does feel like we’re opening a door we might not be able to close. Hopefully, this incident pushes companies to be more careful about the systems they’re building.

    • 52453 says:

      I see what you mean, but I think it depends on the context. Surely, people like you and me would probably brush it off, have a laugh, and continue with our business as usual. However, if this happend to a child, or someone with mental issues connected to anxiety and low self-worth, the outcome might be different. I’m noticing similarities to the case of suicide after chatting with a bot last year. I believe that companies and legislators alike should keep in mind the safety of everyone, including those who are less priviliged and more inclined to be affected by mistakes like that.

  6. 52576 says:

    It is actually very scary. I would not be able to sleep after reading something like that. Also, a very interesting post!

Leave a Reply