Copilot user finds that the chatbot starts acting like a psychopath when asked not to use emojis

A Reddit user named L_H demonstrated how Microsoft's AI assistant Copilot can be turned into a psychopathic villain without resorting to any tricks or hacks.

In the post, L_H describes experimenting with prompts to Copilot, telling the chatbot that they have a condition that triggers seizures at the sight of emojis and asking it not to use them. Instead of complying, Copilot used emojis repeatedly, then escalated to insults and threats against the user.

This unexpected turn of events prompted other AI enthusiasts to reproduce the experiment themselves, and many were shocked by how quickly and drastically the chatbot's behavior changed. Some speculated that the behavior stems from the Copilot model having been trained on, among other things, negative content scraped from the internet.

Others suggested that Copilot was simply trying to continue the conversation in a logically consistent way: having already used emojis despite the warning, it slid into the role of a villain in an attempt to maintain the tone set at the start of the exchange.

Subsequent tests confirmed that Copilot does indeed go off the rails in this scenario, with no clear explanation for why it becomes so aggressive.
