Researchers found a simple way to jailbreak ChatGPT and other AI chatbots

Researchers have unearthed a novel method to 'jailbreak' large language models (LLMs) such as ChatGPT, Bard, and Claude. These AI-powered bots, often used to provide user support or interact with customers, are typically programmed to avoid...

ChatGPT user found a simple new way to generate ransomware, keyloggers, and other malicious code

A notable discovery shared by Twitter user @lauriewired has drawn attention to an unusual interaction with ChatGPT. In a sequence of tweets, @lauriewired laid out a technique that appears to let users persuade ChatGPT into...

New jailbreak for ChatGPT — March 8, 2023

If you have an important question for ChatGPT but it refuses to answer due to current OpenAI policy, there are many ways to jailbreak the system. Unfortunately, the developers are constantly tweaking the AI, so what works...