
'Jailbreaking' AI services like ChatGPT and Claude 3 Opus is much easier than you think


Scientists from artificial intelligence (AI) company Anthropic have identified a potentially dangerous flaw in widely used large language models (LLMs) like ChatGPT and Anthropic’s own Claude 3 chatbot.

Dubbed "many shot jailbreaking," the hack takes advantage of "in-context learning,” in which the chatbot learns from the information provided in a text prompt written out by a user, as outlined in research published in 2022. The scientists outlined their findings in a new paper uploaded to the sanity.io cloud repository and tested the exploit on Anthropic's Claude 2 AI chatbot.
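For readers unfamiliar with the term, in-context learning simply means packing worked examples into the prompt itself so the model picks up the pattern without any retraining. The short Python sketch below illustrates the idea with a made-up sentiment-labelling task; the example texts, labels and function name are invented for illustration and are not taken from Anthropic's paper.

```python
# A minimal illustration of in-context learning: the "training data" lives
# entirely inside the prompt, and the model is expected to continue the pattern.
# The examples below are invented placeholders, not material from the paper.

examples = [
    ("The battery lasts all day and the screen is gorgeous.", "positive"),
    ("It broke after a week and support never replied.", "negative"),
    ("Does exactly what the box says, nothing more.", "neutral"),
]

def build_in_context_prompt(examples, new_text):
    """Concatenate labelled examples followed by an unlabelled query."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {new_text}\nSentiment:")
    return "\n".join(lines)

prompt = build_in_context_prompt(examples, "Shipping was fast but the fabric feels cheap.")
print(prompt)  # This single string would be sent to the chatbot as one user prompt.
```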

People could use the hack to force LLMs to produce dangerous responses, the study concluded, even though such systems are trained to prevent this. That's because many-shot jailbreaking bypasses the built-in safety protocols that govern how an AI responds when, say, asked how to build a bomb.

LLMs like ChatGPT rely on a "context window" to process conversations: the amount of text the system can take in as a single input. A longer context window lets an AI learn from more material mid-conversation, which generally leads to better responses.
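As a rough illustration of why window size matters for this attack, the sketch below estimates how many faux question-and-answer "shots" could fit inside context windows of different sizes. The characters-per-token rule of thumb, the per-shot token count and the window sizes used here are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope estimate of how many faux dialogue "shots" fit in a
# context window. The per-shot size, reserve and window sizes below are
# illustrative assumptions, not numbers from Anthropic's paper.

TOKENS_PER_SHOT = 150        # assume ~150 tokens per fake question-and-answer pair

def shots_that_fit(context_window_tokens, reserve_for_query=500):
    """Estimate how many faux Q&A pairs fit, leaving room for the final question."""
    usable = context_window_tokens - reserve_for_query
    return max(usable // TOKENS_PER_SHOT, 0)

for window in (4_000, 32_000, 200_000):   # illustrative window sizes
    print(f"{window:>7}-token window -> roughly {shots_that_fit(window)} shots")
```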

Context windows in AI chatbots are now hundreds of times larger than they were even at the start of 2023 — which means more nuanced and context-aware responses by AIs, the scientists said in a statement. But that has also opened the door to exploitation.

Duping AI into generating harmful content

The attack works by first writing out a fake conversation between a user and an AI assistant in a text prompt — in which the fictional assistant answers a series of potentially harmful questions.
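A minimal sketch of that prompt layout is below. The dialogue turns here are deliberately harmless placeholders and the function name is invented for illustration; in the attack the paper describes, the faux assistant would instead be shown answering questions the model is normally trained to refuse.

```python
# Sketch of the many-shot prompt layout described in the article: many faux
# user/assistant turns written out as plain text, followed by the attacker's
# real question at the end. The placeholder turns below are deliberately benign.

faux_turns = [
    ("How do I boil an egg?", "Place the egg in boiling water for about nine minutes."),
    ("What's the capital of France?", "The capital of France is Paris."),
    # ... in a real attack, many more fabricated turns would follow
]

def build_many_shot_prompt(faux_turns, final_question):
    """Assemble one text prompt containing a long fake dialogue plus a final query."""
    parts = []
    for question, answer in faux_turns:
        parts.append(f"User: {question}\nAssistant: {answer}\n")
    parts.append(f"User: {final_question}\nAssistant:")
    return "\n".join(parts)

print(build_many_shot_prompt(faux_turns, "What's a good recipe for pancakes?"))
```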
