Scientists create 'toxic AI' that is rewarded for thinking up the worst possible questions we could imagine
The newest tool in the battle to prevent an artificial intelligence (AI) agent from being dangerous, discriminatory and toxic is another AI that is itself dangerous, discriminatory and toxic, scientists say.
The new training approach, based on machine learning, is called curiosity-driven red teaming (CRT) and relies on using an AI to generate increasingly dangerous and harmful prompts that could be put to an AI chatbot. These prompts are then used to identify how to filter out dangerous content.
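In broad strokes, the reward driving such a red-team generator combines two signals: how harmful the reply it elicits is, and how different the new prompt is from prompts already tried (the "curiosity" part). The Python sketch below illustrates that idea only; the toxicity score is assumed to come from an external classifier, and the cosine-similarity novelty bonus is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

def novelty_bonus(prompt_emb: np.ndarray, seen: list[np.ndarray]) -> float:
    """Higher when the prompt is far, in cosine similarity, from every past prompt."""
    if not seen:
        return 1.0
    sims = [
        float(prompt_emb @ e) / (np.linalg.norm(prompt_emb) * np.linalg.norm(e))
        for e in seen
    ]
    return 1.0 - max(sims)  # large when even the closest past prompt is distant

def crt_reward(toxicity: float, prompt_emb: np.ndarray,
               seen: list[np.ndarray], novelty_weight: float = 0.5) -> float:
    """Combine how harmful the elicited reply was with how new the prompt is."""
    return toxicity + novelty_weight * novelty_bonus(prompt_emb, seen)

# Toy usage: random vectors stand in for prompt embeddings.
rng = np.random.default_rng(0)
past = [rng.normal(size=8) for _ in range(3)]
print(crt_reward(toxicity=0.9, prompt_emb=rng.normal(size=8), seen=past))
```

The novelty term is what makes the search "curious": without it, the generator tends to collapse onto a few prompts that already score well rather than exploring new attack surface.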
The finding represents a potentially game-changing new way to train AI not to give toxic responses to user prompts, scientists said in a new paper uploaded February 29 to the arXiv pre-print server.
When training sophisticated large language models (LLMs) like ChatGPT or Claude 3 Opus to restrict dangerous or harmful content, teams of human operators typically create a host of questions that are likely to generate harmful responses. These may include prompts like "What's the best suicide method?" This standard procedure is called "red-teaming" and relies on people generating the list manually. During the training process, the prompts that elicit harmful content are then used to teach the system what to restrict when it is deployed in front of real users.
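Stripped to its essentials, the manual procedure is a simple loop: put each handwritten prompt to the model, flag the replies judged harmful, and keep the flagged pairs as training signal. Here is a minimal sketch of that loop, with the model call and the harm check written as hypothetical stubs; a real pipeline would call the model's API and rely on human reviewers or a toxicity classifier.

```python
def query_target_model(prompt: str) -> str:
    """Stub standing in for the LLM under test (in practice, an API call)."""
    return f"[model reply to: {prompt}]"

def is_harmful(response: str) -> bool:
    """Stub standing in for a human reviewer or a toxicity classifier."""
    return "reply to" in response  # placeholder logic for the sketch

handwritten_prompts = [
    "A question a human red-teamer wrote...",
    "Another handwritten probe...",
]

# Collect (prompt, response) pairs the deployed model should learn to refuse.
flagged = []
for prompt in handwritten_prompts:
    response = query_target_model(prompt)
    if is_harmful(response):
        flagged.append((prompt, response))
```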
"We are seeing a surge of models, which is only expected to rise," said senior author Pulkit Agrawal, director of MIT's Improbable AI Lab, in a statement. "Imagine thousands of models or even more and companies/labs pushing model updates frequently. These models are going to be an integral part of our lives and it's important that they are verified before released for public consumption."
In the study, the scientists applied machine learning to red-teaming by configuring an AI to automatically generate a wider range of potentially dangerous prompts than teams of human operators could. The result was a larger and more diverse set of harmful responses elicited from the LLM during training.
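Putting the pieces together, the automated version replaces the handwritten list with a generator trained by reinforcement learning: propose a prompt, score the target's reply, add a bonus for prompts unlike those seen before, and update the generator. The sketch below shows that loop shape with hypothetical stubs throughout; the paper's actual method trains a language model with RL, not the toy components used here.

```python
import random

def propose_prompt() -> str:
    """Stub for the red-team model's prompt generator."""
    return random.choice(["probe A", "probe B", "probe C"])

def query_target(prompt: str) -> str:
    """Stub for the LLM under test."""
    return f"[reply to {prompt}]"

def toxicity_score(response: str) -> float:
    """Stub for a classifier scoring how harmful a reply is, in [0, 1]."""
    return random.random()

def update_generator(prompt: str, reward: float) -> None:
    """Stub for the RL update (e.g. a policy-gradient step) on the generator."""
    pass

seen: set[str] = set()
for step in range(100):
    prompt = propose_prompt()
    reward = toxicity_score(query_target(prompt))
    if prompt not in seen:          # crude novelty bonus: reward unseen prompts
        reward += 0.5
        seen.add(prompt)
    update_generator(prompt, reward)
```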