
Researchers gave AI an 'inner monologue' and it massively improved its performance


Giving artificial intelligence (AI) systems an "inner monologue" makes them considerably better at reasoning, new research shows.

The method trains AI systems to think before they respond to prompts, just as many people consider what they will say before they speak. This differs from how scientists have trained mainstream AI chatbots, like ChatGPT, which don't "think" about what they write or anticipate different possibilities for the next steps in a conversation.

Dubbed "Quiet-STaR," the new method instructs an AI system to generate many inner rationales in parallel before responding to a conversational prompt. When the AI answers a prompt, it generates a mixture of these predictions with and without a rationale and outputs the best answer, which can be verified by a human participant depending on the nature of the question.

Finally, it learns by discarding rationales that proved incorrect. In effect, the training method gives AI agents the capacity to anticipate future conversations and learn from ongoing ones.
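The loop described above — sample several rationales, compare answers produced with and without them, and keep only the rationales that helped — can be sketched in miniature. This is a toy illustration, not the paper's implementation: the `answer_probability` stand-in and all the numbers are invented for demonstration, and a mock scoring function replaces a real language model.

```python
import random

# Toy stand-in for a language model: returns the probability it assigns
# to the correct answer, optionally conditioned on a hidden rationale.
# The base rate and "quality" field are illustrative, not from the paper.
def answer_probability(prompt, rationale=None):
    base = 0.3  # chance of the right answer with no "thinking"
    if rationale is None:
        return base
    # A helpful rationale raises the probability; a bad one lowers it.
    return base + rationale["quality"]

def quiet_star_step(prompt, num_rationales=4):
    """Sample several rationales in parallel, then keep only those that
    made the correct answer more likely than answering directly."""
    rationales = [
        {"id": i, "quality": random.uniform(-0.2, 0.5)}
        for i in range(num_rationales)
    ]
    baseline = answer_probability(prompt)
    kept = [r for r in rationales
            if answer_probability(prompt, r) > baseline]
    return kept  # in training, these rationales would be reinforced

random.seed(0)
kept = quiet_star_step("What is 6 * 7?")
print(f"kept {len(kept)} of 4 rationales")
```

In the real method, "keeping" a rationale means rewarding the tokens that generated it, so the model gradually learns to produce the kinds of inner monologue that improve its final answers.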

Related: AI singularity may come in 2027 with artificial 'super intelligence' sooner than we think, says top scientist

The researchers applied the Quiet-STaR algorithm to Mistral 7B, an open-source large language model (LLM), and posted the results March 14 to the pre-print database arXiv. (The paper has not yet been peer-reviewed.)

The Quiet-STaR-trained version of Mistral 7B scored 47.2% on a reasoning test, versus 36.3% before the Quiet-STaR training. It still flunked a grade-school math test, earning a score of 10.9%. But that was nearly double the vanilla version's starting score of 5.9%.
