
MIT gives AI the power to 'reason like humans' by creating hybrid architecture


MIT researchers have developed a new method to help artificial intelligence (AI) systems conduct complex reasoning tasks in three areas: coding, strategic planning and robotics.

Large language models (LLMs), which include ChatGPT and Claude 3 Opus, process and generate text based on human input, known as "prompts." These technologies have improved greatly in the last 18 months, but are constrained by their inability to understand context as well as humans or perform well in reasoning tasks, the researchers said. 

But MIT scientists now claim to have cracked this problem by creating "a treasure trove" of natural language "abstractions" that could lead to more powerful AI models. Abstractions distill complex subjects into high-level characterizations and omit unimportant details — which could help chatbots reason, learn, perceive, and represent knowledge more like humans do. 

The scientists argue that LLMs currently have difficulty abstracting information in a human-like way. To address this, they have organized natural language abstractions into three libraries, in the hope that AI systems using them will gain greater contextual awareness and give more human-like responses.  

The scientists detailed their findings in three papers published on the arXiv pre-print server on Oct. 30, 2023, Dec. 13, 2023 and Feb. 28. The first framework, called "Library Induction from Language Observations" (LILO), synthesizes, compresses, and documents computer code. The second, named "Action Domain Acquisition" (Ada), covers sequential decision-making by AI. The third, dubbed "Language-Guided Abstraction" (LGA), helps robots better understand their environments and plan their movements. 
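To make the idea of library induction concrete, here is a minimal, hypothetical sketch in Python — not MIT's actual LILO system, whose implementation is far more sophisticated — showing the core intuition: repeated structure across small programs is factored into a single named, documented abstraction that later programs can reuse in compressed form. All function names here are invented for illustration.

```python
# Two task-specific programs that share duplicated low-level logic:
def area_of_square(side):
    return side * side

def area_of_rectangle(width, height):
    return width * height

# A library-induction step would notice the shared pattern and factor it
# into one documented abstraction added to a growing library...
def product(*dims):
    """Abstraction: multiply any number of dimensions together."""
    result = 1
    for d in dims:
        result *= d
    return result

# ...so the original programs compress into short calls against the library:
def area_of_square_v2(side):
    return product(side, side)

def area_of_rectangle_v2(width, height):
    return product(width, height)
```

The payoff in this toy example is small, but over many programs a well-chosen abstraction shortens every program that uses it, which is the compression objective the LILO paper describes.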


These papers explore how language can give AI systems important context so they can handle more complex tasks. They were presented May 11 at the International Conference on Learning Representations in Vienna, Austria. 
