Novel Chinese computing architecture 'inspired by human brain' can lead to AGI, scientists say

Scientists in China have created a new computing architecture that can train advanced artificial intelligence (AI) models while consuming fewer computing resources — and they hope that it will one day lead to artificial general intelligence (AGI).

The most advanced AI models today — predominantly large language models (LLMs) like ChatGPT or Claude 3 — use neural networks: layered collections of machine learning algorithms that process data in a way loosely similar to the human brain, weighing different options to arrive at conclusions.

LLMs are currently limited because they can't perform beyond the confines of their training data and can't reason the way humans do. AGI, by contrast, is a hypothetical system that could reason, contextualize, edit its own code and understand or learn any intellectual task that a human can.

Today, creating smarter AI systems relies largely on building ever-larger neural networks. Some scientists believe neural networks could lead to AGI if scaled up sufficiently. But this may be impractical, given that energy consumption and the demand for computing resources scale up alongside the networks themselves.
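To make the scaling problem concrete, here is a small illustrative sketch (not from the study): in a plain fully connected network, the number of parameters in the hidden layers grows roughly quadratically with layer width, which is one reason "just make it bigger" becomes expensive quickly. The function and its default sizes below are hypothetical, chosen only for illustration.

```python
def mlp_params(width: int, depth: int, n_in: int = 784, n_out: int = 10) -> int:
    """Count weights and biases of a fully connected network with
    `depth` hidden layers, each `width` units wide (illustrative only)."""
    params = n_in * width + width                    # input -> first hidden layer
    params += (depth - 1) * (width * width + width)  # hidden -> hidden layers
    params += width * n_out + n_out                  # last hidden -> output
    return params

# Doubling the width roughly quadruples the hidden-to-hidden parameter count.
for width in (256, 512, 1024, 2048):
    print(width, mlp_params(width, depth=4))
```

Compute, memory and energy requirements all track this parameter count, so widening a network 4x can mean paying well over 10x the training cost.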

Other researchers suggest that novel architectures, or a combination of different computing architectures, are needed to achieve a future AGI system. In that vein, a new study published Aug. 16 in the journal Nature Computational Science proposes a computing architecture inspired by the human brain that its authors say could sidestep the practical problems of scaling up neural networks.

"Artificial intelligence (AI) researchers currently believe that the main approach to building more general model problems is the big AI model, where existing neural networks are becoming deeper, larger and wider. We term this the big model with external complexity approach," the scientists said in the study. "In this work we argue that there is another approach called small model with internal complexity, which can be used to find a suitable path of incorporating rich properties into neurons to construct larger and more efficient AI models."
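The contrast the authors draw can be sketched with a toy parameter-budget comparison. This is my own illustration, not the paper's model: "external complexity" spends the budget on many simple neurons, while "internal complexity" spends it on fewer neurons that each carry extra per-neuron parameters governing richer internal dynamics (a stand-in for biologically inspired neuron models). All names and numbers below are hypothetical.

```python
def simple_neuron_params(n_neurons: int, fan_in: int) -> int:
    # "External complexity": each simple neuron has only its
    # incoming weights plus one bias.
    return n_neurons * (fan_in + 1)

def rich_neuron_params(n_neurons: int, fan_in: int, internal: int) -> int:
    # "Internal complexity": each neuron additionally carries `internal`
    # parameters describing its own dynamics.
    return n_neurons * (fan_in + 1 + internal)

big = simple_neuron_params(n_neurons=4096, fan_in=512)     # many simple units
small = rich_neuron_params(n_neurons=1024, fan_in=512, internal=16)  # fewer, richer units
print(big, small)
```

The study's claim, in these toy terms, is that the second kind of budget allocation can match the expressiveness of the first while using far fewer total parameters and less compute.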
