AI 'hallucinations' can lead to catastrophic mistakes, but a new approach makes automated decisions more reliable
Scientists have developed a new, multi-stage method to ensure artificial intelligence (AI) systems that are designed to identify anomalies make fewer mistakes and produce explainable and easy-to-understand recommendations.
Recent advances have made AI a valuable tool to help human operators detect and address issues affecting critical infrastructure such as power stations, gas pipelines and dams. But despite this potential, AI models can generate inaccurate or vague results, known as "hallucinations."
Hallucinations are common in large language models (LLMs) like ChatGPT and Google Gemini. They stem from low-quality or biased training data and user prompts that lack additional context, according to Google Cloud.
Some algorithms also exclude humans from the decision-making process: the user enters a prompt, and the AI does the rest, without explaining how it arrived at a prediction. When this technology is applied to a high-stakes area like critical infrastructure, a major concern is that AI's lack of accountability and trustworthiness could lead human operators to make the wrong decisions.
Some anomaly detection systems, for example, have previously been constrained by so-called "black box" AI algorithms, whose opaque decision-making processes produce recommendations that are difficult for humans to understand. This makes it hard for plant operators to determine, for instance, the algorithm's rationale for flagging an anomaly.
A multi-stage approach
To increase AI's reliability and minimize problems such as hallucinations, researchers have proposed four measures, outlined in a paper published July 1 at the CPSS '24 conference. In the study, they focused on AI used for critical national infrastructure (CNI), such as water treatment.
First, the scientists deploy two anomaly detection systems, known as Empirical Cumulative Distribution-based Outlier Detection (ECOD) and Deep Support Vector Data Description (DeepSVDD), to identify a range of attack scenarios in datasets taken from the Secure Water Treatment (SWaT) testbed, a facility used for water treatment research and operator training.
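The paper itself is not reproduced here, but the two-detector setup can be sketched in a few lines. The example below is a minimal illustration, assuming the open-source PyOD library (which implements both ECOD and DeepSVDD) and synthetic sensor readings standing in for the SWaT data; the researchers' actual tooling is not specified in the article.

```python
# A minimal sketch of running ECOD and DeepSVDD side by side with PyOD.
# PyOD is an assumption here, not the paper's confirmed tooling.
import numpy as np
from pyod.models.ecod import ECOD
from pyod.models.deep_svdd import DeepSVDD

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 8))          # normal operating data
X_test = np.vstack([
    rng.normal(size=(95, 8)),                 # mostly normal points...
    rng.normal(loc=5.0, size=(5, 8)),         # ...plus injected anomalies
])

# ECOD is parameter-free: it scores each point using empirical tail probabilities.
ecod = ECOD()
ecod.fit(X_train)

# DeepSVDD learns a neural embedding that pulls normal data toward one center.
# Its constructor arguments vary across PyOD versions; recent releases require
# the input dimensionality.
svdd = DeepSVDD(n_features=X_train.shape[1], epochs=10, verbose=0)
svdd.fit(X_train)

# Each detector labels points 0 (normal) or 1 (anomalous). Flagging a point
# only when both detectors agree is one simple way to combine their outputs.
flagged = (ecod.predict(X_test) == 1) & (svdd.predict(X_test) == 1)
print(f"{flagged.sum()} of {len(X_test)} test points flagged by both detectors")
```

Running two detectors with very different inductive biases, one statistical and one deep-learning based, and cross-checking their outputs is a common way to build confidence in an anomaly flag before surfacing it to a human operator.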