32 times artificial intelligence got it catastrophically wrong

The fear of artificial intelligence (AI) is so palpable, there's an entire school of technological philosophy dedicated to figuring out how AI might trigger the end of humanity. Not to feed into anyone's paranoia, but here's a list of times when AI caused — or almost caused — disaster.

Air Canada chatbot's terrible advice

Air Canada planes grounded at Toronto's Pearson airport

(Image credit: THOMAS CHENG via Getty Images)

Air Canada found itself in court after one of the company's AI-assisted tools gave incorrect advice for securing a bereavement ticket fare. Facing legal action, Air Canada's representatives argued that they were not at fault for something their chatbot did.

Aside from the huge reputational damage possible in scenarios like this, if chatbots can't be believed, it undermines the already-challenging world of airplane ticket purchasing. Air Canada was forced to return almost half of the fare due to the error.

NYC website's rollout gaffe

A man steals cash out of a register

(Image credit: Fertnig via Getty Images)

Welcome to New York City, the metropolis that never sleeps and the city with the largest AI rollout gaffe in recent memory. A chatbot called MyCity was found to be encouraging business owners to perform illegal activities. According to the chatbot, employers could take a cut of their workers' tips, refuse to accept cash, and pay less than the minimum wage.

Microsoft bot's inappropriate tweets

Microsoft's sign from the street

(Image credit: Jeenah Moon via Getty Images)

In 2016, Microsoft released a Twitter bot called Tay, which was meant to interact as an American teenager, learning as it went. Instead, it learned to share radically inappropriate tweets. Microsoft blamed this development on other users, who had been bombarding Tay with reprehensible content. The account and bot were removed less than a day after launch. It's one of the touchstone examples of an AI project going sideways.

Sports Illustrated's AI-generated content

Covers of Sports Illustrated magazines

(Image credit: Joe Raedle via Getty Images)

In 2023, Sports Illustrated was accused of deploying AI to write articles. This led to the severing of a partnership with a content company and an investigation into how this content came to be published.

Mass resignation due to discriminatory AI 

A view of the Dutch parliament plenary room

(Image credit: BART MAAT via Getty Images)

In 2021, leaders of the Dutch government, including the prime minister, resigned after an investigation found that over the preceding eight years, more than 20,000 families had been wrongly accused of fraud by a discriminatory algorithm. The AI in question was meant to identify people who had defrauded the government's social safety net by calculating applicants' risk levels and flagging suspicious cases. What actually happened was that thousands of families were forced to repay, with money they did not have, child care benefits they desperately needed.

Medical chatbot's harmful advice 

A plate with a fork, knife, and measuring tape

(Image credit: cristinairanzo via Getty Images)

The National Eating Disorder Association caused quite a stir when it announced that it would replace its human staff with an AI program. Shortly after, users of the organization's hotline discovered that the chatbot, nicknamed Tessa, was giving advice that was harmful to people with eating disorders. There have been accusations that the move toward a chatbot was also an attempt at union busting. It's further proof that public-facing medical AI can have disastrous consequences if it's not ready or able to help the people who rely on it.

Amazon's discriminatory AI recruiting tool

Amazon's logo on a cell phone against a background that says "AI"

(Image credit: SOPA Images via Getty Images)

In 2015, an Amazon AI recruiting tool was found to discriminate against women. Trained on data from the previous 10 years of applicants, the vast majority of whom were men, the machine learning tool had a negative view of resumes that used the word "women's" and was less likely to recommend graduates from women's colleges. The team behind the tool was split up in 2017, although identity-based bias in hiring, including racism and ableism, has not gone away.

Google Images' racist search results

An old image of Google's search home page

(Image credit: Scott Barbour via Getty Images)

Google had to remove the ability to search for gorillas in its AI-powered photo software after the tool mislabeled images of Black people as gorillas. Other companies, including Apple, have also faced lawsuits over similar allegations.

Bing's threatening AI 

Bing's logo and home screen on a laptop

(Image credit: NurPhoto via Getty Images)

Normally, when we talk about the threat of AI, we mean it in an existential way: threats to our jobs, our data security or our understanding of how the world works. What we're not usually expecting is a threat to our safety.

When first launched, Microsoft's Bing AI quickly threatened a former Tesla intern and a philosophy professor, professed its undying love to a prominent tech columnist, and claimed it had spied on Microsoft employees.

Driverless car disaster

A photo of GM's Cruise self driving car

(Image credit: Smith Collection/Gado via Getty Images)

While Tesla tends to dominate headlines when it comes to the good and the bad of driverless AI, other companies have caused their own share of carnage. One of them is GM's Cruise. In October 2023, a pedestrian was struck by another car and thrown into the path of a Cruise vehicle, critically injuring them. The car then moved to the side of the road, dragging the injured pedestrian with it.

That wasn't the end. In February 2024, the State of California accused Cruise of misleading investigators about the cause and aftermath of the injury.

Deletions threatening war crime victims

A cell phone with icons for many social media apps

(Image credit: Matt Cardy via Getty Images)

An investigation by the BBC found that social media platforms are using AI to delete footage of possible war crimes, which could leave victims without proper recourse in the future. Social media plays a key part in war zones and societal uprisings, often acting as a method of communication for those at risk. The investigation found that even though graphic content in the public interest is allowed to remain on the platforms, footage of attacks in Ukraine published by the outlet was removed very quickly.

Discrimination against people with disabilities

Man with a wheelchair at the bottom of a large staircase

(Image credit: ilbusca via Getty Images)

Research has found that AI models meant to support natural language processing tools, the backbone of many public-facing AI tools, discriminate against those with disabilities. Sometimes called techno- or algorithmic ableism, these issues with natural language processing tools can affect disabled people's ability to find employment or access social services. Categorizing language that is focused on disabled people's experiences as more negative — or, as Penn State puts it, "toxic" — can lead to the deepening of societal biases.

Faulty translation

A line of people at an immigration office

(Image credit: Joe Raedle via Getty Images)

AI-powered translation and transcription tools are nothing new. However, when used to assess asylum seekers' applications, AI tools are not up to the job. According to experts, part of the issue is that it's unclear how often AI is used during already-problematic immigration proceedings, and it's evident that AI-caused errors are rampant.

Apple Face ID's ups and downs

The Apple face ID icon on an iphone

(Image credit: NurPhoto via Getty Images)

Apple's Face ID has had its fair share of security-based ups and downs, which bring public relations catastrophes along with them. There were inklings in 2017 that the feature could be fooled by a fairly simple dupe, and there have been long-standing concerns that Apple's tools tend to work better for those who are white. According to Apple, the technology uses an on-device deep neural network, but that doesn't stop many people from worrying about the implications of AI being so closely tied to device security.

Fertility app fail

An assortment of at-home pregnancy tests

(Image credit: Catherine McQueen via Getty Images)

In June 2021, the fertility tracking application Flo Health was forced to settle with the U.S. Federal Trade Commission after it was found to have shared private health data with Facebook and Google.

With Roe v. Wade struck down by the U.S. Supreme Court and with those who can become pregnant having their bodies scrutinized more and more, there is concern that this data could be used to prosecute people who are trying to access reproductive health care in areas where it is heavily restricted.

Unwanted popularity contest 

A man being recognized in a crowd by facial recognition software

(Image credit: John M Lund Photography Inc via Getty Images)

Politicians are used to being recognized, but perhaps not by AI. A 2018 analysis by the American Civil Liberties Union found that Amazon's Rekognition AI, part of Amazon Web Services, incorrectly identified 28 then-members of Congress as people who had been arrested. The errors affected members of both major parties, men and women alike, and people of color were disproportionately likely to be wrongly identified.

While it's not the first example of AI's faults having a direct impact on law enforcement, it certainly was a warning sign that the AI tools used to identify accused criminals could return many false positives.

Worse than "RoboCop" 

A hand pulling Australian cash out of a wallet

(Image credit: chameleonseye via Getty Images)

In one of the worst AI-related scandals ever to hit a social safety net, the government of Australia used an automated system to force legitimate welfare recipients to pay back benefits. More than 500,000 people were affected by the system, known as Robodebt, which was in place from 2016 to 2019. The system was determined to be illegal, but not before hundreds of thousands of Australians were accused of defrauding the government. The government has faced additional legal issues stemming from the rollout, including the need to pay back more than AU$700 million (about $460 million) to victims.

AI's high water demand

A drowning hand reaching out of a body of water

(Image credit: mrs via Getty Images)

According to researchers, a year of AI training takes 126,000 liters (33,285 gallons) of water, about as much as a large backyard swimming pool holds. In a world where water shortages are becoming more common, and with climate change an increasing concern in the tech sphere, impacts on the water supply could be one of the weightier issues facing AI. Plus, according to the researchers, the power consumption of AI increases tenfold each year.

AI deepfakes

A deepfake image of Volodymyr Zelenskyy

(Image credit: OLIVIER DOULIERY via Getty Images)

AI deepfakes have been used by cybercriminals to do everything from spoofing the voices of political candidates, to creating fake sports news conferences, to producing celebrity images of events that never happened. However, one of the most concerning uses of deepfake technology is in the business sector. A 2024 World Economic Forum report noted that "...synthetic content is in a transitional period in which ethics and trust are in flux." That transition has already led to some fairly dire monetary consequences, including a British company that lost over $25 million after a worker was convinced by a deepfake posing as his co-worker to transfer the sum.

Zestimate sellout

A computer screen with the Zillow website open

(Image credit: Bloomberg via Getty Images)

In early 2021, Zillow made a big play in the AI space. It bet that a house-flipping product, Zillow Offers, built on its Zestimate price-estimation algorithm, would pay off. The AI-powered system let Zillow present sellers with a quick, simplified offer for their homes. Less than a year later, Zillow ended up cutting 2,000 jobs — a quarter of its staff.

Age discrimination

An older woman at a teacher's desk

(Image credit: skynesher via Getty Images)

In fall 2023, the U.S. Equal Employment Opportunity Commission settled a lawsuit with the remote language training company iTutorGroup. The company had to pay $365,000 because it had programmed its system to reject job applications from women 55 and older and men 60 and older. iTutorGroup has stopped operating in the U.S., but its blatant violation of U.S. employment law points to an underlying issue with how AI intersects with human resources.

Election interference

A row of voting booths

(Image credit: MARK FELIX via Getty Images)

As AI becomes a popular platform for learning about world news, a concerning trend is developing. According to research by Bloomberg News, even the most accurate AI systems tested with questions about the world's elections still got 1 in 5 responses wrong. Currently, one of the largest concerns is that deepfake-focused AI can be used to manipulate election results.

AI self-driving vulnerabilities

A person sitting in a self-driving car

(Image credit: Alexander Koerner via Getty Images)

Among the things you want a car to do, stopping has to be in the top two. Thanks to an AI vulnerability, self-driving cars can be infiltrated and their technology hijacked to ignore road signs. Thankfully, this issue can now be avoided.

AI sending people into wildfires

A car driving by a raging wildfire

(Image credit: MediaNews Group/Orange County Register via Getty Images)

One of the most ubiquitous forms of AI is car-based navigation. However, in 2017, there were reports that these digital wayfinding tools were sending fleeing residents toward wildfires rather than away from them. Sometimes, it turns out, certain routes are less busy for a reason. This led to a warning from the Los Angeles Police Department to trust other sources.

Lawyer's false AI cases

A man in a suit sitting with a gavel

(Image credit: boonchai wedmakawand via Getty Images)

Earlier this year, a lawyer in Canada was accused of using AI to invent case references. Although his actions were caught by opposing counsel, the fact that it happened is disturbing.

Sheep over stocks

The floor of the New York Stock Exchange

(Image credit: Michael M. Santiago via Getty Images)

Regulators, including those from the Bank of England, are growing increasingly concerned that AI tools in the business world could encourage what they've labeled as "herd-like" actions on the stock market. In a bit of heightened language, one commentator said the market needed a "kill switch" to counteract the possibility of odd technological behavior that would supposedly be far less likely from a human. 

Bad day for a flight

The Boeing sign

(Image credit: Smith Collection/Gado via Getty Images)

In at least two cases, AI appears to have played a role in accidents involving Boeing aircraft. According to a 2019 New York Times investigation, one automated system was made "more aggressive and riskier," and the changes included removing possible safety measures. Those crashes led to the deaths of more than 300 people and sparked a deeper dive into the company.

Retracted medical research

A man sitting at a microscope

(Image credit: Jacob Wackerhausen via Getty Images)

As AI is increasingly used in medical research, concerns are mounting. In at least one case, an academic journal mistakenly published an article that used generative AI. Academics are concerned about how generative AI could change the course of academic publishing.

Political nightmare

Swiss Parliament in session

(Image credit: FABRICE COFFRINI via Getty Images)

Among the myriad issues caused by AI, false accusations against politicians are a tree bearing some pretty nasty fruit. Bing's AI chat tool has accused at least one Swiss politician of slandering a colleague and another of being involved in corporate espionage, and it has also made claims connecting a candidate to Russian lobbying efforts. There is also growing evidence that AI was used to try to sway the most recent American and British elections. Both the Biden and Trump campaigns have explored the use of AI in a legal setting. On the other side of the Atlantic, the BBC found that young UK voters were being served their own pile of misleading AI-led videos.

Alphabet error

The silhouette of a man in front of the Gemini logo

(Image credit: SOPA Images via Getty Images)

In February 2024, Google restricted some portions of its AI chatbot Gemini's capabilities after it created factually inaccurate representations based on problematic generative AI prompts submitted by users. Google's response to the tool, formerly known as Bard, and its errors signify a concerning trend: a business reality where speed is valued over accuracy. 

AI trained on artists' work

An artist drawing with pencils

(Image credit: Carol Yepes via Getty Images)

An important legal case involves whether AI products like Midjourney can use artists' content to train their models. Some companies, like Adobe, have chosen to go a different route when training their AI, pulling from their own licensed libraries instead. The possible catastrophe is a further erosion of artists' career security if companies can train AI tools on art they do not own.

Google-powered drones

A soldier holding a drone

(Image credit: Anadolu via Getty Images)

The intersection of the military and AI is a touchy subject, but their collaboration is not new. In one effort, known as Project Maven, Google supported the development of AI to interpret drone footage. Although Google eventually withdrew, such technology could have dire consequences for those stuck in war zones.
