Human extinction. Think about that for a second. The erasure of the human race from planet Earth.
That is what dozens of AI industry leaders, academics and even some celebrities sounded the alarm about on Tuesday.
They signed a one-sentence open letter to the public which called for reducing the risk of global annihilation due to artificial intelligence, arguing that the threat of an AI extinction event should be a top global priority.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement published by the Centre for AI Safety said.
The statement was signed by leading industry officials including OpenAI CEO Sam Altman; the so-called “godfather” of AI, Geoffrey Hinton; top executives and researchers from Google DeepMind and Anthropic; Microsoft’s chief technology officer Kevin Scott; internet security and cryptography pioneer Bruce Schneier; climate advocate Bill McKibben; and the musician Grimes, among others.
The statement highlights wide-ranging concerns about the ultimate danger of unchecked artificial intelligence.
AI experts have said society is still a long way from developing the kind of artificial general intelligence that is the stuff of science fiction — today’s cutting-edge chatbots largely reproduce patterns based on training data they’ve been fed and do not think for themselves.
Still, the flood of hype and investment into the AI industry has led to calls for regulation at the outset of the AI age, before any major mishaps occur.
Already, a growing number of politicians, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.
Hinton, whose pioneering work helped shape today’s AI systems, previously told CNN he decided to leave his role at Google and “blow the whistle” on the technology after “suddenly” realising “that these things are getting smarter than us”.
Centre for AI Safety director Dan Hendrycks said the statement, first proposed by David Krueger, an AI professor at the University of Cambridge, does not preclude society from addressing other types of AI risk, such as algorithmic bias or misinformation.
“Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and,’” Hendrycks tweeted.
“From a risk management perspective, just as it would be reckless to exclusively prioritise present harms, it would also be reckless to ignore them as well.”