Technology
Hackers Could Use ChatGPT to Target 2024 Elections
The rise of generative AI tools like ChatGPT has increased the potential for a wide range of attackers to target elections around the world in 2024, according to a new report by cybersecurity giant CrowdStrike.
Both state-linked hackers and allied so-called “hacktivists” are increasingly experimenting with ChatGPT and other AI tools, enabling a wider range of actors to carry out cyberattacks and scams, according to the company’s annual global threats report. This includes hackers linked to Russia, China, North Korea, and Iran, who have been testing new ways to use these technologies against the U.S., Israel, and European countries.
With half the world’s population set to vote in 2024, the use of generative AI to target elections could be a “huge factor,” says Adam Meyers, head of counter-adversary operations at CrowdStrike. So far, CrowdStrike analysts have been able to detect the use of these models through comments left in malicious scripts, apparently inserted by a tool like ChatGPT. But, Meyers warns, “this is going to get worse throughout the course of the year.”
If state-linked actors continue to improve their use of AI, “it’s really going to democratize the ability to do high-quality disinformation campaigns” and speed up the tempo at which they’re able to carry out cyberattacks, Meyers says.
“Given the ease with which AI tools can generate deceptive but convincing narratives, adversaries will highly likely use such tools to conduct [information operations] against elections in 2024,” the report’s authors say. “Politically active partisans within those countries holding elections will also likely use generative AI to create disinformation to disseminate within their own circles.”
The CrowdStrike report highlights how the digital battleground has expanded beyond active conflict zones like Ukraine and Gaza. In 2023, groups linked to Yemen, Pakistan, Indonesia, and Turkey targeted entities in the U.S. and Europe “in retaliation against real or perceived support of Israel.” In October, a Yemeni group claimed credit for a DDoS attack against an unidentified U.S. airport, according to the CrowdStrike report. A South Asian hacktivist group claimed a similar attack against a British military website, which was “accompanied by references to U.K. support for Israel.” And an Indonesian group claimed to have breached the personal data of 790,000 doctors in the U.S. “reportedly in retaliation against U.S. support for Israel as well as to show support for Palestinians,” according to the report.
Some of the tech companies developing AI tools have been sounding the alarm themselves. Last month, OpenAI announced it would be rolling out new policies meant to combat disinformation and the misuse of its tools ahead of the 2024 elections, including verified news and image-authenticity programs. Microsoft has warned that state-backed hackers from China, Iran, and Russia have been using OpenAI’s large language models to improve their cyberattacks, refining scripts and improving their targeting techniques. While Microsoft has not yet found evidence of “significant attacks” employing these large language models, cybercrime groups, nation-state threat actors, and other adversaries “are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft said.
In one recent case, Microsoft and OpenAI analysts say they detected attempts from attackers working with Russia’s military intelligence to use their tools to understand satellite communication protocols and radar imaging technologies. “These queries suggest an attempt to acquire in-depth knowledge of satellite capabilities,” Microsoft said in a statement. One China-affiliated actor known as “Salmon Typhoon” used OpenAI tools to “translate technical papers, retrieve publicly available information on multiple intelligence agencies and regional threat actors, assist with coding, and research common ways processes could be hidden on a system,” the company said in a post on Feb. 14.
While it’s not clear to what extent these attacks will succeed in influencing upcoming elections, they have already caused disruptions. Taiwan’s elections last month saw a sharp spike in cyberattacks targeting government offices from suspected China-linked actors, according to an analysis shared with TIME by U.S.-based cybersecurity firm Trellix. “Malicious cyber activity rose significantly from 1,758 detections on January 11 to over 4,300 on January 12,” the day before the election, according to Trellix analysts, before dropping dramatically again. “The timing suspiciously [suggests] a goal of influencing election outcomes.”