Exclusive: Tech Companies Are Failing to Keep Elections Safe, Rights Groups Say
A quarter of the way into the most consequential election year in living memory, tech companies are failing their biggest test. Such is the charge leveled by at least 160 rights groups across 55 countries, which are collectively calling on tech platforms to urgently adopt stronger measures to safeguard people and elections amid rampant online disinformation and hate speech.
“Despite our and many others’ engagement, tech companies have failed to implement adequate measures to protect people and democratic processes from tech harms that include disinformation, hate speech, and influence operations that ruin lives and undermine democratic integrity,” reads the organizations’ joint letter, shared exclusively with TIME by the Global Coalition for Tech Justice, a consortium of civil society groups, activists, and experts. “In fact, tech platforms have apparently reduced their investments in platform safety and have restricted data access, even as they continue to profit from hate-filled ads and disinformation.”
In July, the coalition reached out to leading tech companies, among them Meta (which owns Facebook and Instagram), Google (which owns YouTube), TikTok, and X (formerly known as Twitter), asking them to establish transparent, country-specific plans for the upcoming election year, in which more than half of the world’s population would go to the polls across some 65 countries. But those calls were largely ignored, says Mona Shtaya, the campaigns and partnerships manager at Digital Action, the convenor of the Global Coalition for Tech Justice. She notes that while many of these firms have published press releases on their approach to the election year, those statements are often vague and lack country-specific details, such as the number of content moderators per country, language, and dialect. Crucially, some appeared to focus disproportionately on the U.S. elections.
“Because they are legally and politically accountable in the U.S., they are taking more strict measures to protect people and their democratic rights in the U.S.,” says Shtaya. “But in the rest of the world, there are different contexts that could lead to the spread of disinformation, misinformation, hateful content, gender-based violence, or smear campaigns against certain political parties or even vulnerable communities.”
When reached for comment, TikTok pointed TIME to a statement on its plans to protect election integrity, as well as separate posts on its plans for the elections in Indonesia, Bangladesh, Taiwan, Pakistan, the European Parliament, the U.S., and the U.K. Google similarly pointed to its published statements on the upcoming U.S. election, as well as the forthcoming contests in India and Europe. Meta noted that it has “provided extensive public information about our preparations for elections in major countries around the world,” including in statements on forthcoming elections in India, the E.U., Brazil, and South Africa.
X did not respond to requests for comment.
Tech platforms have long had a reputation for underinvesting in content moderation in non-English languages, sometimes to dangerous effect. In India, which kicks off its national election this week, anti-Muslim hate speech under the country’s Hindu nationalist government has fueled rising communal violence. Despite the risk of such violence, observers warn that anti-Muslim and misogynistic hate speech continues to run rampant on platforms such as Facebook, Instagram, and YouTube. In South Africa, which goes to the polls next month, online xenophobia has spilled over into real-life violence targeting migrant workers, asylum seekers, and refugees—something that observers say social media platforms have done little to curb. Indeed, a joint investigation conducted last year by the Cape Town-based human-rights organization Legal Resources Centre and the international NGO Global Witness found that Facebook, TikTok, and YouTube approved 10 non-English advertisements that violated the platforms’ own hate-speech policies.
The Global Coalition for Tech Justice contends that, rather than investing in more extensive content moderation, tech platforms are doing just the opposite. “In the past year, Meta, Twitter, and YouTube have collectively removed 17 policies aimed at guarding against hate speech and disinformation,” Shtaya says, referencing a recent report by the non-profit media watchdog Free Press. She adds that all three companies have undergone layoffs, some of which directly affected teams dedicated to content moderation and trust and safety.
Just last month, Meta announced its decision to shut down CrowdTangle, an analytics tool widely used by journalists and researchers to track misinformation and other viral content on Facebook and Instagram. The tool will cease to function on Aug. 14, 2024, less than three months before the U.S. presidential election. The Mozilla Foundation and 140 other civil society organizations (including several that signed onto the Global Coalition for Tech Justice letter) condemned the move, deeming it “a direct threat to our ability to safeguard the integrity of elections.”
Perhaps the biggest concern surrounding this year’s elections is the threat posed by AI-generated disinformation, which has already proven capable of producing fake images, audio, and video with alarming believability. Political deepfakes have already cropped up in elections in Slovakia (where AI-generated audio recordings purported to show a top candidate, who went on to lose, boasting about rigging the election) and Pakistan (where a video of a candidate was altered to tell voters to boycott the vote). That they’ll feature in the upcoming U.S. presidential contest is almost a given: Last year, former President and presumptive Republican presidential nominee Donald Trump shared a manipulated video that used AI voice-cloning of CNN host Anderson Cooper. More recently, a robocall purportedly recorded by President Biden (in fact, an AI-generated impersonation of him) attempted to discourage voters from participating in the New Hampshire Democratic presidential primary just days before the vote. (A political consultant who confessed to being behind the hoax claimed he was trying to warn the country about the perils of AI.)
This isn’t the first time tech companies have been called out for their lack of preparedness. Just last week, a coalition of more than 200 civil society organizations, researchers, and journalists sent a letter to the top executives of a dozen social media platforms, calling on them to take “swift action” to combat AI-driven disinformation and to reinforce content moderation, civil-society oversight tools, and other election integrity policies. Until these platforms respond to such calls, it’s unlikely to be the last.