The first U.S. presidential election in the era of deepfakes is presenting officials with challenges never before seen at a time when tech giants are scaling back their cyber-defenses.
Fabricated images and audio clips designed to sway voters are raising alarms in the days leading up to the Republican primary in South Carolina on Feb. 24 and Super Tuesday on March 5, when both parties will hold primaries in a number of states.
“Are protections in place sufficient to thwart the influence of targeted deepfakes this year?” Kathleen Hall Jamieson, director of the Annenberg Public Policy Center at the University of Pennsylvania, asked in an interview. She warned that individuals and nation-states could gain the capacity to generate misleading content that is micro-targeted and harder to detect.
A broad swath of tech companies acknowledged the threat in a major way late Friday. Alphabet Inc.’s (GOOGL, GOOG) Google, Amazon.com Inc. (AMZN), Facebook parent Meta Platforms Inc. (META), Microsoft Corp. (MSFT), OpenAI, X, Adobe Inc. (ADBE), International Business Machines Corp. (IBM), TikTok and others signed a pact to voluntarily adopt “reasonable precautions” to prevent AI tools from being used to disrupt democratic elections worldwide.
In recent weeks, OpenAI, Google and Meta have taken steps to limit the abuse of AI in elections.
AI-generated deepfakes have started making their way into presidential campaign ads and elections. Last month, a robocall featuring a deepfaked voice of President Joe Biden attempted to discourage voting in the New Hampshire primary. That robocall was detected by security experts and covered by U.S. media, but others have probably gone undiscovered.
The threats appear to be more ominous, if not more dangerous, outside the U.S. Days before Slovakia’s parliamentary elections in September, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election. The false narrative spread quickly across social media.
This month, Meta’s Oversight Board, a panel of independent academics, lawyers and experts who review contentious content decisions on the platform, criticized Meta for its “incoherent” and “confusing” policies on manipulated media after an altered video of President Biden spread on Facebook.
Meta decided not to remove the edited video, which showed Biden apparently touching his granddaughter inappropriately. In reality, Biden was placing an “I Voted” sticker on her chest.
“The inability to trust our senses could lead to distrust and paranoia, further breaking down social and political relations between people,” said Sohrob Kazerounian, a distinguished AI researcher at Vectra AI.
Evolution of political meddling online
As technology has evolved, so have the methods individuals and nation-states employ to manipulate elections, from robocalls and targeted hit mail to specious internet rumors. One such robocall targeted then-Republican candidate John McCain before the 2000 South Carolina primary.
The rise of social media played a decisive role in the 2016 presidential election, when Russian-sponsored actors flooded platforms with troll content.
Following the brouhaha, officials at Facebook, Twitter and other platforms shored up defenses, which led to fairly clean elections in 2018 and 2020.
Then came the Jan. 6, 2021, insurrection, stoked in great part by social media. Now, with the emergence of AI, twinned with drastic cutbacks at belt-tightening social-media companies, all bets are off in 2024.
X, Meta and YouTube have laid off thousands of employees and contractors since 2020, including content moderators.
“Even if fully staffed, it is still not enough when half of the world is to vote amid the spread of nuanced, tailored deepfakes,” Jevin West, an associate professor at the University of Washington, said in an interview.
The shrinking of trust-and-safety teams, combined with a fusillade of deceptive content, will “erode trust in a democratic system” and lead “voters to confusion and misperception,” according to West.
“My biggest concern is this has set the stage for things to be worse in 2024 than in 2020,” West said.
The solutions Congress and the Federal Election Commission are exploring have yet to be turned into legislation or rules. That leaves the onus on states such as Colorado, Minnesota and Wisconsin, which are pushing online public-education efforts to promote election officials as a trusted source of election information in 2024.
The Federal Communications Commission this month unanimously banned unsolicited robocalls with AI-generated voices because the technology can mislead people. The FCC said AI-generated voices in unsolicited robocalls are prohibited under the 1991 Telephone Consumer Protection Act, which restricts marketing calls that use artificial and prerecorded voice messages. Robocalls must offer a way for people to opt out of future calls, the FCC said.
Read more: AI-generated voices in robocalls can deceive voters. The FCC just made them illegal.
Ultimately, experts agree, generative AI will have an impact on elections. The question is whether that impact will be good or bad.
“All we hear about is nefarious AI use,” election lawyer Jessica Furst Johnson said in an interview. “But because it is new, it can also be used by election teams to communicate with voters. We don’t really know how it will be used.”
This story originally appeared on MarketWatch.