Federal prosecutors are increasing efforts to combat election-related crimes involving artificial intelligence. Deputy Attorney General Lisa Monaco announced that the Justice Department will seek tougher sentences for crimes using AI, including threats against election workers and voter suppression.
Artificial intelligence continues to be a significant concern as the 2024 election approaches. According to a new Elon University Poll, more than three-quarters of Americans believe AI abuses will impact the election’s outcome.
About 73% of Americans believe it’s “very” or “somewhat” likely AI will infiltrate and manipulate social media, including through the use of fake accounts and bots to spread misinformation.
Seventy percent suspect that AI-generated fake video and audio will blur the line between truth and deception. Meanwhile, 62% say AI will be used to convince certain voters to skip voting. Overall, 78% say at least one of these AI abuses will be used, and over half think all of them are likely to happen.
Federal prosecutors are stepping up their efforts. Monaco announced on Monday, May 13, that a Justice Department task force will seek tougher sentences for crimes where AI is used, including threats against election workers and voter suppression.
“These advanced tools are providing new avenues for bad actors to hide their identities and obscure sources of violent threats,” Monaco said. “They’re providing new avenues to misinform and threaten voters through deepfakes, spreading altered video or cloned audio impersonating trusted voices. And they’re providing new avenues to recruit and radicalize with incendiary social media content that accelerates online hate and harassment.”
The policy change aims to address the growing challenges posed by AI tools in the lead-up to the 2024 presidential election. These tools can easily mimic politicians’ voices and likenesses, spreading false information more effectively. The new guidelines will target cases where AI makes these crimes more dangerous and impactful.
A recent example involved an AI-generated robocall in New Hampshire imitating President Joe Biden and urging voters to skip the primary. The robocall was created by a New Orleans magician for a political consultant working for Minnesota Rep. Dean Phillips, a Democratic challenger to Biden.
U.S. officials are also concerned about foreign adversaries using AI to spread election disinformation. In December, senior officials simulated a scenario in which Chinese operatives created an AI-generated video showing a Senate candidate destroying ballots.
“Fulfilling that charge means confronting the full range of threats to our elections,” Attorney General Merrick Garland said. “That includes continuing our work through this task force, our U.S. Attorney’s Offices and our FBI offices across the country to investigate, disrupt and combat unlawful threats against those who administer our elections.”
The Justice Department faces pressure from election officials to investigate the surge of threats and harassment they have received, much of it stemming from false claims of fraud in the 2020 election. In March, an Ohio man was sentenced to over two years in prison for making death threats against an Arizona election official.