AI in politics: How synthetic media could shape the 2024 presidential election


Full story

Following President Joe Biden’s announcement that he will be running for reelection in 2024, the Republican National Committee released an AI-generated video depicting a dystopian version of his potential second term. AI-generated images, audio, and video are collectively known as synthetic media, which can be influential in shaping public opinion.

Henry Ajder, an advisor on generative AI, deepfakes, and AI policy, has studied the societal implications of these technologies.

“We’ve already seen, even now in 2023, that deepfakes and generative AI have become a massive part of the digital landscape, in terms of memes and satire,” Ajder said. “So, I imagine in the year of the election in 2024, we will see much more of this kind of content, some of it intended as satire. Some of it is, you know, intended as kind of intentionally deceptive.”

Synthetic media can effectively manipulate public opinion and spread disinformation. A prime example of this was when AI-generated images of former President Trump in handcuffs circulated on social media.

The availability of AI image generators, such as Midjourney, means that more people are gaining access to these powerful technologies.

“The real question for me, though, is whether we’re going to see that hyper-realistic video or audio, that even critical viewers can’t tell if it’s real or not,” Ajder said.

Convincing deepfakes could create the illusion of a political candidate engaging in incriminating or defamatory actions, potentially derailing their campaign.

While there are signs that can give away an AI-generated video, such as distorted lighting, blurry edges, or a lack of blinking, Ajder said that expecting voters to discern between real and fake content is not, ultimately, the solution.

“It’s really unfair to expect the individual to be able to dedicate so much of their time to sort of pore over all of the details of every single image they see on their social media feed, or keep up to date with all the latest techniques and the latest papers coming out about what new generative tools are going to be released,” he said.

Ajder suggested that the focus should be on restrictions and content moderation.

“The emphasis really needs to be placed actually, on the companies that are developing the technologies and the platforms that are hosting the content, as well as the government, international governments that are responsible for regulating and holding those groups to account,” Ajder said. “We need to have better safety measures implemented in these tools to stop them being misused so easily.”

In 2019, California passed legislation banning deepfakes of politicians within 60 days of elections, provided the creator intended to harm the politician’s reputation or deceive voters. However, proving such intent and enforcing the law has proven difficult. Critics, including the ACLU of California, have argued that it infringes on freedom of speech. Furthermore, the law has an exception for synthetic ads that disclose their origin, such as the RNC video.

As the nation heads toward the next presidential election, the debate over the regulation of synthetic media continues to intensify. The rapid advancement of AI technology is set to further complicate this issue.
