As the 2024 election cycle begins to unfold, the role of artificial intelligence (AI) is gaining prominence, but concerns are growing about its potential impact on democratic processes. The United Nations (UN) has expressed apprehension, particularly due to the proliferation of deepfake technology that some within the organization have labeled “the greatest threat to democracy.”
Neil Sahota, an AI adviser for the UN, has warned that the upcoming presidential race will likely see a surge in the use of deepfakes — highly convincing manipulated audio and video content. This technology, capable of placing public figures in fabricated media, poses a significant challenge to the integrity of information disseminated during campaigns. Sahota also emphasized that the problem has no straightforward solution.
“A lot of people — and I think those in the media too — are calling the 2024 election ‘the deepfake election’ that is probably going to be marred by tons and tons of deepfakes,” Sahota said. “Not much can be done right now to stop any of that.”
Echoing these concerns, the UN Security Council has cautioned that, left unchecked, AI could become a threat to humanity on par with nuclear warfare. In response, the UN is working to develop and deploy software designed to detect deepfakes. Debunking fabricated content can currently take several days, giving it time to spread widely and sway public opinion.
“If someone releases a very damaging deepfake video two days before the election, that may not be enough time to counteract it and prove it and get people to believe that,” Sahota said.
Reports indicate that deepfakes have already been influencing people’s decision-making processes. According to DeepMedia, an estimated 500,000 audio and video clips featuring synthetic content are projected to be shared throughout 2023.
This is partly due to the increased accessibility of the tools required to create such media. The cost of cloning a voice, for instance, has dropped drastically: companies now offer the service for a few dollars, compared with $10,000 just a year ago.
To combat the rise in usage of deepfakes for misinformation, some industry leaders like OpenAI have taken steps to prevent users from generating images of political figures such as former President Donald Trump or President Joe Biden. However, this approach has not deterred smaller AI startups from allowing users to create such images, with some even providing this ability for free.
“It’s going to be very difficult for voters to distinguish the real from the fake. And you could just imagine how either Trump supporters or Biden supporters could use this technology to make the opponent look bad,” said Darrell West, senior fellow at the Brookings Institution’s Center for Technology Innovation.
Amid these developments, the Federal Election Commission (FEC) has announced its intention to establish regulations governing the use of AI in electoral campaigns ahead of the 2024 elections. While there have been concerns that such regulations might inadvertently infringe on free speech rights, FEC Commissioner Allen Dickerson has said the agency will aim to craft rules that target genuinely fraudulent activity without hindering protected expression.
“Precision of regulation is a requirement in our work,” Dickerson said. “And if the commission has authority to act in this area, I hope that commentators will also demonstrate that it is possible to tailor a regulation to truly fraudulent activity without slowing protected expression.”