As the 2024 election cycle begins, voters face an unprecedented challenge demanding heightened vigilance: robocalls and AI-generated deepfakes pose a serious threat to election integrity, pushing authorities to confront these tactics with newfound urgency.
One early example came in January, when a robocall mimicking the voice of President Joe Biden was identified as an AI-generated attempt to dissuade voters from participating in New Hampshire’s primary.
Authorities traced the robocall to two Texas-based companies and issued cease-and-desist orders against them. New Hampshire Attorney General John Formella is treating the incident as a case of illegal voter suppression.
In response to growing concern over such practices, the Federal Communications Commission (FCC) took action on Thursday, Feb. 8, announcing an immediate ban on scam robocalls that use AI-generated “voice cloning” technology. The ruling aims to bolster election security and combat misinformation.
FCC Commissioner Geoffrey Starks commented on the ban, highlighting the dangers posed by malicious use of AI technologies. According to Starks, these technologies threaten not only election integrity but also public safety, exploiting the most vulnerable members of society.
The FCC’s ruling empowers state authorities to pursue and prosecute those behind such schemes. The agency has long relied on the Telephone Consumer Protection Act to charge robocallers and telephone scammers; this new step makes clear that AI-generated voices, including voice clones, fall under the act’s jurisdiction.
The implications of these violations are far-reaching, with penalties of up to $1,500 per call and no cap on total damages. With nearly 25,000 people, predominantly Democrats, receiving the misleading calls impersonating Biden, damages at that rate could approach $38 million.