The Federal Communications Commission (FCC) took a significant step on Thursday, outlawing the use of voice-cloning technology in robocalls. The decision gives states additional legal tools to pursue the perpetrators behind these deceptive calls.
Effective immediately, the ruling addresses a growing concern as advancements in technology enable scammers to employ recordings that mimic the voices of public figures, including celebrities, politicians, and even family members.
FCC Chairwoman Jessica Rosenworcel emphasized the urgency of the issue, highlighting how AI-generated voices are used in unsolicited robocalls to prey on vulnerable individuals, impersonate notable figures, and spread misinformation. Under the new regulations, state attorneys general are empowered to crack down on such scams, protecting the public from fraud and deception.
The FCC’s action was prompted by a recent incident preceding New Hampshire’s presidential primary, in which a fabricated robocall impersonating President Biden urged voters to stay home from the election. Investigations traced the calls to two Texas-based companies, prompting a criminal probe by New Hampshire Attorney General John Formella.
In response to the rise of AI-generated disinformation, Senators Amy Klobuchar and Susan Collins urged the U.S. Election Assistance Commission to take proactive measures against such campaigns targeting voters. This incident underscores the broader challenge posed by the proliferation of manipulated images, videos, and audio in the digital landscape, particularly amid the ongoing 2024 campaign cycle.