FCC bans robocalls using deepfake voice clones – but AI-generated disinformation still looms over elections

The Federal Communications Commission on Feb. 8, 2024, outlawed robocalls that use voices generated by artificial intelligence.

The 1991 Telephone Consumer Protection Act bans artificial voices in robocalls. The FCC’s Feb. 8 ruling declares that AI-generated voices, including clones of real people’s voices, are artificial and therefore banned by law.

The move comes on the heels of a robocall on Jan. 21, 2024, from what sounded like President Joe Biden. The call used Biden’s voice to urge voters inclined to support Biden and the Democratic Party not to participate in New Hampshire’s Jan. 23 GOP primary election. The call falsely implied that a registered Democrat could vote in the Republican primary and that a voter who voted in the primary would be ineligible to vote in the general election in November.

The call, placed two days before the primary, appears to have been an artificial intelligence deepfake and an attempt to discourage voting.

The FCC and the New Hampshire attorney general’s office are investigating the call. On Feb. 6, 2024, New Hampshire Attorney General John Formella identified two Texas companies, Life Corp. and Lingo Telecom, as the source and transmitter, respectively, of the call.

Injecting confusion

Robocalls in elections are nothing new and are not illegal; many are simply efforts to get out the vote. But they have also been used in voter suppression campaigns. What compounds the problem in this case is the use of AI to clone Biden’s voice.

In a media ecosystem full of noise, scrambled signals such as deepfake robocalls make it virtually impossible to tell facts from fakes.

Recently, a number of companies have popped up online offering impersonation as a service. For ordinary users, it’s as easy as selecting a politician, celebrity or executive like Joe Biden, Donald Trump or Elon Musk from a menu and typing a script of what you want them to appear to say; the website then creates the deepfake automatically.

Though the audio and video output is usually choppy and stilted, audio delivered via a robocall is very believable. You could easily think you were hearing a recording of Joe Biden, when in fact it’s machine-made misinformation.

Context is key

I’m a media and disinformation scholar. In 2019, information scientist Brit Paris and I studied how generative adversarial networks – what most people today think of as AI – would transform the ways institutions assess evidence and make decisions when judging realistic-looking audio and video manipulation. We found that no single piece of media is reliable on its face; rather, context matters for making an interpretation.

When it comes to AI-enhanced disinformation, the believability of deepfakes hinges on where you see or hear them or who shares…
