The Federal Communications Commission (FCC) announced Thursday that it had unanimously voted to make AI-generated voice cloning in robocalls illegal, a warning shot to scammers not to use the cutting-edge technology for fraud or to spread election-related disinformation.

The agency said that, beginning Thursday, it will rely on the 1991 Telephone Consumer Protection Act (TCPA), which bars robocalls using pre-recorded and artificial voice messages, to target voice-cloning calls by issuing a Declaratory Ruling under the statute.

The decision follows an AI-generated robocall that mimicked President Joe Biden's voice and urged voters not to turn out for the New Hampshire primary, alarming election security officials nationwide.

On Tuesday, the New Hampshire Attorney General identified a Texas man and his company, Life Corporation, as the source of the calls. Officials believe more than 20,000 New Hampshire residents received them.

“This will make it easier for state attorneys general in particular to go after these scammers,” said Harold Feld, senior vice president at the advocacy group Public Knowledge and an expert on the FCC. “While it seems fairly straightforward that an AI generated voice is ‘artificial,’ there is always the worry that a judge will not want to apply a statute written in 1991 to something new in 2024.”

“But because Congress delegated the authority to interpret the statute to the FCC, the FCC can render this common sense ruling and remove any doubt,” Feld added.

The FCC’s action will give state attorneys general nationwide a new lever to use when pursuing and ultimately charging people behind junk calls, FCC Chairwoman Jessica Rosenworcel said in a statement.

The deployment of AI-generated voice-cloning robocalls has “escalated” in the last few years, the FCC said in a press release, noting that in November it opened a “Notice of Inquiry” to “build a record on how the agency can combat illegal robocalls and how AI might be involved.”

During that inquiry, the agency investigated how AI can be used for robocall scams by mimicking voices known to recipients, but also looked into how AI might bolster pattern recognition, allowing authorities to more easily recognize illegal robocalls. The TCPA provides the FCC with the ability to fine robocallers through civil enforcement actions. It also allows consumer lawsuits, enables carriers to block the calls, and hands state officials a new tool for enforcement.

More than 26 state attorneys general recently wrote to the FCC expressing support for the new rule, the press release said.

On Wednesday, Commerce Secretary Gina Raimondo announced new appointments to a National Institute of Standards and Technology (NIST) institute dedicated to developing science-based guidelines for AI, and on Thursday the agency announced the creation of a consortium of some 200 companies and organizations within the industry to support “the development and deployment of safe and trustworthy artificial intelligence.”

The forthcoming U.S. Artificial Intelligence Safety Institute will be charged with “laying the foundation for AI safety across the world,” according to an update posted on NIST’s website. “This will help ready the U.S. to address the capabilities of the next generation of AI models or systems, from frontier models to new applications and approaches, with appropriate risk management strategies.”

  • PineRune@lemmy.world · 10 months ago

    If pre-recorded robocalls were made illegal in 1991, and I’m still getting them in 2024, I don’t have much faith that making AI-robocalls illegal will change anything.