Even If No Intent to Harm

The U.S. Federal Communications Commission voted unanimously to ban all robocalls that use AI-generated voices, even when there is no intent to harm. The ruling takes effect immediately and gives state attorneys general new tools to go after perpetrators, according to the FCC, whose commissioners come from both parties. Before the ruling, authorities could pursue people only when the use of AI voices in robocalls caused actual harm, such as fraud.

Now, the use of AI itself to create deepfake voices is illegal, whether or not it harms the consumer. "This action now makes the act of using AI to generate the voice in these robocalls itself illegal, expanding the legal avenues" for law enforcement, the FCC said in a statement. Last month, 26 states wrote to the FCC urging the agency to ban the use of AI by telemarketers.

Pennsylvania Attorney General Michelle Henry said in a statement that advancing technologies must not be “used to prey upon, deceive, or manipulate consumers.”

Biden and Taylor Swift deepfakes

The FCC ruling comes after President Biden's voice was cloned and sent to New Hampshire voters in late January, urging them to skip the Democratic presidential primary and instead save their votes for the general election. Voice deepfakes have been used to imitate celebrities, politicians and family members.

Recently, explicit deepfake images of singer Taylor Swift began appearing on X (formerly Twitter).

The social media site put a temporary ban on searches for these images, but at least one was viewed 47 million times. On Jan. 30, a bipartisan bill was introduced in the U.S. Senate that would let victims seek civil penalties against people who distribute explicit deepfakes of them.
