The FTC bans AI impersonations of individuals — and unveils greater powers to win stolen money back
The Federal Trade Commission (FTC) has moved to ban the use of AI tools to impersonate individuals, and has announced greater powers to recover stolen money from scammers.
The agency said that it is “taking this action in light of surging complaints around impersonation fraud, as well as public outcry about the harms caused to consumers and to impersonated individuals.”
The rise of public generative AI tools such as ChatGPT has made it easier for cybercriminals to spoof brands and organizations with greater accuracy. Fake images, voices and videos featuring high-profile figures can be generated in moments; these are known as deepfakes, and they have been proliferating at a worrying rate.
New powers
The FTC also said that it is “seeking comment on whether the revised rule should declare it unlawful for a firm, such as an AI platform that creates images, video, or text, to provide goods or services that they know or have reason to know is being used to harm consumers through impersonation.”
FTC Chair Lina M. Khan added that the agency wants to expand its proposed impersonation rule – which will now cover individuals, not just governments and businesses – in order to “[strengthen] the FTC’s toolkit to address AI-enabled scams impersonating individuals.”
The commission said that it is making these expansions in response to feedback from the public to its previous proposals, as comments made “pointed to the additional threats and harms posed by impersonation of individuals.”
The FTC claims that the expansion “will help the agency deter fraud and secure redress for harmed consumers.”
It has also finalized the Government and Business Impersonation Rule, which arms the agency with stronger tools to fight scammers who abuse AI to spoof real entities.
The FTC will now be able to file cases directly in federal court to force cybercriminals to return money made from impersonation. The agency believes this is a significant step, as it says a previous Supreme Court ruling (AMG Capital Management LLC v. FTC) “significantly limited the agency’s ability to require defendants to return money to injured consumers.”
Threat actors who use official logos, email and web addresses, or imply a false affiliation with businesses or government agencies, can now be taken to court by the FTC to “directly seek monetary relief.”
The commission approved the final rule in a 3-0 vote; it will be published in the Federal Register.
MORE FROM TECHRADAR PRO
- These are the best identity theft protection services around
- Deepfake threats are on the rise – new research shows worrying rise in dangerous new scams
- This could be the most expensive video call ever — “Deepfake CFO” tricks employee into handing over $25m to scammers