Entertainment

Meta failing to curb spread of many sexualized AI deepfake celebrity images on Facebook

Meta has removed over a dozen fraudulent, sexualized images of famous female actors and athletes after a CBS News investigation found a high prevalence of AI-manipulated deepfake images on the company’s Facebook platform. 

Dozens of fake, highly sexualized images of the actors Miranda Cosgrove, Jennette McCurdy, Ariana Grande, Scarlett Johansson and former tennis star Maria Sharapova have been shared widely by multiple Facebook accounts, garnering hundreds of thousands of likes and many reshares on the platform.

“We’ve removed these images for violating our policies and will continue monitoring for other violating posts. This is an industry-wide challenge, and we’re continually working to improve our detection and enforcement technology,” Meta spokesperson Erin Logan told CBS News in a statement emailed on Friday. 

An analysis of over a dozen of these images by Reality Defender, a platform that works to detect AI-generated media, showed that many of the photos were deepfake images — with AI-generated, underwear-clad bodies replacing the bodies of celebrities in otherwise real photographs. A few of the images were likely created using image stitching tools that do not involve AI, according to Reality Defender’s analysis. 

“Almost all deepfake pornography does not have the consent of the subject being deepfaked,” Ben Colman, co-founder and CEO of Reality Defender, told CBS News on Sunday. “Such content is growing at a dizzying rate, especially as existing measures to stop such content are seldom implemented.” 

CBS News has sought comment from Miranda Cosgrove, Jennette McCurdy, Ariana Grande and Maria Sharapova on this story. Johansson declined to comment, according to a representative for the actor.

Under Meta’s Bullying and Harassment policy, the company prohibits “derogatory sexualized photoshop or drawings” on its platforms. The company also bans adult nudity, sexual activity and adult sexual exploitation, and its regulations are intended to block users from sharing or threatening to share non-consensual intimate imagery. Meta has also rolled out the use of “AI info” labels to clearly mark content that is AI manipulated. 

But questions remain over the effectiveness of the tech company’s policing of such content. CBS News found dozens of AI-generated, sexualized images of Cosgrove and McCurdy still publicly available on Facebook even after the widespread sharing of such content, in violation of the company’s terms, was flagged to Meta. 

One such deepfake image of Cosgrove that was still up over the weekend had been shared by an account with 2.8 million followers.

The two actors — both former child stars on the Nickelodeon show iCarly, which is owned by CBS News’ parent company Paramount Global — are the most prolific targets of deepfake content among the public figures whose images CBS News has analyzed. 

Meta’s Oversight Board, a quasi-independent body that consists of experts in the field of human rights and freedom of speech, and makes recommendations for content moderation on Meta’s platforms, told CBS News in an emailed statement that the company’s current regulations around sexualized deepfake content are insufficient. 

The Oversight Board cited recommendations it has made to Meta over the past year, including urging the company to make its rules clearer by updating its prohibition against “derogatory sexualized photoshop” to specifically include the word “non-consensual” and to encompass other photo manipulation techniques such as AI.

The board has also recommended that Meta fold its ban on “derogatory sexualized photoshop” into the company’s Adult Sexual Exploitation regulations, so moderation of such content would be more rigorously enforced.

Asked Monday by CBS News about the board’s recommendations, Meta pointed to the guidelines on its transparency website, which show the company is assessing the feasibility of three of the four recommendations and implementing the fourth. But Meta noted in its statement on the site that it is currently ruling out changing the language of its “derogatory sexualized photoshop” policy to include the phrase “non-consensual,” and the company says it is unlikely to move that policy into its Adult Sexual Exploitation regulations.

Meta noted in its statement that it was still considering ways to signal a lack of consent in AI-generated images. Meta also said it was considering reforms to its Adult Sexual Exploitation policies, to “capture the spirit” of the board’s recommendations.

“The Oversight Board has made clear that non-consensual deepfake intimate images are a serious violation of privacy and personal dignity, disproportionately harming women and girls. These images are not just a misuse of technology — they are a form of abuse that can have lasting consequences,” Michael McConnell, an Oversight Board co-chair, told CBS News on Friday. 

“The Board is actively monitoring Meta’s response and will continue to push for stronger safeguards, faster enforcement, and greater accountability,” McConnell said.

Meta is not the only social media company to face the issue of widespread, sexualized deepfake content. 

Last year, Elon Musk’s platform X temporarily blocked Taylor Swift-related searches after AI-generated fake pornographic images in the likeness of the singer circulated widely on the platform and garnered millions of views and impressions.

“Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content,” the platform’s safety team said in a post at the time. 

A study published earlier this month by the U.K. government found the number of deepfake images on social media platforms expanding at a rapid rate, with the government projecting that 8 million deepfakes would be shared this year, up from 500,000 in 2023. 
