The Oversight Board, Meta's quasi-independent policy council, has initiated inquiries into Meta's management of explicit, AI-generated images on its social platforms.
Oversight Board Co-Chair Helle Thorning-Schmidt said the investigations focus on two separate cases, one involving Instagram in India and the other involving Facebook in the U.S.
In a statement on Tuesday, Thorning-Schmidt added that both platforms have since removed the offending media.
However, the individuals targeted by the AI-generated images are not named "to avoid gender-based harassment," according to the statement.
The Oversight Board, which intervenes in cases concerning Meta's moderation decisions, said it will publish its full findings and conclusions at a later date.
In its statement, however, the Board offered further details about its investigation. The first case involves a user who reported an AI-generated nude image of an Indian public figure on Instagram, flagging it as pornography.
Despite the report, Meta failed to promptly remove the image, and subsequent appeals were automatically closed without further review.
Only after the user appealed to the Oversight Board did Meta finally act to remove the content for violating its community standards on bullying and harassment.
In the second case, involving Facebook, an explicit AI-generated image resembling a U.S. public figure was posted to a Group focused on AI creations.
The social network took the image down promptly because it had already been added to a Media Matching Service Bank under the category of "derogatory sexualized photoshop or drawings."
When questioned about the selection of a case where the company successfully removed explicit AI-generated content, the Oversight Board explained that it chooses cases emblematic of broader issues across Meta's platforms.
The board added that it aims to assess the global effectiveness of Meta's policies and processes through these cases.
In response to the Oversight Board's actions, Meta said it had removed both pieces of content. However, the company did not address its failure to remove the Instagram content promptly, nor did it disclose how long the image remained on the platform.
Meta said it uses a combination of artificial intelligence and human review to detect sexually suggestive content, and that it does not recommend such content in places like Instagram Explore or Reels recommendations.
The Oversight Board has invited public comments on the matter, addressing concerns about deepfake pornography, the prevalence of such content in regions like the U.S. and India, and potential shortcomings in Meta's approach to detecting AI-generated explicit imagery.
The board will consider these comments alongside its investigations and publish its decisions in the coming weeks.