Meta revealed that it had identified "likely AI-generated" content being used deceptively on its Facebook and Instagram platforms.
This included comments praising Israel's handling of the war in Gaza, which were published below posts from global news organizations and U.S. lawmakers. The findings were part of Meta's quarterly security report.
The social media giant disclosed on Wednesday that the deceptive accounts posed as Jewish students, African Americans, and other concerned citizens, targeting audiences in the United States and Canada.
Meta attributed this disinformation campaign to STOIC, a political marketing firm based in Tel Aviv.
While Meta has encountered AI-generated profile photos in influence operations since 2019, this report marks the first instance of text-based generative AI being identified in such campaigns since the technology became prominent in late 2022.
Researchers have expressed concerns that generative AI, capable of rapidly and inexpensively creating human-like text, images, and audio, could enhance the effectiveness of disinformation campaigns and potentially influence elections.
During a press call, Meta security executives said they had promptly dismantled the Israeli campaign and believed the new AI technologies had not hindered their ability to detect and disrupt influence networks, which are coordinated efforts to propagate specific messages.
"There are several examples across these networks of how they use likely generative AI tooling to create content. Perhaps it gives them the ability to do that quicker or to do that with more volume. But it hasn't really impacted our ability to detect them," said Mike Dvilyanski, Meta's head of threat investigations.
The executives also said they had not encountered AI-generated images of politicians realistic enough to be mistaken for genuine photos.