The board’s review of the issue was sparked by a video posted last June by a Facebook account based in the Philippines describing itself as a news source.

It was one of a string of fake AI videos posted to social media after the conflict began, with content that was either pro-Israel or pro-Iran, which quickly collected at least 100 million views, according to a BBC analysis at the time.

Although the Facebook video was AI-generated and depicted events that were not real, and Meta received several user complaints about it, the company neither labeled the video as AI-generated nor removed it.

It wasn’t until a Facebook user appealed directly to the Oversight Board, and the board took up the issue, that Meta even responded to concerns, according to the board.

The company then claimed the video, which garnered almost 1 million views, did not require any kind of label and did not need to be taken down because it did not “directly contribute to the risk of imminent physical harm.”

That is too high a bar for labeling AI-generated content, particularly when the subject is armed conflict, the board said Tuesday, ruling that the video should have received a “high risk AI label.”

“Meta must do more to address the proliferation of deceptive AI-generated content on its platforms… so that users can distinguish between what is real and fake,” it said.

In its statement, Meta said that it would abide by the board’s suggestions the next time it encounters “identical” content that is also “in the same context” as the video the board reviewed.