US social media platform X incentivised the spread of inflammatory misinformation in the wake of the Bondi Beach shootings, according to a digital media expert.

In the hours after the attack on a Jewish celebration — in which 15 innocent people were killed and dozens were injured — social media was flooded with false and misleading claims.

Sydney man Naveed Akram said he received death threats after he was wrongly identified on X as one of the gunmen, with some posts including personal information like his university. 

Mr Akram, who has the same name as one of the shooters, posted a video on the Facebook page of the Pakistan consulate in Sydney, pleading with people to stop spreading the misinformation.

“It was a real nightmare for me, seeing photos of my face shared on social media, wrongfully being called the shooter,” the Pakistani national told the ABC.

“Friends came with me to the police station to report it, but the police said they couldn’t do anything and told me just to deactivate my accounts.”

While some posts and articles misidentifying him as one of the shooters were taken down after he flagged them, others remain online on X and other platforms.

“I am still shaking. This has put me at risk, and also my family back home in Pakistan [at risk],” he said.

“My mum broke down and feels in danger.”

Soon after the attack, X accounts began sharing a post misidentifying a photo of Naveed Akram in a Pakistan cricket team shirt as one of the Bondi shooters. (Supplied)

Social media users also posted video of fireworks, characterising it as celebrations in the western Sydney suburb of Bankstown by “Arabs” or “Islamists”.

However, a local community organisation said the display was for Christmas celebrations.

A large number of X users shared a video of fireworks falsely claiming it was “Arabs” or “Islamists” celebrating the attack. (Supplied)

Community notes were later added to at least one of the posts, and some were deleted, but other users continued to repost and mischaracterise the video.

Other misinformation shared on X included claims that the shooters were former Israel Defense Forces members or were from Pakistan, that shootings were also taking place in other eastern suburbs, and that the tragedy was a “false flag” operation.

One of the shooters was originally from India, while the other was born in Australia.

‘Economy around disinformation’

Disinformation expert Timothy Graham said X continued to be an influential platform where “key false narratives” began to go viral before spreading more widely.

These narratives could be unintentionally misleading or wilfully deceptive, said Dr Graham, an associate professor in Digital Media at the Queensland University of Technology.

“The biggest takeaway for me really is that the platforms, X in particular, really incentivise this through their design features … unfortunately, this both propels and rewards [misleading content].”

Dr Graham said the biggest driver was X’s monetisation program, in which users get paid for engagement on their posts.

The X website says: “Earnings are calculated based on verified engagements with your posts, such as likes and replies.”

Dr Graham said that in the wake of events like the Bondi shooting, people were desperate for information.

While much of the misleading content designed to exploit that desperation was hyper-political in tone, Dr Graham said most of it was financially motivated.

“People are incentivised to share content that they know is going to get a lot of clicks irrespective of its quality, irrespective of whether it’s true or factual, simply because they can make money out of it, and this is obviously a really big issue,” said Dr Graham.

“There’s basically an economy around disinformation now.”

The conditions of X’s Creator Revenue Sharing program say “content relating to tragedy, conflict, mass violence, or exploitation of controversial political or social issues” is restricted content that “may face restricted monetization”.

However, it is unclear when those conditions are enforced.

To join the monetisation program, accounts must already have a significant amount of engagement, including 5 million “organic impressions” within the past three months and at least 500 verified followers. They must also be paid X subscribers.

The ABC has attempted to contact X for comment. 

Moderation by ‘community notes’

Meanwhile, Dr Graham said X’s “community notes” moderation system was unsuitable for divisive, fast-breaking news situations like the Bondi shooting.

Under the system, users can “collaboratively add helpful notes to posts that might be misleading”.

X says the community notes only appear when posts are rated “helpful” by people from diverse perspectives.

Tim Graham says X’s monetisation of engagement and ineffective content moderation propels the spread of false information. (Supplied)

“To identify notes that are helpful to a wide range of people, notes require agreement between contributors who have sometimes disagreed in their past ratings,” the platform’s website says.

Dr Graham said community notes worked for some content, but not for fast-breaking, polarising events, where the system required agreement between people holding strongly opposed views.

The notes ended up taking too long or were never added, he said.

“Meanwhile, they’re racking up the views. They are being reported on. They are being picked up on by [other channels].

“It’s spreading like wildfire, and you know 10, 12, 24 hours later we still don’t see any context added.”

Social media misinformation an ‘infrastructure problem’

Dr Graham said the solutions to misinformation on social media were complex and had to find a balance between free speech and protecting the public.

However, he said two important steps would be to address the incentives the platforms offer and to make their data more accessible.

“We’re living in a dark age of access to social media data,” he said.

He said stakeholders used to be able to see how much hate speech was occurring, and of what kind, as well as the levels of foreign interference.

Regulations were needed requiring platforms to share specifications of their algorithms, how they work, and what content they are boosting, he said.

Earlier this month, the European Union fined X 120 million euros ($210 million) for breaches of its Digital Services Act, including putting up “unnecessary barriers” for researchers trying to access public data.

“The European Union’s Digital Services Act faces this issue head on, and I think it’s doing a really excellent job of trying to get inroads into the platforms,” Dr Graham said.

“They need to share data with people. We need to know what’s going on.”

Dr Graham said misinformation on social media was an “infrastructure problem”.

“We need to recognise that platforms like X are now modern infrastructure, like bridges are infrastructure, like telephone wires are infrastructure,” he said.

“If there’s something problematic about those, then we need to change them; otherwise, they’re going to keep doing the same things.”