Trolls, clout-chasers, and the merely clueless are flooding social media with clips from a military video game, passing them off as footage of the unfolding conflict in Gaza. The fake videos come as misinformation about the conflict overflows on X, the platform formerly known as Twitter.
Following Saturday’s large-scale attack by Hamas militants on Israeli targets, including the killing and hostage-taking of civilians, misleading footage purporting to show the military escalation of the conflict flooded social media — including a slew of clips from Arma 3, a hyperrealistic open-world combat video game that allows users to customize gaming scenarios.
One clip, depicting a helicopter being shot down by shoulder-fired missiles, received over 2.5 million impressions on X. “NEW VIDEO : Hamas fighters shooting down Israel war helicopter in Gaza,” the user caption read. Another Arma 3 clip depicting helicopters being shot down garnered over 8.5 million impressions. Both posts received Community Notes tags clarifying their source, but were not removed from the platform.
Two Arma 3 clips, both with more than 3 million views, were posted to TikTok by an account that frequently posts clips from the game in a deceptive manner. One video, depicting a cluster of fighter jets flying amidst a chorus of air raid sirens and exploding munitions, was captioned “BREAKING: A large Israeli offensive is underway in retaliation for Hamas attacks.” A disclaimer that the video had been “filmed with a digital combat simulator” was tucked away at the bottom of its description — which is not visible unless a user selects an option to unfurl it — below the caption and a series of more than 20 hashtags.
The TikTok post was later cross-posted on X by Britain First party leader Paul Golding, who then deleted it after being informed it was fake. Despite having been duped once, Golding posted a second video from Arma 3 with the caption “Israel is massively stepping up its retaliation.” The clip has gathered more than 260,000 impressions since Sunday and has not been flagged with a Community Note.
Since its release in 2013, Arma 3 has become fodder for troll accounts attempting to pass off high-definition outtakes of gameplay or cut-scenes as combat footage from real-world conflicts. In October of last year, Arma 3 videos depicting surface-to-air combat, missile launches, and drone strikes were falsely portrayed on social media as combat footage from Russia’s invasion of Ukraine. In 2018, Russian state media mistakenly aired footage from the game, claiming it showed a Russian attack on a Syrian military convoy. In August of last year, a post containing Arma 3 content claimed to depict an attack by the Chinese military against Taiwan.
Bohemia Interactive, the Czech independent game development company that produced Arma 3, did not immediately respond to a request for comment from Rolling Stone. In November of last year, the company addressed the misuse of its content in a blog post.
“Arma 3 is more than just a military simulation game, it is a unique open sandbox platform,” Pavel Křižka, Bohemia’s PR Manager, wrote. “This means that players of Arma 3 can recreate and simulate any historic, present, or future conflict in great detail. … This unique freedom of the Arma 3 platform comes with a downside: videos taken from Arma 3, especially when the game is modified, are quite capable of spreading fake news.”
“We’ve been trying to fight against such content by flagging these videos to [social media] platform providers…but it’s very ineffective,” Křižka added. “We found the best way to tackle this is to actively cooperate with leading media outlets and fact-checkers (such as AFP, Reuters, and others), who have better reach and the capacity to fight the spreading of fake news footage effectively.”
The proliferation of fake imagery from the conflict in Gaza, along with other bogus content, is straining an already weakened moderation regime on X, allowing misinformation to run rampant, according to some veterans of the company’s predecessor, Twitter.
When reached for comment, X responded that they were “busy now” and to “check back later.”
“It’s nothing like the days of old. The level of disinformation is astonishing,” says one former Twitter employee familiar with the platform’s prior moderation process. “There are no checks in place to counter the disinformation, no policy or path to enforce against it.”
Experts say that a host of changes at X have turned the platform into fertile ground for misinformation about the conflict.
“It’s really because the way in which Twitter rewards its power-users for sharing content has changed significantly with the changes Elon Musk introduced to Twitter. Before, Twitter would hand out verification to news organizations and individuals who had a proven track record for producing reliable information, along with significant public figures,” says Eliot Higgins, founder of the open source investigative outlet Bellingcat.
“Now, with Musk’s changes, anyone can get a blue tick, giving multiple benefits with regards to how visible their posts become, but with those blue ticks being utterly meaningless in terms of the value of the information being shared. In fact, I would go as far to say that at this stage having a blue tick is generally an indication of low value information,” he adds.
Under its previous ownership, Twitter often erred on the side of caution during international conflicts. During the 2021 war in Ethiopia’s Tigray region, which saw thousands of civilians killed, the company halted advertisements in the region and stepped up moderation efforts.
But after Musk’s takeover, Twitter debuted a new ad revenue-sharing program that allows some high-profile accounts to monetize their content. While the platform has long been a target for clout-chasing and ideologically motivated misinformation, the revenue-sharing program unleashed a new class of accounts that are low on verifiable information and high on sensationalism in search of revenue-generating clicks.
Over the weekend, X owner Elon Musk promoted two accounts, “WarMonitors” and “sentdefender,” both with long records of posting bogus “news” and, in the case of WarMonitors, antisemitic content. Musk later deleted the post mentioning them.
“A lot of these accounts make no effort to validate if the sources are correct, so they are more prone to spread misinformation, and because a lot of them buy blue ticks they get an extra boost from Musk’s new Twitter,” says Higgins. “The accounts Musk promoted really fall into this category, and I also don’t think it’s a coincidence that the two accounts Musk recommended were both signed up to Twitter’s subscription service.”