The next front in the war over Israel is not on a battlefield, a campus, or even social media. It is inside the machines increasingly trusted to explain the world to us.
AI, particularly large language models (LLMs), is fast becoming the primary gateway through which people access information. Ask a question, and the answer arrives instantly, neatly packaged, confident in tone, and often treated as authoritative.
For a generation that no longer scrolls through search results or reads long articles, AI is not just a tool. It is a major source of truth.
That should concern anyone who cares about facts. It should alarm anyone who understands how narratives around Israel have long been contested, distorted, and weaponized.
Unfortunately, and predictably, the actors who have spent years shaping narratives on social media, Wikipedia, and search engines have already turned their attention to AI, and they are learning fast.
I have seen this playbook before. For over a decade, coordinated networks, ranging from ideological activists to state-backed operations, have worked to influence digital systems.
Wikipedia entries are obsessively edited to frame events relating to Israel through a particular ideological lens. Search engine optimization (SEO) tactics are used to ensure that certain interpretations dominate the top results on Google. Social media campaigns flood platforms with emotionally charged, often misleading content designed to go viral before the facts can catch up.
Now, those same inputs are feeding the training data of LLMs like ChatGPT, Claude, Perplexity, and Copilot.
If a model is trained on a distorted information environment, it will reflect that distortion: polished, amplified, and stripped of visible bias. A slanted Wikipedia entry does not look like propaganda to an algorithm. A coordinated narrative push on social media does not register as manipulation. It simply becomes “data.”
What results is something far more insidious than a biased tweet or an inaccurate article. It is the quiet normalization of skewed narratives at scale.
Ask an AI model about the Israel-Palestinian conflict, and its answer may be shaped not only by verified facts, but by the volume and persistence of particular narratives embedded in its training data.
Over time, repetition becomes legitimacy. Language grows softer toward the Palestinian narrative, harsher toward Israel. Context is selectively expanded or omitted. Moral clarity blurs.
This is not always the product of malicious intent by the companies building these systems. More often it is the result of a fundamental vulnerability: AI systems absorb the internet as it is, not as it should be.
The internet, as we know, has been a raging battleground for years.
Concerning examples are already well documented. Researchers have shown instances where AI systems provide incomplete historical context about Israel, echo contested or fringe claims without adequate qualification, or frame complex security realities in ways that flatten causality and responsibility.
In some cases, widely debunked narratives resurface with an air of neutrality simply because they exist in the source material.
More concerning still is the ease with which these systems can be manipulated in real time.
Prompt engineering, coordinated usage patterns, and feedback loops can nudge models toward certain outputs. As AI becomes more interactive and continuously updated, the risk of narrative gaming only increases.
In other words, this is not a static problem. It is a dynamic front, yet much of the response remains stuck in an earlier era.
There are still those who believe that winning the narrative war means improving SEO rankings or securing favorable edits on Wikipedia. That was yesterday’s fight. Important, yes, but no longer enough.
Today, the question is not just what appears on the first page of Google. More important is what answer appears when there is no page at all, only a single, synthesized response delivered by the AI and presented as the truth.
If that answer is wrong, incomplete, or subtly biased, the correction may never come.
This is why the stakes are so high.
For Israel and its supporters, this is not simply a communications challenge; it is a strategic imperative.
The integrity of the historical record, the accuracy of real-time information, and the framing of complex moral and security issues are all now being organized and synthesized by systems that learn from an environment already saturated with bias and manipulation.
So, what is to be done?
First, facts must be produced, documented, and disseminated with greater rigor and accessibility than ever before. High-quality, verifiable information is not just a public good; it is the raw material from which AI systems learn. If credible sources are scarce or drowned out, the vacuum will be filled by those with louder, more coordinated narratives.
Second, engagement with AI developers cannot be optional. Governments, institutions, and civil society organizations must actively work with the companies building these systems to ensure that training data is diverse, credible, and resistant to manipulation. Transparency around sources and methodologies should not be seen as a luxury, but as a necessity.
Third, there must be a recognition that this is a long-term contest. Just as adversaries have invested years in shaping digital narratives, so too must those committed to factual integrity invest in the ecosystems that will define the next decade of information consumption.
Finally, and perhaps most importantly, there must be a refusal to throw in the towel.
We have seen what happens when digital spaces are ceded. Wikipedia became a battleground precisely because it was left uncontested for far too long. SEO was gamed because bad actors understood its mechanics early on and aggressively exploited them.
AI cannot be the next arena where that mistake is repeated, because once narratives are embedded at the level of machine-generated knowledge, they become harder to detect, harder to challenge, and far harder to reverse.
The war of narratives has not disappeared. It has evolved, and if we are not present where the next generation seeks its answers, we should not be surprised when those answers are shaped against us.