In this context, Israel has, since the early 2010s, invested in multilingual digital content aimed at influencing external audiences and consolidating its political and security narratives. These tools have also been incorporated into broader media and psychological operations campaigns.
This deployment has extended to the informational and psychological confrontation with Iran. Here, the objective is not limited to countering Tehran’s messaging or responding to military developments. Rather, it involves shaping online debates, influencing trends in Iranian public opinion, and creating political effects that serve strategic goals beyond the battlefield.
The Digital Influence Ecosystem: Frameworks and Tools
The digital environment has become the primary arena in which political conflict is reproduced in public perception. Influence no longer depends mainly on direct messaging or traditional media. Instead, it exploits the structural properties of social platforms: algorithms, interaction patterns, and networked communities. These features are used to manage information, steer attention, and reorder priorities.
This shift changes the nature of influence from straightforward persuasion to the management of the “cognitive architecture” of public opinion. Through algorithmic personalization, network manipulation, and digital intermediaries, platforms have turned into strategic spaces for information warfare—spaces where the aim is to govern what people see, what they feel, and what they believe is socially dominant.
Algorithms are central to this process. They frequently privilege provocative or preference-aligned content, which contributes to information bubbles and echo chambers. Several theoretical frameworks help explain these outcomes:

- Digital agenda-setting: the reordering of what appears important.
- Media framing: how events are interpreted.
- Misinformation diffusion.
- The digital spiral of silence: the suppression of minority views.
- Closed networks and insulated communities.
- Emotional mobilization and collective identity formation.
- The engineering of digital public opinion, in which confirmation bias is intensified and dominant narratives are reinforced.
One of the most significant operational tools in this environment is astroturfing: organized campaigns that simulate grassroots support to amplify a fabricated public mood. These operations use networks of human-operated or semi-automated accounts as part of computational propaganda. The goal is to confuse the public sphere, manipulate algorithms, and promote selected narratives through artificial visibility.
Influencers also function as key intermediaries, often embedding political messaging inside cultural or entertainment frameworks. In many cases, these are targeted campaigns, sometimes sponsored, designed to reach specific segments and reshape the broader perceptual climate under the banner of “content” rather than overt political communication.
Time-based patterns often reveal coordination. Sudden spikes in posting, unusual surges in engagement, and abnormal geographic or nocturnal activity can indicate centrally managed operations reacting to political or military developments. Emotional and narrative manipulation adds another layer: rather than focusing on facts alone, campaigns seek to manage collective emotions such as fear, ridicule, distrust, and skepticism, reshaping group perception gradually and cumulatively. The use of sentiment analysis tools and tone monitoring strengthens this approach.
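The timing signals described above can be operationalized with a simple rolling-baseline test. The sketch below, a minimal illustration using synthetic data and arbitrary thresholds (not parameters from any study cited here), flags hours whose posting volume spikes far above the recent baseline, the kind of sudden surge that can indicate centrally managed activity.

```python
# Illustrative sketch: flagging coordinated posting bursts with a rolling z-score.
# Window size and threshold are arbitrary choices for demonstration only.
from statistics import mean, stdev

def burst_hours(hourly_counts, window=24, z_threshold=3.0):
    """Return indices of hours whose post volume deviates sharply
    from the preceding window's baseline."""
    flagged = []
    for i in range(window, len(hourly_counts)):
        baseline = hourly_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (hourly_counts[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Synthetic hourly series: steady background volume with one injected spike.
series = [10, 12, 9, 11, 10, 13, 8, 10, 11, 9, 12, 10,
          11, 9, 10, 12, 13, 10, 9, 11, 10, 12, 11, 10,
          9, 11, 300, 12, 10]
print(burst_hours(series))  # the spike at index 26 stands out
```

In practice, analysts combine such volume tests with the geographic and nocturnal-activity signals mentioned above, since no single metric is conclusive on its own.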
A further development lies in hybrid human–automation operations. Coordinated campaigns increasingly combine low-credibility human accounts with partial automation to maximize reach and synchronize engagement peaks. This hybrid structure makes detection harder and amplifies cumulative impact—an increasingly common feature of next-generation computational propaganda.
Taken together, these tools demonstrate that platforms are not neutral spaces. They are algorithmically structured environments through which actors can manufacture narrative dominance and treat public opinion as a target of cognitive management. The contest over perception has become a strategic arena in its own right.
How Israel Employs Digital Platforms Toward Iran
Israel has developed an advanced model for using digital platforms as instruments of strategic influence. This model combines digital diplomacy, algorithmic manipulation, and coordinated campaigns that exploit transnational social networks. The objective is to shape Iranian public opinion and manufacture a form of “digital opposition” aligned with Israel’s broader strategy of containing—and potentially undermining—the political order in Tehran.
Within this approach, low-credibility accounts and algorithmic dynamics are used to amplify selected narratives, particularly during crises. Researchers in political communication have described this as the exploitation of platform affordances—using what platforms make possible, structurally and technically, to push discourse in desired directions.
The Social Forensics Findings: Disinformation, Harassment, and the Monarchist Push
A Social Forensics report published in July 2023, funded by the National Iranian American Council (NIAC), documented what it described as disinformation, defamation, and intimidation campaigns targeting members of the Iranian diaspora in the United States and Europe. The report also noted efforts to amplify monarchist calls and promote the perception of rising royalist momentum.
According to the report, targets included NIAC, its staff, supporters, and additional figures such as journalists, academics, and independent activists with no formal link to the organization. The study observed that online activity surged in connection with the “Woman, Life, Freedom” protest wave following the death of Mahsa Amini in September 2022, a moment that reshaped Iran’s digital environment by introducing new voices and intensifying polarization. The report also referred to high-profile actors, including Emily Schrader, as part of the broader digital discourse targeting NIAC.
Empirically, the report identified approximately 213,000 Twitter accounts that generated roughly 1.5 million tweets over a three-month period. Many accounts had limited audiences: 48.3% had fewer than 100 followers. The report also noted that 46.3% of the accounts were created during 2021–2022, and 23.7% were created after Mahsa Amini’s death. The operational logic suggested what the report described as a pattern of “decentralized leadership,” which lowers risk and complicates attribution.
Network analysis of follow relationships among 53,000 accounts identified seven communities, including a “core monarchist community” and a “core influential opposition community.” The report described the use of monarchist symbols to create the appearance of widespread support for Reza Pahlavi, including the use of sock-puppet tactics. It also noted that some accounts appeared to continue to influence the space even after suspension, suggesting sustained coordination.
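The community structure the report derives from follow relationships can be illustrated with a deliberately simplified sketch. Published studies typically use modularity-based community detection; the version below, assuming only hypothetical account names, instead groups accounts by connected components of mutual-follow ties, which conveys the basic idea with standard-library code.

```python
# Illustrative sketch: grouping accounts into communities via mutual-follow ties.
# Real analyses use modularity-based methods; connected components over mutual
# follows is a deliberate simplification. All account names are hypothetical.
from collections import defaultdict

def mutual_follow_communities(follows):
    """follows: dict mapping account -> set of accounts it follows.
    Returns one community (set) per connected component of the
    undirected mutual-follow graph."""
    neighbors = defaultdict(set)
    for a, targets in follows.items():
        for b in targets:
            # Keep an edge only when both accounts follow each other.
            if a in follows.get(b, set()):
                neighbors[a].add(b)
                neighbors[b].add(a)
    seen, communities = set(), []
    for node in follows:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(neighbors[n] - comp)
        seen |= comp
        communities.append(comp)
    return communities

# Hypothetical graph: two mutual-follow clusters plus one one-way tie between them.
follows = {
    "a1": {"a2", "b1"}, "a2": {"a1"},
    "b1": {"b2"}, "b2": {"b1"},
}
print(mutual_follow_communities(follows))  # [{'a1', 'a2'}, {'b1', 'b2'}]
```

Note that the one-way follow from "a1" to "b1" does not merge the two clusters: requiring reciprocity is one common way to filter out indiscriminate mass-following behavior.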
The report further documented artificial amplification mechanisms: mass retweeting, follower inflation, and abnormal daily mention volumes, up to 120,000 mentions per day, and repeated changes in account names designed to obscure identities. It also cited more than 3,434 inauthentic accounts using crown imagery to project a manufactured popularity for monarchist messaging.
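The per-account signals listed above (retweet-heavy output, inflated or tiny audiences, abnormal mention volumes, repeated renames) are often combined into simple heuristic flags. The sketch below illustrates that pattern; the thresholds are arbitrary illustrations, not the report's actual criteria.

```python
# Illustrative sketch: flagging accounts on amplification-related signals.
# Thresholds are arbitrary demonstrations, not criteria from any cited report.
def inauthenticity_signals(account):
    """Return the list of heuristic flags an account triggers."""
    flags = []
    if account["retweet_ratio"] > 0.8:
        flags.append("retweet_heavy")
    if account["followers"] < 100:
        flags.append("low_audience")
    if account["age_days"] < 90:
        flags.append("recently_created")
    if account["name_changes"] > 3:
        flags.append("frequent_renames")
    return flags

# Hypothetical account profile exhibiting all four signals.
suspect = {"retweet_ratio": 0.95, "followers": 12,
           "age_days": 30, "name_changes": 5}
print(inauthenticity_signals(suspect))
# ['retweet_heavy', 'low_audience', 'recently_created', 'frequent_renames']
```

Each flag is weak on its own (many genuine new users have few followers); it is the co-occurrence of several signals across a coordinated cluster that supports an inauthenticity assessment.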
Signals of Linkage: The @IsraelPersian Account and Pattern-Based Evidence
The report pointed to indicators it described as consistent with Israeli linkage. These included follow relationships between the official Israeli Persian-language account @IsraelPersian and a large number of accounts assessed as inauthentic or deceptive, as well as overlapping participation in amplification activities.
Attribution remains a complex and contested process, and the report did not claim definitive proof at the level of direct command and control. However, it argued that the quantitative scale and behavioral patterns support the hypothesis of partial official oversight or alignment, particularly since the surge of digital activity accompanying the Mahsa Amini protests.
Four Operational Models of Israeli Digital Activity Targeting Iran
This section examines distinct patterns and cases of Israeli digital activity aimed at influencing Iranian public opinion and destabilizing internal perception.
The @IsraelPersian Account on X
According to Social Forensics (May 14, 2023), the account @IsraelPersian had approximately 456,000 followers, and 19.4% of its output consisted of retweets. Among the most prominent posts were those related to Reza Pahlavi’s visit to Israel in April 2023.
The account followed 345 accounts, including @SAvginsaz, identified as the Persian-language media director at Israel’s foreign ministry. Follow-mapping analysis grouped the followed accounts into three main clusters: Persian-language media, Israeli government actors, and Iranian monarchist networks. The report also observed the presence of inauthentic accounts that appeared to inflate engagement and reinforce messages—suggesting an overlap between formal digital diplomacy and artificial networks, especially after Mahsa Amini’s death.
The “Prison Break” Campaign During the 12-Day War
A Citizen Lab study published on October 2, 2025, reported that Israel exploited the Iranian digital space during the conflict through a coordinated operation involving more than 50 fake accounts, managed with professional discipline and supported by AI tools.
The study described characteristics such as structured activity during working hours, heavy reliance on web posting, stolen profile images, repeated or templated content, and account creation waves dating back to 2023, followed by intensified output during the war. The campaign circulated fabricated videos alleging an Israeli strike on Evin Prison, alongside calls for protest and incitement aimed at economic and social disruption.
The network promoted a broader narrative of regime collapse through posts about corruption, economic breakdown, and water and energy crises. The operational goal, according to the analysis, was destabilization—contributing to conditions that could facilitate regime weakening or change. Notably, the study reported that two days into the war, accounts urged Iranians to withdraw money from ATMs, claiming the regime was “stealing their savings.” AI-generated videos depicting long queues at ATMs spread widely.
The operation also revived a symbolic protest practice under the hashtag #8PMCry, encouraging citizens to chant “Death to Khamenei” from their balconies. This was reinforced through edited videos presented as evidence of mass participation.
In parallel to this campaign, additional Israeli and global networks reportedly promoted a narrative of imminent collapse in Tehran. Posts circulated claims of prison escapes and border congestion, often using old footage or fabricated AI-generated material. Some accounts attempted to spread the claim that an internal revolution was underway, while others promoted the idea of popular support for Israeli intervention through fabricated videos showing Iranians waving Israeli flags and chanting in Persian. Network analyses described an organized multilingual operation operating in Hebrew, Persian, Arabic, and English, intensifying between June 14 and 17, 2025, to amplify a destabilization narrative.
Fabricating Iranian Support for an Israeli Minister’s Account
Within the broader escalation of digital influence warfare tied to Iran, the Israeli newspaper Haaretz reported coordinated use of fake accounts to promote Israeli political content. According to the report, hundreds of accounts believed to be operating from within Iran amplified posts by Israel’s Minister of Innovation Gila Gamliel, especially content attacking Tehran or endorsing Reza Pahlavi.
The engagement pattern appeared abnormal, concentrated in the first minutes after publication, and it did not translate into similar momentum on other platforms. This strengthened the hypothesis of coordinated manipulation rather than organic Iranian engagement. The report also noted that many of these accounts were created during or after the 2022 protest wave, suggesting purpose-built influence networks designed to distort public perception and push platform algorithms through manufactured support.
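The front-loaded engagement pattern described here can be quantified with a simple metric: the share of all engagement events arriving within the first few minutes after publication. The sketch below uses hypothetical timestamps and an arbitrary cutoff to contrast a coordinated burst with an organic accumulation curve.

```python
# Illustrative sketch: measuring how front-loaded a post's engagement is.
# An organic post accumulates engagement over hours; a coordinated push
# concentrates it in the first minutes. All data here is hypothetical.
def early_engagement_share(event_minutes, cutoff=5):
    """Fraction of engagement events arriving within `cutoff` minutes
    of publication. event_minutes: minutes elapsed per event."""
    if not event_minutes:
        return 0.0
    early = sum(1 for m in event_minutes if m <= cutoff)
    return early / len(event_minutes)

coordinated = [1, 1, 2, 2, 3, 3, 4, 4, 5, 90]            # burst, then silence
organic     = [2, 15, 40, 75, 120, 200, 340, 500, 700, 900]

print(early_engagement_share(coordinated))  # 0.9
print(early_engagement_share(organic))      # 0.1
```

Cross-platform comparison adds a second check: manufactured momentum on one platform, as the report notes, typically fails to reproduce elsewhere, whereas organic virality usually spills across platforms.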
Direct Digital Recruitment of Iranians
Israel’s approach, as described in the reporting, does not rely solely on inauthentic accounts and AI-generated amplification. It also includes attempts at direct digital recruitment. Media reports referenced a campaign known as “Blue Message,” described as an effort to recruit Iranians digitally in support of regime-change goals.
The campaign reportedly used sponsored Google advertisements across 19 countries, targeting specific segments—such as families of Iranian nuclear engineers in the diaspora—and redirecting respondents to Google Forms and Telegram channels. The stated purpose was to solicit assistance for Israeli intelligence (Mossad) and encourage participation in actions framed as contributing to the overthrow of the Iranian system. Emotional messaging, as well as promises of protection and rewards, were reportedly used to increase compliance and participation.
Concluding Findings
The study concludes that Israel has, in recent years, developed an integrated approach to managing its conflict with Iran by shifting part of the confrontation into the informational domain. Digital platforms have become operational tools for reshaping public perception, manufacturing political trends, and producing an internal climate of instability—alongside military, security, and diplomatic pressure.
Available indicators suggest that a core pillar of this approach is the construction of a “digital opposition environment” through networks of inauthentic accounts impersonating Iranian citizens. These networks reportedly operate through coordinated patterns: synchronized timing, repeated content and hashtags, non-identifying imagery, and the use of AI tools. The effect is to create the perception of growing dissent, even when the momentum is controlled and artificial.
The danger of these operations becomes more severe when they coincide directly with military activity—as illustrated by the Evin Prison episode during the June 2025 conflict. Here, digital action appears not merely reactive but integrated into event management itself: early claims, fabricated content, and guidance encouraging specific real-world behavior. This indicates that the digital sphere has become a structural component of modern conflict management.
The study also suggests a transition from delegitimization to political engineering: beyond undermining the regime, campaigns promote “alternatives” through intensive amplification of particular symbols, such as Reza Pahlavi, attempting to impose a digitally constructed legitimacy via algorithms and artificial visibility. This path extends further into direct recruitment via sponsored ads, data collection, and private communication channels—signaling escalation from cognitive influence toward deeper societal penetration.
Overall, the paper identifies a coherent operational pattern: network preparation, crisis-time activation, narrative amplification, the manufacture of alternatives, and a subsequent shift toward recruitment and penetration. This points to an organized system of influence rather than spontaneous digital activism. The study concludes that countering such threats requires treating the digital domain as a strategic arena of conflict, developing early-warning monitoring tools capable of detecting inauthentic networks, and dismantling manufactured narratives before they harden into “facts” within public consciousness, especially given the ease with which such tools can be exported to other regional arenas.
*Heba Zain (Senior Researcher, Egyptian Center for Strategic Studies, ECSS).
*Yasmin Mahmoud (Researcher, Egyptian Center for Strategic Studies, ECSS).