🗣️ Let’s Talk—What Are You Seeing?
📩 Reply to this email or drop a comment.
🔗 Not subscribed yet? It's only a click away.
From the Israel-Iran conflict to China's aggressive disinformation campaign against Taiwan's new submarine, information is now a primary weapon.
Even within democratic systems, as seen with a viral deepfake involving a Philippine Senator, digital manipulation is actively shaping narratives.
Yet, amidst the chaos, initiatives like the Council of Europe's hackathon offer a glimmer of proactive defence. Understanding these evolving threats isn't just about current events; it's about the integrity of our global information space.
Welcome to this week in disinformation.
Hybrid Warfare Escalates: Disinformation and Cyberattacks in the 2025 Israel-Iran Conflict

◾ 1. Context
Tensions between Israel and Iran escalated dramatically into open conflict in June 2025. Hostilities have moved beyond proxy engagements to direct military and digital confrontation.
This conflict is characterized not only by conventional military actions but also by an intense, parallel hybrid war.
◾ 2. What Happened
Iran's Information Warfare: Iranian-linked actors have launched widespread disinformation campaigns and psychological operations.
Iranian state media and affiliated channels have aggressively pushed narratives portraying Israel's strikes as failures and Iran's retaliation as successful, often using AI-generated images and videos (e.g., fake images of mass destruction in Tel Aviv or downed Israeli F-35 jets). They also leverage extensive social media botnets and fake personas to amplify pro-Iran narratives and incite panic. Furthermore, they have imposed internet disruptions nationwide within Iran to control information flow domestically.
Israel's Information Warfare: Israel has primarily focused on impactful cyber strikes targeting Iranian critical infrastructure.
In the information domain, Israel's strategy appears to involve tightly controlling its own official narrative and possibly using psychological operations to create the perception of a credible existential threat, as evidenced by selective releases of Israeli intelligence on Iranian nuclear damage and by reports of arrests of alleged Mossad spies.
◾ 3. Nature of Information
Both sides have engaged in disinformation since the conflict escalated in June 2025.
The broader online narrative distortion also involves malinformation, where genuine information is stripped of context or selectively deployed to amplify malicious intent.
These tactics are part of an explicit Hybrid Warfare strategy.
◾ 4. Character of Information
Disinformation is rapidly disseminated across various digital platforms, including SMS, messaging apps, and social media, often utilizing AI-driven tools for content generation (e.g., deepfakes of war events, fake images of destruction) and amplification via botnets and coordinated networks.
Iranian tactics specifically aim for psychological impact on the Israeli home front and international reach.
Israel's information efforts primarily revolve around official announcements, counter-narratives to Iranian disinformation, and selective leaks to influence strategic perception.
◾ 5. Impact & Information Effects Statement
The disinformation campaign, particularly from the Iranian side, is directly impacting public morale and internal stability within Israel by sowing panic and eroding trust in official communications.
Israel's cyber operations, conversely, aim to disrupt Iranian infrastructure and potentially sow internal distrust in the regime. Geopolitically, these hybrid tactics risk further escalating regional tensions by inflaming public opinion and complicating de-escalation efforts.
The pervasive nature of these hybrid attacks makes it challenging for populations to discern truth from falsehood in a high-stakes conflict.
It is assessed with high confidence that the escalation of disinformation and psychological warfare by Iran, alongside Israeli cyber operations in the Israel-Iran conflict, has heightened societal panic and undermined information integrity. This is likely intended to destabilize the adversary's home front and influence international perceptions.
◾ 6. Strategic Implications And Lessons Learnt
This conflict serves as a stark illustration of hybrid warfare, where cyberattacks and disinformation are as crucial as conventional military operations. It demonstrates that the information space is now a direct battlefield.
This implies that global stability is increasingly susceptible to cognitive manipulation.
Sources:
Radware: Hybrid Warfare Unfolded: Cyberattacks, Hacktivism and Disinformation in the 2025 Israel-Iran War
OECD.AI: AI-generated deepfakes fuel disinformation in Iran-Israel conflict
Unit 42, Palo Alto Networks: Threat Brief: Escalation of Cyber Risk Related to Iran
NewsGuard: Iranian State-Affiliated False Claims Tracker: 22 Myths about the War and Counting
Institute For The Study Of War: Iran Update, June 25, 2025
China Deploys Disinformation Against Taiwan's Indigenous Submarine

◾ 1. Context
Taiwan is actively developing indigenous defence capabilities, including its first domestically built submarine, the Hai Kun.
This initiative aims to strengthen Taiwan's self-reliance amidst escalating military pressure from China, which claims the island as its territory. Beijing consistently seeks to undermine Taiwan's sovereignty and defence efforts through various means, including information warfare.
◾ 2. What Happened
Following initial sea trials on June 17, 2025, Taiwan's Hai Kun submarine became the target of a disinformation campaign. Fake social media accounts and Chinese state-aligned media, including Voice of the Strait radio, rapidly circulated manipulated claims of hull deformation.
Taiwanese defence officials and the state-owned CSBC Corp. immediately debunked these claims, clarifying the alleged "deformation" was a standard sonar dome.
◾ 3. Nature of Information
The claims regarding the submarine's hull deformation constitute disinformation, as they were intentionally fabricated and disseminated to deceive.
This content was designed to discredit Taiwan's defence capabilities and erode public trust. Taiwanese official responses are factual, directly refuting the false claims. The sources spreading the disinformation are primarily Chinese state-affiliated media and unverified social media accounts.
◾ 4. Character of Information
The disinformation spread rapidly across social media platforms and through state-run broadcasts. Key manipulation tactics included visual distortion and the immediate propagation of false narratives following the sea trials and press release.
The speed and coordination suggest a pre-planned, orchestrated campaign, consistent with China's broader information operations and "troll farm" activity targeting Taiwan.
◾ 5. Impact & Information Effects Statement
This disinformation aimed to undermine public confidence in Taiwan's defence program and foster internal divisions. The widespread dissemination and rapid refutation highlight the ongoing information battlefield in the Taiwan Strait.
It is assessed with high confidence that Chinese state-affiliated actors' dissemination of fabricated claims regarding the Hai Kun submarine's integrity has attempted to undermine public confidence in Taiwan's indigenous defence capabilities. This is likely intended to demoralize the population and deter international support for Taiwan.
◾ 6. Strategic Implications And Lessons Learnt
This incident exemplifies China's probing for information vulnerabilities, leveraging disinformation to achieve strategic goals without overt military action. It showcases how information operations can target critical national projects, blurring lines between military and cognitive domains.
This persistent threat demands constant vigilance to safeguard geopolitical stability in the Indo-Pacific.
Sources:
Taipei Times: Submarine deformation claims not true, CSBC says
Naval News: Taiwan indigenous submarine complete first sea trial
Small Wars Journal: China’s Political Warfare: The Fight for Taiwan on the Information Battlefield
Hybrid CoE: Research Report 9, China's hybrid influence in Taiwan: Non-state actors and policy responses
Council of Europe Hackathon: Outsmart Disinformation, Protect Free Speech
◾ 1. Context
The Council of Europe initiated the "Outsmart Disinformation, Protect Free Speech" Hackathon as a key component of its broader New Democratic Pact.
This Pact, championed by Secretary General Alain Berset, aims to revitalize democracy and address challenges like declining trust and polarization across its member states. The hackathon specifically targets young people to co-create solutions against disinformation while upholding human rights.
◾ 2. What Happened
On June 20, 2025, the European Youth Centre Budapest (EYCB) hosted a satellite event for the Council of Europe Hackathon. Organized in collaboration with ELTE University and GYIÖT (Federation of Children's and Youth Municipal Councils), it gathered young participants and experts to brainstorm solutions.
Secretary General Alain Berset joined for an online Q&A session, emphasizing high-level commitment. This event directly informed the main Hackathon outcomes in Strasbourg.
◾ 3. Nature of Information
This story is factual.
The event directly addresses the threats of disinformation and misinformation, aiming to equip participants to counter both.
◾ 4. Character of Information
The entire initiative is structured to foster critical thinking and provide tools against coordinated information manipulation, rather than engaging in it.
◾ 5. Impact & Information Effects Statement
The hackathon events aim to equip young people with practical skills and strategies to identify and combat disinformation, thereby strengthening democratic resilience.
The initiative promotes critical media literacy and digital citizenship among youth, who are often primary targets of online influence operations. Its likely reach includes youth activists, civil society, and policymakers across member states.
◾ 6. Strategic Implications And Lessons Learnt
This initiative represents a strategic shift towards proactive resilience-building against hybrid threats, acknowledging disinformation as a fundamental challenge to democratic security.
Historically, security focused on physical threats; now, it encompasses protecting the information environment and cognitive spaces. Lessons highlight the imperative for multi-stakeholder collaboration, emphasizing youth engagement as a frontline defence.
This proactive education and skill-building empower citizens for personal security in the digital age, while strengthening democratic ecosystems against both internal erosion and external geopolitical manipulation.
Sources:
Council of Europe: Outsmart disinformation, protect free speech
Council of Europe: Hackathon and democratic debates on "Outsmart disinformation, protect free speech"
Philippine Senator's Deepfake Post Fuels Disinformation Concerns
◾ 1. Context
The Philippines is a democracy with high social media penetration, making it fertile ground for information operations.
The country's political landscape is often polarized, with key figures frequently targeted or involved in online content.
The rise of accessible AI deepfake technology has added a new layer of complexity to these existing vulnerabilities.
◾ 2. What Happened
Earlier in June, Philippine Senator Ronald "Bato" Dela Rosa shared an AI-generated deepfake video on his widely followed social media accounts, reportedly in support of Vice President Sara Duterte.
The video, depicting two male students in a "TikTok style" interview, quickly went viral, garnering over 7 million views. This incident immediately sparked national debate and raised fresh concerns about the weaponization of AI in political discourse.
◾ 3. Nature of Information
The video shared by Senator Dela Rosa is an example of disinformation, as it was an intentionally fabricated AI-generated deepfake designed to convey a specific political message and likely mislead viewers.
While the Senator's intent in sharing it might be debated, the content itself is demonstrably false and manipulated. Its rapid spread indicates a clear attempt to influence public opinion.
◾ 4. Character of Information
The deepfake was primarily disseminated through social media platforms, notably Facebook, leveraging its vast reach and engagement mechanisms. The use of a "TikTok style" format indicates a deliberate attempt to make the content appear authentic and relatable to a wide, particularly younger, audience.
The high views and interactions suggest effective amplification, potentially exploiting social media algorithms and existing political polarization.
◾ 5. Impact & Information Effects Statement
The deepfake post immediately ignited a national debate, diverting public discourse and potentially sowing confusion regarding political narratives. It highlighted the ease with which AI-generated content can be created and widely shared, eroding trust in online information and political figures.
It is assessed with high confidence that the dissemination of the AI-generated deepfake video by a Philippine Senator has exacerbated concerns about digital deception and the erosion of trust in political information. This is likely intended to manipulate public perception regarding specific political figures or narratives.
◾ 6. Strategic Implications And Lessons Learnt
This incident underscores a critical vulnerability in modern democracies: the weaponization of AI-generated content to influence political outcomes and undermine public trust.
This implies a significant challenge, as state and non-state actors can exploit such technologies to destabilize adversaries, interfere in elections, and polarize societies.