🗣️ Let’s Talk—What Are You Seeing?
📩 Reply to this email or drop a comment.
🔗 Not subscribed yet? It's only a click away.
The evolution of information warfare demands a keen understanding of its threats. This week, we cover European Union sanctions against Russian disinformation networks and electronic warfare units, revealing a systemic approach to destabilization.
Then we examine the harm inflicted by a TikTok death hoax that targeted influencer Grace Wolstenholme for illicit commercial gain. It is not geopolitics, yet it demonstrates how far disinformation's reach extends.
Finally, we look into the alarming use of AI-generated deepfakes by public officials in Philippine political campaigns, which further erodes public trust.
These cases collectively provide a comprehensive look at how digital manipulation is challenging our security and democratic processes.
Welcome to this week in disinformation.
EU Sanctions Target Russia's Hybrid Warfare and Disinformation Networks
◾ 1. Context
On July 15, 2025, the Council of the European Union implemented new sanctions against Russia. These measures address Russia's destabilizing actions, specifically targeting Foreign Information Manipulation and Interference (FIMI).
The sanctions primarily target Russian state-owned media, such as the Federal State-owned Enterprise 'Russian Television and Radio Broadcasting Network' (RTRS), electronic warfare units like the 841st Separate Electronic Warfare Centre, and organizations linked to figures such as Yevgeny Prigozhin and Aleksandr Dugin.
This suggests that the EU is taking a long-term, systemic approach to countering information warfare. Furthermore, the EU's decision to sanction both information manipulation and electronic warfare activities, such as GNSS jamming from Kaliningrad, demonstrates a recognition that modern security threats are hybrid in nature.
◾ 2. What Happened
On July 15, 2025, the European Union imposed new sanctions on nine individuals and six entities due to their involvement in Russia's destabilizing activities.
◾ 3. Nature of Information
The information disseminated by the sanctioned entities is primarily disinformation, characterized by intentional falsity designed to deceive. This is evident in RTRS's systematic replacement of Ukrainian broadcasting with Russian-approved content, explicitly aimed at suppressing dissent and delegitimizing Ukrainian authority.
The unequivocal intent behind the spread of this information is to manipulate public opinion, advance Russian interests, and destabilize target states.
The explicit targeting of RTRS for "effectively replacing established Ukrainian broadcasting systems in occupied regions" reveals a core Russian strategic objective: direct control over the information environment in annexed or occupied territories.
Of note, this is a foundational step in consolidating control and suppressing local resistance.
◾ 4. Character of Information
Dissemination of Russian-approved false narratives occurs through a multi-pronged approach. Russia leverages state-controlled media infrastructure like RTRS, dedicated web companies such as Tigerweb, and seemingly independent "journalists associations" or "foundations" that function as proxies.
The psychological hooks and manipulation strategies employed aim to suppress dissent, align populations with Russian policies, and delegitimize opposing authorities. The content is specifically designed to "undermine public trust in the (European) Union values and processes, democracy and Union cohesion," frequently incorporating "manipulative anti-NATO, anti-Ukraine and anti-NGO narratives". This strategic framing targets fundamental societal trust and cohesion.
◾ 5. Impact & Information Effects Statement
The harm inflicted includes the systematic replacement of independent broadcasting in occupied territories, significant disruption to civil aviation through GNSS jamming, and the erosion of democratic processes and public trust within the EU. The likely reach of these operations extends across various European countries, including the Baltic States and France.
It is assessed with High confidence that the Russian Federation’s dissemination of disinformation and propaganda, coupled with electronic warfare activities, has undermined democratic processes and public trust in the EU and Ukraine. This was likely intended to destabilize target states and advance Russian foreign policy interests.
◾ 6. Strategic Implications And Lessons Learnt
These developments reveal that hybrid threats, encompassing information manipulation, cyber-attacks, and electronic warfare, fundamentally challenge traditional notions of security.
Disinformation from foreign adversaries makes attribution difficult and responses complex. These operations represent a persistent, non-military form of aggression aimed at undermining democratic institutions and societal cohesion.
The multi-faceted nature of these threats means a purely governmental or military response is no longer sufficient. Countering them requires a collective effort involving governments, civil society, the private sector, and individual citizens.
Sources:
Council Implementing Regulation (EU) 2025/1444 of 15 July 2025 implementing Regulation (EU) 2024/2642 concerning restrictive measures in view of Russia's destabilising activities
Babel: EU expands sanctions against Russia for destabilization in the European Union and Ukraine
EU NeighboursEast: Russian hybrid threats: EU lists nine individuals and six entities responsible for destabilising actions in the EU and Ukraine
United 24: EU Sanctions Nine Individuals and Six Entities Over Russia's Destabilizing Actions
TikTok Death Hoax Targets Influencer Grace Wolstenholme

◾ 1. Context
Grace Wolstenholme, a TikTok content creator living with cerebral palsy, has cultivated a substantial following of 3 million users since 2021 by sharing her daily life.
◾ 2. What Happened
An anonymous TikTok account misused a 2021 video of Grace Wolstenholme falling during a gym class, captioning it with a fabricated story that claimed her death and misidentified her disability as autism.
The primary perpetrator was an anonymous TikTok account that appeared to be selling pillows. Subsequent investigation revealed the account used an unregistered business name and an invalid VAT number, indicating fraudulent activity and a lack of legitimacy.
This disinformation video was viewed over 650,000 times on TikTok, gaining significant reach and engagement. Despite TikTok's removal of the content, the perpetrator reposted the video days later and sent a series of abusive messages, demonstrating persistent harassment.
The false claim asserted that the account owner had "lost my autistic sister today so I bought this pillow to imitate cuddling her," implying Ms. Wolstenholme was deceased. Paired with links to the product, the caption also signals an intent to drive pillow sales.
Ms. Wolstenholme publicly clarified that the death claim was false and that her disability is cerebral palsy, not autism.
◾ 3. Nature of Information
The information regarding Grace Wolstenholme's death is classified as disinformation because it was intentionally false and designed to deceive.
The intent behind its spread appears to be malicious and exploitative, potentially seeking financial gain through the associated pillow sales. This incident highlights a trend where disinformation campaigns are directly linked to financial fraud or illicit commercial activity. It also blurs the lines between information warfare and criminal enterprise.
A critical vulnerability that facilitated the hoax was the "information vacuum" created by Ms. Wolstenholme's temporary absence from online platforms due to illness, which inadvertently lent credibility to the false claims.
◾ 4. Character of Information
The primary platform utilized for disseminating this hoax was TikTok, where the false death claim video accumulated over 650,000 views.
The perpetrator also leveraged Instagram to send abusive messages, indicating a multi-platform harassment campaign.
The manipulation strategy involved crafting a fabricated narrative about losing an "autistic sister" to promote pillow sales, while misusing Ms. Wolstenholme's image and inaccurately depicting her disability.
◾ 5. Impact & Information Effects Statement
The incident profoundly affected Grace Wolstenholme, causing significant emotional distress and quantifiable financial harm. Her vulnerable followers, many of whom struggle with mental and physical health, reported being deeply disgusted and found the false claims "triggering".
This highlights how disinformation can inflict concrete financial damages on individuals in the digital economy, moving beyond abstract harm to direct livelihood disruption.
It is assessed with High confidence that an anonymous TikTok account’s dissemination of a false death narrative and misrepresentation of Grace Wolstenholme’s disability has caused significant emotional distress to her and her vulnerable audience, alongside quantifiable financial loss to her income. This is likely intended to exploit her image for illicit commercial gain and inflict harassment.
◾ 6. Strategic Implications And Lessons Learnt
This incident exemplifies how hybrid threats, particularly targeted disinformation, challenge traditional security paradigms.
Historically, disinformation has been a tool of warfare and politics, but its online evolution makes it more dangerous through tailoring, micro-targeting, and rapid, widespread amplification. This expansion of the battlefield into the information domain challenges traditional notions of security, degrading societal cohesion and potentially leading to its breakdown.
Sources:
BBC: “Someone faked my death on TikTok”
Global Herald: TikTok Creator Confronts False Death Rumors and Advocates for Digital Responsibility
Cisa.gov: Tactics of Disinformation
A Growing Problem of Digital Autocratisation in the Philippines

◾ 1. Context
The Philippines has a history of political manipulation through social media, known as "digital autocratisation." This involves using digital tools and organized 'troll' farms to weaken democratic norms. It started under the Rodrigo Duterte administration (2016-2022) and continued through the 2022 elections with Ferdinand Marcos Jr.
New generative AI tools greatly amplify the risk of false information, including deepfakes and biased algorithms. Spikes in AI-driven disinformation became evident in December 2024 as the rivalry between the Marcos and Duterte camps intensified.
Filipinos are highly vulnerable to false information, largely because the country has one of the highest rates of social media use globally and digital literacy remains low.
Since the 2016 elections, Filipinos have increasingly relied on social media to inform their voting decisions, making them more susceptible to online manipulation.
◾ 2. What Happened
The direct involvement of a high-ranking public official in disseminating AI-generated disinformation represents a critical escalation. This tactic leverages institutional trust to amplify false narratives and bypass traditional media gatekeepers.
A public official's endorsement provides an immediate, powerful, and often unquestioned stamp of legitimacy to the content for their followers, which is far more impactful than dissemination by anonymous troll accounts. This exploits the public's inherent trust in elected representatives. It also directly contributes to the erosion of public trust in government and political processes.
Furthermore, it suggests a deliberate strategy to weaponize the public profile of officials for information operations, making it harder for citizens to discern truth from falsehoods.
Tsek.ph, a local fact-checking group, observed a rise in the use of AI in false information ahead of the May 2025 midterm elections. It noted that nearly a third of 35 manipulated claims verified between February and May likely used deepfake technology to imitate public figures or distort reality.
◾ 3. Nature of Information
The public official's blunt statement, "I don't care if this post is AI-generated," dangerously normalizes false information. It marks a shift from claiming ignorance to openly defying factual accuracy.
This statement implies a deliberate plan to appeal to a loyal base. For this group, as Professor Rachel Khan noted, "perception is truth," and facts are less important than reinforcing a story or political loyalty. This fundamentally challenges fact-checking and truth verification. It suggests that for some, the message's impact matters more than its truthfulness.
Such an attitude, especially from an authority figure, can further reduce public trust in institutions, media, and the idea of objective truth. This makes people even more vulnerable to manipulation and contributes to a "post-truth" political environment.
◾ 4. Character of Information
Manipulation tactics include using deepfake technology. AI systems create very realistic fake videos and audio to imitate people and twist reality. Examples include a deepfake of Escudero's slain brother and a false video of President Marcos Jr. ordering an attack on China.
False content often uses strong emotional appeals, such as criticism of "selective justice" in the Sara Duterte deepfake or the exploitation of anti-China sentiment in geopolitical narratives. Public officials who share AI-generated content, such as Senator Dela Rosa, lend it their perceived authority, giving fabricated information a false sense of credibility and legitimacy.
Evidence of coordinated activity, fake grassroots campaigns (astroturfing), and message amplification is also common.
◾ 5. Impact & Information Effects Statement
The spread of AI-generated deepfakes by public officials directly causes a significant loss of public trust in government, political processes, and the truthfulness of information itself.
This is especially worrying because almost 7 out of 10 Filipinos were already concerned about false information.
Professor Rachel Khan's observation, "for the educated, it reinforces their already tainted image of disregarding truth. But for followers, it could reinforce the dictum that 'perception is truth'," shows this dynamic.
This means false information doesn't necessarily change minds everywhere. Instead, it strengthens existing biases and loyalties within specific "filter bubbles," while also alienating and frustrating those who value factual accuracy.
◾ 6. Strategic Implications And Lessons Learnt
The "digital autocratisation" in the Philippines began with the Rodrigo Duterte administration in 2016. It effectively used social media to spread narratives that justified autocratic policies, like the war on drugs and attacks on press freedom.
This trend continued with Ferdinand Marcos Jr.'s 2022 presidential campaign. His campaign used a complex media system to sanitize historical narratives, such as the "golden age" myth, and influence public opinion. This was especially true among digitally active young voters, often bypassing traditional media.
This "digital autocratisation," greatly amplified by AI, represents a systemic erosion of democratic foundations. It requires broad, long-term resilience strategies, not just reactive measures. The problem isn't just isolated deepfakes but a fundamental shift in how political power is sought and kept.
AI provides the tools to scale and perfect this systemic manipulation. Therefore, reactive measures like fact-checking, while needed, are not enough on their own, as their reach and impact are often limited.