A Clausewitzian Meditation On Disinformation
There is a reason why what we are doing is not working.
Democracy may die in darkness, but it’s drowning in a flood of falsehood.
We count the cost in more than money. It hits us in lost lives, torn communities, and the erosion of our ability to solve problems together.
Yet we throw huge resources at fighting false information. We build AI detectors, fund fact-checkers, teach media skills, and create platform rules. Research centres and universities study the problem constantly.
But the problem keeps getting worse.
In 1983, a small Indian newspaper claimed U.S. scientists had created AIDS in a lab. By 1987, the KGB had spread the lie to 80 countries in 30 languages.
Jump to 2025. A fake video of a politician slurring words raced across six platforms in minutes. Smart detection tools flagged it and thousands of copies were removed, yet many voters still think it was real.
Two disinformation campaigns, four decades apart, with the same result: widespread belief in falsehood despite countermeasures.
Our approach has a fundamental flaw. To understand what that flaw is, we need an unlikely guide: a 19th-century Prussian military theorist named Carl von Clausewitz. His framework distinguishing between the unchanging “nature” of war and its ever-evolving “character” offers a powerful lens for understanding why our fight against disinformation remains stuck in a cycle of reaction rather than resolution.
Our countermeasures fail because they target the evolving character rather than the unchanging nature of disinformation.
Clausewitz and Cognitive Warfare
In his book “On War,” Clausewitz distinguished between war’s eternal essence and its changing external form. The nature of war remains constant across centuries, while its character, the tactics, training, and procedures of the day, transforms with each era.
This framework illuminates our struggle against disinformation.
Disinformation is a form of cognitive warfare, an attack on belief rather than territory. Like traditional warfare, it has both permanent and evolving aspects.
The nature of disinformation consists of elements that remain constant regardless of technology or era:
Strategic intent to manipulate belief through deliberate deception
Exploitation of specific cognitive vulnerabilities (confirmation bias, emotional reasoning, group identity protection)
Informational asymmetry between creator and target
Strategic blending of falsehoods within recognizable truths to create plausibility
The character of disinformation, by contrast, consists of elements that continuously evolve:
Technological delivery systems that change the speed, scale and perceived credibility of messages
Methods of disguising sources and creating plausible deniability
Velocity and reach of dissemination (from months to milliseconds, local to global)
Precision of audience targeting and cross-platform coordination techniques
As tactics evolve, new opportunities emerge to spread lies. But they all work by triggering the same human weaknesses. A 1500s printed pamphlet and today's AI-made deepfake video look different but fool us for exactly the same reasons.
Clausewitz gave us other useful ideas too. He described “friction”, the accumulation of small difficulties that makes even simple plans hard to execute in war. We see a cognitive version of it when people resist new facts that challenge their beliefs. He also described the “fog of war”, the uncertainty and confusion of battle. We face a similar fog when trying to pick truth out of a flood of conflicting claims.
When we view disinformation through this Clausewitzian lens, our strategic error becomes obvious. We obsessively target the changing character of disinformation while neglecting its constant nature. We invest billions in detecting deepfakes but little in understanding why people believe them. We celebrate taking down thousands of bot accounts while ignoring the collapse of trust that makes audiences receptive to alternative information sources. We regulate technological platforms while leaving psychological vulnerabilities unaddressed.
This fundamental mismatch, fighting the tactics while ignoring the strategy, explains why our most sophisticated technical countermeasures consistently fail to reduce the impact of disinformation campaigns. And as we’ll see, this mismatch isn’t just a contemporary problem.
It has historical precedent.
Same Nature, Different Character
The consistency of disinformation’s nature across history reveals why character-focused defences ultimately fail.
In 1939, the Nazis used radio to spread lies about Poland attacking Germany. They played on national pride, mixed falsehoods with fragments of truth, and timed their messages for maximum effect. These are the same tricks used today.
In 1994, a Rwandan radio station, Radio Télévision Libre des Mille Collines, called Tutsis “cockroaches” and urged listeners to kill them. It used radio instead of Twitter, but the way it whipped up hatred matches what we see in online hate networks now.
In each era, people tried to fight these lies by controlling the medium: banning newspapers, jamming radio broadcasts, or regulating television. All these efforts failed because they targeted how the lies spread, not why people believed them.
This approach also incentivizes bad actors to become more creative with their tactics.
Today, we repeat this mistake. We spend billions on AI tools to spot fake content but ignore why people believe and share it. We delete thousands of fake accounts while never asking why people trust questionable sources in the first place.
History shows that targeting how lies spread creates only short-term defences. Governments and companies alike keep making this error, trapping us in endless reaction.
Our leaders keep getting it wrong. During the 2020 U.S. election, officials hunted foreign bots and took down content. They removed many fake accounts but ignored how party loyalty primed voters to believe false claims about stolen votes. The result? Even with capable technical solutions, large numbers of voters still believe the election lies.
The EU’s Digital Services Act shows the same problem. It requires platforms to remove harmful content and be more transparent, but it doesn’t tackle why people find false stories believable, nor how to repair our relationship with the information environment.
Companies make the same mistake. Meta hires thousands of people to review content and uses automated systems to find harmful posts. But research shows this often backfires: removed content migrates to sites with fewer rules, where it seems more credible for having been “censored.” Success is measured by how much is removed, not by changes in what people believe.
This creates a perpetual game of whack-a-mole. We identify an attack vector and develop countermeasures, but history shows that offensive tactics outpace defensive ones with alarming creativity.
This reactive trap leaves us constantly one step behind, focusing on yesterday’s tactics while disinformation actors move to tomorrow’s opportunities.
A Balanced Framework for Effective Countermeasures
Clausewitz argued that effective military strategy must address both the unchanging nature of war and its evolving character. Since disinformation operates as a form of cognitive warfare, countering it demands the same balanced approach.
Finland demonstrates this rather well. After Russian disinformation campaigns during the 2015 refugee crisis, Finland acted. It built critical thinking into its primary schools, teaching students to spot confirmation bias, emotional manipulation, and identity-based arguments. This directly counters the psychological weaknesses that make disinformation work. At the same time, Finland created partnerships between government agencies, media companies, and tech platforms to counter evolving disinformation tactics.
By 2018, Finland reported that Russian disinformation had far less impact there than in neighbouring countries. Its experience underlines an important point: technical solutions alone provide only temporary advantages.
Taiwan's "humour over rumour" strategy offers another effective example. When COVID-19 lies threatened public health in early 2020, Digital Minister Audrey Tang's team paired quick fact-checking with humorous corrections.
The approach recognized two things: plain facts cannot beat the emotional pull of false stories (nature), and fast responses can slow their spread (character). Funny, shareable corrections could compete with lies for people’s attention.
These examples point toward a framework that addresses both dimensions:
Nature-focused interventions:
Cognitive inoculation against specific manipulation techniques
Trust-building measures for authoritative information sources
Community-based resilience through local information networks
Character-focused interventions:
Technical detection and content moderation
Platform design modifications to slow information spread
Transparent attribution systems
The key insight isn’t simply to implement both types, but to understand how they interact. Technical solutions buy breathing room for trust-building. Community resilience makes technical detection more effective by reducing the audience for disinformation.
At its very core, the Clausewitzian framework suggests a fundamental reorientation. Rather than asking, “How do we detect disinformation more effectively?” we should ask “How do we make communities more resistant to manipulation?”
The first question leads to an endless tactical arms race.
The second creates strategic advantage.
Beyond the Reactive Cycle
Clausewitz’s framework reveals the fundamental error in how we deal with disinformation. By focusing on its ever-changing character while neglecting its enduring nature, we trap ourselves in an endless loop that never touches the core problem.
Each new technology brings fresh panic and investment in character-specific defences. Yet the underlying vulnerabilities remain unchanged, ensuring that disinformation will always find new avenues to exploit.
Breaking this cycle requires a balanced approach that addresses both dimensions simultaneously.
The Prussian general who never sent an email or saw a smartphone has given us the framework to understand why our digital defences keep failing. Now we must apply this understanding to build more effective countermeasures that address not just how disinformation spreads, but why it works.
Otherwise we will lose the next one, again.