Why Disinformation Is So Deniable and Unenforceable
It is a problem, and it is getting worse.
Disinformation is a fundamental challenge because it is both easy to deny and hard to police: its nature frustrates definitive attribution and effective countermeasures alike.
The prevalence of organized social media manipulation has grown substantially in recent years. A 2019 study by the Oxford Internet Institute documented evidence of such campaigns in 70 countries, up from 28 countries in 2017, a 150% increase in just two years.
Compounding this issue, researchers from the MIT Media Lab analysed roughly 126,000 news stories shared on Twitter by about 3 million people between 2006 and 2017. They found that false news spread faster and more broadly than verified true news; specifically, false stories were 70% more likely to be retweeted than true ones.
So, why has nothing been done about it? The answer is rather complex.
The Evolution of Disinformation
Disinformation as a threat predates the internet.
In the Cold War era, the KGB planted false stories using tactics known as "active measures." The digital world has since pushed these techniques to new levels: false information races across social media instantly, reaching vast audiences in a few clicks while concealing those who create it. Campaigns that once needed months of planning by large agencies can now be run by small teams with basic skills and limited funds.
With most of humanity now connected to the internet, disinformation reaches millions instantly. According to a 2021 Brookings Institution analysis, Facebook and Twitter took down 147 influence operations between 2018 and 2020, highlighting the scale and frequency of the problem. The NATO Strategic Communications Centre of Excellence notes that current disinformation efforts leverage emerging technologies and artificial intelligence to analyse the digital environment and target specific demographic groups. This evolution has transformed disinformation from a slow, resource-intensive effort into what the RAND Corporation calls "the weaponization of information".
The Technology of Anonymity
The technical architecture of modern disinformation campaigns often obscures the identities and locations of those behind them. Both foreign and domestic actors rely on tools like VPNs and proxy servers to mask their real locations, making their accounts appear to post from other countries. These digital disguises are imperfect, though: they still leave traces.
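To make "traces" concrete, consider one simple signal analysts can look at: whether an account's busiest posting hours are even plausible for the timezone it claims. The Python sketch below is a minimal, hypothetical illustration; the function names, thresholds, and toy data are all assumptions of this example, not any platform's real detection pipeline.

```python
from collections import Counter
from datetime import datetime, timedelta

def peak_posting_hours(timestamps_utc, top_n=8):
    """Return the account's most active posting hours (0-23, in UTC)."""
    counts = Counter(ts.hour for ts in timestamps_utc)
    return {hour for hour, _ in counts.most_common(top_n)}

def looks_inconsistent(timestamps_utc, claimed_utc_offset, waking=range(8, 23)):
    """Flag an account whose busiest hours, shifted into its claimed
    local timezone, fall mostly outside plausible waking hours."""
    local_peaks = {(hour + claimed_utc_offset) % 24
                   for hour in peak_posting_hours(timestamps_utc)}
    outside = local_peaks - set(waking)
    return len(outside) > len(local_peaks) / 2

# Toy data: an account claiming U.S. Eastern time (UTC-5) whose 200 posts
# all land between 06:00 and 14:59 UTC, i.e. overnight for its claimed home.
posts = [datetime(2019, 3, 4, 6 + i % 9, 0) + timedelta(days=i // 9)
         for i in range(200)]
print(looks_inconsistent(posts, claimed_utc_offset=-5))  # True
```

Real investigations layer many weak signals like this one (shared infrastructure, language artefacts, coordinated timing), which is exactly why they produce probabilistic judgements rather than courtroom-grade proof, and why denial remains viable.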
The Mueller Report documented how Russian operatives used these tactics during the 2016 U.S. election, making Russian-run accounts appear to belong to American voters. It specifically described how Russia's Internet Research Agency (IRA) ran "a social media campaign that favored presidential candidate Donald J. Trump and disparaged presidential candidate Hillary Clinton."
Even when extensive evidence exists, strategic denial remains effective. Despite the Mueller Report identifying specific Russian military intelligence officers and detailing their methods, Russian officials consistently denied any involvement.
The concept of plausible deniability is a key feature of these operations. As researchers have noted, Russia often employs non-state actors with close links to the regime, such as the Wagner Group and the Internet Research Agency, to maintain plausible deniability. This strategy lets states harm their rivals while making effective retaliation difficult: when a nation cannot clearly prove who attacked it, it struggles to respond with confidence or to rally support from allies.
Cross-border legal barriers present another significant challenge. Even when attribution succeeds, national borders can make enforcement nearly impossible. Article 61 of Russia's constitution, for example, prohibits the extradition of Russian citizens to other states. This constitutional protection has created real obstacles to prosecuting individuals indicted for election interference.
While these challenges are significant, it's an overstatement to say that holding anyone accountable is "nearly impossible." Advanced forensic techniques, international cooperation, and improved detection methods continue to evolve and have produced successful attributions many times over.
It's the next step, acting on those attributions, that we haven't really figured out yet.
Free Speech Tensions
Democratic nations often protect free speech in their constitutions. Take the United States, for example. The First Amendment makes it very hard to limit what people can say, even when they are spreading falsehoods. This creates a real dilemma: how do you fight harmful lies without trampling on essential free speech rights?
This challenge goes beyond any single country. There is a natural tension between stopping false information and protecting people's right to express themselves, and democratic legal systems are built that way on purpose: they make it difficult to punish people for what they say. In the U.S. legal system, defamation claims involving public figures or matters of public concern require proof of "actual malice": the speaker knew the statement was false or acted with reckless disregard for whether it was true. This high standard makes disinformation cases extremely difficult to prosecute.
This high legal threshold exists for good reason: it guards the right to speak freely. But it also creates an opening that many exploit. False claims that do not rise to the level of fraud, defamation, or incitement to violence often remain constitutionally protected.
A 2021 Pew Research Center survey found that roughly half of U.S. adults (48%) now say the government should take steps to restrict false information online, even if it means losing some freedom to access and publish content, up from 39% in 2018. This marks a growing willingness to prioritize restricting false information over protecting unfettered access to content. Meanwhile, 59% support technology companies taking such steps.
However, governments may misuse laws designed to fight disinformation to suppress legitimate speech or silence opposition. This risk has made many wary of granting governments broad powers to regulate speech.
Disinformation operators exploit this intentional gap in regulation, knowing that democratic societies cannot criminalize all false statements without undermining core liberties.
Sometimes, that is what they want.
The Self-Reinforcing Cycle
When all these factors work together, they create a cycle that makes disinformation difficult to deny and hard to stop.
First, campaigns launch using anonymity technologies and often originate in jurisdictions with limited cooperation with Western democracies. This technical and jurisdictional distance starts the cycle with near-zero accountability risk.
When researchers identify suspicious activity, the technical barriers deliberately built into these operations prevent definitive attribution. The lack of smoking-gun evidence gives implicated actors plausible grounds for denial.
Even with substantial evidence, cross-border legal barriers prevent enforcement. Without specific international agreements addressing digital disinformation, foreign actors remain effectively untouchable.
This absence of consequences directly encourages more extensive operations. Hostile actors see high rewards with almost no risks, so they naturally expand their efforts. Each successful campaign that goes unpunished becomes a blueprint for more sophisticated attacks.
The real dangers of this cycle became evident after the 2020 U.S. election. False claims about election fraud, spread through organized campaigns, helped fuel the January 6th Capitol riot, the first violent disruption of a presidential transfer of power in American history.
This problem affects elections worldwide. Research indicates that even a small number of individuals can significantly spread false information across networks. A 2024 study published in Science found that just over 2,100 people accounted for 80% of the fake news shared about the 2020 U.S. election in a sample of nearly 665,000 registered voters on Twitter (now X).
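To see what that level of concentration means in practice, here is a small, hypothetical Python sketch that computes the smallest fraction of accounts needed to cover 80% of all shares. The synthetic numbers are invented to loosely echo the 2,100-of-665,000 pattern; this is not the study's data or code.

```python
def fraction_of_accounts_for_share(share_counts, target=0.80):
    """Smallest fraction of accounts whose shares cover `target`
    of the total, counting the heaviest sharers first."""
    ranked = sorted(share_counts, reverse=True)
    total = sum(ranked)
    running = 0
    for accounts, count in enumerate(ranked, start=1):
        running += count
        if running >= target * total:
            return accounts / len(ranked)
    return 1.0

# Synthetic heavy-tailed population: 2,000 prolific "supersharers"
# plus 600,000 accounts that each share a single item.
counts = [2_000] * 2_000 + [1] * 600_000
print(f"{fraction_of_accounts_for_share(counts):.2%}")  # ~0.31% of accounts
```

When sharing is this skewed, a tiny, well-organised core can dominate what millions of ordinary users see, which is part of why small teams with limited funds can now run campaigns that once required large agencies.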
When voters cannot distinguish fact from fiction, informed civic participation becomes impossible. Countries face difficult trade-offs between competing values: content regulation versus free expression, national security versus open information, platform accountability versus innovation.
The current landscape clearly favours those spreading disinformation. They operate from safe jurisdictions, hide behind technological shields, and exploit the legitimate protections of free speech to avoid consequences.
For citizens everywhere, this means a continuing erosion of the information environment required for democratic self-governance: a threat that strikes at democracy's very foundation while retaining plausible deniability.