"Where do I go to get my reputation back?" Richard Jewell
No one should have to ask that question.
On July 27, 1996, security guard Richard Jewell spotted a suspicious backpack during the Atlanta Olympics. He alerted authorities and helped evacuate the area before the bomb exploded. The man saved countless lives. For three days, he was celebrated as a hero.
Then he wasn’t.
"F.B.I. Suspects 'Hero' Guard May Have Planted Bomb," announced The Atlanta Journal-Constitution. Within minutes, Jewell transformed from hero to villain. News vans surrounded his apartment, reporters paid neighbours to spy on him, and the "wannabe cop seeking attention" narrative dominated headlines. For 88 days, Jewell endured national vilification.
Despite zero evidence against him.
While broad disinformation distorts public understanding of issues, targeted disinformation aims to destroy specific individuals, using them as both casualties and instruments of an agenda.
What makes targeted disinformation particularly destructive is its precision and impact. It exploits vulnerabilities in media systems, leverages institutional authority, and manipulates audience psychology to create narratives resistant to correction. The damage radiates from the individual target to undermine public trust in institutions.
The power of these campaigns lies in exploiting predictable patterns of media amplification, institutional vulnerability, and audience psychology.
The Anatomy of a Targeted Attack: Universal Patterns
When The Atlanta Journal-Constitution published its explosive headline about Richard Jewell, it wasn't just reporting a story. It was activating a false narrative pattern that would consume an innocent man's life.
The FBI's interest in Jewell began not with evidence, but with a profile. He fit their "lone wolf" construct: a single white male seeking recognition, reinforced in his case by "limited success in law enforcement aspirations." This stereotype-based profile, once leaked to the press, set off the chain reaction of villainizing coverage.
Jewell's case illustrates how disinformation targets are chosen. Targets typically possess a combination of vulnerability and symbolic value. Jewell's position as a security guard made him accessible, while his hero status made his fall narratively satisfying. Against the combined weight of federal law enforcement and national media, he stood alone.
Through analysing this case, we can identify a four-phase framework that characterizes many deliberate disinformation campaigns:
Phase 1: Selection of Target (Strategic Identification) Targets aren't chosen randomly but selected with clear intent. The FBI selected Jewell based on a profile rather than evidence. What's crucial is that this selection is deliberate: targets are chosen specifically because they serve the attacker's strategic objectives.
Phase 2: Construction of Believable Falsehood (Narrative Crafting) The "wannabe cop seeking attention" narrative was deliberately crafted to seem plausible. This phase involves the intentional creation of false narratives designed to damage the target. These narratives strategically incorporate enough truth to make the lies digestible and tap into existing cultural anxieties, making falsehoods feel intuitively "right" even when facts contradict them.
Phase 3: Strategic Amplification (Institutional Leveraging) The front-page story triggered widespread coverage, with media outlets competing for the most sensational angles. This amplification isn't accidental but calculated: a deliberate exploitation of institutional channels to maximize damage. Whether through newspapers, cable news, or today's algorithmic platforms, amplification creates a "credibility cascade" in which volume and the sheer number of sources substitute for evidence.
Phase 4: Sustained Exploitation (Narrative Resilience) For 88 days, Jewell remained under suspicion despite the lack of evidence. This persistence reveals the intentional nature of disinformation: maintaining the false narrative serves specific purposes even after contradictory evidence emerges. Few institutions benefit from admitting they were wrong, creating systemic resistance to correction.
These phases often overlap and reinforce each other. Manifestations vary based on context, target, and media environment, but successful campaigns typically incorporate elements of all four phases.
Consider Jo Ellis, the transgender Black Hawk pilot falsely accused of causing a fatal crash in early 2025. Like Jewell, Ellis was intentionally selected because her identity made her symbolically valuable to attackers. A fabricated narrative linked her gender identity to alleged incompetence. Social media algorithms amplified this falsehood exponentially faster than Jewell's pre-internet targeting. The narrative persisted despite the most direct contradictory evidence possible: Ellis was alive and had not been aboard the helicopter.
What makes this framework particularly revealing is how it connects to underlying dynamics: institutional incentives, cognitive vulnerabilities, and media economics. The technology changes, but the deliberate pattern of targeting individuals remains recognizable.
The Psychological Warfare of Personal Targeting
"Where do I go to get my reputation back?"
This question, posed by Richard Jewell, reveals the central objective of targeted disinformation: irreversible psychological damage inflicted for another's benefit. During his ordeal, Jewell couldn't leave his apartment without cameras tracking him. His mother faced reporters rifling through their trash. This wasn't accidental; it was a calculated psychological siege that permanently altered both their lives.
Disinformation actors deliberately exploit specific psychological vulnerabilities that make their campaigns effective. Research in cognitive psychology identifies several mechanisms they leverage: the availability heuristic (we judge likelihood based on how easily examples come to mind), confirmation bias (we selectively interpret evidence to support existing beliefs), and the continued influence effect (false information affects beliefs even after being corrected). These aren't merely academic concepts but tactical advantages for those spreading falsehoods.
What motivates disinformation actors? For law enforcement in Jewell's case, the pressure to solve a high-profile terrorist attack provided institutional incentive. This "action bias" compelled them to target someone rather than admit the more frightening reality: they had no suspect at the time. For media organizations, the competitive drive for viewers overwhelmed journalistic standards; the "hero turned villain" narrative generated ratings and profit. While this may be an overgeneralisation, it offers insight into the intent.
Disinformation actors understand that targeting individuals offers a crucial advantage: undermining systemic critiques by personalizing complex issues. When Kate Middleton faced Russian disinformation about her health, the campaign wasn't merely about her but aimed to destabilize trust in British institutions during geopolitical tension. Individual targeting serves as an efficient vehicle for broader objectives, offering attackers plausible deniability while achieving maximum systemic damage.
The effectiveness of these campaigns stems from their exploitation of audience psychology. False narratives about individuals prove particularly believable because they tap into existing archetypes. The "attention-seeking fraud" narrative applied to Jewell resonated because it fit cultural tropes about people seeking fame through deception. Research shows that stories with clear villains fulfil our need for cognitive closure, our tendency to prefer simple explanations over ambiguous realities.
Disinformation actors leverage what psychologists call the "just-world fallacy": our need to believe the world is fair and that people get what they deserve. Many constructed narratives can be traced back to this fallacy, because it makes us receptive to stories in which a seemingly good person is revealed as secretly deserving punishment. This psychological comfort comes at the expense of the truth.
Understanding these psychological dynamics reveals why targeted disinformation remains a devastatingly effective tactic across historical periods and media environments. By weaponizing our own psychology against us, disinformation actors achieve maximum damage with minimal effort.
Evolution Without Revolution: From Print to Pixels
The technology that spread falsehoods about Richard Jewell seems almost quaint today. Yet understanding how targeting operates across media environments reveals a complex interplay between technological evolution and persistent human factors.
Media technologies actively shape each targeting phase. During selection, print-era targeting required public visibility, limiting potential targets. Radio expanded the pool by reaching audiences who couldn't read. Television added visual dimensions that made appearance part of vulnerability. Digital platforms now enable targeting based on algorithmic visibility.
False narrative construction evolved with media capabilities. Newspapers required coherent narratives that fit column formats. Television introduced visual storytelling with heightened emotional impact. Today's digital environment enables fragmented narratives assembled across platforms, where claims on Twitter, images on Instagram, and videos on TikTok create composite falsehoods no single platform takes responsibility for.
Amplification mechanics transformed dramatically. Town criers reached dozens; newspapers reached thousands; broadcast media reached millions. But digital amplification differs qualitatively. Algorithmic boosting creates visibility based not on journalistic judgment but engagement metrics. The Jo Ellis case exemplifies this. Her targeting spread through automated systems optimized for engagement, creating virality without accountability.
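The economics of that boost are easy to see in miniature. The sketch below is a deliberately simplified, hypothetical ranker: the engagement_score weights, the post data, and the numbers are all invented for illustration, not any real platform's algorithm. It shows only the structural point made above: a feed that sorts purely on predicted engagement will place a sensational accusation above a sober correction every time, with truth never entering the calculation.

```python
# Hypothetical engagement-weighted ranking (illustrative weights only,
# not any real platform's algorithm). Note that truthfulness is never
# an input to the score.

def engagement_score(post: dict) -> float:
    """Rank purely on predicted interaction, the way an
    engagement-optimized feed does; accuracy plays no role."""
    return (post["shares"] * 3.0      # shares spread content furthest
            + post["comments"] * 2.0  # outrage fuels comment threads
            + post["clicks"] * 1.0)

posts = [
    {"text": "BREAKING: 'hero' guard may have planted bomb",
     "shares": 900, "comments": 400, "clicks": 5000},
    {"text": "Correction: no evidence links guard to bombing",
     "shares": 40, "comments": 15, "clicks": 600},
]

# The sensational accusation outranks the correction every time.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>8.0f}  {post['text']}")
```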
Platform mechanics reshape targeting dynamics. Recommendation algorithms create pathways toward increasingly extreme content. Quote-tweets enable pile-ons while maintaining deniability. Content moderation systems struggle with context-dependent falsehoods, creating enforcement gaps that routinely fail targets.
The removal of traditional gatekeepers represents a significant transformation. While journalistic standards failed Jewell, they at least existed. When false claims about climate scientist Michael Mann circulated in "Climategate," traditional media eventually applied scrutiny. By contrast, the targeting of election workers occurred largely on platforms without editorial standards.
Emerging technologies create new challenges. Tim Walz faced allegations supported by deepfake videos that required expert analysis to debunk. As generative AI advances, synthetic evidence against individuals will become increasingly convincing: voice cloning enables fabricated audio "evidence," while language models can generate seemingly credible false narratives at scale.
Despite these evolutions, fundamental continuities persist. The psychological impact on targets, institutional incentives, and audience vulnerabilities remain consistent. What has changed is the efficiency with which these human factors can be exploited.
Across Cultures
Examining how targeted disinformation operates across diverse political systems reveals both variations and consistent patterns.
In liberal democracies, institutional credibility often drives targeting effectiveness. The Jo Ellis case exemplifies this dynamic. Her selection as a transgender pilot made her symbolically valuable to those opposing military inclusion policies. The false narrative linking her gender identity to a helicopter crash was deliberately constructed to seem plausible. Amplification came through algorithmically boosted social media, rapidly reaching millions. The exploitation phase continued despite statements clearing her, demonstrating how narrative persistence transcends evidence.
Authoritarian contexts reveal more centralized but equally devastating targeting mechanics. Chinese businessman Guo Wengui faced intentional targeting through the state-orchestrated "Spamouflage" network. The selection was strategic: Guo's corruption allegations threatened powerful officials. The false narrative was deliberately constructed to discredit him as a criminal. Amplification came through hundreds of coordinated fake accounts spreading identical claims, down to identical wording.
The advent of synthetic media creates new vectors across all systems. Liu Xin's case demonstrates intentional deepfake deployment: false videos portrayed the Chinese dissident making fabricated allegations against Canadian politicians. The selection was deliberate, designed to damage diplomatic relations while discrediting Liu. The believable falsehood combined a real identity with manufactured statements. Digital amplification spread the deepfakes across multiple platforms simultaneously. The exploitation continued even after debunking, as the technical sophistication made verification difficult for average viewers.
Across these different contexts, the intent behind disinformation campaigns remains consistent: strategic destruction of individual reputations to achieve broader objectives. What varies is not the fundamental pattern but the specific institutional mechanisms, amplification channels, and narrative themes that prove most effective in different environments.
The Ongoing Struggle for Truth and Dignity
The persistence of targeted disinformation across time, borders, and technologies reveals a disturbing truth: destroying an individual remains an efficient strategy for achieving broader objectives. The analysis shows how the four-phase pattern—strategic selection, deliberate narrative construction, institutional amplification, and sustained exploitation—adapts to different environments while maintaining its effectiveness.
This understanding suggests three critical interventions:
First, early response systems are essential. The damage from targeted disinformation occurs rapidly—within hours for Jewell, minutes for digital targets. Specialized units within platforms, news organizations, and advocacy groups could identify and respond to targeting patterns before narratives cement in public consciousness.
Second, institutional accountability must increase. Media organizations rarely face meaningful consequences for amplifying unverified claims against individuals. Platforms design algorithms that boost sensational accusations without verifying them. Creating stronger incentives for accuracy could shift institutional calculations.
Third, target support frameworks are urgently needed. Individuals facing coordinated campaigns have few resources—legal, psychological, or technical—to defend themselves against institutional forces. Specialized legal aid, reputation management tools, and psychological support services could help balance this asymmetrical conflict.
Success would mean fewer lives destroyed and information ecosystems that better distinguish fact from fabrication. It would mean fewer individuals like Richard Jewell facing deliberate destruction of their reputations and psychological well-being. Success would mean no one ever again having to ask the question that haunted Jewell until his death at age 44:
"Where do I go to get my reputation back?"