Disinformation linked to Syria's week of violence
📌 What Happened: In early March 2025, disinformation campaigns intensified in Syria, particularly targeting the Alawite minority. False reports and manipulated media circulated on social platforms, alleging attacks by Alawite groups. These deceptive narratives aimed to inflame sectarian tensions and destabilize the interim government led by President Ahmad al-Sharaa. Such disinformation exploited existing societal fractures, leading to widespread unrest.
⚡ The Fallout: The spread of disinformation resulted in violent clashes across Syria's coastal regions. Over 1,000 individuals were killed in the Alawite region due to a crackdown by the new Islamist rulers. Entire families were reportedly executed, and thousands fled their homes seeking safety.
The violence raised international concerns about the interim government's ability to maintain order and protect minority rights. The United Nations condemned the killings and called for investigations.
🔍 The Narrative Behind It: This incident highlights the strategic use of disinformation to exploit sectarian divides and undermine political transitions. By disseminating false information, actors sought to provoke violence, delegitimize the interim government, and hinder efforts toward national unity.
The situation underscores the critical need for robust information verification mechanisms and inclusive governance to counteract the destabilizing effects of such malicious campaigns.
📝 Information Effects Statement Assessment: The deliberate dissemination of disinformation targeting Syria's Alawite minority has exacerbated sectarian violence, undermining the interim government's stability and its efforts toward national reconciliation. Intent is evident from the targeted nature of the campaigns and the damage they caused.
Confidence in this assessment is high, based on consistent reports from credible sources. Alternative hypotheses, such as spontaneous sectarian violence without external provocation, are less supported by the available evidence. The implications include further regional instability and challenges to international diplomatic initiatives.
Key indicators to monitor are the frequency of disinformation campaigns, escalation in sectarian violence, and responses from international actors.
Sources
https://www.dw.com/en/how-disinformation-intensified-syrias-weekend-of-violence/a-71887818
https://www.ungeneva.org/en/news-media/news/2025/03/104164/un-rights-chief-raises-alarm-over-escalating-violence-syria
https://www.securitycouncilreport.org/monthly-forecast/2025-03/syria-77.php
Russia's Pravda network advances disinformation tactics by polluting chatbots
📌 What Happened: A Russian disinformation network, known as "Pravda," has been exploiting AI chatbots to disseminate pro-Kremlin propaganda. By flooding large language models (LLMs) with false narratives, this network aims to manipulate chatbot outputs to align with Russian interests.
A study by NewsGuard analysed 10 leading AI chatbots and found that they repeated falsehoods from the Pravda network over 33% of the time, indicating a significant infiltration of AI systems by disinformation actors.
The Pravda network operates through 150 domains across 49 countries but has strong ties to Russia. It published an estimated 3.6 million articles in 2024 alone, pushing pro-Kremlin narratives.
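To put that publication volume in perspective, a quick back-of-the-envelope calculation using only the figures cited above (articles, domains, and the calendar year; the daily rates are derived, not reported):

```python
# Scale check for the Pravda network's reported output.
# Inputs are the figures cited in the text; everything else is derived.
articles_2024 = 3_600_000   # estimated articles published in 2024
domains = 150               # domains in the network
days = 366                  # 2024 was a leap year

per_day = articles_2024 / days            # network-wide daily output
per_domain_per_day = per_day / domains    # average output per domain

print(f"~{per_day:,.0f} articles/day across the network")
print(f"~{per_domain_per_day:.0f} articles/day per domain")
```

Roughly 9,800 articles a day, or about 66 per domain per day on average, a volume consistent with automated or semi-automated content generation rather than conventional newsrooms.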
⚡ The Fallout: The infiltration of AI chatbots with disinformation has led to the unintended spread of pro-Kremlin narratives, potentially influencing public opinion and undermining trust in AI technologies. Users seeking unbiased information may unknowingly encounter manipulated content, leading to misinformation and skewed perceptions of global events.
This development raises concerns about the vulnerability of AI systems to malicious actors and the broader implications for information integrity in the digital age.
🔍 The Narrative Behind It: This incident underscores the evolving tactics of disinformation campaigns, highlighting how state-sponsored actors adapt to technological advancements by targeting AI systems.
The ability to manipulate AI chatbots reflects a sophisticated approach to influence operations, aiming to control narratives on a global scale. It also emphasizes the critical need for robust safeguards in AI development to prevent the spread of false information.
📝 Information Effects Statement Assessment: The deliberate manipulation of AI chatbots by the Russian "Pravda" network represents a significant advancement in disinformation tactics. This poses new challenges to information integrity and public trust in AI technologies.
It is also assessed that untangling this contamination will be extremely difficult, because the false narratives are now embedded in the data these models draw on.
Confidence in this assessment is high, based on findings from credible research organizations. Alternative hypotheses, such as unintentional biases in AI systems leading to misinformation, are less supported by the evidence. Implications include the need for enhanced security measures in AI development and increased vigilance against state-sponsored disinformation campaigns. Key indicators to monitor are the frequency of AI-generated disinformation incidents and the effectiveness of countermeasures implemented by AI developers.
Sources
https://www.newsguardtech.com/special-reports/generative-ai-models-mimic-russian-disinformation-cite-fake-news/
http://techcrunch.com/2025/03/07/russian-propoganda-is-reportely-influencing-ai-chatbot-results/
https://www.kyivpost.com/post/48608
"Not Our War" protest disguises disinformation
📌 What Happened: On March 9, 2025, Madrid hosted the "Peace and Neutrality, Not Our War" demonstration, advocating for Spain's neutrality in the Ukraine conflict. The demonstration was specifically against "the warmongering drift of the European political class in relation to the war in Ukraine".
Maldita.es found that 82% of the 28 primary social media promoters of the event had previously disseminated pro-Russian disinformation, including falsehoods about the Bucha massacre and fabricated stories about Ukraine's leadership. Additionally, half of these accounts had propagated hoaxes related to Spain's recent natural disasters, such as the alleged concealment of victims in the Bonaire shopping centre car park.
The analysis covered 28 profiles with over 2 million followers, generating 800,000 impressions in 24 hours.
⚡ The Fallout: The involvement of disinformation-affiliated accounts in promoting the demonstration has raised concerns about the authenticity of public sentiment and the potential manipulation of public opinion. It also blurs the line between disinformation and misinformation. Such associations risk undermining the credibility of genuine protest movements and may deepen polarization within Spanish society. The spread of false information about national disasters further erodes public trust and hampers effective crisis management.
91% of accounts spreading pro-Russian disinformation had promoted more than one hoax, with 39% amplifying at least half of the sample hoaxes.
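The percentages above can be converted into rough account counts. This is a sketch under one assumption of ours, not stated in the report: that each percentage applies to the 28-profile sample Maldita.es analysed.

```python
# Rough conversion of the Maldita.es percentages into account counts.
# Assumption (ours, not the report's): percentages apply to the
# 28-profile sample analysed.
sample = 28

# 82% of the 28 promoters had a history of pro-Russian disinformation.
disinfo_history = round(0.82 * sample)

# 91% of those accounts had promoted more than one hoax.
multi_hoax = round(0.91 * disinfo_history)

# 39% of those accounts amplified at least half of the sample hoaxes.
half_sample = round(0.39 * disinfo_history)

print(disinfo_history, multi_hoax, half_sample)  # approx. 23, 21, 9 accounts
```

Under that assumption, roughly two dozen accounts with prior disinformation activity drove the campaign, which is consistent with the small-core, high-reach pattern described in the analysis.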
🔍 The Narrative Behind It: This situation highlights the intersection of domestic social issues with broader geopolitical disinformation strategies. The deliberate amplification of certain narratives by accounts with known disinformation histories suggests coordinated efforts to influence public perception and sow discord. It underscores the necessity for critical media literacy and robust fact-checking mechanisms to navigate the complex information landscape.
📝 Information Effects Statement Assessment: The analysis indicates a deliberate effort to leverage social media platforms for disseminating disinformation under the guise of legitimate protest. This potentially skews public perception and deepens societal divisions.
Confidence in this assessment is moderate, based on the correlation between the demonstration's promoters and known disinformation activity. The case also blurs the line between disinformation and misinformation, given the difficulty of assessing intent at scale.
Alternative explanations, such as coincidental associations without strategic intent, are less supported by the evidence. The implications involve challenges for public discourse integrity and the need for enhanced information verification practices. Key indicators to monitor include patterns of information dissemination by identified accounts and public responses to disinformation mitigation efforts.
Sources
https://maldita.es/investigaciones/20250310/promoters-demonstration-spain-pro-russian-disinformation/
https://global.espreso.tv/russia-fake-news-top-20-spanish-language-outlets-that-are-spreading-russian-propaganda-and-kremlin-friendly-narratives
🗣️ Let’s Talk—What Are You Seeing?
📩 Reply to this email or drop a comment.
🔗 Not subscribed yet? It is only a click away.