Russian Fake-News CopyCop Reboots with 200+ AI-Powered Sites: What It Means for Cybersecurity and Data Protection


In 2025, a notorious Russian fake-news operation known as CopyCop (or Storm-1516) has rapidly expanded, launching at least 200 new websites designed to spread disinformation targeting audiences in the US, France, Canada, and beyond. This network, attributed to former Florida deputy sheriff turned Kremlin-backed disinformation agent John Mark Dougan, combines AI technology with sophisticated political manipulation.

The network’s use of advanced, self-hosted large language models (LLMs) based on Meta’s open-source Llama 3 technology enables CopyCop to churn out a high volume of fabricated news stories with minimal human oversight. These articles, often mimicking local news and fact-checking sites, push pro-Putin narratives and false claims about Ukraine, US politics, and other global affairs.

Key Cybersecurity and Data Protection Lessons

AI as a Double-Edged Sword

While AI improves efficiency and innovation, its misuse in automated content generation poses serious risks. Organisations must monitor for deepfake content and AI-generated misinformation creeping into their communication channels or affecting public perception.

Importance of Source Verification

CopyCop sites impersonate credible outlets, making it critical for users, particularly media outlets, regulators, and educational institutions, to rigorously verify sources before trusting or sharing content.
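As a minimal sketch of what source verification can look like in practice, the function below checks a URL's domain against a vetted allowlist and flags near-miss domains that may be typosquatting a trusted outlet, a common tactic of impersonation networks. The `TRUSTED_DOMAINS` set and the similarity threshold are illustrative assumptions; a real deployment would use a curated list maintained by an editorial or compliance team.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical allowlist of vetted outlets -- illustrative only.
TRUSTED_DOMAINS = {"bbc.co.uk", "reuters.com", "apnews.com"}

def check_source(url: str, similarity_threshold: float = 0.8) -> str:
    """Classify a URL as 'trusted', 'lookalike', or 'unverified'."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    # Flag domains that closely resemble a vetted outlet
    # (possible typosquatting / impersonation).
    for trusted in TRUSTED_DOMAINS:
        if SequenceMatcher(None, domain, trusted).ratio() >= similarity_threshold:
            return "lookalike"
    return "unverified"
```

A heuristic like this is only a first filter; lookalike hits should be escalated to human review rather than blocked or trusted automatically.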

Cyber Threats Extend Beyond IT

Disinformation campaigns like CopyCop’s illustrate how cybersecurity is as much about protecting information integrity as defending against intrusions.

Supply Chain Vigilance

Third-party content and syndicated information can be vectors for disinformation. Organisations should adopt thorough supplier and partner vetting to avoid inadvertent amplification of false narratives.

Regulatory and Infrastructure Support

The weakening of US federal disinformation countermeasures signals the ongoing vulnerability of election security and public discourse. Strengthening legislative and technology frameworks is crucial.

FAQ

Q: What technologies does CopyCop use to generate fake news?

A: CopyCop uses uncensored, self-hosted large language models based on Meta’s Llama 3 for automated, AI-driven article generation.

Q: How can organisations protect against disinformation attacks?

A: Implement strong source verification practices, educate teams on misinformation tactics, deploy content monitoring tools, and maintain robust cybersecurity hygiene.

Q: Why is disinformation considered a cybersecurity threat?

A: It undermines trust, sows division, and manipulates public opinion, impacting political stability, brand reputation, and user safety beyond traditional cyberattacks.

Ready to Strengthen Your Data Protection and Cybersecurity Posture?

📩 Get in touch to learn more about our Virtual DPO and Cybersecurity services and how we can support your organisation.

Learn more about our Data Protection and Cybersecurity Services and how we support UK organisations across various sectors with GRC implementation.


Governance Risk & Compliance Hub LIMITED