In 2025, the notorious Russian fake-news operation known as CopyCop (also tracked as Storm-1516) rapidly expanded, launching at least 200 new websites designed to spread disinformation targeting audiences in the US, France, Canada, and beyond. The network, attributed to John Mark Dougan, a former Florida deputy sheriff turned Kremlin-backed disinformation agent, combines AI technology with sophisticated political manipulation.
The network’s use of advanced, self-hosted large language models (LLMs) based on Meta’s openly released Llama 3 enables CopyCop to churn out a high volume of fabricated news stories with minimal human oversight. These articles, often mimicking local news and fact-checking sites, push pro-Putin narratives and false claims about Ukraine, US politics, and other global affairs.
While AI improves efficiency and innovation, its misuse in automated content generation poses serious risks. Organisations must monitor for deepfake content and AI-generated misinformation creeping into their communication channels or affecting public perception.
CopyCop sites impersonate credible outlets, making it critical for users, particularly media outlets, regulators, and educational institutions, to rigorously verify sources before trusting or sharing content.
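One lightweight layer of source verification is automatically flagging domains that closely resemble trusted outlets, since impersonation sites often rely on lookalike names. The sketch below is illustrative only; the allowlist and similarity threshold are assumptions, not a vetted detection rule:

```python
from urllib.parse import urlparse
from difflib import SequenceMatcher

# Hypothetical allowlist of verified outlet domains; a real deployment
# would maintain this list through an editorial vetting process.
TRUSTED_OUTLETS = {"bbc.co.uk", "reuters.com", "apnews.com"}

def check_source(url: str, trusted=TRUSTED_OUTLETS, threshold: float = 0.8) -> str:
    """Classify a URL's domain as trusted, a lookalike, or unknown."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    if domain in trusted:
        return "trusted"
    # Flag near-matches to trusted domains as possible impersonation.
    for known in trusted:
        if SequenceMatcher(None, domain, known).ratio() >= threshold:
            return f"lookalike of {known}"
    return "unknown"
```

A check like this catches only the crudest typosquats; it complements, rather than replaces, editorial verification and established fact-checking practices.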
Disinformation campaigns like CopyCop’s illustrate how cybersecurity is as much about protecting information integrity as defending against intrusions.
Third-party content and syndicated information can be vectors for disinformation. Organisations should adopt thorough supplier and partner vetting to avoid inadvertent amplification of false narratives.
The weakening of US federal disinformation countermeasures signals the ongoing vulnerability of election security and public discourse. Strengthening legislative and technology frameworks is crucial.
Q: What technology powers CopyCop’s content generation?
A: CopyCop uses uncensored, self-hosted large language models based on Meta’s Llama 3 for automated, AI-driven article generation.
Q: How can organisations defend against AI-driven disinformation?
A: Implement strong source verification practices, educate teams on misinformation tactics, deploy content monitoring tools, and maintain robust cybersecurity hygiene.
Q: Why does disinformation matter to cybersecurity?
A: It undermines trust, sows division, and manipulates public opinion, impacting political stability, brand reputation, and user safety beyond traditional cyberattacks.
📩 Get in touch to learn more about our Virtual DPO and Cybersecurity services and how we can support your organisation.
Learn more about our Data Protection and Cybersecurity Services and how we support UK organisations across various sectors with GRC implementation.