During the Cold War, Western export controls attempted to limit Soviet access to advanced technologies. Soviet intelligence responded by investing heavily in technology theft: the KGB (the Soviet security and intelligence service) and GRU (Soviet military intelligence) recruited sources to enable these thefts, established front companies, and ran extensive operations to acquire Western advances in technology deemed essential for Soviet defense industries. This approach saved Moscow significant time and resources while propping up Soviet military capabilities.
When the Soviet Union collapsed, this changed. The West eagerly partnered with the new Russian Federation, selling advanced technologies that enabled resource extraction and enriched a new oligarch class. This honeymoon ended abruptly in 2014, when Putin annexed Crimea, triggering Western sanctions that intensified dramatically after Russia's 2022 invasion of Ukraine. Once again cut off from Western technology, Russia's defense sector and wider economy are now struggling to access everything from nails to advanced electronics and manufacturing equipment. As a result, Moscow has reverted to Cold War tactics, working to circumvent sanctions across the board.
Nowhere is this technology gap more dramatic than in artificial intelligence (AI). The United States leads global AI development, with billions in investment driving capabilities viewed as potentially transformative for societies and labor markets. China follows in close second, maintaining competitive AI capabilities despite US export controls on advanced chips and technologies. Russia, by contrast, is encountering critical AI deficits due to US sanctions. While European nations can access advanced chips from the US and Taiwan, Russia must rely on domestic production that lags 13-fold behind China and 33-fold behind the US. Its AI research capacity shows similar weakness, trailing China and the US by 20-30 times, with its top university ranking only 213th globally in AI research output, whereas leading European institutions such as the École Polytechnique Fédérale de Lausanne (EPFL), Eidgenössische Technische Hochschule (ETH) Zurich, and the University of Edinburgh rank among the top 30.
Despite 'import-replacement' strategies promoted since Crimea's annexation and intensified after the 2022 invasion, Russian AI development faces formidable obstacles: Western sanctions, massive brain drain, chronic underinvestment, and pervasive corruption. As Russia still struggles to produce nails and continues to operate machinery received as reparations from Germany eight decades ago, developing cutting-edge AI domestically remains a fantasy.
Russia's homegrown technology gap - generative AI
Generative AI represents a particularly significant development for intelligence services. It is a long-awaited breakthrough in artificial intelligence, a field born in the 1950s but constrained by hardware limitations until recently. Unlike previous systems focused on pattern recognition and classification, generative AI creates new content - text, images, code, and synthetic media - making it ideal for disinformation campaigns, social engineering, deepfake creation, and automated hacking. Advanced electronics have finally enabled Large Language Models (LLMs) - the systems powering chatbots like ChatGPT that can generate text, write code, and reason through problems after training on massive datasets. Developing and training these models, however, requires enormous computational resources and advanced specialized hardware - particularly high-end graphics processing units (GPUs) costing millions of dollars and consuming massive amounts of electricity. This hardware dependency explains Russia's AI disadvantage: Western sanctions have cut off access to cutting-edge chips, making it impossible to train frontier models domestically. Evidence of this dependency emerged in 2024, when cybersecurity researchers discovered that APT28 (Advanced Persistent Threat 28), a Russian military intelligence unit conducting offensive cyber operations, was relying on API calls (an Application Programming Interface is a set of rules that lets software programs communicate, share data, and perform actions) to a Chinese-developed LLM (Qwen2.5-Coder-32B-Instruct) hosted on Hugging Face's cloud infrastructure for real-time malware command generation.
Despite proclamations of domestically developed and hosted AI capabilities, Russian intelligence operations depend on publicly accessible foreign services, underscoring both the strategic importance of generative AI for offensive cyber operations and Russia's inability to develop equivalent domestic capabilities. Russia's pariah status further compounds the problem - AI development, like all modern science, thrives on international collaboration, but Western researchers increasingly avoid partnerships with Russian institutions. Simultaneously, the Russian state views scientific collaboration as a vector for foreign espionage, a paranoia reflected in the imprisonment of numerous Russian scientists on fabricated espionage charges. Yet once trained elsewhere, models can be accessed remotely or deployed on far less sophisticated hardware - an asymmetry Russian intelligence readily exploits.
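The asymmetry described above is mechanically trivial to exploit: querying a hosted open-weight model amounts to a single authenticated HTTP request carrying a small JSON body. The sketch below illustrates just the shape of such a request, using the widely adopted OpenAI-compatible chat format and the model ID named in the APT28 reporting; the payload fields shown are illustrative assumptions, and no request is actually sent.

```python
import json

# Illustrative sketch only: the entire "integration" with a hosted LLM is
# one HTTPS POST carrying a JSON body like this one. The payload shape
# follows the common OpenAI-compatible chat-completion convention; nothing
# is transmitted here - this merely constructs the request body.

def build_chat_request(prompt: str, max_tokens: int = 200) -> str:
    """Serialize a minimal chat-completion request body as JSON."""
    payload = {
        "model": "Qwen/Qwen2.5-Coder-32B-Instruct",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

print(build_chat_request("Summarize what an API does in one sentence."))
```

That a state intelligence service's tooling can rest on so thin an interface is precisely the point: access requires no supply chain, only an internet connection and an account.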
China, despite facing US sanctions, remains able to develop competitive AI systems, training models like DeepSeek, which through algorithmic innovation achieves competitive performance on less advanced GPUs at a fraction of the training costs of its US equivalents. China's success reflects several advantages: massive state investment in science, technology, and research, including AI development; a large and sophisticated domestic technology sector capable of algorithmic innovation; and stockpiles of advanced chips accumulated before export controls tightened. In contrast, Russia faces stricter export controls, invests far less in technology development, and suffers from deeper technological deficits. This gap has made technology theft not merely convenient but essential. And when the target is digital, theft becomes remarkably easy. Unlike smuggling semiconductors or machine tools, AI models can be downloaded, copied, and deployed without complex supply chains. Russian intelligence services now weaponize generative AI tools across multiple domains.
Generative AI in hybrid warfare
The most visible application for generative AI is disinformation. Russian operations leverage publicly available LLMs to generate content at scale, creating personas, crafting narratives in multiple languages, and flooding social media platforms with manufactured commentary. The volume is so significant that it has begun poisoning AI models themselves: whether intentional or not, Russian disinformation proliferating online contaminates training datasets. When chatbots like ChatGPT and other LLMs train on web data, they inadvertently ingest and reproduce Russian propaganda, serving it to ordinary users as if it were legitimate information. This creates a self-reinforcing cycle where disinformation becomes embedded in the very tools designed to provide knowledge, reinforcing what historian Timothy Snyder calls the 'politics of eternity', a cyclical narrative that erases factual history and traps societies in mythologized past grievances - a new approach in information warfare.
Beyond disinformation, Russian intelligence weaponizes generative AI in cyber operations, integrating it directly into malware for command-and-control functions. Unlike previous cases where hackers used AI to generate phishing emails or assist in coding, these advanced systems integrate AI into the operational phase itself: during active intrusions, the malware queries LLMs in real time to request tailored instructions, receiving custom code that executes immediately and makes dynamic decisions on lateral movement and exfiltration based on the specific victim environment. This dynamic, adaptive approach complicates defense: security tools rely on detecting consistent patterns, but AI-generated malware varies its tactics with each intrusion, evading traditional detection methods.
Russian intelligence also leverages commercial generative AI systems throughout their attack lifecycle. Google reported that Russian APT actors used its Gemini model to research infrastructure and hosting providers, conduct reconnaissance on targets, identify vulnerabilities, develop payloads, and craft malicious scripts with evasion techniques. OpenAI documented Russian state-backed actors using ChatGPT to develop and refine Windows malware, debug code, and establish command-and-control infrastructure. Notably, these operators demonstrated operational security awareness: they deployed temporary email addresses to create ChatGPT accounts and limited each account to single conversations about incremental code improvements, avoiding detection patterns that might flag suspicious activity.
This ecosystem extends beyond intelligence services. Russian cybercriminals, often co-opted into working for intelligence agencies in exchange for protection from prosecution, actively participate in AI-enabled operations. Russian-language criminal forums freely share jailbroken LLMs like FraudGPT and WormGPT designed specifically for malicious code generation, phishing, and evading security controls, blurring the lines between state-sponsored operations and organized crime.
The weaponization likely extends beyond these documented cyber and disinformation operations into targeting, intelligence analysis, and operational planning. While Russian disinformation campaigns have demonstrated AI-powered multilingual content generation and translation, these same capabilities enable intelligence services to process intercepted communications (COMINT) and open-source materials (OSINT) at unprecedented scale, identify patterns across vast datasets, and rapidly synthesize intelligence assessments. Operators can query multiple AI platforms until finding ones without robust military-use restrictions, the same exploitation strategy used for disinformation and cyber operations. This pattern is evidenced by OpenAI and Google's repeated suspension of accounts linked to Russian disinformation campaigns, malware development, and intelligence analysis, confirming both the scale of Russian reliance on commercial AI platforms and the ongoing challenge of controlling access to these dual-use technologies.
Russian intelligence's rapid adoption of generative AI follows an established pattern: these services have long deployed earlier generations of AI based on machine learning and deep learning. The embrace of generative AI represents a natural evolution, motivated by the same quest for productivity gains that has led Western companies to integrate these tools for employee augmentation - except Russian intelligence applies this productivity boost to its everyday work: disinformation generation and offensive cyber operations.
Lessons to learn
This pattern reinforces a fundamental lesson about technology and intelligence: technological leadership matters less than operational effectiveness. Russia's inability to develop cutting-edge AI has not prevented its intelligence services from weaponizing available tools to devastating effect. The Cold War playbook - steal, adapt, deploy - proves remarkably durable in the digital age, perhaps even more effective when the target can be downloaded rather than smuggled. Western policymakers face a stark reality: sanctions can deny Russia the ability to innovate, but they cannot prevent the exploitation of openly available systems. Lacking the legal constraints, public accountability, and oversight that try to keep Western intelligence services on the straight and narrow, their Russian counterparts play fast and loose, deploying whatever technology they can access without ethical review or democratic constraints. As generative AI capabilities advance and new dual-use technologies emerge, Russia will continue replicating this asymmetric approach across domains, consistently lagging in innovation while leading in weaponization. Defenders must focus less on who builds the best capabilities and more on who deploys them most destructively.
