If I can get the information faster and more efficiently with AI, is that really a bad thing?
In national security, cyber defense, and intelligence work, speed and accuracy aren’t luxuries; they’re requirements. The faster an analyst can detect, assess, and act on information, the more resilient our posture becomes. So it’s worth asking: if tools like AI can help us get to those insights faster, does it matter how we got there?
This isn’t just a classroom debate anymore. It’s a matter of operational advantage, and one I’m afraid adversarial states may be addressing more quickly than we are.
Intelligence Work is Changing
In the traditional model, analysts were trained to research exhaustively and reason independently. Today, the volume of data is overwhelming, the velocity of conflict is increasing, and the information space is more contested than ever. Memorizing doctrine or manually parsing SIGINT is no longer enough on its own.
AI changes the workflow. It doesn’t remove critical thinking; it simply relocates it. Instead of spending hours searching for the right piece of intel or policy precedent, analysts can use AI to surface patterns, contextualize alerts, and propose early assessments. That frees up cognitive space to focus on what it means and what to do next.
Another key shift in modern intelligence work is the sheer volume of internally generated reporting, ranging from post-incident summaries and investigative writeups to tactical threat advisories. Over time, these internal repositories have grown so vast that referencing older yet still-relevant documents in future reporting becomes a major challenge. Analysts often know the insight exists somewhere in the backlog, but tracking it down quickly, especially under time pressure, is inefficient or even infeasible.
This is where private, domain-specific AI models trained exclusively on an organization’s own corpus can change the game. By indexing historical reports and enabling semantic search across them, these models can retrieve and summarize relevant findings in seconds. For example, if a threat actor resurfaces after a long dormancy, the AI can instantly surface prior incidents, TTPs, and internal commentary, giving analysts a head start and ensuring continuity across time. Rather than reinventing the wheel, intelligence teams can build on their own institutional knowledge more effectively. While some organizations may already employ this functionality, I believe most companies and agencies have yet to adopt it at scale, at least for now.
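For a concrete feel of how that retrieval layer works, here’s a minimal sketch using the open-source sentence-transformers library: embed the corpus once, then match analyst queries by semantic similarity rather than keywords. The model name, sample reports, and query are illustrative placeholders, not a production pipeline.

```python
# Minimal semantic-search sketch over an internal report corpus.
# Assumes the sentence-transformers library; the model, documents,
# and query below are illustrative placeholders only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Stand-ins for an organization's historical reporting.
reports = [
    "2022-04 incident: actor used AitM phishing against finance staff.",
    "2023-01 advisory: credential-stuffing spike across VPN gateways.",
    "2021-11 writeup: wiper malware staged via a compromised vendor.",
]
report_vecs = model.encode(reports, convert_to_tensor=True)

def search(query: str, top_k: int = 2):
    """Return the top_k reports most semantically similar to the query."""
    query_vec = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_vec, report_vecs, top_k=top_k)[0]
    return [(reports[h["corpus_id"]], round(h["score"], 3)) for h in hits]

# A query phrased nothing like the stored text still surfaces
# the relevant prior incident.
print(search("has this actor done token-theft phishing before?"))
```

The same pattern scales to millions of documents by swapping the in-memory list for a vector index, which is where the “private, domain-specific” part matters: the corpus and the index never leave the organization’s boundary.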
The Real Threat Isn’t AI, It’s Passive Use
Threat actors are already using AI to generate disinformation, automate phishing, and map attack surfaces. If defenders don’t leverage the same tools, they fall behind.
The real concern isn’t that AI makes us weaker thinkers. It’s that some people will use it to skip thinking entirely. I wouldn’t say that’s the AI’s fault; it’s the user’s intent. A disengaged mind won’t be saved or spoiled by technology. A sharp one, however, can be enhanced.
Strategic Implications
In a world contested both geopolitically and informationally, the competitive edge doesn’t go to the one who remembers the most. It goes to the one who can interrogate input, synthesize perspectives, and act decisively. AI, used correctly, accelerates that process.
National security professionals, educators, and leadership teams should embrace AI not as a crutch, but as a force multiplier. Train people not just to consume answers but to pressure-test them. To ask better questions. To turn good input into greater output.
Final Thought
Whether you’re an analyst, policymaker, or digital defender, the real skill today isn’t thinking in isolation; it’s knowing how to think with assistance. The people who learn that now will be the ones driving strategy tomorrow.
Microsoft Threat Intelligence has surfaced a new Russia-affiliated cyber actor: Void Blizzard, also tracked as LAUNDRY BEAR. Active since at least April 2024, this group is focused on long-term espionage targeting sectors critical to Western governments, infrastructure, and policy-making.
Void Blizzard is not just another APT clone or cluster moniker. It represents an evolution in operational flexibility and tradecraft, shifting from relying on stolen credentials bought off the dark web to more aggressive adversary-in-the-middle (AitM) phishing campaigns. These newer efforts leverage typosquatted domains mimicking Microsoft Entra portals to harvest authentication tokens and compromise enterprise identities.
Target Profile
Void Blizzard’s campaign focus aligns closely with Russian state priorities. It has gone after targets in:
Defense and government agencies
Transportation and healthcare infrastructure
NGOs, education institutions, and intergovernmental organizations
Media and IT service providers
While some activity overlaps with known Russian actors like APT29, Void Blizzard appears to operate as a distinct cell, coordinating within a larger ecosystem of state-sponsored espionage.
Notable Tactics
Credential-based access remains a preferred entry point, but the shift to AitM phishing is a signal of increasing confidence and offensive posture.
Microsoft Entra impersonation suggests a deliberate focus on trusted identity systems, highlighting how fragile authentication flows can be under targeted pressure.
Operational consistency across NATO states and Ukraine further indicates strategic alignment with geopolitical goals, not just opportunistic targeting.
Analyst Comments
If you’re in defense, energy, public health, or civil society work, Void Blizzard’s tradecraft should raise alarm bells. Organizations should be:
Auditing Entra ID and authentication logs for anomalies tied to session replay or suspicious SSO activity
Deploying phishing-resistant MFA such as FIDO2 keys
Training users to identify lookalike URLs and domain spoofing, particularly in password reset or login prompts (see the detection sketch after this list)
Tracking overlaps with other Russian campaigns, especially Star Blizzard and Midnight Blizzard, to catch infrastructure reuse or strategic convergence
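On the lookalike-URL point, here is a hedged sketch of one simple detection angle: score observed domains against a small allowlist using edit-distance similarity. The domains and threshold below are illustrative assumptions; real tooling would layer in homoglyph checks, certificate transparency data, and domain-age signals.

```python
# Toy typosquat detector: flag domains that are near-but-not-exact
# matches to trusted ones. Allowlist, samples, and the 0.75 threshold
# are illustrative assumptions, not tuned detection logic.
from difflib import SequenceMatcher

LEGIT = ["login.microsoftonline.com", "portal.azure.com", "entra.microsoft.com"]

def closest_legit(domain: str) -> tuple[str, float]:
    """Return the most similar trusted domain and the similarity ratio."""
    best = max(LEGIT, key=lambda d: SequenceMatcher(None, domain, d).ratio())
    return best, SequenceMatcher(None, domain, best).ratio()

for seen in ["login.micros0ftonline.com", "entra-microsoft.com", "example.org"]:
    target, score = closest_legit(seen)
    # Exact matches are fine; near misses are the typosquat signal.
    if 0.75 <= score < 1.0:
        print(f"SUSPECT: {seen} resembles {target} (ratio={score:.2f})")
```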
Final Thoughts
Void Blizzard is not flashy, but it is serious. It demonstrates how Russia continues to evolve its cyber espionage toolkit beneath the noise of more destructive attacks. In an era of hybrid conflict, groups like Void Blizzard are the quiet operatives laying groundwork for geopolitical advantage. They definitely won’t be the last.
Disclaimer: This research uses data derived from open-source materials like public intelligence assessments, government publications, and think tank reports. This report is based solely on my personal insights and independent analysis. It does not contain any sensitive or classified information and does not reflect the views of my employer. This report’s purpose is to serve as an exercise in analysis and critical thinking.
Introduction
Since 9/11, the global terrorism threat landscape has expanded from traditional kinetic attacks to include cyber approaches. Terrorist groups like Al-Qaeda, ISIS, Hamas, and Hezbollah have increasingly adopted digital tools for propaganda, recruitment, surveillance, and rudimentary cyber operations. This shift has pressured counterterrorism (CT) strategies to evolve, integrating cybersecurity, intelligence, and offensive capabilities to address both physical and digital threats.
Evolution of Terrorist Cyber Capabilities
In the early 2000s, jihadist groups used the internet mainly for communications and propaganda. By 2014, ISIS had transformed its online presence by actively exploiting social media and encrypted messaging apps to recruit followers, spread propaganda, and coordinate activity beyond traditional battlefields. Though their cyber skills remained limited, some supporters engaged in doxing (public release of personal information), defacements, and minor breaches. A notable case involved a Kosovo hacker passing stolen U.S. personnel data to ISIS [1]. More recently, terrorist networks have begun experimenting with AI tools for media production, reconnaissance, recruitment, and influence operations.
Groups like ISIS-K, Hamas, and Hezbollah have explored AI-generated videos and deepfakes to amplify their messaging. Hamas has also used fake dating apps to hack phones, and Hezbollah has engaged in cyber espionage aligned with Iranian interests. These adaptations primarily support propaganda and recruitment, not large-scale cyberattacks.
Traditional vs Cyber Terrorism
Cyber capabilities have not replaced traditional terrorism but serve as force multipliers. Cyber tools are used to support kinetic attacks, plan operations, and magnify impact. Examples include cyber-assisted target identification and using drones for surveillance or attacks. Analysts conclude that terrorists aim to pair physical destruction with digital disruption. These tactics are not unique to Middle Eastern or Islamist extremist groups; modern Russian intelligence employs them in support of its war against Ukraine.
Counterterrorism Strategy Shifts
Cybersecurity integration: Governments now treat cyber as central to CT. Coordination between state agencies and the private sector protects critical infrastructure (ISACs, CISA, InfraGard, etc.).
Digital Intelligence and Surveillance: Intel agencies use AI and data analytics to monitor online radicalization and terrorist planning. Tools flag extremist content and behaviors on encrypted platforms (a toy sketch of the flagging idea follows this list).
Offensive Cyber Operations: States have launched direct cyberattacks on terrorist infrastructure. Operation Glowing Symphony by US Cyber Command disrupted ISIS media operations [2].
Online Radicalization Prevention: Governments promote alternative narratives and partner with communities to counter online extremism.
Infrastructure Protection and Crisis Response: CT planning now includes simulations of cyber-physical attacks. Agencies collaborate to ensure emergency response continuity.
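As promised above, a toy sketch of the content-flagging idea: a bag-of-words classifier that scores posts for a human review queue. The four training examples are synthetic and far too few to be meaningful; real systems use multilingual models, network signals, and analyst adjudication of every flag.

```python
# Toy content-flagging sketch: TF-IDF features plus logistic
# regression, scoring posts for human review. The tiny synthetic
# dataset exists only to make the code runnable.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "join our cause and strike the enemy",          # flag for review
    "martyrdom is the path forward, brothers",      # flag for review
    "community picnic this saturday at the park",   # benign
    "city council debates new bus routes",          # benign
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

# Scores feed a review queue; they should never trigger automated action.
for post in ["strike planned against the enemy", "picnic at the park"]:
    prob = clf.predict_proba([post])[0, 1]
    print(f"review-priority={prob:.2f}  {post!r}")
```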
Persistent Challenges
One of the primary challenges in countering cyber-assisted terrorism is actor attribution. In cyberspace, it is often difficult to determine who is behind an attack, especially when threat actors use anonymization techniques or false flag operations. A disruption to infrastructure or a breach of data could originate from a lone hacker, a terrorist cell, or a hostile state, complicating response strategies and legal recourse. This ambiguity forces intelligence agencies to closely examine digital footprints, motives, and affiliations before responding, often in real time.
Resource limitations and skill gaps also slow down effective CT operations in cyber. Traditional law enforcement and CT units often lack the deep technical expertise needed to triage malware, decrypt communications, or conduct forensics on seized devices. Recruiting and retaining cyber talent remains difficult for public agencies, especially as adversaries continue to innovate rapidly using widely available technology. The widespread use of encrypted communication platforms like Telegram and Signal compounds the problem, allowing terrorists to organize and recruit while remaining hidden from surveillance.
Another pressing issue is the overwhelming volume of data. Every day, analysts must sift through massive amounts of online content to detect meaningful threats. AI tools can assist but are prone to false positives and blind spots, sometimes flagging harmless content or missing cleverly disguised plots. Legal and jurisdictional barriers further complicate enforcement efforts, especially when attackers operate across multiple countries. Existing laws are often outdated or inconsistent with the pace of modern cyber threats. Finally, terrorist groups remain highly adaptive, quickly shifting tactics, platforms, and tools in response to enforcement measures. This constant innovation challenges even the most capable security agencies, requiring them to remain agile and proactive in their strategies.
Conclusion/Policy Implications
Cyberterrorism has not replaced traditional terrorism but increasingly complements it. CT efforts now require a holistic approach integrating digital capabilities with conventional methods. Policymakers should focus on:
Cross-sector partnerships
Legal modernization
Investment in talent and tech
Infrastructure resilience
The post-9/11 period demonstrates that success in CT depends on anticipating how terrorists will exploit emerging technologies and being ready to disrupt both their online and offline operations.
Rethinking Russian Influence Operations in the Age of Weaponized Visibility
Earlier this month, Sweden’s Psychological Defence Agency and Lund University released Beyond Operation Doppelgänger, a 200-page deep dive into the capabilities of Russia’s Social Design Agency (SDA). While most public reporting has focused on the now-infamous mirror sites used to spread fake news, this report makes a clear case that those cloned websites were just one piece of a much broader, and more enduring, strategy.
According to the authors, SDA isn’t some freelance influence shop. It’s part of a well-funded, Kremlin-directed propaganda network that merges digital marketing tactics with political messaging, psychological ops, and elements of classic espionage. This ecosystem is not designed to convince people of a particular narrative. It’s built to persist, to stay present, and to dominate the conversation. Success isn’t measured by belief, it’s measured by visibility.
What the Report Really Tells Us
Doppelgänger was not the operation, it was a delivery method
Those cloned news sites? One tactic among many. The report makes it clear that SDA’s influence work goes far beyond any one campaign. Doppelgänger was part of a series of coordinated “counter-campaigns” aimed at Europe, Ukraine, the United States, and beyond.
SDA uses attention, not persuasion, to justify effectiveness
The goal isn’t to get people to agree, it’s to make sure Russian messaging shows up in the conversation. If a piece of content gets fact-checked, reported on, or criticized, that’s considered a win. The more visibility these campaigns get, the more SDA is rewarded by its Kremlin backers.
The leaks could have been deliberate
One of the more provocative angles in the report is the suggestion that some of the leaked SDA documents might have been released on purpose. Whether the goal was to overload researchers, build internal prestige, or tie up resources while new infrastructure was being built, the leak may have been a calculated move.
Narratives are interchangeable, presence is the goal
SDA isn’t wedded to any particular storyline. The messages are interchangeable. If a campaign, whether it’s a meme, a bot swarm, or a fake news drop, gets traction, it’s scaled up. If it doesn’t, it’s dropped. The point is to flood the zone, not to persuade.
Some Questions Worth Asking
This report calls into question a lot of our assumptions about what influence operations are trying to do—and how we should be responding. A few questions that come to mind:
If visibility is the goal, not the risk, how do defenders responsibly counter disinformation without amplifying it?
Are we unintentionally helping adversaries by publicizing their operations too effectively?
Where is the line between countering propaganda and participating in its feedback loop?
Are our current frameworks designed to deal with long-term influence ecosystems or only isolated events?
Are we seeing the emergence of a disinformation-industrial complex, where performance metrics and funding cycles shape how propaganda is created and sustained?
Beyond Operation Doppelgänger doesn’t just describe a disinformation campaign, it maps out a system that adapts, exploits visibility, and treats media attention, sanctions, and cyber takedowns as signals of progress.
It’s not about changing minds. It’s about owning space…
When you think of cyber warfare, you often imagine digital equivalents of tanks, missiles, and grand battles between major powers. In reality, however, the cyber conflict we see today looks less like Normandy and more like a slow-burning insurgency.
State-sponsored actors, whether they be from Russia, China, Iran, or North Korea, rarely go toe-to-toe with superior Western cyber defenses in a direct, conventional fight. Instead, they operate in the shadows, using asymmetric tactics meant for low-cost, high-yield disruption. Their methods resemble the playbook of guerrilla fighters throughout history: blend in, strike vulnerable targets, and exploit the defender’s size and rigidity.
In today’s post, I’ll unpack how these cyber operations mirror classic guerrilla warfare and why the analogy is both interesting and important for defenders.
Guerrilla Warfare 101
It’s all about fighting smarter, not harder. It’s the art of the weak harassing the strong. Following the great stalemates of trench warfare in WWI, insurgent groups have leveraged mobility, surprise, and intimate knowledge of the terrain to outmaneuver larger, better-equipped militaries.
Characteristics of guerrilla warfare include:
Asymmetry: Small groups using unconventional methods to challenge superior foes.
Deniability: Fighters blend into civilian populations, making attribution and retaliation tougher.
Psychological ops: Targeting public morale and spreading misinformation.
Terrain advantage: Mastery of local geography to evade and frustrate conventional forces.
Sounds familiar, right? Swap out “fighters” for “APT groups”, “civilian populations” for “cybercriminal groups”, and “terrain” for “network infrastructure”, and you’ve got a pretty solid picture of today’s cyber landscape.
Guerrilla Tactics in Action: State-Sponsored Cyber Threats
Asymmetry in the Digital Domain
State-sponsored groups like Russia’s APT28 and APT29 or North Korea’s Lazarus Group rarely match US or allied cyber capabilities head-on. They exploit the cost asymmetry. For a few thousand dollars in phishing kits, compromised VPNs, leased botnets, or commercial malware, they can inflict millions in damages, steal sensitive data, or shape public narratives. The defender’s dilemma? Defending every endpoint and supply chain vector costs exponentially more than launching simple, repeatable attacks.
Deniability and Proxy Warfare
Just as guerrillas hide among civilians, cyber operators mask their identities using compromised infrastructure, false flags, or work contracted out to cybercriminal elements and impressionable anarchists, as with Russian GRU Unit 29155, which incites anarchy and sabotage among Ukrainian youth through various Telegram channels. North Korea’s use of third-party IT freelancers to infiltrate Western companies is another prime example. This plausible deniability muddies attribution, delays response, and allows our adversaries to operate with relative impunity.
Hit-and-Run in Cyberspace
Watering hole attacks, defacements, and smash-and-grab data theft mirror the guerrilla’s ambush. Breach a vulnerable vendor, pivot to the target, exfiltrate quickly, and vanish while defenders are left scrambling. These aren’t prolonged sieges, they’re opportunistic raids meant to probe weaknesses and sow chaos.
Information Warfare as PsyOps
Iranian and Russian cyber units have elevated disinformation to an art form. Influence operations targeting elections, societal divisions, or corporate reputations function as digital equivalents of guerrilla psychological operations. The goal isn’t always tangible damage; sometimes it’s just to erode trust and create confusion or panic.
Mastering the Digital Terrain
In guerrilla conflicts, knowing the terrain is everything. In cyberspace, that “terrain” includes compromised networks, third-party vendors, poorly monitored endpoints, and the dark web. State-sponsored groups map this terrain meticulously, identifying soft targets and exploiting global infrastructure for cover.
Some Case Studies: Cyber Guerrilla Warfare in Practice
In 2025, there are plenty of examples to pull from, but some of the more recent, notable cases include:
Russia’s FSB-Linked COLDRIVER/Callisto/Star Blizzard
Operating between cyberespionage and influence operations, this group exemplifies cyber guerrilla tactics. Recent reporting detailing their persistent targeting of Western NGOs, think tanks, and academia reflects a strategy of sustained harassment. They focus on undermining soft targets, shaping narratives, and stealing sensitive (not always classified) information that feeds broader geopolitical campaigns.
North Korea’s IT Worker Fraud
The DPRK has combined traditional APT activities with an insurgent-style infiltration campaign: fraudulent IT workers securing remote jobs at Western firms. Once inside, these operatives act as insider threats with direct access to networks, sidestepping conventional perimeter defenses. This tactic parallels how insurgents embed within civilian populations to evade detection and execute attacks from within. In this case, funding the regime’s weapons programs, among other motivations.
Iran’s APT33/35/42
Iranian threat groups excel at opportunistic targeting, often focusing on vulnerable sectors like oil & gas, transportation, and academia. Their attacks prioritize disruption, espionage, and influence, mirroring guerrilla strategies of infrastructure sabotage and psychological impact over decisive victories.
Volt Typhoon: An Occupation Model
China’s Volt Typhoon operations showcase a more sophisticated “occupation” model. Rather than smash-and-grab, their campaigns are long-term entrenchments in U.S. critical infrastructure, designed for persistent access and latent sabotage potential. This is less hit-and-run, more like guerrilla fighters establishing fortified zones in contested territory.
Why the Guerrilla Warfare Analogy Matters
Understanding cyber threats through the lens of guerrilla warfare reframes how we think about defense and deterrence.
Misaligned Defenses: Conventional cyber defenses are analogous to defending cities with large armies while insurgents roam freely in the countryside. Static defenses are insufficient against agile, persistent adversaries.
Deterrence is Harder: You can deter a nation’s military with superior firepower. Deterring a deniable, decentralized cyber guerrilla force is a different challenge.
Hybrid Warfare Context: These cyber guerrilla tactics don’t exist in a vacuum. They’re part of broader hybrid strategies, supporting kinetic operations, diplomatic pressure, or internal destabilization efforts.
Mitigation?
This is a tough one, as mitigation against guerrilla tactics requires more than simply building bigger walls or buying more security tools. Some things worth considering:
Persistent threat hunting
Implement honeypots (a minimal sketch follows this list)
Coordination/collaboration across government, private sector, and civil society
Publicly naming and sanctioning enablers
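For the honeypot item, here’s a minimal low-interaction sketch: listen on an unused port where no legitimate traffic should ever arrive, and log every touch. The port and decoy banner are arbitrary choices for illustration; a production honeypot needs network isolation, rate limiting, and careful log handling.

```python
# Minimal low-interaction honeypot: any connection to this decoy
# port is suspicious by definition. Port and banner are illustrative.
import datetime
import socket

HOST, PORT = "0.0.0.0", 2222  # hypothetical decoy "SSH" port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    print(f"honeypot listening on {HOST}:{PORT}")
    while True:
        conn, addr = srv.accept()
        with conn:
            # Log the source; the connection attempt itself is the signal.
            stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
            print(f"{stamp} connection attempt from {addr[0]}:{addr[1]}")
            conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")  # decoy banner
```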
Tactics Snapshot
Phishing (social engineering)
Credential harvesting (supply chain raids)
Watering hole attacks (sabotaged infrastructure)
Supply chain subversion (indirect targeting)
Wiper malware (destructive sabotage)
Conclusion
Guerrilla warfare didn’t disappear with the end of colonial insurgencies or Cold War proxy wars. It evolved and found a new battleground on the web. Today’s state-sponsored cyber operations mirror the asymmetric tactics of historical insurgencies in that they’re cheap, deniable, persistent, and designed to frustrate superior foes. For defenders like us, recognizing this parallel is less academic and more essential for adapting strategy, resource allocation, and useful threat modeling.
The digital guerrilla is no longer just a rebel in the jungle. They’re a sanctioned asset, behind a keyboard, operating in the blurred space between espionage, sabotage, and information warfare.
Between January and April 2025, the suspected Russian FSB-linked threat group COLDRIVER delivered LOSTKEYS malware using a fake CAPTCHA to target Western officials, journalists, think tanks, and NGOs.
Russian Threat Group COLDRIVER Deploys LOSTKEYS Malware Targeting Western Entities
The Russian state-sponsored threat group COLDRIVER (aka UNC4057, Callisto, and Star Blizzard) has expanded its cyberespionage toolkit with the addition of a new malware strain dubbed LOSTKEYS. According to Google’s Threat Intelligence Group (GTIG), this development marks a significant evolution from COLDRIVER’s usual credential phishing tactics to more sophisticated malware development.
Evolution of Tactics
Historically, COLDRIVER focused on credential phishing campaigns targeting high-profile individuals in NATO governments, NGOs, and former intelligence and diplomatic officials. The threat group’s primary objective has been intelligence collection in support of Russian strategic interests. More recent activity observed in early 2025 indicates a shift toward deploying custom malware to further enhance its data exfiltration capabilities.
Introduction of LOSTKEYS
LOSTKEYS was designed to steal files from predefined directories and file types, as well as send system information and running processes back to the threat actors. The malware is delivered through a multi-stage infection chain that begins with a lure website featuring a fake CAPTCHA. Once a target interacts with the CAPTCHA, they’re then prompted to execute a PowerShell script, starting the malware installation process. This method, known as “ClickFix”, involves socially engineering targets to copy, paste, and execute malicious PowerShell commands. The technique has been gaining increased notoriety as various other threat actors have begun leveraging it.
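Because ClickFix ultimately requires the victim to paste and run a PowerShell one-liner, process-creation telemetry is a natural hunting surface. The sketch below scans a hypothetical log export for command-line patterns these lures commonly produce; the file name and regex list are assumptions and nowhere near a complete signature set.

```python
# Hedged ClickFix hunting sketch: flag PowerShell invocations whose
# command lines match patterns common to paste-and-run lures. The
# log file name and patterns are illustrative assumptions.
import re

SUSPICIOUS = [
    r"-enc(odedcommand)?\s",       # base64-encoded payloads
    r"-w(indowstyle)?\s+hidden",   # hide the console from the victim
    r"iex\s*\(",                   # Invoke-Expression of fetched content
    r"downloadstring|invoke-webrequest",
]
pattern = re.compile("|".join(SUSPICIOUS), re.IGNORECASE)

# Hypothetical export of process-creation events, one per line.
with open("process_creation_events.log", encoding="utf-8") as fh:
    for line in fh:
        if "powershell" in line.lower() and pattern.search(line):
            print("REVIEW:", line.strip())
```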
Targets and Objectives
COLDRIVER’s recent campaigns have targeted current and former advisors to Western governments, militaries, journalists, think tanks, NGOs, and individuals connected to Ukraine. The group’s operations aim to gather intelligence that aligns with Russian strategic interests. In some cases, COLDRIVER has been linked to hack-and-leak campaigns targeting officials in the UK and NGOs.
Analyst Comments
The evolution of COLDRIVER from basic credential phishing to deploying custom malware like LOSTKEYS emphasizes a broader trend in Russian cyberespionage: the increasing willingness to burn bespoke tools in pursuit of higher value intelligence collection. The shift seems to suggest mounting pressure on Russian intelligence services to deliver actionable insights amid ongoing geopolitical tensions, particularly related to NATO support for Ukraine and Western policy responses.
Through their targeting of advisors, think tanks, and NGOs, COLDRIVER is focusing on influencers and policy shapers, not just government officials. This indicates a strategic effort to preempt or shape foreign policy decisions. Their adoption of techniques like ClickFix also signals an emphasis on user-driven execution, a smart bypass of traditional email defenses and endpoint controls. As we’ve seen in the past, employees are often the weakest link in an organization’s security posture.
For us network defenders, this campaign highlights the importance of defense-in-depth strategies, user education (a must), and proactive threat hunting. The fact that COLDRIVER now deploys malware directly onto victim systems raises the stakes for organizations previously focused only on account compromise prevention.
In short, COLDRIVER’s operational pivot is just another reminder that cyberespionage groups adapt faster than most defensive postures. Organizations in policy-adjacent sectors should assume they are in the targeting scope, even if they don’t handle classified information, and adjust security postures accordingly.
Disclaimer: This research project uses data derived from open-source materials like public intelligence assessments, government publications, and think tank reports. This report is based solely on my personal insights, hypothetical scenarios, and independent analysis. It does not contain any sensitive or classified information and does not reflect the views of my employer. This report’s purpose is to serve as an exercise in research, analysis, and critical thinking.
Purpose: This paper argues for the reframing of AI as a strategic tool, not an existential threat, and outlines how US defense education institutions must evolve to prepare future leaders for operationalizing AI in national security environments.
Executive Summary: Artificial intelligence (AI) is transforming the strategic, operational, and educational dimensions of national defense. While public discourse often gravitates toward extremes, the reality is more pragmatic: AI is becoming foundational infrastructure in modern warfare. As such, the Department of Defense (DoD) and its professional military education (PME) institutions must adapt to cultivate leaders who understand, integrate, and govern AI systems effectively.
This paper argues for a shift in how AI is conceptualized within defense circles. Drawing historical parallels to the role of ENIAC during World War II, I contend that AI should be seen less as an independent cognitive entity and more as a strategic enabler – one that augments decision-making processes across all echelons of command. The report outlines current defense applications of AI, analyzes institutional barriers to integration within PME, identifies governance challenges, and positions AI literacy as a cornerstone of future competitive advantage.
Key recommendations include embedding AI case studies and simulations into curricula, developing interagency and industry-academic partnerships, and enforcing principles of explainability and human-in-the-loop oversight. Ultimately, preparing warfighters and strategists for the AI era requires a comprehensive modernization of defense education grounded in technical fluency, ethical judgment, and operational relevance.
Introduction: AI has rapidly moved from theoretical construct to operational reality. Once confined to academic laboratories and speculative fiction, AI now underpins critical functions in logistics, intelligence, command-and-control (C2), and cybersecurity. As the US and its adversaries invest heavily in AI for strategic advantage, the defense community must make a pivotal choice: will AI be treated as a black box novelty managed by contractors, or as a core component of national defense doctrine managed by trained leaders?
This paper adopts a strategic lens to answer this question, using the legacy of early computing – particularly ENIAC’s wartime role – as a historical analogue. Just as ENIAC revolutionized how ballistic trajectories were computed, enabling faster and more precise battlefield decisions, AI today offers unprecedented opportunities to extend cognitive reach. But the key to unlocking this potential lies not just in technology, but in human leadership.
The central thesis is that AI must be embedded into defense education as both subject and tool. PME institutions need to produce not only tacticians and strategists, but also technologically literate leaders who understand AI’s strengths, limitations, and ethical implications. By framing AI as infrastructure, we position it where it belongs: at the heart of 21st-century defense readiness.
The sections that follow explore the evolution of AI narratives, real-world applications in defense, barriers to educational integration, risk governance, and the implications of strategic competition in the age of AI.
Public Fears and Dystopian AI Narratives
Public discourse around AI often gravitates toward sensational fears – from Hollywood’s Terminator-style takeovers to worries of mass unemployment. Surveys have shown that a majority of Americans approach AI with trepidation. For example, in 2023 Pew [1] found 52% of US adults were more concerned than excited about growing AI use (versus only 10% more excited).
Survey results showing U.S. adults’ concerns about artificial intelligence in daily life, with a clear majority indicating more concern than excitement.
Common public concerns include, but are not limited to:
Existential “AI Takeover” Scenario: Dystopian scenarios loom large. In one poll, 63% of US adults voiced worry that AI could lead to harmful behavior, and a similar share feared AI systems might “learn to function independently from humans” [2]. Over half (55%) even believed AI could pose a risk to the very existence of the human race. Such views reflect the enduring influence of science fiction tropes. The 1984 film The Terminator, for instance, “popularized fears of unstoppable machines” and cemented the notion of AI as an existential threat in the public imagination [3]. Some decades later, its imagery of a rogue superintelligence (Skynet) remains shorthand for AI doom in media narratives.
Mass Unemployment and Social Disruption: Another prevalent fear is that AI and automation will displace human workers on a massive scale. Among Americans more concerned than excited about AI, the risk of people losing jobs is the top reason for their concern. As an example, about 83% of Americans expect that driverless vehicle adoption would eliminate jobs like rideshare and delivery drivers. This anxiety extends beyond blue-collar work, with white-collar workers also worrying that advances in generative AI could render their skills obsolete. Media coverage often highlights these scenarios of AI-induced economic upheaval, reinforcing public apprehension that “the robots” will leave humans unemployed.
Loss of Human Control and Ethical Misuse: People also fear humans could lose control over AI systems, leading to unpredictable or unethical outcomes. High-profile AI incidents and dystopian portrayals have primed the public to be wary of autonomous decision-making. In surveys, large majorities express concern that increasing AI use will erode privacy or be deployed in ways they are not comfortable with. Ethical campaigns have seized on these fears – for instance, advocacy groups invoking “killer robot” imagery push for bans on lethal autonomous weapons, tapping into public unease about machines making life-and-death decisions [4]. The vivid narrative of a moral boundary crossed by ungoverned AI resonates widely, even if actual military policy still mandates human oversight of use-of-force decisions.
These dystopian or exaggerated perceptions are amplified by popular media and entertainment. While they reflect genuine concerns, they often overshadow the more mundane reality of what current AI can (and cannot) do. The result is a public narrative skewed toward worst-case scenarios – one that stands in stark contrast to how strategic decision-makers view AI.
Defense Strategists’ Perspective: AI as a Tool, Not a Terminator
Great catchline, I know. At the strategic level – particularly within U.S. defense and national security circles – artificial intelligence is predominantly seen as a force multiplier and necessary enabler, rather than a sentient threat. The Department of Defense (DoD) views AI as a technology to be harnessed in order to maintain a competitive edge. The Pentagon’s official strategy frames AI as transformative in augmenting human capabilities and improving military effectiveness, not replacing human judgment outright [5]. Key leaders emphasize integration over fear:
Maintaining a Competitive Edge: The DoD’s Third Offset Strategy explicitly aimed “to exploit all the advances in artificial intelligence and autonomy and to insert them into the Defense Department’s battle networks” as a means to preserve U.S. military superiority [6]. Rather than dwelling on speculative dangers, defense planners focus on how AI can change the character of warfare to the U.S.’s advantage. The 2018 National Defense Strategy anticipated that AI will significantly alter warfighting, and accordingly officials like Lt. Gen. Jack Shanahan (first director of the Joint AI Center) argued the United States “must pursue AI applications with boldness and alacrity” to retain strategic overmatch. In this view, failing to embrace AI is the bigger risk, as adversaries racing ahead in AI could threaten U.S. security.
AI as a Practical Enabler: Inside the Pentagon, AI is treated as a suite of powerful tools – from data-crunching algorithms to intelligent decision-support systems – that can streamline operations and enhance human decision-making. Officials stress that current AI is narrow and task-specific, not an all-powerful brain. For example, the Joint Artificial Intelligence Center (JAIC) was established in 2018 specifically to accelerate the DoD’s adoption and integration of AI across missions [7]. JAIC’s mandate has been to serve as an AI center of excellence providing resources and expertise to military units, underlining that AI’s role is to assist warfighters and analysts. As JAIC Director Lt. Gen. Michael Groen put it, “We seek to push harder across the department to accelerate the adoption of AI across every aspect of our warfighting and business operations”. This illustrates the prevailing mindset that AI is a general-purpose capability to be infused into logistics, intelligence analysis, maintenance, training, and other domains to make the force more effective and efficient.
Augmentation, Not Autonomy Run Amok: Defense leaders are generally cognizant of public fears and have repeatedly clarified that their pursuit of AI is not about ceding control to machines. DoD policies (such as directives on autonomous weapons and the 2020 AI Ethical Principles) insist on meaningful human oversight of AI-driven systems. In practice, the military’s near-term AI projects are largely focused on decision support, automation of tedious tasks, and optimizing workflows – far from Hollywood’s rogue robots. As one Navy official noted, much of AI’s impact will come through “mundane applications… in data processing, analysis, and decision support,” rather than any dramatic battlefield androids. The internal narrative frames AI as a collaborative technology: an aid to human operators that can sift intelligence faster, predict maintenance needs, or simulate scenarios – ultimately empowering human decision-makers, not displacing them. This perspective stands in stark relief against the “AI takeover” trope; instead of fearing AI’s agency, defense strategists worry about not using AI enough to keep pace with rivals.
In summary, U.S. defense decision-makers tend to regard AI as a critical enabler to be integrated responsibly into military and security operations. The emphasis is on opportunity – leveraging AI to enhance national security – tempered by pragmatic risk management (ensuring reliability, ethics, and control), rather than on existential danger. This measured, tool-oriented outlook differs markedly from public dystopian narratives, focusing on AI’s strategic utility rather than its threat to humanity.
Think Tank Perspectives: Weighing Risks Versus Strategic Integration
Leading national security think tanks and research centers (RAND, CNAS, CSET, and others) have analyzed AI’s implications and generally echo the need to avoid hyperbole. Their reports often strike a balance – acknowledging legitimate risks from military AI, yet cautioning against exaggerated fears that could hinder innovation. Several consistent themes emerge from expert analyses:
AI as Transformative, but not Apocalyptic: Analysts note that while AI will shape the future of warfare, it is better understood as a continuum of technological evolution rather than a revolution that overnight yields superintelligent machines. A recent Center for a New American Security (CNAS) study argues that comparisons to an “AI arms race” are overblown – in reality, military adoption of AI today “looks more like routine adoption of new technologies” in line with the decades-long trend of incorporating computers and networking into forces [8]. In other words, there is momentum behind AI integration, but not the kind of breakneck, uncontrolled spiral that sci-fi scenarios or headlines might suggest. The report underscores that current military AI is a general-purpose technology akin to an improved computer, not a doomsday weapon in itself.
Concrete Risks: Safety, Bias, and Escalation: Think tank assessments tend to focus on tangible risks that come with deploying AI – e.g. system failures, vulnerabilities, or inadvertent escalation – rather than speculative sentience. A RAND Corporation analysis of military AI highlighted issues like reliability in high-stakes contexts and the need for testing to prevent accidents [9]. Similarly, CNAS has pointed out the risk that flawed AI could misidentify threats or act unpredictably in complex environments, which could increase the chance of accidents or even unintended conflict if not managed. These are serious concerns, but notably within the realm of technical and strategic problem-solving – addressable by policy, human oversight, and international norms – as opposed to uncontrollable AI revolt. By highlighting such issues, experts aim to ensure integration is done responsibly, without invoking a need to halt AI advancements altogether.
Strategic Integration as Imperative: On the whole, expert communities frame AI as an indispensable element of future national security, one that must be integrated strategically and swiftly. The consensus is that the U.S. cannot afford to fall behind in AI adoption, given competitors like China investing heavily in military AI. For instance, a RAND report on DoD’s AI posture emphasized scaling up AI experiments and talent to maintain U.S. tech superiority. Think tanks frequently describe AI as a “general-purpose technology” that will underpin intelligence analysis, cybersecurity, logistics, and more – a foundation for military power much like electricity or the internet. As such, their recommendations often focus on accelerating AI integration (through funding, R&D, public-private partnerships) while instituting safeguards (ethical guidelines, testing regimes, confidence-building measures internationally) rather than entertaining the idea of slowing or banning military AI outright.
In think tank narratives, there is an implicit push to reframe the conversation about AI in national security. Rather than viewing AI itself as the threat, the emphasis is on the risk of misusing or not using AI. Experts urge policymakers to mitigate the real risks – such as unintended escalation or AI failures in weapons – through norms and oversight, but at the same time to push beyond public fear-based reluctance so that beneficial AI applications are not lost. This balanced perspective reinforces the notion that AI, handled correctly, is a net strategic enabler, not a harbinger of doom.
Narrative Gaps in Policy, Investment, and Education
The divergence between public fears and defense-sector views of AI has tangible effects on policymaking, defense investments, and even the education of the national security workforce. A threat-centric narrative can create frictions – from public resistance to military AI projects, to slowed adoption – whereas an enabler-centric narrative could foster more proactive policy and innovation. Several notable impacts of the differing narratives include:
Public Opinion Shaping Policy Debates: Heightened public fear of AI can translate into political pressure for restrictive policies. Lawmakers attuned to their constituents’ dystopian anxieties may call for strict regulations or bans on certain AI uses (e.g. autonomous weapons) before the technology is fully understood. For instance, the visceral “killer robot” trope has fueled campaigns at the United Nations to ban lethal autonomous systems preemptively. While ethical in intent, such moves – driven by worst-case imagery – could limit the military’s ability to develop AI for defensive or benign uses (like active protection systems) if not carefully negotiated. On the flip side, when expert communities and defense leaders advocate AI as a strategic necessity, they push for policies that invest in AI R&D and set guidelines for responsible use rather than prohibition. This tug-of-war between dystopian narratives and strategic imperatives plays out in policy forums. The outcome can affect everything from budget allocations to the rules governing AI development. A climate of fear might spur oversight (e.g. Congressional hearings grilling AI programs for potential dangers), whereas a reframed narrative highlighting AI’s national security benefits could build public and bipartisan support for sustained investment.
Tech Industry Engagement and Investment: The narrative gap also directly impacts collaboration between the government and the tech industry – a critical relationship for defense AI innovation. A stark example was Google’s withdrawal from the Pentagon’s Project Maven in 2018 after employee protests. Google engineers, influenced by concerns that their work on AI could contribute to lethal drone operations, argued it ran afoul of the “Don’t be evil” ethos. Facing internal revolt and public criticism, Google opted to cancel its AI contract with DoD [10]. This incident sent shockwaves through the defense community. It demonstrated how a workforce steeped in dystopian AI fears or moral concerns can impede defense AI projects, even those aimed at non-lethal tasks like imagery analysis. The MITRE Corporation analyzed this rift and noted that thousands of tech employees objected to their companies partnering with the military, perceiving it as “going against their values”. Similar pushback hit other firms (Microsoft, Amazon) in cases where AI or tech contracts for defense raised alarm among staff. The result is a chilling effect on defense tech investment: companies become hesitant to bid on AI programs that might spark public relations issues or staff resignations. This dynamic hampers DoD’s access to top AI talent and tools. Defense strategists recognize that sustaining U.S. military AI leadership requires close cooperation with the private sector (which leads in AI innovation) – but that cooperation is harder to forge when the public narrative paints such work as contributing to dystopia. Bridging this gap is thus seen as essential for investment and innovation.
Defense Education and Talent Development: Within military and defense educational institutions, there is a concerted effort to counter hype and fear with sober, informed understanding of AI. Leaders acknowledge that some segments of the public – and even the workforce – are uneasy about AI. To address this, defense educators are reframing the narrative for the next generation of officers and analysts. A U.S. Naval War College conference in 2019 was pointedly titled “Beyond the Hype: Artificial Intelligence in Naval and Joint Operations,” aiming to dispel misconceptions and highlight practical applications of AI as a tool. Scholars and military practitioners at that event discussed real-world use cases and limitations of AI, rather than science-fiction fantasies, implicitly teaching that AI is a technology to be mastered, not feared. Likewise, the DoD has launched AI education initiatives to raise the baseline knowledge across the force. The 2020 DoD AI Education Strategy called for integrating AI into professional military education curricula and training programs, ensuring personnel have a basic grasp of AI capabilities and ethics. This not only prepares the workforce to use AI effectively, but also helps inoculate them against sensationalized notions. By normalizing AI as another subject of proficiency – alongside cybersecurity or electronics – the defense community is building a culture that views AI rationally and focuses on operational advantages and safeguards. In short, defense education efforts seek to narrow the narrative gap by producing leaders who can engage with AI’s opportunities and risks in a nuanced way, rather than defaulting to pop-culture driven extremes.
The effects of narrative are thus self-reinforcing. Public fears, if unaddressed, can slow or skew policy and scare off key partners, which in turn could hinder the U.S. from fully leveraging AI for security. Recognizing this, many defense stakeholders argue that winning the “hearts and minds” on AI – both within the force and among the public – is becoming as important as the technology itself. This sets the stage for reframing AI’s role in national security.
Reframing AI as a Strategic Enabler
Given the evidence, a clear lesson for the defense community is the need to shift the narrative on artificial intelligence from one of looming threat to one of strategic enablement. The goal of such reframing would be to align public perception with the reality that AI, managed correctly, is a tool that can enhance security and prosperity, not an out-of-control adversary. Support for this reframing argument is found in both policy analysis and practice:
Emphasizing Benefits and Mission Outcomes: Defense agencies are beginning to tell a more positive, concrete story about AI’s role. Rather than speak in abstractions, they highlight how AI can save lives by improving search-and-rescue, or how it reduces routine workload for troops. This kind of messaging helps the public and Congress see AI as directly contributing to safer, more effective military operations. A MITRE study in 2020 specifically urged DoD leaders to communicate a compelling narrative about “the value of defending the country with honor” using modern technologies like AI, and to stress the Department’s commitment to ethical deployment of these tools. By showcasing adherence to ethics and human oversight, the Pentagon can alleviate fears of ungoverned AI. For example, DoD’s adoption of AI is often coupled with a Responsible AI framework – sending the message that the U.S. will use AI in line with its values, not as a reckless killer robot. Making such assurances public and transparent can build trust and counteract dystopian impressions.
Bridging the Cultural Divide: AI as an enabler also involves closing the gap with the tech sector and general workforce. This means engaging Silicon Valley and young technologists on shared values and national security needs. Success stories of AI-public sector collaboration are being lifted up to change minds. For instance, highlighting how an AI tool developed by a tech firm helped U.S. forces deliver aid more efficiently, or how a machine-learning model is saving maintenance costs in the Air Force, can illustrate AI’s positive impact. Think tanks and industry leaders suggest that public-private partnerships on AI should be promoted in the narrative – to show that working on defense AI can be a force for good, protecting soldiers and civilians alike. The hope is that as more technologists see AI projects in defense yielding constructive results (and not just weapons), the stigma diminishes and investment flows more freely. In tandem, DoD is adjusting its own messaging to be more receptive to ethical concerns, rather than dismissive. Instead of waving away protests, defense advocates are increasingly acknowledging the need to earn trust. This cultural dialogue is part of reframing AI as a shared mission for security, as opposed to a government venture that the public should fear.
Aligning Narrative with Reality: Fundamentally, the reframed narrative must continually point out that the “science-fiction” view of AI is misaligned with the current reality. As experts note, most military AI systems are more akin to smart assistants than independent actors. Driving this point home can correct misperceptions. The contrast between a fictional Skynet and real-world AI applications (like predictive maintenance algorithms) is stark – a reframed narrative leverages that contrast to reduce undue alarm. Defense educators and communicators therefore stress separating fact from fiction: acknowledging genuine AI-related risks (e.g. algorithm bias or adversary use of AI for disinformation) but clarifying that these are challenges manageable through policy and engineering, not reasons to halt progress. As it has been put before, even AI-enabled weapons “lack the malevolent sentience of Skynet,” and keeping humans in the loop is the prudent path – so we should focus on maintaining control and ethics rather than fearing an uprising. This kind of messaging directly tackles the Terminator mythos, reframing the issue around human responsibility and strategic advantage.
In conclusion, repositioning AI in the public and policy narrative as a strategic enabler – a powerful tool under human direction – is critical for the United States to fully benefit from the AI revolution in defense. The chasm between public fear and military optimism can be narrowed by education, transparency, and consistent examples of AI’s value. Strategic-level decision makers and thought leaders increasingly advocate this reframing because they recognize that without public buy-in and understanding, even the best AI technology may fail to be adopted. The background evidence presented here supports the argument that AI is not an autonomous menace to be halted, but a strategic asset to be guided and governed wisely. Reframing the narrative in this way can help ensure robust policymaking, sustained investment, and an informed defense workforce – all oriented toward integrating AI in service of national security, responsibly and effectively.
Operational Use of AI in the US Defense Sector
AI technologies are already being fielded across multiple domains of U.S. defense operations, enhancing everything from intelligence analysis to maintenance and cybersecurity. One high-profile example is Project Maven, launched in 2017 as the Department of Defense’s “Algorithmic Warfare” initiative. Project Maven uses machine learning to process the vast streams of drone surveillance video and satellite imagery to identify potential targets with far greater speed than traditional methods [11]. By rapidly classifying objects (e.g. distinguishing hostile tanks from civilian trucks) and integrating those insights into battlefield command systems, Maven dramatically compresses the kill chain. Human operators remain in the loop to validate targets, but the AI enables them to go from analyzing only ~30 targets per hour to as many as 80, according to some reports [12]. Deployed in conflict zones like Iraq, Syria, and Yemen, Maven has proven its value by narrowing target lists for airstrikes and even helping U.S. Central Command locate enemy rocket launchers and vessels in the Middle East. These real-world results illustrate how AI can increase operational tempo and precision in intelligence, surveillance, and reconnaissance (ISR) missions, augmenting human analysts and decision-makers.
To scale such successes across the force, the Pentagon stood up the Joint Artificial Intelligence Center (JAIC) in 2018 (now reorganized under the Chief Digital and AI Office) with a mandate to accelerate AI adoption for “mission impact at scale” [13]. The JAIC coordinated DoD-wide AI efforts, developing prototypes in areas like predictive maintenance, humanitarian assistance, and warfighter health, and ensuring that lessons learned in one military service could benefit others. For example, in the realm of predictive maintenance, the Air Force’s Rapid Sustainment Office worked with industry to deploy an AI-based Predictive Analytics and Decision Assistant (PANDA) platform as a new “system of record” for aircraft maintenance [14]. PANDA aggregates data from aircraft sensors, maintenance logs, and supply records, then uses machine learning models to predict component failures and optimal maintenance scheduling. This data-driven approach has measurably improved readiness: in one case involving the B-1 bomber fleet, an AI predictive maintenance tool completely eliminated certain types of unexpected breakages and cut unscheduled maintenance labor by over 50%. These efficiencies translate to higher aircraft availability and operational reliability – a clear example of AI acting as a force multiplier for logistical and sustainment activities.
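PANDA’s internals aren’t public, but the general predictive-maintenance pattern it represents can be sketched: train a classifier on sensor-derived features, then rank assets by predicted failure risk so maintainers work the riskiest tail first. Everything below (features, numbers, labels) is synthetic and purely illustrative, not any fielded system’s model.

```python
# Illustrative predictive-maintenance sketch: classify failure risk
# from synthetic sensor features. All values are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: vibration (g), temperature (F), flight hours.
X = rng.normal(size=(n, 3)) * [0.5, 10.0, 200.0] + [1.0, 70.0, 400.0]
# Synthetic ground truth: high vibration plus high hours drives failures.
y = ((X[:, 0] > 1.3) & (X[:, 2] > 450)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"holdout accuracy: {clf.score(X_te, y_te):.2f}")

# Rank the holdout "fleet" by failure probability so maintenance
# attention goes to the highest-risk assets first.
risk = clf.predict_proba(X_te)[:, 1]
print("highest-risk sample indices:", np.argsort(risk)[-5:][::-1])
```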
AI is also bolstering U.S. capabilities in less visible but critical domains such as cyber operations. Modern cyber defense involves monitoring enormous volumes of network data and responding to threats in milliseconds. Here, AI algorithms help identify anomalous patterns and intrusions far faster than human operators alone. Military cyber units are experimenting with machine learning systems that flag suspicious network behavior and even autonomously execute initial countermeasures. As one Army Cyber Command technology officer observed, AI is beginning to shift the advantage to the defender in cyberspace, partially countering the traditional dominance of offense [15]. Fast-running AI detection tools can contain attacks or malware in real time, making it “much harder for the offensive side” to succeed. At the same time, strategists recognize that AI is a dual-edged sword in cyber warfare: the same technology could enable more sophisticated phishing, deepfake-induced misinformation, or automated hacking by adversaries. This has prompted the DoD to invest in AI for cybersecurity while also researching defenses against AI-driven threats.
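A hedged sketch of the anomaly-flagging idea described above: fit an unsupervised model on features of known-benign network flows, then score new ones. The features, values, and contamination rate assume a scikit-learn-style pipeline and are placeholders for real telemetry.

```python
# Sketch: unsupervised anomaly detection over network-flow features.
# All numbers are synthetic placeholders for real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Hypothetical per-flow features: bytes sent, duration (s), unique ports.
normal = np.column_stack([
    rng.normal(5e4, 1e4, 500),    # typical byte counts
    rng.normal(30.0, 10.0, 500),  # typical durations
    rng.integers(1, 5, 500),      # few ports touched per flow
])
model = IsolationForest(contamination=0.01, random_state=1).fit(normal)

# First flow looks ordinary; second is a tiny, short, port-scanning flow.
candidates = np.array([[5.1e4, 28.0, 2.0], [900.0, 2.0, 40.0]])
for flow, verdict in zip(candidates, model.predict(candidates)):
    label = "ANOMALY" if verdict == -1 else "ok"
    print(label, flow)
```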
Across the services, a variety of other AI applications are moving from pilot projects into operational use. The Navy and Coast Guard, for instance, have begun employing computer vision algorithms to scan satellite and radar data for illicit maritime activities (such as smuggling or illegal fishing) that previously went unnoticed [16]. The Army is testing AI-enabled battle management systems that fuse sensor inputs to recommend battlefield courses of action, effectively providing decision support to commanders. Even the U.S. Special Operations community has embraced AI tools for tasks like ISR analysis, language translation, and mission planning. In 2023, U.S. Special Operations Command pivoted towards aggressive adoption of AI, open-sourcing certain software and pushing deployment to the tactical edge [17]. Leaders at SOCOM rate their recent progress as substantial, but acknowledge more work is needed to integrate AI into legacy systems and train personnel to use these tools effectively. Such case studies – from Project Maven’s target recognition to PANDA’s maintenance forecasting and cyber anomaly detection – underscore that AI is no longer just a theoretical future capability. It is already enhancing operational readiness and efficiency across the U.S. defense enterprise, augmenting human warfighters in handling the growing speed and complexity of modern military missions.
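As a concrete, if simplified, illustration of the maritime-monitoring idea, the sketch below implements a single hypothetical heuristic: flagging vessels whose AIS position reports go silent for suspiciously long stretches. Operational systems fuse radar, satellite imagery, and AIS; this shows only the gap check, with invented vessel identifiers:

```python
# Hedged sketch of one simple "dark vessel" heuristic: flag ships whose
# AIS position reports stop for suspiciously long stretches. The vessel
# IDs and report times below are invented for illustration.
from datetime import datetime, timedelta

# Hypothetical AIS report timestamps per vessel (MMSI -> report times).
reports = {
    "366999001": [datetime(2025, 1, 1, h) for h in range(0, 24, 2)],
    "366999002": [datetime(2025, 1, 1, 0), datetime(2025, 1, 1, 2),
                  datetime(2025, 1, 1, 18)],   # 16-hour silent gap
}

def silent_gaps(times, threshold=timedelta(hours=6)):
    """Yield (start, end) pairs where consecutive reports exceed threshold."""
    for earlier, later in zip(times, times[1:]):
        if later - earlier > threshold:
            yield earlier, later

for mmsi, times in reports.items():
    for start, end in silent_gaps(sorted(times)):
        print(f"vessel {mmsi}: went dark {start} -> {end}")
```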
From ENIAC to AI: John von Neumann’s Legacy and the Next Cognitive Revolution
History shows that transformative technologies can radically enhance military capability when paired with visionary integration. A useful parallel to today’s AI revolution is the advent of electronic computing during and after World War II – a revolution epitomized by the work of John von Neumann on the ENIAC computer. Von Neumann, a Hungarian-American mathematician and polymath, was a key figure in the Manhattan Project and an early computing pioneer who recognized the strategic potential of automation in calculations [18]. In 1944, he became involved in the U.S. Army’s ENIAC project (Electronic Numerical Integrator and Computer), which was the first general-purpose electronic computer. ENIAC was initially built to compute artillery firing tables – a laborious task that previously required teams of human “computers” working with mechanical calculators and often struggling to keep up with wartime demands. By automating these computations, ENIAC could perform in seconds what took people hours or days, fundamentally changing the pace of wartime calculations. In fact, one of ENIAC’s first assignments in 1945 was running simulations for the feasibility of the hydrogen bomb, a top-secret program that would have been impractical without electronic computing power [19]. This breakthrough demonstrated how high-speed computing became a strategic enabler, allowing the United States to solve complex problems (like nuclear weapon design and ballistic trajectories) that were previously intractable or painfully slow.
Image captions: John von Neumann, a pioneering figure in computing and an influential figure in military strategy; the ENIAC, the world’s first large-scale general-purpose electronic digital computer, symbolizing the dawn of computing technology.
John von Neumann’s influence went beyond the engineering of ENIAC; he also conceptualized how computers could serve as cognitive aids to strategists and planners. He pioneered the stored-program architecture (now known as the von Neumann architecture) that underlies virtually all modern computers, and he’s considered a father of game theory – bringing a new mathematical rigor to defense strategy. Under von Neumann’s guidance, early computers were used not only for crunching numbers but also for tasks like weather forecasting and systems analysis, essentially the forerunners of today’s data-driven decision-support systems. The early computing revolution turned what were once human-only intellectual tasks into human-machine collaborative tasks, greatly increasing speed and accuracy. For example, the time to produce complex firing tables or decrypt enemy codes dropped dramatically as machines took over the repetitive calculations. Military planning began to incorporate computational modeling, from logistics to nuclear targeting, augmenting human judgment with machine precision.
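To appreciate what ENIAC automated, consider a toy version of the firing-table calculation: numerically integrating projectile motion with air drag, step by step – exactly the arithmetic that human computers once ground through by hand. The constants below are illustrative, not taken from any historical table:

```python
# Toy version of the trajectory integration behind an artillery firing
# table: stepping the 2-D equations of motion with quadratic air drag.
# The drag coefficient and muzzle velocity are illustrative only.
import math

def trajectory_range(v0: float, elevation_deg: float,
                     drag_coeff: float = 0.0001, dt: float = 0.01) -> float:
    """Integrate projectile motion with quadratic drag; return range in meters."""
    g = 9.81
    theta = math.radians(elevation_deg)
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    x, y = 0.0, 0.0
    while y >= 0.0:
        v = math.hypot(vx, vy)
        # Quadratic air drag opposes the velocity vector.
        vx -= drag_coeff * v * vx * dt
        vy -= (g + drag_coeff * v * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

# One "row" of a firing table: range at several elevations for a 500 m/s shell.
for elev in (15, 30, 45, 60):
    print(f"elevation {elev:2d} deg: range ~ {trajectory_range(500, elev):,.0f} m")
```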
Today’s artificial intelligence represents the next phase of cognitive augmentation in warfare – a step beyond what von Neumann’s generation achieved with manual programming and calculation. If ENIAC and its successors gave commanders unprecedented computational power, AI offers something arguably even more profound: the ability for machines to learn, adapt, and assist in decision-making in real time. This can be seen as an extension of von Neumann’s legacy. Just as he envisioned applying rigorous computation to strategic problems, we now envision applying machine learning to dynamic problems like identifying insurgents in a crowd, predicting an adversary’s moves, or optimizing complex logistics under fire. The paradigm shift is similar in scale. In the mid-20th century, militaries that embraced electronic computing leapt ahead in command-and-control, intelligence, and engineering – those that lagged were left at a serious disadvantage. Likewise, in the 21st century, militaries that harness AI for a decision advantage will outpace those that do not. AI systems can sift through sensor feeds, intelligence reports, and battlefield data far faster than any team of staff officers, flagging patterns and anomalies that would otherwise be missed. This human-machine symbiosis has the potential to amplify cognition on the battlefield, much as early computers amplified calculation. It moves warfighting into a realm of information speed and complexity management that von Neumann could only hint at with game theory and primitive computers. In short, AI is positioned to do for perception and reasoning what computing did for arithmetic – enabling a new leap in military effectiveness. The challenge, as with ENIAC, is to integrate this technology wisely, guided by strategic leaders who understand its potential. In that sense, reframing AI from a feared threat into a force multiplier echoes von Neumann’s own advocacy for embracing new technology to secure a competitive edge in national security.
Implications for Defense Education and Talent Development
Realizing AI’s potential as a strategic enabler will require a profound transformation in defense education and training. Future military leaders must be as comfortable with algorithms and data as past generations were with maps and compasses. This means Professional Military Education (PME) institutions – service academies, staff colleges, war colleges, and technical schools – are updating curricula to build AI literacy at all levels. AI literacy involves understanding the basics of how artificial intelligence works, its applications and limitations, and being able to critically evaluate AI-enabled systems [20]. As one recent study on PME integration argues, AI literacy among faculty and students is now a “strategic imperative” to prepare officers for an AI-driven battlefield. Concretely, courses on topics like data science, machine learning fundamentals, and human-machine teaming are being introduced alongside traditional strategy and leadership classes. For example, the Naval Postgraduate School has launched an “Artificial Intelligence for Military Use” certificate program that educates military professionals on key AI concepts and applications, from sensors and imagery analysis to war-gaming and logistics [21]. Notably, this program does not require a coding background – reflecting an understanding that even non-technical officers need a working knowledge of AI to make informed decisions about procurement and deployment. Similar initiatives are underway at other institutions, aiming to produce officers and DoD civilians who can bridge the gap between operators and data scientists and effectively champion AI projects.
In addition to technical skills, ethical and strategic judgment regarding AI must be woven into the education of military leaders. Just as the ethics of nuclear weapons or cyber operations are covered in curricula, the unique ethical questions posed by AI deserve attention. PME courses are beginning to incorporate case studies on algorithmic bias, autonomous weapons, and the legality of AI-driven targeting under the Law of Armed Conflict. The goal is to instill “ethical AI fluency” – ensuring that officers not only understand what AI can do, but also the moral and legal frameworks guiding its use. Students might debate scenarios, for instance, about an autonomous drone engaging a target without a direct human command, examining how DoD’s AI Ethics Principles (responsibility, equity, traceability, reliability, governability) should apply. By grappling with these issues in the classroom, future commanders and planners will be better prepared to make tough calls about AI employment in the field. They learn that embracing AI does not absolve them of accountability – on the contrary, it requires more educated oversight. The military’s emphasis on leadership with integrity extends into the AI era: an officer needs the knowledge to question an AI recommendation, recognize when the data might be flawed or the algorithm biased, and insist on appropriate human control measures. Thus, courses in ethics, law, and policy are evolving to cover AI, ensuring the warrior ethos and professional norms adapt to include stewardship of intelligent machines.
Another critical aspect of defense education in the AI age is fostering interdisciplinary and interagency training. AI in national security isn’t confined to the Department of Defense alone – it spans the intelligence community, homeland security, defense industry, and academia. Recognizing this, PME institutions and training commands are increasing exchanges and joint learning opportunities. For example, the DoD has partnered with universities (like MIT and others) to offer specialized AI courses to military cohorts, and it convenes events such as the AI Partnership for Defense which bring together allied military officers and defense civilians to share AI lessons learned [22]. On the interagency front, one can envision combined training where military analysts and, say, CIA or NSA analysts learn side by side about applying AI to intelligence fusion – building networks of expertise that span organizational boundaries. Such cross-pollination is vital because the challenges of AI (from data sharing to ethics) often require a whole-of-government approach. A Naval officer who understands how the Department of Homeland Security uses AI for critical infrastructure protection, or an Air Force officer who grasps the FBI’s perspective on algorithmic bias, will be better equipped to collaborate during joint operations and crises.
Crucially, faculty development and leader development programs are adapting to empower this educational shift. Instructors at war colleges and service schools are being encouraged to familiarize themselves with AI tools and concepts so they can mentor students effectively. U.S. Army War College faculty, for instance, documented their experience of gradually integrating AI into their teaching – highlighting that faculty comfort with AI is a prerequisite to student education. Within the operational forces, commanders are also pushing “digital literacy” initiatives down the ranks. A notable example is U.S. Special Operations Command, which recently had about 400 of its leaders complete a six-week MIT-affiliated course on AI and data analytics. The intent is to create a leadership cadre that not only understands the technology but “demands it,” actively pulling AI solutions into the field. This top-down and bottom-up approach to education – from generals to junior officers and enlisted technicians – will cultivate a culture where AI is seen as an essential tool in the arsenal. In summary, defense education is being reimagined for the information age: blending technical literacy, ethical grounding, and joint cooperation to produce military and intelligence professionals who can harness AI’s power responsibly and creatively in service of national security.
Governance and Risk Management of Military AI
As the U.S. military integrates AI into critical operations, robust governance and risk management frameworks are paramount to ensure these technologies remain strategic enablers and not liabilities. The Department of Defense has proactively set guardrails through high-level principles and policies. In 2020, the DoD adopted a set of Ethical Principles for AI, which articulate how AI systems should be developed and used in accordance with the military’s legal and ethical values. These five principles — Responsible, Equitable, Traceable, Reliable, and Governable — now guide all DoD AI projects. In practice, they mean that humans must remain accountable for AI decisions, AI outcomes should be as free from bias as possible, systems should be transparent and auditable, they must be rigorously tested for safety and effectiveness, and there must always be the ability to disengage or shut off an AI system that is behaving unexpectedly. For example, the “Responsible” principle explicitly states that DoD personnel will exercise appropriate levels of judgment and care when deploying AI and will remain answerable for its use. This institutionalizes a “human-in-the-loop” (or at least “on-the-loop”) mandate, ensuring that AI augments human decision-making rather than replaces it in any uncontrolled way.
Implementing these principles requires concrete governance measures. The Pentagon’s Joint AI Center (now CDAO) has been charged as a focal point for coordinating AI ethics implementation, including standing up working groups to develop detailed guidelines and tools for compliance. One focus area is algorithmic transparency – making AI systems as explainable as possible to their human operators. The “Traceable” principle addresses this, mandating that AI technologies be developed such that relevant personnel possess an appropriate understanding of how they work, including insight into training data and logic. This is leading to investments in explainable AI research for defense applications, so that a commander can ask not just “What is the AI recommending?” but “Why is it recommending that?”. For instance, if an AI tool flags a particular vehicle as hostile, commanders want confidence in the basis for that judgment (sensor signatures, behavior patterns, etc.), rather than accepting a “black box” output. Explainability builds trust and helps humans and AI collaborate more effectively – a lesson learned from early deployments like Project Maven, where analysts had to validate AI-generated target cues. It also enables troubleshooting: if an AI system makes a questionable suggestion, engineers and operators can audit the decision process to identify potential biases or errors (aligning with the Equitable principle’s aim to minimize unintended bias).
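One widely used explainability technique that fits this need is permutation importance: shuffle each input feature in turn and measure how much the model’s accuracy degrades, revealing which inputs the model actually leans on. The sketch below applies it to a synthetic classifier with invented feature names; it illustrates the technique, not how DoD systems are actually audited (assumes scikit-learn):

```python
# Minimal sketch of permutation importance as an explainability check.
# Feature names and data are invented for the example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 3000
feature_names = ["thermal_signature", "speed_kmh", "convoy_size", "terrain_noise"]

X = rng.normal(size=(n, 4))
# Synthetic label driven mostly by the first two features.
y = (1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn and measure the accuracy drop it causes.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:>18}: {score:.3f}")
```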
Risk management of military AI systems spans technical, operational, and strategic levels [23]. Technically, one risk is the reliability and robustness of AI models. In battlefield conditions, data can be noisy, adversaries can attempt to deceive AI (through camouflage, decoys, or cyber means), and systems may encounter scenarios not covered in training. The DoD addresses this through extensive testing and evaluation regimes. Per the “Reliable” principle, each AI capability must have well-defined uses and be tested for safety and effectiveness within those use cases. For example, before an AI-driven target recognition system is fielded, it undergoes trials across different environments (desert, urban, jungle, etc.) to evaluate performance and failure modes. Recent conflicts have provided cautionary tales: simplified AI tools reportedly had mixed results in the Russia-Ukraine war, sometimes misidentifying objects (e.g., classifying heavy machinery as trees or falling for inflatable decoys) when faced with weather or camouflage conditions beyond their original training. Human analysts outperformed these nascent systems in complex scenarios, underscoring that current AI is far from infallible and must be used with human oversight. To mitigate such risks, DoD policy emphasizes continuous operator training and system tuning – AI models should be updated with new data, and users must understand the system’s limitations. Moreover, the “Governable” principle requires that AI systems be designed with the ability to detect and avoid unintended consequences, and crucially, to disengage or deactivate if they start to act anomalously. This is essentially an insistence on a “kill switch” or fallback control for autonomous systems, which is vital in weapons platforms to prevent accidents or escalation. In sum, engineering robust AI means planning for failures: building redundancy, fail-safes, and manual override options into any critical AI-enabled system.
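In practice, per-environment (slice-based) evaluation is one simple way to surface such failure modes before fielding: report accuracy separately for each operating condition rather than as a single global number. The sketch below simulates a model that degrades badly in one environment; the data and environment labels are hypothetical:

```python
# Sketch of slice-based test and evaluation: accuracy per operating
# environment rather than one global score. Data is simulated.
import numpy as np
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
n = 1000
environments = np.array(["desert", "urban", "jungle"])[rng.integers(0, 3, n)]
y_true = rng.integers(0, 2, n)

# Simulate a model with a ~10% base error rate that degrades in jungle.
base_flip = rng.random(n) < 0.1
jungle_flip = (environments == "jungle") & (rng.random(n) < 0.4)
y_pred = np.where(base_flip | jungle_flip, 1 - y_true, y_true)

for env in np.unique(environments):
    mask = environments == env
    acc = accuracy_score(y_true[mask], y_pred[mask])
    print(f"{env:>6}: accuracy {acc:.2%} on {mask.sum()} samples")
```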
On the operational and strategic risk front, DoD leaders are aware that AI could introduce new uncertainties even as it solves problems. One concern is the acceleration of decision cycles potentially leading to humans being outpaced. If an AI can identify and recommend engagement with a target in seconds, there’s a risk that command and control might not properly vet actions in time. The U.S. approach to this is “human-machine teaming” – using AI to speed up information processing, but still requiring a human decision at the trigger point for lethal force, consistent with DoD Directive 3000.09, which governs autonomous weapons. This aligns with broad expert consensus that human judgment must remain central: RAND researchers, for instance, note a “broad consensus regarding the need for human accountability” in the use of military AI, recommending that responsibility clearly rest with commanders and human control span the entire system lifecycle. Another risk is strategic instability: if one side’s AI gets an advantage, there’s pressure on adversaries to respond quickly (or even preemptively). The DoD is approaching this by coupling its pursuit of AI with confidence-building measures and international dialogue. The U.S. has publicly committed to the lawful, ethical use of AI in warfare and is engaging allies and partners to do likewise. By championing principled AI use, the U.S. hopes to set norms that reduce the risk of inadvertent escalation – for example, by agreeing that humans will supervise any AI that can initiate lethal action, or that early-warning AI systems will be designed to avoid false alarms.
Additionally, governance involves accountability and oversight mechanisms within the military. Just as there are safety boards for accidents, there may be review boards for AI incidents or anomalies. The Defense Department is instituting processes to review AI programs for ethical compliance and is considering certification regimes (analogous to operational test & evaluation for hardware) for AI systems before deployment. The chain of command is being educated that owning an AI tool doesn’t diminish their responsibility for outcomes; if an autonomous vehicle or a decision aid makes a mistake, commanders are expected to investigate and address it just as they would a human error. This is reinforced by the ethical principle that DoD personnel “remain responsible for the development, deployment, and use” of AI. In practical terms, that could mean developing doctrine and TTPs (tactics, techniques, and procedures) for AI use – e.g., specifying that a human must verify an AI-generated target before engagement, or that there be at least two human checkpoints for any fully automated process in live operations.
In summary, U.S. defense planners are actively putting frameworks in place so that AI is used safely, ethically, and effectively. The Pentagon’s approach is one of controlled experimentation: push the envelope with AI to gain its advantages, but do so under strict human oversight, with constant testing, and guided by a strong ethical compass. This governance mindset reframes AI from a feared “black box” risk into a well-supervised partner for the warfighter. It acknowledges risks – technical glitches, enemy counter-AI tactics, legal ambiguities – and seeks to mitigate them through responsible design and policy. With these measures, the U.S. aims to reap the strategic benefits of AI (speed, scale, insight) while upholding the values and control that have long guided the use of advanced technologies in national security.
Strategic Competition and Decision Superiority in the AI Era
Artificial Intelligence has emerged as a central arena of strategic competition, much like nuclear technology or space exploration were in earlier eras. Today, the competition is perhaps most intense between the United States and its near-peer rival China, with profound implications for global security and decision superiority on future battlefields. China has explicitly prioritized AI in its national and military strategy, seeking to become the world leader in AI by 2030 and to transform the People’s Liberation Army (PLA) into a “world-class military” by mid-century, in part through what it calls the “intelligentization” of warfare [24]. A key facet of China’s approach is its policy of Military-Civil Fusion, which marshals the nation’s robust civilian tech sector in direct support of military AI development. Unlike the U.S., where private tech companies and the Pentagon cooperate but are separate, China’s centralized model blurs this line – private AI firms are effectively co-opted into serving PLA needs. This has allowed China to tap advanced research and commercial innovations at speed. In recent years, the PLA has established joint military-civilian AI laboratories, funded tech competitions to encourage dual-use AI innovations, and stood up dedicated units to integrate commercial tech into PLA operations. The results are telling: according to one study by Georgetown’s CSET, the PLA now procures the majority of its AI-related equipment from China’s private tech companies rather than traditional state-owned defense enterprises. In other words, China is harnessing the dynamism of its AI startup ecosystem under a top-down strategic directive – a combination that has yielded rapid progress in areas like facial recognition surveillance, autonomous drones, and AI-assisted command systems for the PLA.
The United States, for its part, is determined not to cede its historical advantage in military technology and decision-making superiority. American defense officials have stated plainly that AI is critical to future military preeminence. A 2024 Army report noted that AI is the one technology that will largely determine which nation’s military holds the advantage in coming decades. This recognition has led the U.S. to craft its own strategy to win the “race” for military AI, albeit by leveraging America’s strengths: innovation, alliances, and a values-driven approach. The U.S. is pursuing what might be termed a “responsible offset” – seeking to out-innovate adversaries in AI while maintaining robust ethics and stability measures. Practically, this involves significant investments in R&D (the Defense Department requested over $1.8 billion for AI/ML in the 2024 budget), new organizational structures like the CDAO to unify efforts, and closer collaboration with the private sector. The Pentagon knows that many cutting-edge AI breakthroughs originate in companies like Google, Microsoft, OpenAI, or myriad startups. Unlike China’s state-driven fusion, the U.S. approach incentivizes cooperation through initiatives such as the Defense Innovation Unit (DIU) and AFWERX/Army Futures Command tech hubs, which aim to fast-track commercial AI tech into U.S. military use. A recent bold initiative is Deputy Secretary Kathleen Hicks’ “Replicator” program, announced in late 2023, which aims to field “multiple thousands” of AI-enabled autonomous systems across multiple domains (air, land, sea) within 18-24 months. Replicator’s goal is to leverage autonomy and AI at scale to counter the numerical advantages that China might deploy in a conflict (for example, swarms of inexpensive drones could act as a force multiplier to blunt a larger naval fleet or saturate an adversary’s air defenses). By rapidly scaling such capabilities, the U.S. seeks to ensure it can offset adversary advantages – much as it did with precision weapons in the past – and complicate any opponent’s war plans.
Decision superiority – the ability to observe, orient, decide, and act faster and more effectively than an adversary (the OODA loop concept) – is a core focus of AI competition. AI has the potential to accelerate the OODA loop to unprecedented speeds. For the side that masters this, AI can provide a decisive edge in command and control. Imagine a future conflict scenario: AI algorithms instantly fuse multi-source intelligence (satellite imagery, electronic intercepts, social media, etc.), identify emerging threats, and present command with optimized courses of action, all in real time. The commander enabled by such AI support can make decisions inside the enemy’s decision cycle, forcing the adversary into a reactive stance. This is essentially what Project Maven and similar ISR AIs foreshadow – compressing a targeting process that once took hours into minutes or less. Faster decision-making, however, is only an advantage if paired with accurate and informed decision-making. Here lies a nuanced competition: it’s not just about acting quickly, but about acting wisely with AI-provided insight. The U.S. is thus investing in AI that improves not only speed but the quality of situational awareness – for instance, AI that can predict an adversary’s next moves or detect subtle patterns in adversary behavior that humans might miss. This could dramatically improve the U.S. military’s ability to anticipate and shape a confrontation rather than just react.
For deterrence, the message that emanates is powerful: a military that can think and act faster across domains can credibly threaten to neutralize an opponent’s actions before they bear fruit. U.S. defense leaders believe integrating AI into the force will bolster deterrence by projecting confidence that America can “prevail on future battlefields” despite challenges. The flip side is that if the U.S. were perceived as lagging in AI, adversaries like China (or Russia) might be tempted to press advantages, thinking the U.S. unable to respond in time. Thus, maintaining a leadership position in AI is seen as critical to preventing conflict as much as winning one. Indeed, a technologically superior force equipped with AI decision-support and autonomy could deter aggression by making any attack plan against it too uncertain or likely to fail.
That said, the AI arms race also carries deterrence dilemmas. One concern analysts note is that when both sides have high-speed, automated decision systems, there’s a risk of escalation if those systems lack sufficient human override. A minor incident could be misinterpreted by an AI as a full-blown attack requiring immediate response, leading to a rapid spiral – a scenario sometimes called “flash war.” Avoiding this requires careful strategy. The U.S. and other responsible powers will need to establish rules of the road for military AI, perhaps new agreements or at least tacit understandings (analogous to Cold War arms control in spirit, if not in formal treaty). Confidence-building measures, like transparency about certain defensive AI systems or hotlines to clarify ambiguities, could mitigate the risk that ultra-fast AI systems push humans out of the loop in crisis decision-making. In the competition with China, this means that even as the U.S. develops AI to maintain superiority, it also seeks dialogue on norms – for example, the Pentagon has indicated interest in talks about AI safety and crisis communications to reduce chances of an accidental clash due to AI misjudgment. Balancing competitive urgency with strategic stability is tricky but vital. The U.S. aims to win the AI race by demonstrating not only better technology but also stronger governance of that technology, thereby persuading allies and neutral countries to align with the U.S. vision of AI-enhanced security rather than China’s. As former Google CEO Eric Schmidt (who chaired the Defense Innovation Board) remarked, U.S. leadership in articulating ethical AI principles shows the world that democracies can adopt AI in defense responsibly. In the long run, this could translate into a coalition advantage – if U.S. allies trust American AI systems and agree on their use, it amplifies collective deterrence against aggressors who might use AI in destabilizing ways.
To conclude this survey of the competitive landscape: AI is becoming a cornerstone of what strategists term the new “Revolution in Military Affairs.” It promises to reshape how wars are deterred, fought, and won. Both Washington and Beijing know that superiority in AI could mean faster and more precise operations, better coordinated forces, and more resilient systems – in short, an edge in almost every dimension of conflict. The United States, leveraging its open society and innovative economy, is striving to maintain its edge by integrating AI across defense while upholding the rule of law and international norms. China, with its state-driven approach, is rapidly challenging that edge. The outcome of this competition will significantly influence global power balances. Decision superiority in the next conflict may belong to whichever nation can most effectively blend human and artificial cognition into its way of war. For the U.S., the task is ensuring that it is our forces, educated and empowered by AI, that can observe first, understand first, decide first, and act decisively, thereby deterring conflict or ending it on favorable terms if it must be fought.
Conclusion and Recommendations
The exploration from computing to cognition – from John von Neumann’s ENIAC to today’s AI – illustrates a clear thesis: artificial intelligence, managed correctly, is not a menacing “third offset” to be feared, but rather a strategic enabler that the United States can harness to enhance national security. Far from replacing the human element, AI can augment American defense capabilities in profound ways: accelerating decision-making, optimizing resource use, and uncovering insights in oceans of data that would overwhelm human analysts. To fully realize this potential, however, the U.S. must reframe its mindset and approaches. AI should be viewed not as a mysterious black box or a mere buzzword, but as a set of powerful tools – tools that require investment in people, sound governance, and visionary planning to integrate effectively. In short, as this paper has argued, the conversation needs to shift from “How might AI threaten us?” to “How can we smartly leverage AI to stay ahead of threats?” The following forward-looking recommendations are offered to key stakeholders in the defense and intelligence community to drive this shift:
Professional Military Education (PME) Institutions – Build an AI-Ready Force: PME institutions should lead the way in cultivating a force that is literate in AI and comfortable with emerging technology. This means updating curricula continuously to include not just fundamentals of AI, but case studies of its use in warfare, ethical decision exercises, and practical training on AI-enabled systems. Military academies and ROTC programs can introduce cadets to AI through STEM courses and wargames featuring autonomous systems. Intermediate and senior service colleges (like Command and Staff Colleges and War Colleges) should require coursework on technology and innovation, ensuring that future battalion commanders and generals alike can champion data-driven approaches. Faculty development is critical – instructors need opportunities (and incentives) to stay current on tech trends, perhaps via sabbaticals with industry or AI research labs. PME schools can also establish partnerships with civilian universities for joint courses or certification programs in AI (similar to the NPS certificate described earlier). Beyond formal curricula, wargaming and exercises should incorporate AI elements: for example, a joint wargame where officers must employ AI tools for logistics or intelligence and deal with adversary AI capabilities in the scenario. By learning in a sandbox environment, leaders will gain intuition about AI’s strengths and pitfalls. Finally, PME institutions should instill a mindset of lifelong learning in technology – given the pace of AI advancement, one-off education isn’t enough. Officers and NCOs will need continuous refreshers, which could be delivered through online courses, micro-certifications, and periodic tech immersion programs throughout their careers. The outcome sought is a U.S. military ethos that values digital competency on par with marksmanship or tactical acumen, producing leaders who confidently wield AI-enabled capabilities as extensions of their command.
Defense Planners and Policymakers – Integrate AI into Strategy and Force Design: For those in the Pentagon, Joint Staff, and combatant commands who shape requirements, doctrine, and budgets, the mandate is to fully integrate AI considerations into all levels of planning. At the strategic level, this means incorporating AI development goals into defense strategy documents and threat assessments. Planners should routinely ask: How does AI change the game in this mission area? and What must we do to stay ahead? For example, war planners should account for AI-driven enemy tactics and how U.S. forces will counter or exploit them. The deliberate planning process can include red-teaming with AI: use adversarial perspective AI models to simulate how a foe might use AI against us, and develop counters accordingly. In capability development, the Joint Capabilities Integration and Development System (JCIDS) should treat AI and data as critical enablers for every new platform or system. Requirements for a new aircraft or ship, for instance, should explicitly outline how it will leverage AI for maintenance, targeting, or autonomous functions. Resource allocation must back up these priorities – sustained R&D funding for military AI, including investments in test infrastructure (data libraries, simulation environments) and secure, scalable compute resources for the services. Defense planners should also emphasize open architecture and interoperability for AI systems so that different platforms and allies can share data and AI services seamlessly, avoiding stovepipes. Experimentation units (like the Army AI Task Force or Air Force’s Project Arc) should get robust support to prototype and field AI solutions quickly, with feedback loops to doctrine writers. Meanwhile, policy-makers need to refine and publish clear doctrines or concepts of operations (CONOPS) for AI-enabled warfare (e.g., how do we fight with human-machine teams? what is the doctrine for autonomous wingmen drones in an air campaign?). These guidelines will help front-line units incorporate AI tools into their SOPs in a disciplined way. Another key recommendation for defense planners is to continue engaging allies: include AI interoperability and data-sharing agreements in alliance talks (NATO, etc.), conduct combined exercises with AI components, and share best practices on ethics and safety. By shaping international standards proactively, the U.S. and its partners can collectively mitigate risks (like uncontrolled autonomous weapons) and present a united front in the face of adversaries’ AI use. In essence, planners must ensure that AI is woven into the fabric of force design and strategy, not treated as a niche or add-on – it should be as integrated as joint operations doctrine itself.
Federal Intelligence Community Leadership – Leverage AI for Decision Advantage: For leaders in the intelligence agencies (CIA, NSA, DIA, NGA, etc.), AI offers an unprecedented opportunity to enhance analytic capabilities and strategic warning, but it requires bold action to adapt decades-old analytic processes. First, intelligence agencies should accelerate the adoption of AI and machine learning for processing the ever-growing volume of data (“big data”) in espionage and open-source intelligence. This includes deploying AI to automatically transcribe and translate foreign communications, flag anomalies in financial transactions or shipping data, generate summaries of vast social media feeds, and identify patterns in satellite imagery (NGA is already doing some of this with illegal fishing detection, for example). By automating low-level tasks, AI frees human analysts to focus on higher-level judgment and synthesis. Augmented analysis tools – like AI assistants that can answer natural language questions or test hypotheses against data – should become standard issue for analysts, with training on how to use them effectively. Intelligence community (IC) leaders also need to invest in talent: hiring data scientists and computational experts, and upskilling current analysts with data literacy (similar to the military’s efforts). Joint duty rotations between IC agencies and the DoD’s AI units (or even tech companies under appropriate safeguards) could cross-pollinate expertise.
Moreover, the IC must develop frameworks for evaluating AI-derived intelligence. Analysts are trained in sourcing and skepticism; now they will need tradecraft for evaluating algorithmic outputs (e.g., understanding confidence levels, potential biases in training data, and error rates of AI models). IC agencies might create an “AI validation unit” that rigorously tests analytic algorithms and guards against false positives or adversary deception of our AI. Speaking of deception: intel leaders should assume that adversaries will try to mislead U.S. AI systems (through spoofing, deepfakes, etc.), so counter-deception techniques and deepfake detection become crucial new intelligence disciplines. A forward-looking recommendation is for the Director of National Intelligence (DNI) to champion a National Intelligence AI Strategy that parallels the DoD’s efforts – aligning all 18 IC elements on common standards for AI ethics, data-sharing (within the bounds of law), and rapid technology insertion. Such a strategy could establish centralized resources like a high-performance computing cloud and classified big data repositories accessible to all IC analysts, leveling the playing field so even smaller agencies can use advanced AI tools without massive organic infrastructure. Finally, intelligence leadership should integrate AI into warning and crisis response mechanisms. AI prediction models might help anticipate geopolitical instability or militarization by identifying subtle indicators far in advance. During fast-moving crises, AI decision-support could help senior officials explore scenarios (“If adversary does X, likely responses Y and Z”). However, these tools must be rigorously vetted and always placed under human supervision to avoid overreliance on machine prognostication. The IC’s ethos of considered judgment and avoidance of surprise can be well-served by AI, but only if embraced with the same diligence applied to other intel methods.
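One piece of such tradecraft can even be sketched directly: a calibration check that compares a model’s claimed confidence against its observed accuracy, exposing overconfident systems before analysts lean on them. The data below is simulated; a real validation unit would run the same check against held-out, ground-truthed cases:

```python
# Sketch of a calibration check for vetting algorithmic outputs:
# does claimed confidence match observed accuracy? Data is simulated.
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
confidence = rng.uniform(0.5, 1.0, n)          # model's claimed confidence
# Simulate an overconfident model: true hit rate lags claimed confidence.
correct = rng.random(n) < (confidence - 0.1)

bins = np.linspace(0.5, 1.0, 6)
for lo, hi in zip(bins, bins[1:]):
    mask = (confidence >= lo) & (confidence < hi)
    if mask.any():
        print(f"claimed {lo:.2f}-{hi:.2f}: "
              f"observed accuracy {correct[mask].mean():.2f}")
```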
Cross-cutting Recommendation – Cultivate a Culture of Innovation and Adaptation: Across PME, defense planning, and intelligence analysis, a unifying recommendation is to foster a culture that prizes innovation, agility, and informed risk-taking with AI. The federal national security enterprise can draw lessons from the tech sector here: encourage pilot projects, allow “fast failure” and learning in controlled environments, and reward individuals who find creative AI applications to mission problems. Senior leaders should communicate a consistent vision that AI is a priority – not to replace warfighters or analysts, but to empower them. This involves addressing organizational inertia and fear: some personnel worry AI will make their roles obsolete or that mistakes with AI will be career-ending. Leaders must allay these fears by highlighting AI successes, sharing knowledge of AI limitations openly, and framing adoption as an imperative to stay ahead of adversaries like China (whose investments we cannot ignore). Initiatives like hackathons, AI challenge problems, or innovation competitions within agencies can spark bottom-up solutions – for example, an Army brigade S-2 (intelligence officer) might develop a machine learning model to predict insurgent attacks from incident data, and higher HQ can amplify and resource that idea if it shows promise. The DoD and IC should also streamline bureaucratic processes that hinder tech adoption (acquisition reform is beyond our scope, but rapidly acquiring and fielding software and AI updates is crucial). Modernizing infrastructure is part of culture too – ensuring deployed units have connectivity and computing to use AI tools, and analysts have access to data forward at the speed of relevance.
In all these efforts, maintaining the American ethical high ground is essential. Reframing AI as an enabler also means communicating – to the force, the public, and the world – that the U.S. will use AI in alignment with democratic values and laws. This stance not only differentiates the U.S. from authoritarian competitors but also builds trust internally that the AI revolution will not run roughshod over moral considerations. It’s heartening that DoD leadership has embraced ethical AI principles and that military thinkers emphasize keeping humans in control. Carrying this onward, ethics training, legal oversight, and international agreements on AI in warfare will reinforce that AI adoption by the U.S. strengthens both our capabilities and our principles.
Conclusion: “From Computing to Cognition” is more than a catchy phrase – it encapsulates the journey the U.S. defense enterprise must continue on. In the 20th century, those who exploited computing power gained a decisive edge; in the 21st, those who master AI will shape the future of security. The United States has the opportunity to lead this next revolution, just as it did the last, by embracing AI as a force multiplier across education, operations, and strategy. By investing in our people’s skills, establishing strong ethical and practical governance, and out-innovating our adversaries, we make certain that AI becomes a source of American strategic advantage. The recommendations above chart a path for military educators, defense planners, and intelligence professionals to collaboratively drive this transformation. The message is clear: AI is here to stay – and if we integrate it wisely, creatively, and responsibly, it will magnify the effectiveness of U.S. national security institutions while preserving the values that distinguish us on the world stage. In the final analysis, technology wars are won not by the machines, but by the humans who wield them best. The United States can and must be the nation that wields AI to sharpen our insight, quicken our decision-making, and strengthen our security, thereby turning a perceived risk into a strategic cornerstone for decades to come.
Disclaimer: This research project uses data derived from open-source materials like public intelligence assessments, government publications, and think tank reports. This report is based solely on my personal insights, hypothetical scenarios, and independent analysis. It does not contain any sensitive or classified information and does not reflect the views of my employer. This report’s purpose is to serve as an exercise in research, analysis, and critical thinking.
Framing the Future
As the global security environment grows increasingly complex, United States (US) national defense strategies need to evolve beyond traditional threat frameworks. While Russia (RU), China (CN), Iran (IR), and North Korea (NK) – the “Big Four” – remain primary national security concerns, they don’t represent the full spectrum of emerging kinetic threats the US may face over the next two decades. A range of other state and non-state threat actors are developing the capabilities and intent to challenge US interests militarily, often in unexpected theaters or through less conventional means. Additionally, evolving political dynamics, specifically within Western alliances, raise important questions about future alliance cohesion and possible conflict scenarios. This analysis differs from my typical threat summary and aims to anticipate where the next significant kinetic threats may arise, in hopes of helping defense planners, policymakers, and allied stakeholders think proactively about where to invest their attention, resources, and strategy over the 2025-2045 horizon.
When assessing future military threats to the US, analysts often focus their attention on the Big Four adversaries due to their prominent capabilities and hostile postures. The next 20 years, however, could also see other countries and non-state actors posing significant kinetic threats to the US or her allies. This report provides a forward-looking analysis of these potential threats, examining both state and non-state threat actors. The report will emphasize capability and intent to engage in or support military conflict against the US or allies, with a special emphasis on Western European nations and any internal political developments, alliance fragmentation, or rearmament trends that could shift today’s partners into tomorrow’s potential adversaries. Each actor or category of actors is discussed with their rationale (why they are considered), an estimated risk level, and plausible scenarios for conflict, highlighting less-obvious dangers that could emerge by 2045, based on credible geopolitical and defense analysis.
State-Based Threat Actors (Outside the “Big Four”)
Pakistan – Nuclear Armed State with Internal Extremism
Rationale: Pakistan possesses a significant military and has a history of both cooperation and conflict with US interests. It’s been a US ally in counterterrorism, yet elements of Pakistan’s security apparatus have supported militant groups that undermine US goals. Notably, Pakistan’s Inter-Services Intelligence (ISI) is known to have supported the Afghan Taliban even as the US fought them, contributing to the Taliban’s eventual victory over the US-backed Kabul government in 2021. It’s this duplicity that bred deep distrust in Washington. Pakistan’s internal stability is under constant strain from Islamist extremism, economic volatility, and fragile democratic institutions. US officials have long worried that Pakistan’s nuclear arsenal could end up in extremist hands during a crisis. Washington has developed contingency plans to secure or seize Pakistan’s nuclear weapons if such a scenario appeared imminent – highlighting the seriousness with which the US views this risk.
Risk Level: Moderate. A deliberate state-level Pakistani attack on the US is unlikely. The combination of nuclear capability, state-sponsored militant networks, and internal instability, however, creates a volatile mixture that could trigger an indirect or unintentional kinetic confrontation: Pakistan could be the flashpoint for a regional war (i.e., another India-Pakistan conflict) or a source of nuclear proliferation and terrorism.
Possible Scenarios:
Internal collapse and nuclear crisis: Political unrest, extremist takeover, or an economic collapse could create conditions in which Pakistani nuclear weapons are at risk of falling into the hands of jihadist actors. The US launches a covert or overt military operation to secure these weapons, prompting an armed resistance from Pakistani military factions or militant groups.
Militant provocation of war: A Pakistani-based terrorist group, like Lashkar-e-Taiba or Jaish-e-Mohammed, carries out a mass-casualty attack in India or on US personnel abroad. This could trigger an Indo-Pakistani conflict with potential US involvement – either in support of India, in defense of US personnel, or to prevent nuclear escalation.
Proxy conflict in Afghanistan: ISIS-K or other extremist groups could exploit border regions between Pakistan and Taliban-controlled Afghanistan to stage attacks. If Pakistan covertly supports such groups while the US engages them, kinetic confrontation could result, especially if the US strikes targets on Pakistani soil without consent.
Turkey – NATO Ally on a Divergent Path
Rationale: Turkey currently occupies a unique position as a longtime NATO member that in more recent years has pursued an increasingly independent and assertive foreign policy. Under President Erdogan, Turkey has shifted toward authoritarianism at home and taken actions that often conflict with US interests. Examples include:
Military incursions into northern Syria to target Kurdish forces allied with the US.
The controversial purchase of the Russian S-400 air defense system, which undermined NATO interoperability.
Heightened tensions with fellow NATO members like Greece over Aegean airspace, maritime boundaries, and military presence on islands.
According to the Council on Foreign Relations and other strategic analysis, Turkey and the US “no longer share overarching threats or interests that bind them together”. Some even describe the relationship as having shifted from ambivalent partnership to open antagonism in some spaces. Despite being a NATO ally, Turkey’s trajectory introduces significant strategic uncertainty for the alliance.
Turkey’s military is large, battle-tested, and capable, giving it significant operational autonomy in regional conflicts. If Turkish strategic interests increasingly diverge from NATO’s or align with adversarial powers like Russia or Iran, the risk of friction or even kinetic conflicts with the US or allied nations rises considerably.
Risk Level: Moderate. Direct hostilities between the US and Turkey are not wanted by either side and remain unlikely under typical conditions. The accumulation of friction points, combined with Turkey’s growing defense ties with non-NATO actors and active engagements in multiple regional conflicts, increases the probability of an accidental or indirect clash. The long-term risk is that Turkey could gradually shift from ally to quasi-adversary.
Possible Scenarios:
Clash in the Eastern Mediterranean: A maritime dispute in the Aegean or Eastern Mediterranean leads to a military incident between Turkey and Greece. Rival naval vessels or aircraft could collide or exchange fire near contested waters, putting NATO’s credibility at stake. If Turkey is perceived as the aggressor, the US may be compelled to support Greece militarily or diplomatically, especially under an administration that favors alliance solidarity. In such a standoff, the potential for miscalculation would be high.
Syria entanglement: Turkey launches a major offensive into Syria to combat Kurdish groups like the YPG, which Ankara considers terrorists but which the US has partnered with against ISIS. If embedded US forces come under Turkish fire, this could result in direct combat. A 2019 Turkish operation already forced US troops to retreat from some positions in Syria. A future offensive scenario may not leave that option open.
NATO split or departure: In a future scenario where Turkey becomes more autocratic or strategically aligned with RU, it could block NATO consensus on key issues or even formally withdraw from the alliance. In a crisis, Turkey could deny US forces access to critical bases such as Incirlik or cooperate militarily with US adversaries. If US and Turkish forces end up operating in the same theater but backing opposing factions, direct conflict could result.
Venezuela – Anti-US Regime and Regional Destabilizer
Rationale: Venezuela, under Nicolás Maduro’s regime, has aligned itself with several US adversaries and actively undermined regional stability. Although currently bogged down in economic collapse and political repression, Venezuela remains a security concern for multiple reasons:
It has developed strategic ties with IR, RU, and CN, including military cooperation and arms transfers.
It has reportedly hosted operatives from Hezbollah and other IR-linked proxy groups.
It maintains one of Latin America’s largest militaries and has stockpiled modern RU and CN weaponry, including air defenses and armored vehicles.
Its government regularly makes aggressive claims on neighboring territory, notably the oil-rich Essequibo region of Guyana.
Caracas’ close ties with IR are particularly worrisome. Analysts have warned about the growing influence of the Tehran-Hezbollah-Caracas axis. IR tankers and advisors have helped Venezuela skirt sanctions, while Hezbollah is alleged to use Venezuela for fundraising and logistics. These developments, in addition to Maduro’s hostility toward the US, raise the possibility that Venezuela could become a platform for proxy operations or regional confrontation.
Risk Level: Low to Moderate. While Venezuela lacks the capability to directly threaten the US mainland, it could indirectly provoke kinetic conflict by destabilizing the region or enabling terrorism. If its aggression toward neighbors or support for non-state actors escalates, the US could be drawn into direct conflict.
Possible Scenarios:
Border war with Guyana: Venezuela escalates its territorial claim over the Essequibo region, where US-based ExxonMobil is developing significant oil infrastructure. In a worst-case scenario, the Venezuelan military crosses into Guyanese territory or targets offshore drilling platforms. Given the US’s economic and diplomatic support for Guyana, a kinetic response is plausible.
Terror network hub: If Venezuela indeed allows Hezbollah or other US-designated terrorist groups to use its territory for fundraising, training, or even plotting attacks, it may eventually provoke US military action. A realistic trigger: a Hezbollah cell in Venezuela plots an attack on a US embassy in the region or on Miami-based exile groups. The US, viewing Caracas as a state sponsor of terrorism, could conduct strikes on training camps or authorize a naval blockade to stop Iranian weapons shipments. Actions like this could be met with Venezuelan force; its air defenses, for instance, could fire on US aircraft.
Internal meltdown and intervention: Venezuela’s ongoing economic collapse worsens, leading to mass refugee outflows and violent factional fighting. If instability spills into Colombia or the Caribbean, a coalition including the US and Colombia might intervene militarily to stabilize parts of Venezuela or to secure its oil facilities. US troops could find themselves in combat against Venezuelan military units or militias in an urban warfare setting. In this scenario, Cuba might also get involved, at least in an advisory capacity, further complicating the conflict.
Syria – Proxy Battleground with Persistent Threats
Rationale: Although Syria lacks the conventional military might it once had, it remains a dangerous and unstable node within a larger regional power struggle. The fall of the Assad regime in December 2024, after opposition forces captured Damascus and Bashar al-Assad fled to Russia, has left the country fractured. In place of a centralized government, several distinct factions now vie for control, including opposition militias, Kurdish groups, remnants of ISIS, and Iranian-backed forces. US special operations personnel remain active in the region, particularly in eastern Syria, where the risk of kinetic conflict is still high.
The vacuum left by Assad’s departure has been exploited by Iran’s Islamic Revolutionary Guard Corps (IRGC) and Hezbollah, both of which are expanding their footprint in the country. IR views Syria as an important corridor for moving weapons and personnel to southern Lebanon, increasing tensions with Israel and heightening the chances of strikes and counterstrikes. Syrian air defenses have engaged Israeli aircraft and remain a latent threat to US operations. All the while, RU retains a military presence, further complicating deconfliction efforts.
Risk Level: Moderate to High. While Syria is no longer a unified state capable of initiating a large-scale conflict, the power vacuum has created a dynamic and fragmented battlespace full of high-risk actors. The continued US military presence in eastern Syria creates persistent potential for kinetic escalation, whether through direct attacks, miscalculation, or regional spillover.
Possible Scenarios:
Militia attacks on US bases: IR-backed groups operating in the above-mentioned power vacuum continue launching rockets and drones aimed at US forces in eastern Syria. A successful strike causing significant damage or casualties triggers a large-scale US retaliatory campaign targeting militia infrastructure and command nodes.
Israeli-Iranian war spillover: Should existing hostilities between Israel and Iran escalate, Syrian territory would likely serve as a launch platform for attacks against Israel, as parts of it are now dominated by pro-Iranian factions. The US could be drawn into the conflict to defend Israeli assets or disrupt Iranian logistical networks.
Russian or Turkish miscalculations: RU maintains bases and airspace rights in western Syria, while Turkey continues operations against Kurdish factions in the north. A mistaken strike, collision, or contested airspace maneuver involving US forces may spark an unintended military exchange, especially in a post-Assad environment lacking reliable communication channels.
North African States – Fragile Stability and Kinetic Flashpoints
Rationale: North Africa remains a region of considerable strategic importance, connecting sub-Saharan Africa, the Mediterranean, and the Middle East. Even though most North African governments are not hostile to the US, several states face internal instability, radicalization risks, and external power competition that could lead to kinetic conflict involving US forces or strategic interests.
Libya continues to be divided between rival governments backed by competing external actors (e.g., RU, Turkey, Egypt, and the UAE). Armed groups maintain control of large swaths of territory, and recent reports indicate that Wagner Group remnants and other RU-aligned mercenaries remain operational in the east. Weapons trafficking, smuggling, and jihadist activity persist across Libya’s borders.
Egypt, a longtime US military partner, is facing deepening authoritarianism and economic strain. While currently aligned with US interests, there are some concerns about a future political shift or social unrest that could threaten stability. Additionally, Egypt’s proximity to Gaza, Israel, and Sudan places it in a volatile regional corridor.
Algeria, albeit more stable, has aligned itself more closely with RU in recent years, expanding military cooperation and defense purchases. Its regional rivalry with Morocco over the Western Sahara and its potential to exploit instability in neighboring Sahel countries make Algeria a wildcard.
Risk Level: Low to Moderate
Most North African countries are unlikely to initiate kinetic conflict with the US, but the potential for the US to be drawn into regional instability, especially under a humanitarian pretext or counterterrorism mandate, is significant.
Possible Scenarios:
Counterterrorism in Libya: A resurgent ISIS cell launches a high-profile attack on US or European targets in North Africa. The US responds with special operations raids or airstrikes in Libya, potentially clashing with RU- or Turkish-aligned militias operating in the same space.
Egyptian collapse or coup: Economic meltdown or widespread protests trigger a military coup or civil war in Egypt. The US, fearing disruption to Suez Canal traffic, threats to Israel, or attacks on US personnel, considers intervention or support for stabilization operations.
Western Sahara escalation: Algeria and Morocco engage in a proxy conflict over Western Sahara. While not likely to escalate into full-scale war, US diplomatic and security interests in the region could result in advisory or ISR support operations that escalate in contested airspace.
Non-State Actors
Salafi-Jihadist Terror Networks (e.g., ISIS, Al-Qaeda, JNIM)
Rationale: Salafi-jihadist networks, despite the weakening of centralized leadership, remain a persistent and adaptable threat to US interests globally. Such groups are decentralized, ideologically motivated, and capable of exploiting failed states or ungoverned regions to launch attacks or destabilize friendly regimes.
ISIS, while territorially defeated in Syria and Iraq, still maintains operational branches such as ISIS-Khorasan (ISIS-K) in Afghanistan and Pakistan, as well as affiliates in the Sahel, Somalia, and Southeast Asia. Al-Qaeda’s franchises continue to plot external attacks and exploit fragile states. US and allied intelligence services have repeatedly warned that these groups seek to stage mass-casualty events and inspire homegrown terrorism.
Risk Level: Moderate to High
While these groups lack the conventional capability to challenge the US militarily, their potential to kill US personnel, destabilize allies, or provoke conflict through terrorism is considerable. In areas where US troops operate near jihadist strongholds, the risk of ambushes or base attacks remains elevated.
Possible Scenarios:
ISIS-K external operations: A cell operating out of Afghanistan or Pakistan successfully executes an attack on US diplomatic or commercial targets in the Middle East or Europe. The US responds with strikes inside Taliban-controlled areas, creating tension with local authorities and risking escalation with Pakistan.
Sahel collapse: Jihadist groups overrun military bases or entire towns in Mali, Burkina Faso, or Niger. France withdraws from the region, and the US is forced to decide whether to send troops back in under a counterterrorism umbrella. These missions naturally carry kinetic risks, especially with local militaries weakened by coups and corruption.
High-profile hostage scenario: US or allied citizens are taken hostage by a jihadist group in a failed state (e.g., Yemen or Libya). A rescue operation is launched, resulting in conflict with militants and, possibly, confrontation with a regional power indirectly backing the group.
Iranian-Backed Proxy Forces (e.g., Hezbollah, the Houthis, Iraqi Militias)
Rationale: Iran’s strategic doctrine relies heavily on asymmetric warfare and the use of proxy militias to advance its interests while maintaining plausible deniability. These groups are typically well-armed, ideologically aligned with Iran’s goals, and capable of executing complex military operations. They have often targeted US and allied personnel, infrastructure, and shipping routes across the Middle East.
Hezbollah possesses an arsenal rivaling that of many nation-states, including drones, precision-guided munitions, and surface-to-surface missiles (SSMs). The group is deeply embedded in Lebanese politics but retains operational independence, especially when executing attacks against Israel. It has also trained militias in Iraq, Syria, and Yemen.
The Houthis in Yemen have evolved from an insurgent movement into a heavily armed force capable of striking US Navy vessels, Saudi oil infrastructure, and Red Sea shipping routes. They have increasingly demonstrated long-range missile and drone capabilities, often with suspected Iranian support.
Iraqi militias aligned with the Popular Mobilization Forces (PMF), many of which maintain loyalty to the IRGC-QF, regularly target US forces in Iraq and Syria. All of these groups operate under the radar, blending into state security structures while conducting attacks through deniable means.
Risk Level: High
Such groups do not seek open war with the US but are consistently willing to engage in kinetic activity that could trigger escalation.
Possible Scenarios:
Red Sea escalation: The Houthis continue targeting international shipping in the Red Sea and Bab al-Mandeb Strait. After a successful missile strike on a US Navy destroyer or commercial tanker, the US launches a broader military campaign to neutralize Houthi missile and drone stockpiles in northern Yemen, prompting counterstrikes and threatening escalation with Iran.
Iraqi base attacks: Iranian-backed militias in Iraq carry out a coordinated drone and rocket assault on a US base, killing American personnel. The US responds with strikes inside Iraq, prompting Iraq’s government to demand a US withdrawal and triggering a broader political and military crisis.
Hezbollah mobilization: In the event of war between Israel and Hamas or Iran, Hezbollah opens a second front on Israel’s northern border using its long-range missile arsenal. The US responds with air defense assets, logistics support, or possibly even strikes on Hezbollah command nodes.
Transnational Criminal Organizations (e.g., Mexican Cartels)
Rationale: Transnational criminal organizations (TCOs) are not typically viewed through a kinetic military threat lens; however, their operational capabilities have evolved dramatically. These organizations now possess military-grade weapons, armored vehicles, unmanned aerial systems (UAS), and command structures resembling insurgent forces. Some cartels have even started fielding paramilitary units that engage in direct conflict with Mexican security forces.
Groups like the Jalisco New Generation Cartel (CJNG) and Sinaloa Cartel have repeatedly challenged the Mexican government’s authority, assassinated public officials, and run sophisticated cross-border smuggling operations. They also operate clandestine weapons manufacturing, use encrypted communications, and employ former military personnel. The cartels’ growing control of territory near the US border raises concerns about spillover violence, especially if US law enforcement or military assets are directly targeted.
In a significant policy shift, President Donald Trump officially designated major Mexican cartels as Foreign Terrorist Organizations (FTOs) in early 2025. This move gives the US government expanded legal authority to apply counterterrorism frameworks, including kinetic military options, against these groups. While the designation remains politically controversial, it highlights a growing consensus that TCOs are evolving beyond organized crime and increasingly resemble paramilitary threats.
Risk Level: Moderate
Cartels are unlikely to intentionally provoke a military conflict with the US, but their growing militarization, proximity to US borders, and involvement in cross-border violence make kinetic engagement an increasing possibility.
Possible Scenarios:
Cross-border incursion or retaliation: A high-profile attack or kidnapping of US citizens in northern Mexico prompts a limited US military raid or drone strike against a cartel compound. The operation results in firefights with cartel gunmen and political backlash from Mexico.
Insurgent-style uprising: A major Mexican state sees total collapse of local governance due to cartel dominance. Cartels deploy armored vehicles and man-portable air-defense systems (MANPADS) to challenge the Mexican military. In coordination with Mexican authorities, the US sends special operations advisors or air assets to assist.
Cartel-linked terror plot: A cartel collaborates with a jihadist network or rogue state actor to smuggle explosive devices or weapons into the US. If a successful attack is traced back to a cartel logistics network, political pressure could motivate a sustained kinetic campaign targeting cartel leadership and infrastructure inside Mexico.
Disclaimer: This post is based on unclassified, open-source reporting and reflects my personal analysis and interpretations. The views expressed here are my own and do not represent the views or positions of my employer.
Background
Russian GRU military intelligence Unit 29155 (aka Cadet Blizzard, Ember Bear, FrozenVista, UNC2589) is a covert subunit of the Main Directorate of the General Staff of the Armed Forces of the Russian Federation (GRU), primarily tasked with conducting high-stakes, clandestine operations abroad. The unit gained public attention due to its involvement in activities that align with Russia’s asymmetric warfare objectives, particularly in Europe, Ukraine, and NATO-affiliated regions. Unit 29155 operates in several domains, from traditional espionage and sabotage to cyber operations.
Unit 29155 has significantly intensified operations since 2020, pivoting from covert actions in Europe toward a greater emphasis on cyber operations with a focus on undermining Ukraine and NATO allies through espionage, data manipulation, and sabotage.
Primary TTPs
Espionage and Data Theft
Unit 29155 conducts extensive espionage campaigns aimed at gathering intelligence from NATO countries, European Union members, and multiple nations in Latin America and Central Asia. The group has exploited critical infrastructure and government systems, leveraging reconnaissance tools like Nmap and Shodan to scan for vulnerabilities and gather intelligence [2].
Sensitive information obtained through these operations is occasionally leaked or shared publicly in order to damage the reputations of victims as part of influence efforts [3].
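The reporting stops at naming the tools, but the defensive corollary is detectable: mass scanning leaves a wide footprint in perimeter logs. Below is a minimal detection sketch, assuming a hypothetical firewall log exported as CSV with src_ip and dst_port columns; the file name, schema, and threshold are illustrative, not tuned guidance.

```python
import csv
from collections import defaultdict

# Illustrative threshold: how many distinct destination ports one source
# must touch before we flag it. Real tuning depends on the environment.
SCAN_PORT_THRESHOLD = 50

def flag_possible_scanners(log_path: str) -> dict[str, set[int]]:
    """Flag sources that touched an unusually wide spread of ports,
    a crude heuristic for Nmap-style scanning activity."""
    ports_by_source: dict[str, set[int]] = defaultdict(set)
    with open(log_path, newline="") as f:
        # Assumed (hypothetical) schema: header row with src_ip, dst_port
        for row in csv.DictReader(f):
            ports_by_source[row["src_ip"]].add(int(row["dst_port"]))
    return {src: ports for src, ports in ports_by_source.items()
            if len(ports) >= SCAN_PORT_THRESHOLD}

if __name__ == "__main__":
    for src, ports in flag_possible_scanners("firewall_log.csv").items():
        print(f"{src}: {len(ports)} distinct ports -> possible scan")
```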
Destructive Operations
Unit 29155 has been tracked as the group deploying the destructive WhisperGate malware, which was disguised as ransomware but designed to erase victim data. The wiper was used against Ukrainian government and critical infrastructure entities, providing evidence of a clear shift toward sabotage tactics aligned with Russian military objectives early in the Russia/Ukraine conflict.
Destructive activity has also been directed at logistics operations supporting Ukraine, including repeated attacks against infrastructure crucial to NATO and EU support efforts [2].
Infrastructure Scanning/Domain Enumeration
Unit 29155 engaged in over 14,000 documented cases of domain scanning, targeting NATO infrastructure and EU entities. The scanning has been described as preparatory, often identifying weak points for later exploitation efforts. Publicly available and custom tools like Acunetix, WPScan, and VirusTotal were commonly used for this reconnaissance [3].
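Enumeration with tools like WPScan or Acunetix tends to produce bursts of requests for paths that do not exist, so a high 404 ratio per client is one crude tripwire. A minimal sketch, assuming web servers log in the common Apache/Nginx access-log format; both thresholds are hypothetical:

```python
import re
from collections import defaultdict

# Assumes the common Apache/Nginx access-log format:
#   ip ident user [time] "request" status size ...
LOG_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<req>[^"]*)" (?P<status>\d{3})'
)

MIN_REQUESTS = 100   # illustrative thresholds, not tuned guidance
MAX_404_RATIO = 0.5

def flag_enumeration(log_path: str) -> list[str]:
    """Flag clients whose traffic is dominated by 404 responses,
    a common signature of automated path/vulnerability enumeration."""
    totals: dict[str, int] = defaultdict(int)
    not_found: dict[str, int] = defaultdict(int)
    with open(log_path) as f:
        for line in f:
            m = LOG_RE.match(line)
            if not m:
                continue
            totals[m["ip"]] += 1
            if m["status"] == "404":
                not_found[m["ip"]] += 1
    return [ip for ip, total in totals.items()
            if total >= MIN_REQUESTS and not_found[ip] / total >= MAX_404_RATIO]
```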
Cybercriminal Overlap
This pattern is not wholly unique to Unit 29155 but spans the broader spectrum of Russian state-sponsored APT groups: researchers report collaboration with known cybercriminal elements, employing non-GRU actors to facilitate operations. This working relationship extends the group’s reach and allows it to exploit technical expertise outside formal military ranks while obscuring attribution. It is also believed that this particular unit consists primarily of junior personnel and may therefore operate at a less sophisticated level than groups like APT28 or APT29 [4].
Mitigations and Recommendations
Cyber defenders across critical sectors are encouraged to implement mitigations against known tactics:
Prioritize patching of known vulnerabilities and enforce multi-factor authentication (MFA).
Monitor networks for unusual scanning or reconnaissance activity, and segment networks to limit lateral movement after infiltration.
Use intrusion detection tools to monitor for technical indicators of compromise (IOCs) relating to Unit 29155; a minimal matching sketch follows this list.
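Published advisories on Unit 29155 include IOC lists; a first step toward operationalizing them is batch-matching observables pulled from local telemetry against those indicators. The sketch below assumes a hypothetical JSON feed keyed by indicator type; a real pipeline would also normalize defanged indicators, handle CIDR ranges, and expire stale entries.

```python
import json

def load_iocs(path: str) -> dict[str, set[str]]:
    """Load a hypothetical IOC feed: JSON mapping indicator type to values,
    e.g. {"ip": [...], "domain": [...], "sha256": [...]}. Values are
    lowercased once here so matching below is case-insensitive."""
    with open(path) as f:
        raw = json.load(f)
    return {kind: {v.lower() for v in values} for kind, values in raw.items()}

def match_observables(observables: list[tuple[str, str]],
                      iocs: dict[str, set[str]]) -> list[tuple[str, str]]:
    """Return (type, value) pairs from local telemetry that appear in the
    IOC sets. Production pipelines would normalize far more carefully
    (defanging such as hxxp/[.], punycode, indicator expiry)."""
    return [(kind, value) for kind, value in observables
            if value.lower() in iocs.get(kind, set())]

if __name__ == "__main__":
    iocs = load_iocs("unit29155_iocs.json")  # hypothetical feed file
    seen = [("ip", "203.0.113.7"), ("domain", "example.org")]
    for kind, value in match_observables(seen, iocs):
        print(f"IOC hit: {kind} {value}")
```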
Unit 29155’s evolution highlights a blend of traditional espionage with enhanced cyber and sabotage capabilities, particularly in relation to high-stakes geopolitical targets. The expanded use of cyber tactics underscores the need for affected nations and organizations to maintain vigilance and robust cyber defenses.