Tag: national security

  • Russian MiG-31s Violate NATO Airspace

    Summary: On 19 September 2025, three Russian MiG-31 fighters violated Estonian airspace near Vaindloo Island, remaining inside NATO territory for about twelve minutes before being intercepted by Italian F-35s deployed under NATO's Baltic Air Policing mission. The aircraft entered without flight plans, had their transponders off, and failed to communicate with air traffic control, prompting a rapid NATO response.

    Estonia reported the jets penetrated up to five nautical miles into its territory. NATO officials framed the incident as another deliberate provocation, testing alliance readiness along the eastern flank. Reports indicate these MiG-31s were carrying Kinzhal hypersonic missiles during the incursion.

    Analysis: Russia is deliberately testing the NATO alliance by sending strategic assets into allied territory to measure response times and resolve. Putin likely views NATO's restraint as an opportunity to exploit through unconventional warfare and hybrid tactics. These incidents are also likely to shape his perception of alliance weakness, influencing decisions in potential future conflicts in the Baltics or the APAC region.

    Sources

    Reuters: https://www.reuters.com/business/aerospace-defense/nato-member-estonia-says-three-russian-jets-violated-its-airspace-2025-09-19/

    AP News: https://apnews.com/article/443df0c37ff2254fcc33d5425e3beaa6

    Türkiye Today: https://www.turkiyetoday.com/world/3-russian-jets-enter-estonian-airspace-nato-scrambles-f-35s-3207176

  • Disrupting Cartels: A Multi-Approach Strategy

    Military raids and high-profile arrests make headlines, but they do not end the business of cartels. Mexican and South American trafficking organizations operate like multinational corporations: diversified revenue streams, global supply chains, and deep local recruitment pipelines. Long-term disruption will require a different approach. The US must pursue strategies that make the cartel business model financially unsustainable and logistically difficult. This means combining proven tactics with fresh ideas.

    The points below are presented as broad concepts to help spark discussion, rather than full write-ups. Bullet points allow the ideas to be absorbed quickly, keep the focus on the main themes, and give room for others to share their perspectives or expand on them with their own insights.

    Hit the Money

    Cartels are profit-driven, so hitting their finances directly is one of the most effective tactics.

    • Sanctions: Use the Foreign Narcotics Kingpin Act and related tools to freeze assets and bar cartel associates from the global financial system.
    • AML enforcement: Monitor wire transfers, front companies, trade-based laundering, and crypto flows.
    • Asset forfeiture: Seize properties, accounts, and equipment tied to trafficking.
    • Gatekeeper accountability: Extend AML requirements to lawyers, accountants, and company formation agents who unintentionally aid laundering.
    Source: https://www.fbi.gov/news/stories/operation-targets-sinaloa-drug-cartel-

    Pressure the Supply Chains

    Without precursor chemicals, weapons, and reliable transport, cartel profits collapse.

    • Precursor controls: Tight licensing, end-user declarations, and transaction reporting for fentanyl and meth ingredients.
    • Transport disruption: Increase inspections at land, sea, and air ports of entry. Use risk-scoring for parcels and coordinated seizures to impose losses.
    • Weapon flow prevention: Enforce straw purchase laws, track high-volume ammo sales, and inspect southbound cargo for firearms.
    Map illustrating the flow of fentanyl precursors from China to the U.S., Mexico, and Canada, highlighting the trafficking routes used by drug cartels. Source: https://www.heritage.org/china/report/holding-china-and-mexico-accountable-americas-fentanyl-crisis
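    To make the "risk-scoring for parcels" idea concrete, here is a minimal additive scoring sketch. Everything in it (the indicator names, point weights, and inspection threshold) is hypothetical and invented for illustration, not drawn from any real customs or postal screening system.

```python
# Hypothetical parcel risk-scoring sketch. Each indicator present on a
# parcel adds a weighted number of points; parcels at or above a threshold
# are flagged for physical inspection. All names, weights, and the
# threshold below are illustrative assumptions.

WEIGHTS = {
    "origin_high_risk": 35,         # shipped from a known precursor source region
    "declared_value_mismatch": 25,  # declared value inconsistent with weight/class
    "new_shipper": 15,              # no prior shipping history
    "chemical_keywords": 25,        # manifest mentions dual-use chemical terms
}
INSPECT_THRESHOLD = 50

def risk_score(parcel: dict) -> int:
    """Sum the points of every indicator the parcel triggers."""
    return sum(points for key, points in WEIGHTS.items() if parcel.get(key))

def should_inspect(parcel: dict) -> bool:
    return risk_score(parcel) >= INSPECT_THRESHOLD

# Example: a first-time shipper from a high-risk origin reaches the threshold.
example = {"origin_high_risk": True, "new_shipper": True}
print(risk_score(example), should_inspect(example))  # 50 True
```

    The point of a scheme like this is not precision but triage: it concentrates limited inspection capacity on the small fraction of parcels that match several risk signals at once.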

    Strengthen Law Enforcement and Legal Tools

    Treat cartels as the national security threat they are.

    • Legal designations: Label major cartels as Foreign Terrorist Organizations to unlock broader prosecution authorities.
    • Multi-charge prosecutions: Use corruption, extortion, racketeering, and terrorism statutes alongside drug laws.
    • Joint task forces: Expand US-Mexico intelligence-sharing, vetted police units, and targeted extraditions.

    Undercut Recruitment

    Cartels can replace jailed or killed members quickly. Cutting off their manpower is essential.

    • Economic investment: Develop infrastructure, job opportunities, and vocational training in high-risk regions.
    • Community programs: Support local leadership, protect activists, and fund youth initiatives.
    • Public messaging: Counter the narco “glamor” with real accounts of cartel life and its short, violent reality.
    • Exit pathways: Offer reduced sentences or amnesty for low-level members who defect.
    Map illustrating the narcotics trafficking flows and operational zones of major cartels in Mexico, highlighting cities of concentration and ports of entry. Source: https://www.start.umd.edu/tracking-cartels-infographic-series-major-cartel-operational-zones-mexico

    Leveraging Technology and Intelligence

    Modern cartels use drones, encrypted comms, and cyber tools; the response must be smarter.

    • Surveillance: Deploy drones, thermal imaging, and satellite analytics to detect labs, routes, and cultivation sites.
    • Data analysis: Use AI to flag suspicious trade, travel, or financial activity linked to trafficking networks.
    • Cyber disruption: Infiltrate encrypted networks, disable cartel IT infrastructure, and track crypto transactions.
    • Fusion centers: Integrate federal, state, and Mexican partners to rapidly act on shared intelligence.
    Members of the Jalisco New Generation Cartel in Michoacán State, Mexico, in 2022. Source: https://www.nytimes.com/2025/06/30/world/americas/sinaloa-cartel-mexico.html
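    The "data analysis" bullet above can be illustrated with the simplest version of the idea: statistical outlier detection on transaction amounts. The sketch below uses invented amounts and a conventional three-sigma threshold to flag transfers that deviate sharply from an account's history; real systems layer many more features and trained models on top of this.

```python
# Minimal anomaly-flagging sketch: flag transactions whose amount deviates
# from an account's historical mean by more than k standard deviations.
# Amounts and the threshold are illustrative only.
from statistics import mean, stdev

def flag_outliers(history: list[float], new_txns: list[float], k: float = 3.0) -> list[float]:
    """Return transactions more than k sample standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    return [t for t in new_txns if abs(t - mu) > k * sigma]

history = [200, 210, 190, 205, 195, 202, 198]  # typical wire amounts for the account
incoming = [201, 5000, 207]                    # 5000 breaks the pattern
print(flag_outliers(history, incoming))  # [5000]
```

    Cartel laundering is deliberately structured to stay inside normal-looking ranges, which is why production systems combine this kind of per-account baseline with network-level features such as counterparties, trade documents, and crypto flows.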

    Conclusion

    Cartels are resilient because they operate across multiple domains: finance, logistics, community, and technology. Disrupting one area temporarily hurts them; attacking all at once can slowly erode their power. The US can combine financial sanctions, supply chain disruption, legal pressure, recruitment prevention, and intelligence innovation into a long-term strategy. Success will not be a single decisive victory, but a steady squeeze that makes cartel operations unprofitable and unsustainable.

  • Iranian APTs and the Next Phase of Infrastructure Risk

    In the wake of escalating tensions in the Middle East this past spring, Iranian state-sponsored hackers turned their focus toward a new frontier: US critical infrastructure.

    From May through June 2025, cybersecurity telemetry revealed a 133% surge in Iran-attributed cyber activity targeting US industrial and operational technology (OT) environments. These campaigns hit transportation and manufacturing sectors, but energy and water infrastructure remain long-standing targets. While espionage remains a primary objective, the evidence increasingly suggests Iran is preparing for more overt disruption.

    Strategic Escalation

    Iran’s cyber posture has always mirrored its geopolitical environment. In Spring 2025, that meant responding to Israeli and US airstrikes with asymmetric cyber operations. Groups like APT33 (Elfin), APT34 (OilRig), and MuddyWater (Static Kitten) ramped up traditional espionage, while more aggressive actors like CyberAv3ngers and Fox Kitten (tied to recent Pay2Key.I2P ransomware operations) pursued OT-focused sabotage and ransomware deployment.

    Iran’s messaging through pseudo-hacktivist fronts and deepening ties with ransomware operators clearly framed this activity as retaliation for “Western aggression.” That framing is part of a broader Iranian cyber doctrine that views the compromise of critical infrastructure as a form of coercion and deterrence.

    In parallel with APT activity, pro-Iranian hacktivists ramped up operations against US defense and critical infrastructure sectors. Groups like “Mr. Hamza” claimed responsibility for defacing and leaking data tied to defense contractors, including Raytheon Technologies (RTX), following US involvement in strikes against Iranian facilities. While attribution remains murky, these operations often mirror Iranian state objectives and timelines, suggesting coordination or at least ideological alignment. The targeting of US DIB entities serves Tehran’s broader goal of projecting reach and retaliation across both digital and strategic domains.

    Pre-Positioning

    Iran’s shift toward OT environments is the most significant development.

    • MuddyWater and APT33 continued to exfiltrate intellectual property from manufacturing and defense-adjacent industries.
    • CyberAv3ngers targeted water control systems and other ICS devices with their custom malware, IOControl, discovered embedded in US and allied OT environments.
    • Fox Kitten evolved into a ransomware-as-a-service operator with an 80% (up from 70%) profit-share for affiliates targeting the US or Israel.

    Alongside collecting information, these actors are also establishing persistence. In many cases, backdoors were quietly planted and left dormant, signaling an intent for future activation should the need arise.

    Actor | Affiliation | Focus | Objective
    MuddyWater | MOIS | Aerospace & Defense, Utilities, Gov, Civil & NGOs | Espionage
    APT33 | IRGC | Aerospace & Defense, Energy, Gov, Healthcare | Espionage and Access
    CyberAv3ngers | IRGC | Water, ICS, Finance | Disruption
    Fox Kitten | Unknown | IT/OT Gateways | Ransomware-as-a-service
    OilRig | MOIS | Finance, Gov | Credential Theft

    Implications for the US DIB

    Iran’s campaigns display a willingness to target logistics, aerospace, and manufacturing suppliers that support US and Israeli defense sectors. The Defense Industrial Base (DIB) should expect more of this, not only from state-sponsored actors but also from criminal or hacktivist affiliates acting on behalf of Iran’s IRGC or MOIS cyber arms.

    Some immediate implications:

    • DIB contractors should hunt for Iranian TTPs and malware like IOControl and DNSpionage.
    • OT segmentation, remote access policies, and endpoint hygiene are foundational.
    • Incident response (IR) planning must include scenario-based escalation modeling: what happens if the access Iran gains today becomes a wiper event tomorrow?
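    As a minimal sketch of what "hunting for Iranian TTPs" can look like in practice, the snippet below sweeps observed network events against a threat-intel indicator list. The domains and hash are placeholders, not real IOControl or DNSpionage indicators; a real hunt would pull vetted IOCs from an ISAC or intel feed.

```python
# Hedged IOC-sweep sketch: compare observed events against a threat-intel
# indicator set. All indicator values below are placeholders, NOT real
# IOControl or DNSpionage IOCs.

IOC_DOMAINS = {"bad-ota-update.example.net", "sync-telemetry.example.org"}
IOC_SHA256 = {"deadbeef" * 8}  # placeholder 64-char hex string, not a real hash

def sweep(events: list[dict]) -> list[dict]:
    """Return events whose contacted domain or file hash matches an indicator."""
    return [
        ev for ev in events
        if ev.get("domain") in IOC_DOMAINS or ev.get("sha256") in IOC_SHA256
    ]

events = [
    {"host": "hmi-01", "domain": "bad-ota-update.example.net"},
    {"host": "eng-ws-7", "domain": "update.vendor.example.com"},
]
for hit in sweep(events):
    print(f"indicator match on {hit['host']}")  # indicator match on hmi-01
```

    Indicator matching is only the first pass; behavioral hunting (default-credential logins, anomalous OT protocol traffic) catches the activity that rotating infrastructure is designed to hide.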

    US Response: Shields Up

    Initially, the federal response may have felt quieter than prior cyber alerts, such as those during the Ukraine conflict, but the signals were still there.

    On LinkedIn, Jen Easterly, former CISA Director, reactivated the Shields Up mantra within hours of US strikes on Iranian nuclear sites. Her post explicitly warned US critical infrastructure operators to expect:

    • Credential theft and phishing
    • ICS-specific malware
    • Wipers masquerading as ransomware
    • Propaganda-laced hacktivist campaigns

    Easterly urged sectors to segment OT networks, patch internet-facing systems, enforce MFA, rehearse ICS isolation, and actively monitor ISAC channels.

    The various critical infrastructure-related ISACs followed suit. And while no single campaign headlined the response, the defensive posture matched the moment.

    Jen Easterly emphasizes the importance of cybersecurity vigilance for US critical infrastructure in response to recent Iranian cyber activities.

    So What’s Next?

    Iran’s recent activity represents a shift in focus, not necessarily a shift in capability. The targeting of OT environments and critical infrastructure may reflect aspirational doctrine as much as operational readiness. While there’s no conclusive evidence that Iranian actors have staged disruptive payloads in U.S. networks, the direction of their targeting and tooling, particularly the development of ICS and OT-specific malware, suggests a growing interest in operational disruption, and not just information gathering.

    For the US defense and critical infrastructure communities, this creates a clear mandate to prepare for the next phase before it arrives.

    • Monitor beyond the perimeter: Iranian threat actors have historically gained access through default credentials, exposed devices, and lateral movements through flat networks.
    • Expect dual-use operations: Intelligence collection and pre-positioning are not mutually exclusive.
    • Reassess assumptions: Iranian groups are traditionally viewed as less sophisticated than Russian or Chinese APTs, but recent coordination and tooling suggest they’re evolving quickly.

    In short, we’re seeing a doctrinal pivot. Iran is exploring offensive options in OT environments, and testing how far it can go without triggering escalation. This makes detection, attribution, and sector-wide coordination more important than ever.

    References

    https://www.nozominetworks.com/blog/threat-actor-activity-related-to-the-iran-conflict

    https://claroty.com/team82/research/inside-a-new-ot-iot-cyber-weapon-iocontrol

    https://www.cisa.gov/news-events/news/joint-statement-cisa-fbi-dc3-and-nsa-potential-targeted-cyber-activity-against-us-critical

    https://therecord.media/iran-state-backed-hackers-industrial-attacks-spring-2025

  • [Case Study] Turkey’s Nuclear Energy Development Proliferation Risk Profile 

    Preface

    This document is a strategic nonproliferation analysis modeled after the IAEA’s State Evaluation Report (SER) format. Developed as part of an academic project, it assesses a specific country’s nuclear capabilities, incentives for proliferation, and potential safeguards challenges. The goal is to simulate real-world intelligence analysis and offer policy-relevant insights on nuclear risk and verification needs.

    Introduction

    Turkey occupies a unique strategic position at the crossroads of Europe and the Middle East, neighboring several current or former weapons of mass destruction (WMD)-proliferating states. As a longstanding NATO member under the US nuclear umbrella, Turkey’s security has historically relied on alliance commitments, including the stationing of an estimated 50 US B61 nuclear bombs at Incirlik Air Base. At the same time, Turkey has pursued nuclear energy ambitions for several decades as part of its economic growth and energy security strategy. Turkey is a Non-Nuclear-Weapon State party in good standing under the Nuclear Non-Proliferation Treaty (NPT) and has been public in its support of nonproliferation norms. Occasional remarks made by Turkey’s leadership, however, have raised concerns about its long-term intentions. This research paper will provide a comprehensive analysis of Turkey’s nuclear energy development. It will survey Turkey’s nuclear program and infrastructure, examine potential incentives and pathways for proliferation, identify indicators of any deviation from peaceful commitments, and review verification mechanisms. The goal is to synthesize current information and offer a policy-relevant assessment of the proliferation risks associated with Turkey, in line with international nonproliferation frameworks.

    State Profile and Nuclear Program

    Background and Nuclear History

    Turkey’s interest in nuclear technology dates to the 1950s with plans for nuclear power formulated as early as 1970. During the Cold War, Turkey’s role as a NATO frontline state against the Soviet Union emphasized its strategic importance, but nuclear weapons were supplied by the US under NATO sharing agreements rather than developed internally. Turkey established the Turkish Atomic Energy Authority (TAEK) in 1982 to supervise nuclear research and development (R&D). In the following decades, Turkey made several attempts to launch nuclear power projects, but these early bids were canceled or delayed due to financial, regulatory, and political hurdles. It wasn’t until the 2010s that Turkey’s nuclear power ambitions gained some traction, showcasing a high-level political push to reduce heavy dependence on imported energy and to nurture economic growth.

    Nuclear Facilities and Fuel Cycle

    Turkey doesn’t yet operate any nuclear power reactors, but construction is underway. The country’s first nuclear power plant, at Akkuyu on the Mediterranean coast, is being built by Russian state-owned Rosatom under a build, own, operate (BOO) model. The Akkuyu Nuclear Power Plant will consist of four VVER-1200 pressurized water reactors (4,800 MWe total); construction began in 2018, with Unit 1 expected online in 2025 and the remaining units coming online through 2028. A second plant was planned at Sinop on the Black Sea coast in partnership with a French-Japanese consortium, but a 2018 feasibility study deemed the project’s cost and schedule unfeasible under the original terms. Since then, Turkey has explored other potential partners for Sinop, including more talks with Russia in late 2022 to possibly construct four reactors there. A third site at Igneada has also been under discussion, with Chinese firms offering to build reactors using US-derived technology.

    Map highlighting key locations for Turkey’s nuclear power projects, including Akkuyu, Sinop, and Igneada.

    Beyond power reactors, Turkey’s nuclear infrastructure includes research and training reactors. A small TRIGA Mark-II research reactor (250 kW) has operated at Istanbul Technical University (ITU) since 1979. Another research reactor, the 5 MW TR-2 at the Cekmece Nuclear Research and Training Center near Istanbul, commissioned in 1981, was used for research and isotope production. The TR-2 originally ran on high-enriched uranium (HEU), but in 2009 was shut down to undergo conversion to low-enriched uranium (LEU) as part of nonproliferation efforts. The reactor’s HEU fuel was returned to the US in 2009, and Turkish authorities have since implemented safety upgrades; regulatory approval to restart TR-2 with LEU has been sought, with additional plans to resume operations to support research and isotope needs. These moves have eliminated weapons-grade HEU from Turkey, aligning with the global minimization of civilian HEU. Aside from these reactors, Turkey doesn’t currently operate facilities for sensitive nuclear fuel-cycle processes like uranium enrichment or reprocessing, and it has no known capability to produce nuclear fuel indigenously. All fuel for future power reactors will be supplied through foreign partners (i.e., Rosatom for Akkuyu) under long-term contracts. The Akkuyu agreement includes a provision to establish a fuel fabrication plant in Turkey, which would enable local assembly of nuclear fuel, though the plant would still rely on imported enriched uranium from Russia. Turkey has an estimated few thousand tonnes of domestic uranium resources in central Anatolia, a modest supply. The Temrezli in-situ leach uranium mining project was explored by foreign firms, but the government revoked the licenses in 2018, stalling the project. In 2024, Turkey showed interest in securing uranium supply abroad, signing a cooperation pact with Niger to allow Turkish companies to explore Niger’s uranium mines.
    Turkish officials, including the foreign and energy ministers, visited Niger in mid-2024 seeking access to its high-grade uranium deposits. These efforts reflect Turkey’s desire to ensure fuel supply for its “nascent nuclear-power industry” and potentially to gain experience in the front end of the fuel cycle, though any moves toward indigenous enrichment remain a longer-term and scrutinized prospect (Sykes & Hoije, 2024).

    Future Plans for Nuclear Energy

    Looking forward, nuclear energy plays a central role in Turkey’s strategy to diversify its electricity mix and lessen dependence on imported natural gas and coal. The government’s current plans envision three nuclear power plant sites in operation by the mid-2030s (Akkuyu, Sinop, and a third site) with a total of up to 12 reactor units (approx. 15 GWe capacity). As of December 2024, Akkuyu’s four units are under active construction with Rosatom financing and owning a majority stake. As for Sinop, Turkey initially partnered with a Japanese-French consortium (Mitsubishi Heavy Industries, Itochu, and EDF/Areva) to build ATMEA-1 reactors, but cost estimates ballooned to over $44 billion, leading to that consortium’s withdrawal in 2018. Turkey has since kept Sinop on the agenda, even courting Russia to take it over, but no final agreement has been reached. Meanwhile, China has emerged as a leading contender for the third Turkish plant, with negotiations in mid-2023 involving Chinese state companies proposing to build reactors (possibly Hualong One designs) at Igneada in Thrace. A project like this might involve US-derived technology through China General Nuclear’s partnership with Western firms. The timeline for the Sinop and Igneada projects remains uncertain, as both depend on financing terms, technology selection, and Turkish political will to commit further resources. Still, President Erdogan has repeatedly affirmed Turkey’s intent to become a nuclear energy country, even stating an ambition for “three nuclear power plants by 2030” in public remarks. To build the necessary human capital, Turkey has sent hundreds of students abroad for nuclear engineering education. Since 2011, Rosatom has sponsored Turkish students at Russian universities to staff Akkuyu; as of 2025, dozens of Turkish graduates have earned nuclear engineering degrees in Russia and returned to work at the plant.
    Similar training initiatives exist with other partner countries, creating a pipeline of skilled personnel. While aimed at peaceful energy development, this growing base of nuclear expertise and infrastructure provides capabilities that could, under different political circumstances, be relevant to a weapons program. Later in the paper, I will expand on these dual-use implications.

    Akkuyu Nuclear Power Plant: Turkey’s first and advanced nuclear facility, demonstrating the nation’s commitment to energy diversification and security.

    Nuclear Regulatory Framework

    Turkey has recently overhauled its nuclear regulatory system to meet international standards as it pursues nuclear power. Historically, the Turkish Atomic Energy Authority (TAEK) functioned as both a promoter and regulator of nuclear activities. In July 2018, Turkey created an independent Nuclear Regulatory Authority, or Nukleer Duzenleme Kurumu (NDK), transferring most of TAEK’s regulatory and licensing duties to this new body. The NDK regulates nuclear power plant safety, security, and all fuel cycle-related activities, issuing licenses and conducting inspections in line with IAEA guidelines. TAEK’s role was reduced to managing radioactive waste and decommissioning issues, and in 2020 TAEK was further consolidated into the Turkey Energy, Nuclear and Mining Research Institute (TENMAK). TENMAK now acts as the national R&D organization for nuclear science, energy, and mineral resources, inheriting TAEK’s research institutes. The Atomic Energy Commission (AEC), chaired by a high-level official, oversees all nuclear activities and advises the government on policy. Other relevant bodies include the Ministry of Energy and Natural Resources (which sets energy policy) and the Energy Market Regulatory Authority (EMRA), which handles electricity market licensing and would approve electricity generation licenses for nuclear plants. Turkey has also updated its nuclear liability and safety laws in line with international conventions, being a signatory of the Paris Convention on Third Party Liability for nuclear damage. Regarding nuclear security, Turkey has welcomed international peer reviews. The IAEA conducted International Physical Protection Advisory Service (IPPAS) missions in 2003 and 2021, which reviewed Turkey’s nuclear security regime. The 2021 mission noted Turkey’s adherence to IAEA nuclear security guidance and incorporation of the 2005 Amendment to the Convention on the Physical Protection of Nuclear Material (CPPNM), which Turkey ratified in 2015.
Overall, Turkey’s regulatory framework is being strengthened to support the safe expansion of nuclear energy, with clear separation of promotion (TENMAK) and regulation (NDK) functions as per international best practices. The framework provides the basis for ensuring that Turkey’s nuclear activities stay under effective control and exclusively peaceful.

    Nonproliferation Treaty Obligations and International Commitments

    Turkey has a long-standing commitment to global nonproliferation regimes. It became a party to the NPT as a non-nuclear-weapon state in 1979 and implemented a Comprehensive Safeguards Agreement with the IAEA in 1981. Under these safeguards, all nuclear material and facilities in Turkey are subject to IAEA monitoring to verify they are not used for weapons. Turkey was an early adopter of the IAEA Additional Protocol (AP), signing it in 2000 and putting it into effect in 2001. The AP grants the IAEA expanded rights of access and information, allowing for inspections of undeclared sites and verification of the absence of clandestine nuclear operations. Turkey’s implementation of the AP has allowed the IAEA to reach a broader conclusion since 2012 that Turkey has no undeclared nuclear material activities present. This provides confidence in Turkey’s compliance with its nonproliferation obligations. In addition to the NPT, Turkey also signed the Comprehensive Nuclear-Test-Ban Treaty (CTBT) in 1996, pledging not to conduct nuclear explosion tests. Turkey is also a party to international initiatives aimed at preventing WMD proliferation. It has been a member of the Nuclear Suppliers Group (NSG) since 2000, and of the Zangger Committee since the 1990s. These memberships commit Turkey to implement strict controls on exports of nuclear and dual-use materials, making sure they are not diverted to weapons programs. Likewise, Turkey joined the Missile Technology Control Regime (MTCR) in 1997 to curb the spread of ballistic missiles capable of delivering WMDs. As a chemical weapons possessor in the past, Turkey signed and ratified the Chemical Weapons Convention (CWC) and completed the destruction of its limited chemical stockpile, and it adheres to the Biological Weapons Convention (BWC) while no known biological programs exist in the country. 
    Perhaps most importantly, Turkey, like all UN member states, is bound by UN Security Council Resolution 1540, which requires national laws to prevent non-state actors from acquiring nuclear, biological, or chemical (NBC) weapons. Turkey has welcomed Resolution 1540 and submitted multiple national reports on its implementation, detailing measures such as export controls, border security, and criminalization of proliferation activities. Although Turkey is not a member of any formal nuclear-weapon-free zone, it has voiced support in international forums for the establishment of a WMD-free zone in the Middle East. Turkey’s stance has been that all countries in its region (including Israel and Iran) should forego WMD, aligning with its broader advocacy for disarmament and a fair nonproliferation regime.

    To summarize, Turkey’s official posture is firmly embedded in the global nonproliferation regime: it has comprehensive IAEA safeguards and an Additional Protocol in force, and it participates in all major export control and nonproliferation initiatives. These obligations form a strong legal barrier to diversion of its growing nuclear energy program for non-peaceful uses. However, as the next section examines, the regional security context and Turkey’s evolving strategic calculus could, under some conditions, create incentives to reconsider these commitments.

    Proliferation Pathways

    Strategic Incentives for Nuclear Weapons

    Under existing conditions, Turkey does not actively seek nuclear weapons. That said, analysts have identified several scenarios in which Turkey’s incentives could shift toward proliferation. The most cited trigger is a nuclear-armed Iran. Turkey and Iran are regional rivals balancing each other’s influence; if Iran were to openly acquire nuclear weapons or become a threshold nuclear state, Turkey could feel a heightened security threat and pressure to respond accordingly. The prospect of a nuclear Iran has already spurred debates in Turkey’s strategic community about Turkey’s vulnerability and the reliability of external protection. While NATO’s nuclear umbrella currently covers Turkey, President Erdogan has voiced doubts about its long-term credibility, questioning whether it is acceptable that others are free to have nuclear-tipped missiles while Turkey cannot. This sentiment suggests a perceived inequity in the nonproliferation order and a desire for greater strategic autonomy. If Turkey’s confidence in NATO security guarantees diminishes, its leaders might reassess the costs and benefits of an independent deterrent. Calls to remove US nuclear weapons from Incirlik have increased in recent years. If those weapons were removed without an adequate alternative security arrangement, Turkey could perceive a deterrence gap.

    Regional dynamics beyond Iran also play into Turkey’s strategic calculus. Turkey borders Syria and is in proximity to Israel; one a former proliferator and the other an undeclared nuclear state. Erdogan has rhetorically pointed to Israel’s nuclear arsenal as an unfair threat in the region, although Israel’s weapons have existed for decades and are likely not the primary driver for Turkey today. More relevant are Turkey’s great-power neighbors: Russia’s aggressive posturing in Ukraine and Syria and its nuclear saber-rattling unsettle the security environment. Although Russia is a partner in Turkey’s energy projects, their geopolitical interests diverge in places like Syria, the Caucasus, and the Black Sea. A nuclear capability could be seen by some Turkish strategists as an equalizer to deter a nuclear-armed Russia or to assert Turkey’s leadership in a multipolar Middle East. Additionally, domestic and prestige factors could serve as incentives. Under Erdogan’s administration, Turkey has embraced a narrative of “New Turkey” and neo-Ottoman strategic independence. Possessing advanced technology or even nuclear weapons can be viewed as a status symbol of great power. Some proliferation theories suggest countries may pursue nuclear weapons partly to bolster national pride or international standing. Erdogan’s 2019 statement, “there is no developed nation in the world that doesn’t have them”, reflects a misconception but also possibly a prestige-driven impulse: he equated nuclear armament with being a developed, powerful nation, implying Turkey should not be left behind. Domestically, pursuing nuclear weapons might rally nationalist support by asserting Turkey’s sovereignty against Western double standards, although it would conflict with Turkey’s international commitments and likely invite sanctions or isolation that most Turkish citizens would deem unacceptable.

    In weighing these incentives, it is important to note that Turkey’s powerful military and bureaucratic establishment have historically prioritized alignment with NATO and adherence to the NPT. For decades, the Turkish General Staff and diplomats were staunch defenders of nonproliferation, partly to maintain NATO cohesion and EU accession prospects. Turkey’s civil-military balance, however, has shifted under Erdogan, with civilian nationalist and assertive leadership consolidating control. If the political leadership decided a nuclear deterrent was necessary for national survival or prestige, domestic opposition from the traditional secular elite or military might not be as decisive a constraint as in the past. Still, any such decision would be fraught with risk, potentially jeopardizing Turkey’s security ties and economy. Most analysts assess that Turkey is unlikely to go nuclear unless the strategic environment changes drastically; for example, if Iran openly crosses the nuclear threshold or the NATO security guarantee erodes beyond repair. Even in those cases, Turkey may first pursue middle options like developing latent capability or a civilian fuel cycle that hedges toward weapons before outright weaponization.

    Potential Proliferation Pathways

    If Turkey were to seek nuclear weapons, how could it technically proceed given its current capabilities and constraints? One pathway is the uranium enrichment route. Turkey has significant experience with nuclear materials at the reactor level but currently lacks enrichment facilities. However, Turkey has consistently asserted its “right to enrich” under the NPT for peaceful purposes. In a proliferation scenario, Turkey may invoke an energy security rationale to establish an indigenous uranium enrichment program, ostensibly to fuel future power reactors. This could begin overtly as a small pilot enrichment facility under safeguards. One indicator of such intent was Turkey’s pursuit of raw uranium sources in countries like Niger. Acquiring uranium ore only makes sense if a state plans to fabricate fuel or enrich it domestically rather than relying on foreign supply. A suspiciously timed deal for large quantities of uranium, or the import of enrichment-related technology, would set off alarms. Were Turkey to secretly acquire or build centrifuges, it might leverage foreign expertise. There is historical precedent for illicit procurement networks using Turkey as a transit point: components for Pakistan’s A.Q. Khan network passed through Turkish companies in the early 2000s. Turkey could also seek external assistance for a weapons effort from allies like Pakistan, which has an established nuclear arsenal. Speculation exists that Pakistan and Turkey, sharing strong defense ties, might cooperate if Turkey decided to proliferate. There is currently no public evidence of any Pakistani commitment to aid a Turkish nuclear weapons program, and Pakistan would face intense international backlash if it openly transferred such technology. More likely, Turkey would try to indigenously develop the pieces of a fuel cycle, for example through a covert centrifuge R&D project hidden within its civil nuclear research institutes. Turkey’s well-educated nuclear engineers could form the backbone of a secret program, though designing efficient centrifuges or obtaining high-strength materials in secret would be a significant challenge under trade surveillance.

    Another pathway is the plutonium route, but this appears less practical for Turkey. Turkey’s power reactors at Akkuyu are light-water reactors under IAEA safeguards. Any diversion of spent fuel for plutonium reprocessing would likely be detected, and Turkey lacks a reprocessing plant; acquiring or constructing a clandestine reprocessing facility would be tough to conceal. Turkey also has no heavy-water reactors, which produce bomb-suitable plutonium more efficiently; if it suddenly announced plans for a research reactor of a type that could yield significant plutonium, that would raise red flags. A theoretical scenario could involve Turkey repurposing its research reactor activities, for example producing small quantities of plutonium in the TR-2 reactor’s fuel. That reactor, however, now runs on LEU, is small, and is poorly suited to producing weapons-grade plutonium, making this an unlikely route given safeguards scrutiny and low output. A more dramatic approach would be for Turkey to obtain a complete weapon or fissile material from outside the country. While not likely, I cannot entirely dismiss scenarios like stealing or seizing the US B61 bombs at Incirlik in a crisis. Those bombs, however, are under US control with Permissive Action Links and would be rendered unusable if seized; such an action would also destroy US-Turkey relations and bring about global censure. Alternatively, Turkey could try to buy a weapon or fissile material on the black market. This, too, is remote given today’s monitoring and the lack of any known willing seller aside from North Korea, which Turkey would be extremely unlikely to engage.

    A more subtle proliferation strategy for Turkey might be a nuclear hedge: developing nuclear latency without overt weaponization. This could involve building up all the components short of the bomb itself: a domestic enrichment capability, a stockpile of LEU, advances in missile delivery systems, and even civil naval nuclear propulsion research, which could act as a loophole for withdrawing material from safeguards since naval reactors can use highly enriched fuel. Turkey has already been building up its ballistic missile program, including production of the Bora-1 short-range ballistic missile (SRBM) with a 280 km range, the 2022 test of the Tayfun missile at over 500 km, and plans to extend ranges to 1,000 km. While officially for conventional deterrence, such longer-range missiles could be adapted to deliver nuclear warheads in the future. Turkey’s pathway to a bomb, if it ever chose to pursue one, would likely begin with leveraging its civil nuclear program to acquire enrichment technology under seemingly legal pretenses, or, less likely, turning to covert external procurement. Each path faces significant technical and political obstacles and would probably be detected before yielding a usable weapon. I will expand on this further in the next section.

    Official animation depicting Turkish Bora-1 ballistic missile being fired from a mobile launcher.

    Indicators and Verification Mechanisms

    With Turkey’s extensive treaty commitments, any move toward nuclear weapons development would generate observable indicators detectable by international monitors or intelligence. Potential indicators of deviation from peaceful use include both changes in policy behavior and technical anomalies:

    • Policy and legal indicators: An obvious indicator would be if Turkey’s government openly signaled intent to leave or undermine its nonproliferation obligations. For example, withdrawing from the NPT or the IAEA Safeguards Agreement would be an unmistakable warning and an escalatory step. Short of withdrawal, Turkey could cease implementation of the AP or refuse IAEA inspections that it previously accepted, on grounds of sovereignty or reciprocity. Such behavior would strongly suggest clandestine activity. Heightened nationalist rhetoric, like repeated presidential statements about the right to nuclear weapons or hints that Turkey might need its own deterrent if regional threats grow, would reinforce concerns. While Erdogan’s past remarks were one instance, a continuing pattern of such statements or inclusion of nuclear options in doctrinal discussions would indicate a policy shift.
    • Undeclared facilities and/or activities: On the more technical side, the emergence of any undeclared nuclear facility would be a red flag. Under the AP, Turkey must declare any new nuclear-related site. Discovery (through satellite imagery or other intelligence gathering methods) of a suspicious installation could indicate a secret enrichment plant. Additionally, construction of unusual scientific facilities like a heavy water production plant or a large radiochemistry lab that could handle plutonium with no clear civilian justification would raise alarms. Turkey’s extensive territory and tunneling expertise mean a covert site is not impossible, but it would be challenging to operate such a facility without detection in the long term, given overhead surveillance and the need to procure specialized equipment internationally. Analysts would scrutinize high-resolution satellite images for telltale signs such as security perimeters, ventilation stacks, waste streams at research sites, etc.
    • Procurement anomalies: A more subtle sign of proliferation intent could be illicit procurement. If Turkish entities start seeking unusual dual-use materials or technology inconsistent with their known civilian programs, this would be a key indicator. Examples could include attempts to purchase high-strength maraging steel, frequency converters, vacuum pumps, or ring magnets suitable for gas centrifuges outside of normal channels. Turkey’s membership in the NSG means it has pledged to enforce export controls, but illicit imports for its own use could still move through covert channels.
    • Scientific and technical publications: Clues often emerge from the scientific community. If Turkish nuclear scientists begin publishing research on enrichment techniques, laser isotope separation, high-temperature plutonium chemistry, or warhead design physics, it might indicate state encouragement of expertise in weapons-relevant areas. Open-source analysts monitor publications and patent filings for such patterns. A historical parallel is how Iranian scientists’ papers on neutron initiators and uranium metallurgy were early giveaways of weapons-relevant R&D. For Turkey, any sudden surge in advanced nuclear fuel cycle research beyond what is needed for power reactor operation would be notable. The Turkish government’s tight control over research institutions might limit open publishing, but international collaborations or conference presentations could inadvertently reveal new focus areas.
    • Other behavioral signs: Turkey might seek to harden or diversify its delivery systems as a precursor. Testing of longer-range missiles or developing indigenous satellite launch vehicles could be dual-use for nuclear delivery. Turkey’s pursuit of air and missile defense could also be seen as an effort to protect against Israel/Iran missiles in a world where nuclear deterrence factors in. While not a concrete indicator of proliferation, a heavy emphasis on ballistic missile capability combined with nuclear rhetoric would deepen suspicion.

    To detect and respond to these indicators, the international community relies on a suite of verification systems and monitoring approaches. These include:

    • IAEA safeguards and the AP: If Turkey remains under its current agreements, the IAEA is the first line of defense. The IAEA conducts regular inspections at declared facilities to verify that no nuclear material is diverted. Inventory checks and surveillance ensure that all enriched uranium and spent fuel are accounted for. Under the AP, the IAEA can request complementary access to any site, even non-nuclear sites, to investigate indications of nuclear-related activities. For example, inspectors can visit a university lab or industrial facility on short notice if they suspect nuclear material might be present. They may also carry out environmental sampling, swiping surfaces and air for traces of nuclear isotopes that might indicate clandestine work. Turkey’s broad cooperation has so far meant the IAEA has not reported any irregularities. If evidence arose, such as foreign intelligence tips about a hidden lab, the IAEA could invoke a special inspection to clarify the situation, though this requires Board of Governors approval if the state resists.
    • National and allied intelligence: NATO allies, particularly the US, maintain intelligence efforts regarding Turkey’s strategic programs. Signals intelligence (SIGINT) and human intelligence (HUMINT) could pick up conversations or orders related to secret nuclear activities. For example, communication with foreign suppliers about sensitive equipment or unusual military orders to prepare tunnels could be intercepted. Throughout the Iranian nuclear crisis, Western intelligence often uncovered facilities before the IAEA was informed. A similar watch on Turkey would likely reveal early moves toward weaponization. Turkey’s integration in Western defense networks might make covert activities harder to hide from its allies. If such intel were obtained, allies would most likely approach Turkey privately at first, and if concerns continued, raise the issue at the IAEA Board or UN Security Council.
    • Open-source and non-governmental monitoring: In today’s information age, independent researchers and NGOs play an important role. High-resolution commercial satellite imagery is readily available; think tanks like the Institute for Science and International Security or Turkey’s own EDAM could analyze new construction. If a large building pops up at the Kucukcekmece Nuclear Research Center, for example, with no declared purpose, analysts will likely flag it. Turkey’s media and academia may also leak information if scientists are reassigned to secret projects or if there’s an unexplained budget surge for a strategic program. Despite political pressures, Turkey maintains a varied press environment where some investigative journalists continue to pursue sensitive military stories, unless national security laws silence them.
    • International legal systems: If clear evidence of a proliferation attempt emerged, the issue would likely escalate to the UN Security Council (UNSC) to authorize stronger verification or enforcement. The IAEA could refer Turkey to the UNSC for non-compliance, as it did with Iran in 2006, if Turkey were found breaching safeguards. The UNSC could then mandate more aggressive inspections or demand Turkey halt certain activities. In extreme cases, sanctions could be imposed to dissuade further progress. One tool could be a bespoke monitoring mechanism modeled on the Joint Comprehensive Plan of Action (JCPOA) used for Iran, involving verification beyond the AP such as continuous monitoring of centrifuge production. Reaching that stage, however, would indicate a severe breakdown of trust. Before matters escalated that far, Turkey’s partners would likely apply diplomatic pressure and offer incentives to keep Turkey within the nonproliferation fold.

    There have been no signs to date that Turkey has undertaken any covert nuclear weapons-related work. The IAEA has continuously drawn the broader conclusion that all nuclear material in Turkey remains in peaceful use, and Turkish transparency reinforces confidence in its compliance. That said, maintaining vigilance is prudent. The verification systems described ensure that if Turkey ever did pivot toward proliferation, it would very likely face early detection and international intervention long before actual weaponization. This alone serves as a strong deterrent against any covert program.

    Conclusion

    Turkey’s nuclear trajectory epitomizes the dual-use dilemma at the core of the nonproliferation regime: a country pursuing a legitimate nuclear energy program while navigating a volatile security environment and harboring great-power aspirations. My analysis finds that Turkey’s proliferation risk, at present, remains low. The country is deeply embedded in treaties like the NPT and relies on NATO security guarantees, giving it strong incentives to abstain from nuclear weapons. Turkey’s nuclear energy program is under strict international oversight, and recent steps show its commitment to purely peaceful use. Turkey’s unique regional posture, however, means its strategic calculus may change if the balance erodes. President Erdogan’s hints at the unfairness of the current order suggest that Turkish restraint should not be taken for granted if proliferation cascades begin in the region.

    From a policy perspective, a few measures can help keep Turkey’s proliferation risk in check. First, sustaining NATO’s assurances to Turkey is important, since clear commitments and missile defense cooperation can mitigate the security fears that might otherwise spur a nuclear option. The continued presence of NATO nuclear sharing serves as a material reminder that Turkey is protected, and allies should quietly engage Ankara on the role these weapons play and the conditions under which their removal would be considered. Second, the international community should support Turkey’s civil nuclear program in a way that minimizes proliferation-prone capabilities. This can include offering fuel supply guarantees, so Turkey feels no need to enrich uranium, and assisting with spent fuel management. Negotiating a fuel take-back agreement for Akkuyu’s spent fuel, for example, would remove stockpiles of plutonium-bearing material from Turkey. Additionally, encouraging Turkey to source fuel through multilateral frameworks or international fuel banks would reinforce the norm against national enrichment.

    Third, robust diplomacy with Turkey regarding regional threats can address the root motivators. If Iran’s nuclear impasse worsens, involving Turkey in solutions will be important so that Turkey feels its security concerns are heard and managed collectively rather than having to fend for itself. Turkey has, in the past, played a role in diplomatic efforts, for instance, the 2010 Tehran fuel swap initiative with Brazil. Reintegrating Turkey as a constructive partner in nonproliferation initiatives, rather than a potential adversary, is the smarter play. Domestically, Turkey could be encouraged to continue demonstrating leadership in nonproliferation by ratifying the CTBT and actively participating in proposals for a Middle East WMD-Free Zone. These steps would bolster Turkey’s international image as a responsible stakeholder, countering any domestic narrative that might favor a weapons path.

    Finally, the international community needs to maintain vigilant monitoring of Turkish nuclear activities, but in a way that does not alienate or unjustly accuse. The existing verification tools are adequate, but should Turkey’s behavior change, preemptive diplomacy will be needed to address issues before mistrust spirals out of control. Open lines of communication between Turkish authorities and the IAEA will help clarify technical questions, such as informing the IAEA of any new nuclear research projects under AP declarations to avoid misconceptions.

    To conclude, Turkey today presents a low proliferation risk and in many ways is a model of a non-nuclear-weapon state investing in nuclear power under proper safeguards. Its domestic regulatory reforms and international cooperation on nuclear security are positive indicators. The risk profile is not static, however; it depends on geopolitical developments. The evolution of Iran’s nuclear program, the status of Turkey’s relations with the West, and internal political shifts will all affect Turkey’s strategic choices. Proliferation in Turkey is not inevitable, nor is it likely in the near term, but it is conditional. By understanding those conditions and reinforcing the barriers, the international community can ensure that Turkey continues to find that the benefits of nonproliferation outweigh any perceived gains of developing a nuclear weapon. Keeping Turkey within the nonproliferation regime strengthens regional stability and upholds the integrity of a global norm that, as President Erdogan himself argued at the UN, should apply equally to all. In the end, Turkey’s case underscores the importance of addressing the security and prestige concerns that drive proliferation, thereby preserving its role as a responsible actor in the pursuit of nuclear technology for peaceful purposes.

    References

    Ağbulut, Ü. (2019). Turkey’s electricity generation problem and nuclear energy policy. https://www.researchgate.net/publication/332099832_Turkey’s_electricity_generation_problem_and_nuclear_energy_policy

    Akkuyu Nuclear. (2021). 43 Turkish specialists received higher education diplomas in nuclear power engineering. Akkuyu Nuclear. https://akkuyu.com/en/news/43-turkish-specialists-received-higher-education-diplomas-in-nuclear-power-engineering

    Bureau of Nonproliferation. (2003). The Nuclear Suppliers Group (NSG). U.S. Department of State. https://2001-2009.state.gov/t/isn/rls/fs/3054.htm

    Ciddi, S., & Stricker, A. (2025). FAQ: Is Turkey the next nuclear proliferant state? Foundation for Defense of Democracies. https://www.fdd.org/in_the_news/2025/02/05/faq-is-turkey-the-next-nuclear-proliferant-state

    Gesellschaft für Anlagen- und Reaktorsicherheit (GRS). (2023). Nuclear energy in Turkey. https://www.grs.de/en/nuclear-energy-turkey-04072023

    International Atomic Energy Agency. (2021). IAEA completes nuclear security advisory mission in Turkey. International Atomic Energy Agency. https://www.iaea.org/newscenter/pressreleases/iaea-completes-nuclear-security-advisory-mission-in-turkey

    Jewell, J., & Ates, S. A. (n.d.). Introducing nuclear power in Turkey: A historic state strategy and future prospects. Energy Strategy Reviews. https://doi.org/10.1016/j.esr.2015.03.002

    Landau, E., & Stein, S. (2019). Turkey’s nuclear motivation: Between NATO and regional aspirations. Institute for National Security Studies. https://www.inss.org.il/publication/turkeys-nuclear-motivation-between-nato-and-regional-aspirations

    Nuclear Suppliers Group. (n.d.). Frequently asked questions. https://www.nuclearsuppliersgroup.org/index.php/en/resources/faq

    Nuclear Threat Initiative. (n.d.). Turkey. NTI. https://www.nti.org/countries/turkey/

    Republic of Türkiye Ministry of Foreign Affairs. (n.d.). Arms control and disarmament. https://www.mfa.gov.tr/arms-control-and-disarmament.en.mfa

    Shokr, A., & Dixit, A. (2017). Improved safety at Turkey’s TR-2 research reactor: IAEA peer review mission concludes. International Atomic Energy Agency. https://www.iaea.org/newscenter/news/improved-safety-at-turkeys-tr-2-research-reactor-iaea-peer-review-mission-concludes

    Sykes, P., & Höije, K. (2024). Turkey eyes Niger mining projects amid competition for uranium. Mining.com. https://www.mining.com/web/turkey-eyes-niger-mining-projects-amid-competition-for-uranium/

    Turkish Minute. (2025, February 4). Turkey’s short-range Tayfun missile said to surpass 500-kilometer range in latest test. Turkish Minute. https://www.turkishminute.com/2025/02/04/turkeys-short-range-tayfun-missile-surpass-500-kilometer-range-latest-test

    UNIDIR, VERTIC. (n.d.). Türkiye. Biological Weapons Convention National Implementation Measures Database. https://bwcimplementation.org/states/turkiye

    United Nations. (2021). National report submitted by Turkey in accordance with article VIII of the Treaty on the Non-Proliferation of Nuclear Weapons (NPT/CONF.2020/E/37). https://www.un.org/sites/un2.un.org/files/2021/11/npt_conf.2020_e_37.pdf

    Ülgen, S. (2010). Preventing the proliferation of WMD: What role for Turkey? Centre for Economics and Foreign Policy Studies (EDAM). https://edam.org.tr/wp-content/uploads/2010/06/Preventing-the-Proliferation-of-WMD-What-Role-for-Turkey.pdf

    U.S. Government Accountability Office. (2005). Nuclear nonproliferation: IAEA has strengthened its safeguards and nuclear security programs, but weaknesses need to be addressed (GAO-06-93). https://www.gao.gov/assets/a248101.html

    World Nuclear Association. (2024). Nuclear power in Turkey. https://world-nuclear.org/information-library/country-profiles/countries-t-z/turkey

  • US-Led Strikes on Iranian Nuclear Sites: Fallout for China’s Influence and Regional Nuclear Strategy

    US-Led Strikes on Iranian Nuclear Sites: Fallout for China’s Influence and Regional Nuclear Strategy

    Background: Operation Midnight Hammer

    On 13 June 2025, Israel launched a surprise air offensive against Iran, bombing a series of nuclear and military installations after alleging Tehran was on the verge of nuclear weapons capability. Over the next week, intense exchanges ensued: Iran’s IRGC retaliated with hundreds of rockets and drones targeting Israeli cities, while skirmishes flared across Syria and Lebanon via Iran-aligned militias. The conflict escalated dramatically on 21 June 2025 when US President Donald Trump announced Operation Midnight Hammer, a US air and missile strike against three of Iran’s most critical nuclear facilities. All three sites (Fordow, Natanz, and Isfahan) were integral to Iran’s nuclear fuel cycle and their selection was evidence of a sweeping effort to cripple Iran’s ability to produce weapons-grade material.

    Notably, both Fordow and Natanz were under IAEA safeguards at the time of the strikes, meaning they were monitored with cameras, periodic inspections, and seals under the terms of Iran’s Comprehensive Safeguards Agreement. While these facilities had enriched uranium up to 60%, they remained within the bounds of Iran’s NPT obligations, though deeply controversial.

    Iran’s immediate response was militarily limited but symbolically charged. In the early hours of 23 June, Tehran fired a volley of ballistic missiles at Al Udeid Air Base in Qatar, the largest US base in the Gulf. The attack was preceded by advance warning and ultimately caused no casualties, a fact President Trump pointed to in calling Iran’s response “weak”. Nevertheless, the message was clear: Iran meant to show it could strike American assets in the region. Simultaneously, Iran’s parliament convened an emergency session in which hardline lawmakers voted to authorize closure of the Strait of Hormuz, a move that, if implemented, would choke off one-fifth of global oil shipments. The vote was largely posturing, but it demonstrated Iran’s leverage over global energy markets and signaled how far it might go if fighting continued.

    By 24 June, intensive behind-the-scenes diplomacy, reportedly involving Oman, Russia, and China, yielded a fragile ceasefire. President Trump announced that Israel and Iran had agreed to pause hostilities, with Israel phasing out airstrikes and Iran halting missile fire. Israeli warplanes stood down later that day, ending ten days of open warfare. The truce, however, remained shaky. Within hours of the ceasefire taking effect, Iranian proxies in Gaza and Lebanon launched isolated rocket salvos, and an Iranian missile strike landed in the Israeli city of Beersheba, causing civilian casualties.

    For Iran, the outcome was bittersweet. On one hand, it survived the most concerted US-Israeli military action against it in decades; Iran’s leadership even declared victory once the ceasefire held, with Supreme Leader Ali Khamenei boasting that Iran had “slapped the US in the face” by resisting its demands. On the other hand, the physical damage to Iran’s nuclear program was significant. Post-strike satellite imagery showed heavily damaged buildings at Natanz and Fordow, and Western intelligence assessed that Iran’s enrichment capability had been set back by at least a year or two. US officials characterized the strikes as successful in destroying key infrastructure, while also emphasizing that no strike can destroy the knowledge in Iranian scientists’ heads. As the dust settled, Washington dispatched envoys to rally international support for stricter containment of Iran’s nuclear activities, even as Tehran dug in on its right to peaceful nuclear technology. This set the stage for the strategic implications now unfolding in the region, particularly regarding China’s role and the reactions of Iran’s regional rivals.

    Strategic Insights

    • The US strikes jeopardize China’s investments in Iran and undercut Beijing’s role as regional mediator. While China condemned the attacks, it continues backing Iran economically and diplomatically. Beijing is expected to avoid direct confrontation while reinforcing ties to Tehran via energy trade, technology transfer, and coordinated diplomatic resistance to US pressure.
    Satellite image depicting damage to Iran’s nuclear facility following recent US airstrikes.
    • Iran’s nuclear know-how and stockpiles remain intact despite facility damage. If Tehran resumes covert nuclear work, regional rivals like Saudi Arabia, Turkey, and Egypt may accelerate nuclear “hedging” via civilian programs and dual-use technologies. The strikes risk triggering a latent arms race.
    • Attacking safeguarded facilities raises global legal and strategic concerns. Iran could reduce IAEA cooperation or even withdraw from the NPT. Regional states now question the value of treaty compliance if it does not shield them from military action.
    • The crisis pulls Beijing and Moscow closer to Tehran. Both shielded Iran at the IAEA and could deepen covert cooperation in military tech and trade. China’s Belt and Road Initiative (BRI) ambitions in the region are now tethered to Iran’s resilience and regional stability.
    A detailed map illustrating China’s Belt and Road Initiative, showcasing the global infrastructure network involving railroads, ports, and pipelines.
    • The strikes boost US-Israel deterrence credibility in the short term, but they also embolden Iran’s asymmetric responses (i.e., proxy militias, cyber threats, and maritime disruptions). Gulf states remain diplomatically cautious but are reinforcing ties with US defense structures.

    Watchlist: Things to Monitor

    Indicator and what it signals:

    • Iran reduces IAEA access (i.e., expels inspectors or disables cameras): a move toward clandestine nuclear activity or NPT withdrawal
    • Saudi or Turkish announcements on enrichment or reactor projects: strategic hedging or quiet proliferation intent
    • Chinese tech transfers or sanctions-evasion trade with Iran: strengthened Iran-China alignment despite Western pressure
    • Strait of Hormuz naval activity or proxy mobilization: Iranian asymmetric retaliation and escalation risk
    • Gulf states request new US air/missile defense assets: deepening military alignment amid regional insecurity

    Analyst Comment

    From an intelligence perspective, the June 2025 Iran strikes represent a watershed that will reverberate through Middle East geopolitics in the short and mid term. The operation achieved a tactical objective in damaging Iran’s nuclear infrastructure, but it also unleashed a cascade of second-order effects. Chief among them is a likely redoubling of Iran’s determination to obtain a credible deterrent, nuclear or otherwise, to guard against regime-threatening strikes in the future. In turn, this is catalyzing reactions among Iran’s rivals to hedge their bets, potentially ushering the region into a new phase of latent proliferation.

    The role of great powers has been particularly illuminating. China’s response, especially, shows the primacy of interests over ideology in its foreign policy. Beijing’s vocal condemnation of US aggression was expected, but more telling is what China does next. So far, China appears committed to quietly propping up Iran’s economy and defense industrial base, ensuring Tehran remains a thorn in Washington’s side and a viable participant in China’s Eurasian economic plans, while carefully avoiding overt confrontation with the US or alienation of the Gulf states. This dual-track approach will test China’s diplomatic agility and could prove a turning point for its Middle East footprint: either China emerges as a more assertive power brokering outcomes in regional conflicts, or it retreats to the sidelines if costs outweigh gains. Early indicators (evacuation of Chinese nationals and calls for talks) suggest a preference for limiting exposure, but Beijing is certainly learning from this crisis and will adjust its long-term strategy, for example by accelerating efforts to settle oil trades in yuan to reduce vulnerability to US sanctions pressure, as hinted by its increased use of RMB in dealings with Iran.

    For the United States and its allies, the near-term requirement is to manage escalation and prevent Iran’s retaliation from sparking a broader war. This will mean hardening bases, improving regional early-warning systems and processes, and coordinating closely with partners on contingency responses. Diplomatically, it will be imperative to capitalize on the leverage gained over Iran. If Iran is more isolated or its program set back, now is the time to negotiate firmer limits, or at least interim arrangements to remove the most dangerous materials from its soil. The US Special Envoy has already signaled openness to talks focusing on Iran’s enrichment levels and stockpile, which could offer Iran a face-saving way to step back from the nuclear brink in exchange for sanctions relief once it regroups. Whether Iran’s leadership, feeling humiliated, is willing to engage is uncertain, but the ceasefire offers a narrow window for diplomacy before hardliners on all sides gain the upper hand.

A final note on non-proliferation: the integrity of the global regime is arguably at its most vulnerable point since North Korea’s withdrawal from the NPT in the early 2000s. If the Middle East heads into a proliferation cascade, the credibility of the NPT will suffer worldwide. To counter this, innovative solutions should be pursued. These could include a US-led initiative for a Middle East security guarantee (a nuclear umbrella covering Israel and key Arab states to negate their need for independent arsenals), or a rejuvenated push for regional disarmament talks that include Israel’s capabilities, a topic long taboo but perhaps less so in the face of multiple potential nuclear actors emerging.

In intelligence terms, we will be watching for the morning-after indicators: Does Iran move materiel to secret sites? Do Saudi Arabia or Turkey suddenly announce new “research” reactors or mining projects? Do China and Russia sign new defense deals with Iran? Each of these will tell us how far the dominoes could fall. As of now, the short-term implications are clear: heightened tensions, hedging, and alignment shifts. The mid-term implications, whether this results in a fundamentally more nuclearized and polarized Middle East, or a sobered return to the negotiating table, will depend on the deftness of diplomacy in the weeks ahead and the willingness of regional actors to step back from the precipice.

    Stay tuned for more in-depth analysis on Chinese strategic influence in the Middle East, regional nuclear hedging, diplomatic alignments, and regional deterrence dynamics in a writeup to come.

    Additional Reading

    https://www.reuters.com/world/china/china-says-us-attack-iran-has-damaged-its-credibility-2025-06-22/

    https://www.reuters.com/business/energy/chinas-heavy-reliance-iranian-oil-imports-2025-06-24/

    https://www.al-monitor.com/originals/2025/05/iran-boosts-highly-enriched-uranium-production-iaea

    https://thediplomat.com/2025/06/war-in-iran-chinas-short-and-long-term-strategic-calculations

    https://foreignpolicy.com/2025/06/23/iran-china-gulf-states-strait-hormuz

    https://mei.edu/publications/special-briefing-israel-strikes-irans-nuclear-program

    https://specialeurasia.com/2025/06/24/china-bri-israel-iran-conflict

    https://bloomberg.com/graphics/2025-us-strikes-damage-iran-nuclear-sites-satellite-image/

  • Shortcut to Superpower? Rethinking Intelligence and Learning in the Age of AI

    Shortcut to Superpower? Rethinking Intelligence and Learning in the Age of AI

    If I can get the information faster and more efficiently with AI, is that really a bad thing?

    In national security, cyber defense, and intelligence work, speed and accuracy aren’t luxuries, they’re requirements. The faster an analyst can detect, assess, and act on information, the more resilient our posture becomes. So, it’s worth asking: if tools like AI can help us get to those insights faster, does it matter how we got there?

This isn’t just a classroom debate anymore. It’s a matter of operational advantage, and I’m afraid adversarial states may be addressing it more quickly.

    Intelligence Work is Changing

In the traditional model, analysts were trained to research exhaustively and reason independently. Today, the volume of data is overwhelming, the velocity of conflict is increasing, and the information space is more contested than ever. Memorizing doctrine or manually parsing SIGINT no longer scales.

    AI changes the workflow. It doesn’t remove critical thinking; it simply relocates it. Instead of spending hours searching for the right piece of intel or policy precedent, analysts can use AI to surface patterns, contextualize alerts, and propose early assessments. That frees up cognitive space to focus on what it means and what to do next.

    Another key shift in modern intelligence work is the sheer volume of internally generated reporting, ranging from post-incident summaries and investigative writeups to tactical threat advisories. Over time, these internal repositories have grown so vast that referencing older yet still-relevant documents in future reporting becomes a major challenge. Analysts often know the insight exists somewhere in the backlog, but tracking it down quickly, especially under time pressure, is inefficient or even unfeasible.

This is where private, domain-specific AI models trained exclusively on an organization’s own corpus can change the game. By indexing historical reports and enabling semantic search across them, these models can retrieve and summarize relevant findings in seconds. For example, if a threat actor resurfaces after a long dormancy, the AI can instantly surface prior incidents, TTPs, and internal commentary, giving analysts a head start and ensuring continuity across time. Rather than reinventing the wheel, intelligence teams can build on their own institutional knowledge more effectively. While some organizations may already employ this functionality, I believe most companies and agencies have yet to adopt it at scale, at least for now.
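The retrieval piece of that idea doesn’t require a frontier model; even a simple lexical index captures the core mechanic. Below is a minimal, hypothetical Python sketch (the report snippets and function names are mine, not any particular product) that ranks a small corpus of internal reports against an analyst’s query using TF-IDF and cosine similarity; a production system would swap this for semantic embeddings over the real backlog:

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def tfidf_vectors(docs):
    """Build simple TF-IDF vectors for a small corpus of report snippets."""
    tokenized = [Counter(tokenize(d)) for d in docs]
    n = len(docs)
    df = Counter()
    for counts in tokenized:
        df.update(counts.keys())
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    return [{t: c * idf[t] for t, c in counts.items()} for counts in tokenized], idf

def cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[t] * b[t] for t in shared)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def search(query, docs):
    """Return document indices ranked by relevance to the query."""
    vecs, idf = tfidf_vectors(docs)
    qcounts = Counter(tokenize(query))
    qvec = {t: c * idf.get(t, 0.0) for t, c in qcounts.items()}
    return [i for i, _ in sorted(enumerate(vecs),
                                 key=lambda iv: cosine(qvec, iv[1]),
                                 reverse=True)]

# Hypothetical internal reports standing in for years of backlog
reports = [
    "2022 incident: FIN-group phishing campaign against payroll portal",
    "Quarterly advisory: ransomware TTPs observed in healthcare sector",
    "2019 writeup: dormant threat actor using typosquatted login domains",
]
print(search("threat actor typosquatted domains resurfaces", reports))
```

Querying for a resurfaced actor’s typosquatting tradecraft ranks the 2019 writeup first, which is exactly the “head start” effect described above.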

    The Real Threat Isn’t AI, It’s Passive Use

    Threat actors are already using AI to generate disinformation, automate phishing, and map attack surfaces. If defenders don’t leverage the same tools, they fall behind.

    The real concern isn’t that AI makes us weaker thinkers. It’s that some people will use it to skip thinking entirely. I wouldn’t say that’s the AI’s fault, it’s the user’s intent. A disengaged mind won’t be saved or spoiled by technology. A sharp one, however, can be enhanced.

Strategic Implications

In a world contested both geopolitically and informationally, the competitive edge doesn’t go to the one who remembers the most. It goes to the one who can interrogate input, synthesize perspectives, and act decisively. AI, used correctly, accelerates that process.

    National security professionals, educators, and leadership teams should embrace AI not as a crutch, but as a force multiplier. Train people not just to consume answers but to pressure-test them. To ask better questions. To turn good input into greater output.

    Final Thought

Whether you’re an analyst, policymaker, or digital defender, the real skill today isn’t thinking in isolation, it’s knowing how to think with assistance. The people who learn that now will be the ones driving strategy tomorrow.

  • Russia’s Void Blizzard Targets the West’s Digital Backbone

    Russia’s Void Blizzard Targets the West’s Digital Backbone

    Microsoft Threat Intelligence has surfaced a new Russia-affiliated cyber actor: Void Blizzard, also tracked as LAUNDRY BEAR. Active since at least April 2024, this group is focused on long-term espionage targeting sectors critical to Western governments, infrastructure, and policy-making.

    Void Blizzard is not just another APT clone or cluster moniker. It represents an evolution in operational flexibility and tradecraft, shifting from relying on stolen credentials bought off the dark web to more aggressive adversary-in-the-middle (AitM) phishing campaigns. These newer efforts leverage typosquatted domains mimicking Microsoft Entra portals to harvest authentication tokens and compromise enterprise identities.

    Target Profile

    Void Blizzard’s campaign focus aligns closely with Russian state priorities. It has gone after targets in:

    • Defense and government agencies
    • Transportation and healthcare infrastructure
    • NGOs, education institutions, and intergovernmental organizations
    • Media and IT service providers

    While some activity overlaps with known Russian actors like APT29, Void Blizzard appears to operate as a distinct cell, coordinating within a larger ecosystem of state-sponsored espionage.

    Notable Tactics

    • Credential-based access remains a preferred entry point, but the shift to AitM phishing is a signal of increasing confidence and offensive posture.
    • Microsoft Entra impersonation suggests a deliberate focus on trusted identity systems, highlighting how fragile authentication flows can be under targeted pressure.
    • Operational consistency across NATO states and Ukraine further indicates strategic alignment with geopolitical goals, not just opportunistic targeting.

    Analyst Comments

    If you’re in defense, energy, public health, or civil society work, Void Blizzard’s tradecraft should raise alarm bells. Organizations should be:

    • Auditing Entra ID and authentication logs for anomalies tied to session replay or suspicious SSO activity
    • Deploying phishing-resistant MFA such as FIDO2 keys
    • Training users to identify lookalike URLs and domain spoofing, particularly in password reset or login prompts
    • Tracking overlaps with other Russian campaigns, especially Star Blizzard and Midnight Blizzard, to catch infrastructure reuse or strategic convergence
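On the lookalike-URL point, the core detection idea is simple enough to sketch. The snippet below is an illustrative Python example, not Microsoft’s detection logic, and the allow-list of trusted domains is hypothetical. It flags domains within a small edit distance of a trusted name, the kind of single-character typosquat used in AitM phishing:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Hypothetical allow-list; a real one would come from your identity provider config
LEGIT_DOMAINS = ["microsoftonline.com", "login.microsoft.com", "entra.microsoft.com"]

def flag_lookalike(domain, max_distance=2):
    """Return the trusted domain a candidate suspiciously resembles, else None."""
    for legit in LEGIT_DOMAINS:
        d = levenshtein(domain.lower(), legit)
        if 0 < d <= max_distance:   # near-match but not exact: likely typosquat
            return legit
    return None

print(flag_lookalike("micros0ftonline.com"))
```

An exact match returns None (it is the legitimate domain), while a one-character swap like `micros0ftonline.com` gets flagged against `microsoftonline.com`. The same check works in a proxy log pipeline or a user-reporting triage script.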

    Final Thoughts

    Void Blizzard is not flashy, but it is serious. It demonstrates how Russia continues to evolve its cyber espionage toolkit beneath the noise of more destructive attacks. In an era of hybrid conflict, groups like Void Blizzard are the quiet operatives laying groundwork for geopolitical advantage. They definitely won’t be the last.

    See Microsoft’s full report: https://www.microsoft.com/en-us/security/blog/2025/05/27/new-russia-affiliated-actor-void-blizzard-targets-critical-sectors-for-espionage/

  • [Deep Dive] Cyber Tactics and Counterterrorism Post-9/11

    [Deep Dive] Cyber Tactics and Counterterrorism Post-9/11

    Disclaimer: This research uses data derived from open-source materials like public intelligence assessments, government publications, and think tank reports. This report is based solely on my personal insights and independent analysis. It does not contain any sensitive or classified information and does not reflect the views of my employer. This report’s purpose is to serve as an exercise in analysis and critical thinking. 

    Introduction

Since 9/11, the global terrorism threat landscape has expanded from traditional kinetic attacks to include cyber approaches. Terrorist groups like Al-Qaeda, ISIS, Hamas, and Hezbollah have increasingly adopted digital tools for propaganda, recruitment, surveillance, and modest cyber operations. This shift has pressured counterterrorism (CT) strategies to evolve, integrating cybersecurity, intelligence, and offensive capabilities to address both physical and digital threats.

    Evolution of Terrorist Cyber Capabilities

    In the early 2000s, jihadist groups used the internet mainly for communications and propaganda. By 2014, ISIS had transformed its online presence by actively exploiting social media and encrypted messaging apps to recruit followers, spread propaganda, and coordinate activity beyond traditional battlefields. Though their cyber skills remained limited, some supporters engaged in doxing (public release of personal information), defacements, and minor breaches. A notable case involved a Kosovo hacker passing stolen U.S. personnel data to ISIS [1]. More recently, terrorist networks have begun experimenting with AI tools for media production, reconnaissance, recruitment, and influence operations.

    Groups like ISIS-K, Hamas, and Hezbollah have explored AI-generated videos and deepfakes to amplify their messaging. Hamas has also used fake dating apps to hack phones, and Hezbollah has engaged in cyber espionage aligned with Iranian interests. These adaptations primarily support propaganda and recruitment, not large-scale cyberattacks.

    Traditional vs Cyber Terrorism

Cyber capabilities have not replaced traditional terrorism but serve as force multipliers. Cyber tools are used to support kinetic attacks, plan operations, and magnify impact. Examples include cyber-assisted target identification and using drones for surveillance or attacks. Analysts conclude that terrorists aim to pair physical destruction with digital disruption. These tactics are not unique to Middle Eastern or Islamist extremist groups; they are also employed by modern Russian intelligence services in support of the war in Ukraine.

    Counterterrorism Strategy Shifts

1. Cybersecurity Integration: Governments treat cyber as central to CT. Coordination between state agencies and the private sector protects critical infrastructure (ISACs, CISA, InfraGard, etc.).
    2. Digital Intelligence and Surveillance: Intel agencies use AI and data analytics to monitor online radicalization and terrorist planning. Tools flag extremist content and behaviors on encrypted platforms.
    3. Offensive Cyber Operations: States have launched direct cyberattacks on terrorist infrastructure. Operation Glowing Symphony by US Cyber Command disrupted ISIS media operations [2].
    4. Online Radicalization Prevention: Governments promote alternative narratives and partner with communities to counter online extremism.
    5. Infrastructure Protection and Crisis Response: CT planning now includes simulations of cyber-physical attacks. Agencies collaborate to ensure emergency response continuity.

    Persistent Challenges

One of the primary challenges in countering cyber-assisted terrorism is actor attribution. In cyberspace, it is often difficult to determine who is behind an attack, especially when threat actors use anonymization techniques or false-flag operations. A disruption to infrastructure or a breach of data may originate from a lone hacker, a terrorist cell, or a hostile state, complicating response strategies and legal recourse. This ambiguity forces intelligence agencies to closely examine digital footprints, motives, and affiliations before responding, often in real time.

    Resource limitations and skill gaps also slow down effective CT operations in cyber. Traditional law enforcement and CT units often lack the deep technical expertise needed to triage malware, decrypt communications, or conduct forensics on seized devices. Recruiting and retaining cyber talent remains difficult for public agencies, especially as adversaries continue to innovate rapidly using widely available technology. The widespread use of encrypted communication platforms like Telegram and Signal compounds the problem, allowing terrorists to organize and recruit while remaining hidden from surveillance.

    Another pressing issue is the overwhelming volume of data. Every day, analysts must sift through massive amounts of online content to detect meaningful threats. AI tools can assist but are prone to false positives and blind spots, sometimes flagging harmless content or missing cleverly disguised plots. Legal and jurisdictional barriers further complicate enforcement efforts, especially when attackers operate across multiple countries. Existing laws are often outdated or inconsistent with the pace of modern cyber threats. Finally, terrorist groups remain highly adaptive, quickly shifting tactics, platforms, and tools in response to enforcement measures. This constant innovation challenges even the most capable security agencies, requiring them to remain agile and proactive in their strategies.

    Conclusion/Policy Implications

    Cyberterrorism has not replaced traditional terrorism but increasingly complements it. CT efforts now require a holistic approach integrating digital capabilities with conventional methods. Policymakers should focus on:

    • Cross-sector partnerships
    • Legal modernization
    • Investment into talent and tech
    • Infrastructure resilience

    The post-9/11 period demonstrates that success in CT depends on anticipating how terrorists will exploit emerging technologies and being ready to disrupt both their online and offline operations.

    References

    [1] Doxing and Defacements: Examining the Islamic State’s Hacking Capabilities – Combating Terrorism Center at West Point

    [2] https://icct.nl/sites/default/files/2023-01/Chapter-29-Handbook-.pdf

    https://icct.nl/publication/exploitation-generative-ai-terrorist-groups

    https://www.theguardian.com/world/2018/jul/03/israel-hamas-created-fake-dating-apps-to-hack-soldiers-phones

    https://www.dhs.gov/sites/default/files/2024-10/24_0930_ia_24-320-ia-publication-2025-hta-final-30sep24-508.pdf

  • Asymmetric Cyber Threats: Lessons from Guerrilla Warfare

    Asymmetric Cyber Threats: Lessons from Guerrilla Warfare

    The Digital Guerrilla

    When you think of cyber warfare, you often imagine digital equivalents of tanks, missiles, and grand battles between major powers. In reality, however, the cyber conflict we see today looks less like Normandy and more like a slow-burning insurgency.

    State-sponsored actors, whether they be from Russia, China, Iran, or North Korea, rarely go toe-to-toe with superior Western cyber defenses in a direct, conventional fight. Instead, they operate in the shadows, using asymmetric tactics meant for low-cost, high-yield disruption. Their methods resemble the playbook of guerrilla fighters throughout history: blend in, strike vulnerable targets, and exploit the defender’s size and rigidity.

In today’s post, I’ll unpack how these cyber operations mirror classic guerrilla warfare and why the analogy matters for defenders.

    Guerrilla Warfare 101

It’s all about fighting smarter, not harder: the art of the weak harassing the strong. Since the great stalemates of trench warfare in WWI, insurgent groups have leveraged mobility, surprise, and intimate knowledge of the terrain to outmaneuver larger, better-equipped militaries.

    Characteristics of guerrilla warfare include:

    • Asymmetry: Small groups using unconventional methods to challenge superior foes.
    • Deniability: Fighters blend into civilian populations, making attribution and retaliation tougher.
    • Hit-and-run tactics: Ambushes, sabotage, quick raids, always moving.
    • Psychological ops: Targeting public morale, misinformation.
    • Terrain advantage: Mastery of local geography to evade and frustrate conventional forces.

Sounds familiar, right? Swap out “fighters” for “APT groups”, “civilian populations” for “cybercriminal groups”, and “terrain” for “network infrastructure”, and you’ve got a pretty solid picture of today’s cyber landscape.

    Guerrilla Tactics in Action: State-Sponsored Cyber Threats

    Asymmetry in the Digital Domain

    State-sponsored groups like Russia’s APT28 and APT29 or North Korea’s Lazarus Group rarely match US or allied cyber capabilities head-on. They exploit the cost asymmetry. For a few thousand dollars in phishing kits, compromised VPNs, leased botnets, or commercial malware, they can inflict millions in damages, steal sensitive data, or shape public narratives. The defender’s dilemma? Defending every endpoint and supply chain vector costs exponentially more than launching simple, repeatable attacks.

    Deniability and Proxy Warfare

Just as guerrillas hide among civilians, cyber operators mask their identities using compromised infrastructure, false flags, or work contracted out to cybercriminal elements and impressionable anarchists; Russia’s GRU Unit 29155, for instance, uses various Telegram channels to incite Ukrainian youth toward anarchy and sabotage. North Korea’s use of third-party IT freelancers to infiltrate Western companies is another prime example. This plausible deniability muddies attribution, delays response, and allows our adversaries to operate with relative impunity.

    Hit-and-Run in Cyberspace

    Watering hole attacks, defacements, and smash-and-grab data theft mirror the guerrilla’s ambush. Breach a vulnerable vendor, pivot to the target, exfiltrate quickly, and vanish while defenders are left scrambling. These aren’t prolonged sieges, they’re opportunistic raids meant to probe weaknesses and sow chaos.

    Information Warfare as PsyOps

    Iranian and Russian cyber units have elevated disinformation to an art form. Influence operations targeting elections, societal divisions, or corporate reputations function as digital equivalents of guerrilla psychological operations. The goal isn’t always tangible damage; sometimes it’s just to erode trust and create confusion or panic.

    Mastering the Digital Terrain

In guerrilla conflicts, knowing the terrain is everything. In cyberspace, that “terrain” includes compromised networks, third-party vendors, poorly monitored endpoints, and the dark web. State-sponsored groups map this terrain meticulously, identifying soft targets and exploiting global infrastructure for cover.

    Some Case Studies: Cyber Guerrilla Warfare in Practice

In 2025, there are plenty of examples to pull from, but some of the more recent, notable cases include:

Russia’s FSB-Linked COLDRIVER/Callisto/Star Blizzard

Operating between cyberespionage and influence, this group exemplifies cyber guerrilla tactics. Recent reporting detailing its persistent targeting of Western NGOs, think tanks, and academia reflects a strategy of sustained harassment. The group focuses on undermining soft targets, shaping narratives, and stealing sensitive (not always classified) information that feeds broader geopolitical campaigns.

    North Korea’s IT Worker Fraud

The DPRK has combined traditional APT activities with an insurgent-style infiltration campaign: fraudulent IT workers securing remote jobs at Western firms. Once inside, these operatives act as insider threats with direct access to networks, sidestepping conventional perimeter defenses. This tactic parallels how insurgents embed within civilian populations to evade detection and execute attacks from within, in this case funding the regime’s weapons programs, among other motivations.

    Iran’s APT33/35/42

    Iranian threat groups excel at opportunistic targeting, often focusing on vulnerable sectors like oil & gas, transportation, and academia. Their attacks prioritize disruption, espionage, and influence, mirroring guerrilla strategies of infrastructure sabotage and psychological impact over decisive victories.

    Volt Typhoon: An Occupational Model

    China’s Volt Typhoon operations showcase a more sophisticated “occupation” model. Rather than smash-and-grab, their campaigns are long-term entrenchments in U.S. critical infrastructure, designed for persistent access and latent sabotage potential. This is less hit-and-run, more like guerrilla fighters establishing fortified zones in contested territory.

    Why the Guerrilla Warfare Analogy Matters

    Understanding cyber threats through the lens of guerrilla warfare reframes how we think about defense and deterrence.

    • Misaligned Defenses: Conventional cyber defenses are analogous to defending cities with large armies while insurgents roam freely in the countryside. Static defenses are insufficient against agile, persistent adversaries.
    • Deterrence is Harder: You can deter a nation’s military with superior firepower. Deterring a deniable, decentralized cyber guerrilla force is a different challenge.
    • Hybrid Warfare Context: These cyber guerrilla tactics don’t exist in a vacuum. They’re part of broader hybrid strategies, supporting kinetic operations, diplomatic pressure, or internal destabilization efforts.

    Mitigation?

This is a tough one, as mitigation against guerrilla tactics requires more than simply building bigger walls or buying more security tools. Some things worth considering:

    • Persistent threat hunting
    • Honeypots and deception technology
    • Coordination and collaboration across government, the private sector, and civil society
    • Publicly naming and sanctioning enablers
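On the honeypot point, even a low-interaction decoy yields useful telemetry: who connected, when, and what they sent first. Here is a minimal, illustrative Python sketch (the class name and log format are my own, and a real deployment would add service banners, isolation, and alerting); it stands up a throwaway listener and records a simulated probe:

```python
import socket
import threading
from datetime import datetime, timezone

class MiniHoneypot:
    """Low-interaction decoy: accept a TCP connection, record the source
    address and the first bytes sent, then drop the connection."""

    def __init__(self, host="127.0.0.1", port=0):
        self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.server.bind((host, port))   # port 0: let the OS pick a free port
        self.server.listen(5)
        self.port = self.server.getsockname()[1]
        self.events = []                 # (timestamp, source_ip, first_bytes)

    def serve_once(self):
        conn, peer = self.server.accept()
        conn.settimeout(2.0)
        try:
            data = conn.recv(1024)       # capture whatever the prober sends first
        except socket.timeout:
            data = b""
        self.events.append((datetime.now(timezone.utc).isoformat(), peer[0], data))
        conn.close()

hp = MiniHoneypot()
t = threading.Thread(target=hp.serve_once, daemon=True)
t.start()

# Simulate an attacker probing the decoy service
probe = socket.create_connection(("127.0.0.1", hp.port))
probe.sendall(b"GET / HTTP/1.1\r\n")
probe.close()
t.join(timeout=5)

for ts, src, first in hp.events:
    print(f"{ts} probe from {src}: {first!r}")
```

Nothing legitimate should ever touch a decoy, so every event in that log is, by construction, worth a look; that signal-to-noise ratio is the whole appeal against quiet, persistent adversaries.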

    Tactics Snapshot

    • Phishing (social engineering)
    • Credential harvesting (supply chain raids)
    • Watering hole attacks (sabotaged infrastructure)
    • Supply chain subversion (indirect targeting)
    • Wiper malware (destructive sabotage)

    Conclusion

Guerrilla warfare didn’t disappear with the end of colonial insurgencies or Cold War proxy wars. It evolved and found a new battleground on the web. Today’s state-sponsored cyber operations mirror the asymmetric tactics of historical insurgencies in that they’re cheap, deniable, persistent, and designed to frustrate superior foes. For defenders like us, recognizing this parallel is less an academic exercise than an essential step toward adapting strategy, resource allocation, and threat modeling.

    The digital guerrilla is no longer just a rebel in the jungle. They’re a sanctioned asset, behind a keyboard, operating in the blurred space between espionage, sabotage, and information warfare.

  • [Deep Dive] AI as a Force Multiplier in Modern Warfare

    [Deep Dive] AI as a Force Multiplier in Modern Warfare

    Disclaimer: This research project uses data derived from open-source materials like public intelligence assessments, government publications, and think tank reports. This report is based solely on my personal insights, hypothetical scenarios, and independent analysis. It does not contain any sensitive or classified information and does not reflect the views of my employer. This report’s purpose is to serve as an exercise in research, analysis, and critical thinking. 

    Purpose: This paper argues for the reframing of AI as a strategic tool, not an existential threat, and outlines how US defense education institutions must evolve to prepare future leaders for operationalizing AI in national security environments. 

Executive Summary: Artificial intelligence (AI) is transforming the strategic, operational, and educational dimensions of national defense. While public discourse often gravitates toward extremes, the reality is more pragmatic: AI is becoming foundational infrastructure in modern warfare. As such, the Department of Defense (DoD) and its professional military education (PME) institutions must adapt to cultivate leaders who understand, integrate, and govern AI systems effectively.

    This paper argues for a shift in how AI is conceptualized within defense circles. Drawing historical parallels to the role of ENIAC during World War II, I contend that AI should be seen less as an independent cognitive entity and more as a strategic enabler – one that augments decision-making processes across all echelons of command. The report outlines current defense applications of AI, analyzes institutional barriers to integration within PME, identifies governance challenges, and positions AI literacy as a cornerstone of future competitive advantage.

Key recommendations include embedding AI case studies and simulations into curriculum, developing interagency and industry-academic partnerships, and enforcing principles of explainability and human-in-the-loop oversight. Ultimately, preparing warfighters and strategists for the AI era requires a comprehensive modernization of defense education grounded in technical fluency, ethical judgment, and operational relevance.

    Introduction: AI has rapidly moved from theoretical construct to operational reality. Once confined to academic laboratories and speculative fiction, AI now underpins critical functions in logistics, intelligence, command-and-control (C2), and cybersecurity. As the US and its adversaries invest heavily in AI for strategic advantage, the defense community must make a pivotal choice: will AI be treated as a black box novelty managed by contractors, or as a core component of national defense doctrine managed by trained leaders?

    This paper adopts a strategic lens to answer this question, using the legacy of early computing – particularly ENIAC’s wartime role – as a historical analogue. Just as ENIAC revolutionized how ballistic trajectories were computed, enabling faster and more precise battlefield decisions, AI today offers unprecedented opportunities to extend cognitive reach. But the key to unlocking this potential lies not just in technology, but in human leadership.
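As a toy illustration of the kind of computation ENIAC mechanized: the drag-free range formula R = v0^2 sin(2*theta) / g (a simplification; real firing tables modeled drag and atmospheric conditions) can generate a miniature firing table in a few lines of Python, where hand computation once took hours per trajectory. The muzzle velocity and elevations below are arbitrary example values:

```python
import math

def ballistic_range(v0, angle_deg, g=9.81):
    """Drag-free projectile range: R = v0^2 * sin(2*theta) / g."""
    theta = math.radians(angle_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

# A miniature firing table for a hypothetical 450 m/s muzzle velocity
for angle in (15, 30, 45, 60):
    print(f"elevation {angle:>2} deg -> range {ballistic_range(450.0, angle):8.0f} m")
```

The point of the analogy is not the physics but the speedup: automating a well-understood calculation freed human computers for judgment, exactly the division of labor argued for with AI today.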

    The central thesis is that AI must be embedded into defense education as both subject and tool. PME institutions need to produce not only tacticians and strategists, but also technologically literate leaders who understand AI’s strengths, limitations, and ethical implications. By framing AI as infrastructure we position it where it belongs: at the heart of 21st century defense readiness.

The sections that follow will explore the evolution of AI narratives, real-world applications in defense, barriers to educational integration, risk governance, and the implications of strategic competition in the age of AI.

    Public Fears and Dystopian AI Narratives

Public discourse around AI often gravitates toward sensational fears – from Hollywood’s Terminator-style takeovers to worries of mass unemployment. Surveys have shown that a majority of Americans approach AI with trepidation. For example, in 2023 Pew [1] found 52% of US adults were more concerned than excited about growing AI use (versus only 10% more excited).

Figure: Survey results showing U.S. adults’ concerns about artificial intelligence in daily life, with a clear majority indicating more concern than excitement.

    Common public concerns include, but are not limited to:

    • Existential “AI Takeover” Scenario: Dystopian scenarios loom large. In one poll, 63% of US adults voiced worry that AI could lead to harmful behavior, and a similar share feared AI systems might “learn to function independently from humans” [2]. Over half (55%) even believed AI could pose a risk to the very existence of the human race. Such views reflect the enduring influence of science fiction tropes. The 1984 film The Terminator, for instance, “popularized fears of unstoppable machines” and cemented the notion of AI as an existential threat in the public imagination [3]. Some decades later, its imagery of a rogue superintelligence (Skynet) remains shorthand for AI doom in media narratives. 
    • Mass Unemployment and Social Disruption: Another prevalent fear is that AI and automation will displace human workers on a massive scale. Among Americans more concerned than excited about AI, the risk of people losing jobs is the top reason for their concern. As an example, about 83% of Americans expect that driverless vehicle adoption would eliminate jobs like rideshare and delivery drivers. This anxiety extends beyond blue-collar work, with white-collar workers also worrying that advances in generative AI could render their skills obsolete. Media coverage often highlights these scenarios of AI-induced economic upheaval, reinforcing public apprehension that “the robots” will leave humans unemployed.
    • Loss of Human Control and Ethical Misuse: People also fear humans could lose control over AI systems, leading to unpredictable or unethical outcomes. High-profile AI incidents and dystopian portrayals have primed the public to be wary of autonomous decision-making. In surveys, large majorities express concern that increasing AI use will erode privacy or be deployed in ways they are not comfortable with. Ethical campaigns have seized on these fears – for instance, advocacy groups invoking “killer robot” imagery push for bans on lethal autonomous weapons, tapping into public unease about machines making life-and-death decisions [4]. The vivid narrative of a moral boundary crossed by ungoverned AI resonates widely, even if actual military policy still mandates human oversight of use-of-force decisions.

    These dystopian or exaggerated perceptions are amplified by popular media and entertainment. While they reflect genuine concerns, they often overshadow the more mundane reality of what current AI can (and cannot) do. The result is a public narrative skewed toward worst-case scenarios – one that stands in stark contrast to how strategic decision-makers view AI.

    Defense Strategists’ Perspective: AI as a Tool, Not a Terminator

    Great catchline, I know. At the strategic level – particularly within U.S. defense and national security circles – artificial intelligence is predominantly seen as a force multiplier and necessary enabler, rather than a sentient threat. The Department of Defense (DoD) views AI as a technology to be harnessed in order to maintain a competitive edge. The Pentagon’s official strategy frames AI as transformative in augmenting human capabilities and improving military effectiveness, not replacing human judgment outright [5]. Key leaders emphasize integration over fear:

    • Maintaining a Competitive Edge: The DoD’s Third Offset Strategy explicitly aimed “to exploit all the advances in artificial intelligence and autonomy and to insert them into the Defense Department’s battle networks” as a means to preserve U.S. military superiority [6]. Rather than dwelling on speculative dangers, defense planners focus on how AI can change the character of warfare to the U.S.’s advantage. The 2018 National Defense Strategy anticipated that AI will significantly alter warfighting, and accordingly officials like Lt. Gen. Jack Shanahan (first director of the Joint AI Center) argued the United States “must pursue AI applications with boldness and alacrity” to retain strategic overmatch. In this view, failing to embrace AI is the bigger risk, as adversaries racing ahead in AI could threaten U.S. security.
    • AI as a Practical Enabler: Inside the Pentagon, AI is treated as a suite of powerful tools – from data-crunching algorithms to intelligent decision-support systems – that can streamline operations and enhance human decision-making. Officials stress that current AI is narrow and task-specific, not an all-powerful brain. For example, the Joint Artificial Intelligence Center (JAIC) was established in 2018 specifically to accelerate the DoD’s adoption and integration of AI across missions [7]. JAIC’s mandate has been to serve as an AI center of excellence providing resources and expertise to military units, underlining that AI’s role is to assist warfighters and analysts. As JAIC Director Lt. Gen. Michael Groen put it, “We seek to push harder across the department to accelerate the adoption of AI across every aspect of our warfighting and business operations”. This illustrates the prevailing mindset that AI is a general-purpose capability to be infused into logistics, intelligence analysis, maintenance, training, and other domains to make the force more effective and efficient.
    • Augmentation, Not Autonomy Run Amok: Defense leaders are generally cognizant of public fears and have repeatedly clarified that their pursuit of AI is not about ceding control to machines. DoD policies (such as directives on autonomous weapons and the 2020 AI Ethical Principles) insist on meaningful human oversight of AI-driven systems. In practice, the military’s near-term AI projects are largely focused on decision support, automation of tedious tasks, and optimizing workflows – far from Hollywood’s rogue robots. As one Navy official noted, much of AI’s impact will come through “mundane applications… in data processing, analysis, and decision support,” rather than any dramatic battlefield androids. The internal narrative frames AI as a collaborative technology: an aid to human operators that can sift intelligence faster, predict maintenance needs, or simulate scenarios – ultimately empowering human decision-makers, not displacing them. This perspective stands in stark relief against the “AI takeover” trope; instead of fearing AI’s agency, defense strategists worry about not using AI enough to keep pace with rivals.

    In summary, U.S. defense decision-makers tend to regard AI as a critical enabler to be integrated responsibly into military and security operations. The emphasis is on opportunity – leveraging AI to enhance national security – tempered by pragmatic risk management (ensuring reliability, ethics, and control), rather than on existential danger. This measured, tool-oriented outlook differs markedly from public dystopian narratives, focusing on AI’s strategic utility rather than its threat to humanity.

    Think Tank Perspectives: Weighing Risks Versus Strategic Integration

    Leading national security think tanks and research centers (RAND, CNAS, CSET, and others) have analyzed AI’s implications and generally echo the need to avoid hyperbole. Their reports often strike a balance – acknowledging legitimate risks from military AI, yet cautioning against exaggerated fears that could hinder innovation. Several consistent themes emerge from expert analyses:

    • AI as Transformative, but not Apocalyptic: Analysts note that while AI will shape the future of warfare, it is better understood as a continuum of technological evolution rather than a revolution that overnight yields superintelligent machines. A recent Center for a New American Security (CNAS) study argues that comparisons to an “AI arms race” are overblown – in reality, military adoption of AI today “looks more like routine adoption of new technologies” in line with the decades-long trend of incorporating computers and networking into forces [8]. In other words, there is momentum behind AI integration, but not the kind of breakneck, uncontrolled spiral that sci-fi scenarios or headlines might suggest. The report underscores that current military AI is a general-purpose technology akin to an improved computer, not a doomsday weapon in itself.
    • Concrete Risks: Safety, Bias, and Escalation: Think tank assessments tend to focus on tangible risks that come with deploying AI – e.g. system failures, vulnerabilities, or inadvertent escalation – rather than speculative sentience. A RAND Corporation analysis of military AI highlighted issues like reliability in high-stakes contexts and the need for testing to prevent accidents [9]. Similarly, CNAS has pointed out the risk that flawed AI could misidentify threats or act unpredictably in complex environments, which could increase the chance of accidents or even unintended conflict if not managed. These are serious concerns, but notably within the realm of technical and strategic problem-solving – addressable by policy, human oversight, and international norms – as opposed to uncontrollable AI revolt. By highlighting such issues, experts aim to ensure integration is done responsibly, without invoking a need to halt AI advancements altogether.
    • Strategic Integration as Imperative: On the whole, expert communities frame AI as an indispensable element of future national security, one that must be integrated strategically and swiftly. The consensus is that the U.S. cannot afford to fall behind in AI adoption, given competitors like China investing heavily in military AI. For instance, a RAND report on DoD’s AI posture emphasized scaling up AI experiments and talent to maintain U.S. tech superiority. Think tanks frequently describe AI as a “general-purpose technology” that will underpin intelligence analysis, cybersecurity, logistics, and more – a foundation for military power much like electricity or the internet. As such, their recommendations often focus on accelerating AI integration (through funding, R&D, public-private partnerships) while instituting safeguards (ethical guidelines, testing regimes, confidence-building measures internationally) rather than entertaining the idea of slowing or banning military AI outright.

    In think tank narratives, there is an implicit push to reframe the conversation about AI in national security. Rather than viewing AI itself as the threat, the emphasis is on the risk of misusing or not using AI. Experts urge policymakers to mitigate the real risks – such as unintended escalation or AI failures in weapons – through norms and oversight, but at the same time to push beyond public fear-based reluctance so that beneficial AI applications are not lost. This balanced perspective reinforces the notion that AI, handled correctly, is a net strategic enabler, not a harbinger of doom.

    Narrative Gaps in Policy, Investment, and Education

    The divergence between public fears and defense-sector views of AI has tangible effects on policymaking, defense investments, and even the education of the national security workforce. A threat-centric narrative can create frictions – from public resistance to military AI projects, to slowed adoption – whereas an enabler-centric narrative could foster more proactive policy and innovation. Several notable impacts of the differing narratives include:

    • Public Opinion Shaping Policy Debates: Heightened public fear of AI can translate into political pressure for restrictive policies. Lawmakers attuned to their constituents’ dystopian anxieties may call for strict regulations or bans on certain AI uses (e.g. autonomous weapons) before the technology is fully understood. For instance, the visceral “killer robot” trope has fueled campaigns at the United Nations to ban lethal autonomous systems preemptively. While ethical in intent, such moves – driven by worst-case imagery – could limit the military’s ability to develop AI for defensive or benign uses (like active protection systems) if not carefully negotiated. On the flip side, when expert communities and defense leaders advocate AI as a strategic necessity, they push for policies that invest in AI R&D and set guidelines for responsible use rather than prohibition. This tug-of-war between dystopian narratives and strategic imperatives plays out in policy forums. The outcome can affect everything from budget allocations to the rules governing AI development. A climate of fear might spur oversight (e.g. Congressional hearings grilling AI programs for potential dangers), whereas a reframed narrative highlighting AI’s national security benefits could build public and bipartisan support for sustained investment.
    • Tech Industry Engagement and Investment: The narrative gap also directly impacts collaboration between the government and the tech industry – a critical relationship for defense AI innovation. A stark example was Google’s withdrawal from the Pentagon’s Project Maven in 2018 after employee protests. Google engineers, influenced by concerns that their work on AI could contribute to lethal drone operations, argued it ran afoul of the “Don’t be evil” ethos. Facing internal revolt and public criticism, Google opted to cancel its AI contract with DoD [10]. This incident sent shockwaves through the defense community. It demonstrated how a workforce steeped in dystopian AI fears or moral concerns can impede defense AI projects, even those aimed at non-lethal tasks like imagery analysis. The MITRE Corporation analyzed this rift and noted that thousands of tech employees objected to their companies partnering with the military, perceiving it as “going against their values”. Similar pushback hit other firms (Microsoft, Amazon) in cases where AI or tech contracts for defense raised alarm among staff. The result is a chilling effect on defense tech investment: companies become hesitant to bid on AI programs that might spark public relations issues or staff resignations. This dynamic hampers DoD’s access to top AI talent and tools. Defense strategists recognize that sustaining U.S. military AI leadership requires close cooperation with the private sector (which leads in AI innovation) – but that cooperation is harder to forge when the public narrative paints such work as contributing to dystopia. Bridging this gap is thus seen as essential for investment and innovation.
    • Defense Education and Talent Development: Within military and defense educational institutions, there is a concerted effort to counter hype and fear with sober, informed understanding of AI. Leaders acknowledge that some segments of the public – and even the workforce – are uneasy about AI. To address this, defense educators are reframing the narrative for the next generation of officers and analysts. A U.S. Naval War College conference in 2019 was pointedly titled “Beyond the Hype: Artificial Intelligence in Naval and Joint Operations,” aiming to dispel misconceptions and highlight practical applications of AI as a tool. Scholars and military practitioners at that event discussed real-world use cases and limitations of AI, rather than science-fiction fantasies, implicitly teaching that AI is a technology to be mastered, not feared. Likewise, the DoD has launched AI education initiatives to raise the baseline knowledge across the force. The 2020 DoD AI Education Strategy called for integrating AI into professional military education curricula and training programs, ensuring personnel have a basic grasp of AI capabilities and ethics. This not only prepares the workforce to use AI effectively, but also helps inoculate them against sensationalized notions. By normalizing AI as another subject of proficiency – alongside cybersecurity or electronics – the defense community is building a culture that views AI rationally and focuses on operational advantages and safeguards. In short, defense education efforts seek to narrow the narrative gap by producing leaders who can engage with AI’s opportunities and risks in a nuanced way, rather than defaulting to pop-culture-driven extremes.

    The effects of narrative are thus self-reinforcing. Public fears, if unaddressed, can slow or skew policy and scare off key partners, which in turn could hinder the U.S. from fully leveraging AI for security. Recognizing this, many defense stakeholders argue that winning the “hearts and minds” on AI – both within the force and among the public – is becoming as important as the technology itself. This sets the stage for reframing AI’s role in national security.

    Reframing AI as a Strategic Enabler

    Given the evidence, a clear lesson for the defense community is the need to shift the narrative on artificial intelligence from one of looming threat to one of strategic enablement. The goal of such reframing would be to align public perception with the reality that AI, managed correctly, is a tool that can enhance security and prosperity, not an out-of-control adversary. Support for this reframing argument is found in both policy analysis and practice:

    • Emphasizing Benefits and Mission Outcomes: Defense agencies are beginning to tell a more positive, concrete story about AI’s role. Rather than speak in abstractions, they highlight how AI can save lives by improving search-and-rescue, or how it reduces routine workload for troops. This kind of messaging helps the public and Congress see AI as directly contributing to safer, more effective military operations. A MITRE study in 2020 specifically urged DoD leaders to communicate a compelling narrative about “the value of defending the country with honor” using modern technologies like AI, and to stress the Department’s commitment to ethical deployment of these tools. By showcasing adherence to ethics and human oversight, the Pentagon can alleviate fears of ungoverned AI. For example, DoD’s adoption of AI is often coupled with a Responsible AI framework – sending the message that the U.S. will use AI in line with its values, not as a reckless killer robot. Making such assurances public and transparent can build trust and counteract dystopian impressions.
    • Bridging the Cultural Divide: AI as an enabler also involves closing the gap with the tech sector and general workforce. This means engaging Silicon Valley and young technologists on shared values and national security needs. Success stories of AI-public sector collaboration are being lifted up to change minds. For instance, highlighting how an AI tool developed by a tech firm helped U.S. forces deliver aid more efficiently, or how a machine-learning model is saving maintenance costs in the Air Force, can illustrate AI’s positive impact. Think tanks and industry leaders suggest that public-private partnerships on AI should be promoted in the narrative  – to show that working on defense AI can be a force for good, protecting soldiers and civilians alike. The hope is that as more technologists see AI projects in defense yielding constructive results (and not just weapons), the stigma diminishes and investment flows more freely. In tandem, DoD is adjusting its own messaging to be more receptive to ethical concerns, rather than dismissive. Instead of waving away protests, defense advocates are increasingly acknowledging the need to earn trust. This cultural dialogue is part of reframing AI as a shared mission for security, as opposed to a government venture that the public should fear.
    • Aligning Narrative with Reality: Fundamentally, the reframed narrative must continually point out that the “science-fiction” view of AI is misaligned with the current reality. As experts note, most military AI systems are more akin to smart assistants than independent actors. Driving this point home can correct misperceptions. The contrast between a fictional Skynet and real-world AI applications (like predictive maintenance algorithms) is stark – a reframed narrative leverages that contrast to reduce undue alarm. Defense educators and communicators therefore stress separating fact from fiction: acknowledging genuine AI-related risks (e.g. algorithm bias or adversary use of AI for disinformation) but clarifying that these are challenges manageable through policy and engineering, not reasons to halt progress. As it has been put before, even AI-enabled weapons “lack the malevolent sentience of Skynet,” and keeping humans in the loop is the prudent path – so we should focus on maintaining control and ethics rather than fearing an uprising. This kind of messaging directly tackles the Terminator mythos, reframing the issue around human responsibility and strategic advantage.

    In conclusion, repositioning AI in the public and policy narrative as a strategic enabler – a powerful tool under human direction – is critical for the United States to fully benefit from the AI revolution in defense. The chasm between public fear and military optimism can be narrowed by education, transparency, and consistent examples of AI’s value. Strategic-level decision makers and thought leaders increasingly advocate this reframing because they recognize that without public buy-in and understanding, even the best AI technology may fail to be adopted. The background evidence presented here supports the argument that AI is not an autonomous menace to be halted, but a strategic asset to be guided and governed wisely. Reframing the narrative in this way can help ensure robust policymaking, sustained investment, and an informed defense workforce – all oriented toward integrating AI in service of national security, responsibly and effectively.

    Operational Use of AI in the US Defense Sector

    AI technologies are already being fielded across multiple domains of U.S. defense operations, enhancing everything from intelligence analysis to maintenance and cybersecurity. One high-profile example is Project Maven, launched in 2017 as the Department of Defense’s “Algorithmic Warfare” initiative. Project Maven uses machine learning to process the vast streams of drone surveillance video and satellite imagery to identify potential targets with far greater speed than traditional methods [11]. By rapidly classifying objects (e.g. distinguishing hostile tanks from civilian trucks) and integrating those insights into battlefield command systems, Maven dramatically compresses the kill chain. Human operators remain in the loop to validate targets, but the AI enables them to go from analyzing only ~30 targets per hour to as many as 80, according to some reports [12]. Deployed in conflict zones like Iraq, Syria, and Yemen, Maven has proven its value by narrowing target lists for airstrikes and even helping U.S. Central Command locate enemy rocket launchers and vessels in the Middle East. These real-world results illustrate how AI can increase operational tempo and precision in intelligence, surveillance, and reconnaissance (ISR) missions, augmenting human analysts and decision-makers.
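
    The triage logic such systems apply – score detections, filter by confidence, and queue only likely military objects for human validation – can be sketched in miniature. This is a toy illustration with hypothetical labels and thresholds, not the actual Maven pipeline:

```python
# Toy sketch (not the actual Maven system): an object detector emits
# label/confidence pairs per video frame; software triages them so a
# human analyst only reviews high-confidence military-object detections.

MILITARY_CLASSES = {"tank", "rocket_launcher", "artillery"}  # illustrative labels

def triage(detections, threshold=0.8):
    """Return detections worth human review, sorted most-confident first."""
    candidates = [
        d for d in detections
        if d["label"] in MILITARY_CLASSES and d["confidence"] >= threshold
    ]
    return sorted(candidates, key=lambda d: d["confidence"], reverse=True)

frames = [
    {"label": "tank", "confidence": 0.93},
    {"label": "civilian_truck", "confidence": 0.97},  # filtered: not a military class
    {"label": "tank", "confidence": 0.55},            # filtered: below threshold
    {"label": "rocket_launcher", "confidence": 0.88},
]

for d in triage(frames):
    print(d["label"], d["confidence"])  # analyst validates each before any action
```

    The point of the sketch is the division of labor: the machine compresses thousands of frames into a short review queue, while the human in the loop retains the decision.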

    To scale such successes across the force, the Pentagon stood up the Joint Artificial Intelligence Center (JAIC) in 2018 (now reorganized under the Chief Digital and AI Office) with a mandate to accelerate AI adoption for “mission impact at scale” [13]. The JAIC coordinated DoD-wide AI efforts, developing prototypes in areas like predictive maintenance, humanitarian assistance, and warfighter health, and ensuring that lessons learned in one military service could benefit others. For example, in the realm of predictive maintenance, the Air Force’s Rapid Sustainment Office worked with industry to deploy an AI-based Predictive Analytics and Decision Assistant (PANDA) platform as a new “system of record” for aircraft maintenance [14]. PANDA aggregates data from aircraft sensors, maintenance logs, and supply records, then uses machine learning models to predict component failures and optimal maintenance scheduling. This data-driven approach has measurably improved readiness: in one case involving the B-1 bomber fleet, an AI predictive maintenance tool completely eliminated certain types of unexpected breakages and cut unscheduled maintenance labor by over 50%. These efficiencies translate to higher aircraft availability and operational reliability – a clear example of AI acting as a force multiplier for logistical and sustainment activities.
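
    The core idea behind this kind of predictive maintenance – fit a trend to sensor readings and flag components projected to cross a failure limit before the next scheduled check – can be shown in a minimal sketch. The readings, units, and thresholds below are hypothetical, not PANDA’s actual models:

```python
# Minimal predictive-maintenance sketch (illustrative, not the PANDA platform):
# flag a component when the trend in its sensor readings projects it past a
# failure threshold within the planning horizon.

def linear_trend(readings):
    """Least-squares slope of readings over time steps 0..n-1."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

def needs_maintenance(readings, limit, horizon):
    """True if the projected reading reaches `limit` within `horizon` steps."""
    slope = linear_trend(readings)
    projected = readings[-1] + slope * horizon
    return projected >= limit

vibration = [2.0, 2.1, 2.3, 2.6, 3.0]  # worsening sensor trend (hypothetical units)
print(needs_maintenance(vibration, limit=4.0, horizon=5))  # schedule an early inspection
```

    Real fleet models are far richer (many sensors, maintenance logs, supply data), but the payoff is the same: fixing a part just before it fails instead of after, which is where the reductions in unscheduled maintenance come from.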

    AI is also bolstering U.S. capabilities in less visible but critical domains such as cyber operations. Modern cyber defense involves monitoring enormous volumes of network data and responding to threats in milliseconds. Here, AI algorithms help identify anomalous patterns and intrusions far faster than human operators alone. Military cyber units are experimenting with machine learning systems that flag suspicious network behavior and even autonomously execute initial countermeasures. As one Army Cyber Command technology officer observed, AI is beginning to shift the advantage to the defender in cyberspace, partially countering the traditional dominance of offense [15]. Fast-running AI detection tools can contain attacks or malware in real time, making it “much harder for the offensive side” to succeed. At the same time, strategists recognize that AI is a dual-edged sword in cyber warfare: the same technology could enable more sophisticated phishing, deepfake-induced misinformation, or automated hacking by adversaries. This has prompted the DoD to invest in AI for cybersecurity while also researching defenses against AI-driven threats.
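
    A minimal flavor of this kind of anomaly detection: compare a host’s current behavior against its own recent baseline and flag large deviations. The traffic numbers and z-score threshold below are illustrative, not any actual military system:

```python
# Toy network-anomaly sketch: flag a host whose connection count deviates
# sharply (by z-score) from its own recent baseline. Numbers are hypothetical.
import statistics

def is_anomalous(baseline, current, z_threshold=3.0):
    """True if `current` lies more than z_threshold std-devs above the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return current != mean
    return (current - mean) / stdev > z_threshold

normal_traffic = [98, 102, 101, 99, 100, 103, 97, 100]  # connections/minute

print(is_anomalous(normal_traffic, 101))   # ordinary fluctuation
print(is_anomalous(normal_traffic, 450))   # possible exfiltration spike
```

    Operational tools replace the simple z-score with learned models of normal behavior, but the defender’s advantage is the same: the check runs continuously and in milliseconds, at a scale no human watch floor can match.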

    Across the services, a variety of other AI applications are moving from pilot projects into operational use. The Navy and Coast Guard, for instance, have begun employing computer vision algorithms to scan satellite and radar data for illicit maritime activities (such as smuggling or illegal fishing) that previously went unnoticed [16]. The Army is testing AI-enabled battle management systems that fuse sensor inputs to recommend battlefield courses of action, effectively providing decision support to commanders. Even the U.S. Special Operations community has embraced AI tools for tasks like ISR analysis, language translation, and mission planning. In 2023, U.S. Special Operations Command pivoted towards aggressive adoption of AI, open-sourcing certain software and pushing deployment to the tactical edge [17]. Leaders at SOCOM rate their recent progress as substantial, but acknowledge more work is needed to integrate AI into legacy systems and train personnel to use these tools effectively. Such case studies – from Project Maven’s target recognition to PANDA’s maintenance forecasting and cyber anomaly detection – underscore that AI is no longer just a theoretical future capability. It is already enhancing operational readiness and efficiency across the U.S. defense enterprise, augmenting human warfighters in handling the growing speed and complexity of modern military missions.

    From ENIAC to AI: John von Neumann’s Legacy and the Next Cognitive Revolution

    History shows that transformative technologies can radically enhance military capability when paired with visionary integration. A useful parallel to today’s AI revolution is the advent of electronic computing during and after World War II – a revolution epitomized by the work of John von Neumann on the ENIAC computer. Von Neumann, a Hungarian-American mathematician and polymath, was a key figure in the Manhattan Project and an early computing pioneer who recognized the strategic potential of automation in calculations [18]. In 1944, he became involved in the U.S. Army’s ENIAC project (Electronic Numerical Integrator and Computer), which was the first general-purpose electronic computer. ENIAC was initially built to compute artillery firing tables – a laborious task that previously required teams of human “computers” working with mechanical calculators and often struggling to keep up with wartime demands. By automating these computations, ENIAC could perform in seconds what took people hours or days, fundamentally changing the pace of wartime calculations. In fact, one of ENIAC’s first assignments in 1945 was running simulations for the feasibility of the hydrogen bomb, a top-secret program that would have been impractical without electronic computing power [19]. This breakthrough demonstrated how high-speed computing became a strategic enabler, allowing the United States to solve complex problems (like nuclear weapon design and ballistic trajectories) that were previously intractable or painfully slow.

    [Image: John von Neumann, a pioneer of computing and an influential figure in military strategy.]
    [Image: ENIAC, the world’s first large-scale general-purpose electronic digital computer.]

    John von Neumann’s influence went beyond the engineering of ENIAC; he also conceptualized how computers could serve as cognitive aids to strategists and planners. He pioneered the stored-program architecture (now known as the von Neumann architecture) that underlies virtually all modern computers, and he’s considered a father of game theory – bringing a new mathematical rigor to defense strategy. Under von Neumann’s guidance, early computers were used not only for crunching numbers but also for tasks like weather forecasting and systems analysis, essentially the forerunners of today’s data-driven decision-support systems. The early computing revolution turned what were once human-only intellectual tasks into human-machine collaborative tasks, greatly increasing speed and accuracy. For example, the time to produce complex firing tables or decrypt enemy codes dropped dramatically as machines took over the repetitive calculations. Military planning began to incorporate computational modeling, from logistics to nuclear targeting, augmenting human judgment with machine precision.

    Today’s artificial intelligence represents the next phase of cognitive augmentation in warfare – a step beyond what von Neumann’s generation achieved with manual programming and calculation. If ENIAC and its successors gave commanders unprecedented computational power, AI offers something arguably even more profound: the ability for machines to learn, adapt, and assist in decision-making in real time. This can be seen as an extension of von Neumann’s legacy. Just as he envisioned applying rigorous computation to strategic problems, we now envision applying machine learning to dynamic problems like identifying insurgents in a crowd, predicting an adversary’s moves, or optimizing complex logistics under fire. The paradigm shift is similar in scale. In the mid-20th century, militaries that embraced electronic computing leapt ahead in command-and-control, intelligence, and engineering – those that lagged were left at a serious disadvantage. Likewise, in the 21st century, militaries that harness AI for a decision advantage will outpace those that do not. AI systems can sift through sensor feeds, intelligence reports, and battlefield data far faster than any team of staff officers, flagging patterns and anomalies that would otherwise be missed. This human-machine symbiosis has the potential to amplify cognition on the battlefield, much as early computers amplified calculation. It moves warfighting into a realm of information speed and complexity management that von Neumann could only hint at with game theory and primitive computers. In short, AI is positioned to do for perception and reasoning what computing did for arithmetic – enabling a new leap in military effectiveness. The challenge, as with ENIAC, is to integrate this technology wisely, guided by strategic leaders who understand its potential. 
    In that sense, reframing AI from a feared threat into a force multiplier echoes von Neumann’s own advocacy for embracing new technology to secure a competitive edge in national security.

    Implications for Defense Education and Talent Development

    Realizing AI’s potential as a strategic enabler will require a profound transformation in defense education and training. Future military leaders must be as comfortable with algorithms and data as past generations were with maps and compasses. This means Professional Military Education (PME) institutions – service academies, staff colleges, war colleges, and technical schools – are updating curricula to build AI literacy at all levels. AI literacy involves understanding the basics of how artificial intelligence works, its applications and limitations, and being able to critically evaluate AI-enabled systems [20]. As one recent study on PME integration argues, AI literacy among faculty and students is now a “strategic imperative” to prepare officers for an AI-driven battlefield. Concretely, courses on topics like data science, machine learning fundamentals, and human-machine teaming are being introduced alongside traditional strategy and leadership classes. For example, the Naval Postgraduate School has launched an “Artificial Intelligence for Military Use” certificate program that educates military professionals on key AI concepts and applications, from sensors and imagery analysis to war-gaming and logistics [21]. Notably, this program does not require a coding background – reflecting an understanding that even non-technical officers need a working knowledge of AI to make informed decisions about procurement and deployment. Similar initiatives are underway at other institutions, aiming to produce officers and DoD civilians who can bridge the gap between operators and data scientists and effectively champion AI projects.

    In addition to technical skills, ethical and strategic judgment regarding AI must be woven into the education of military leaders. Just as the ethics of nuclear weapons or cyber operations are covered in curricula, the unique ethical questions posed by AI deserve attention. PME courses are beginning to incorporate case studies on algorithmic bias, autonomous weapons, and the legality of AI-driven targeting under the Law of Armed Conflict. The goal is to instill “ethical AI fluency” – ensuring that officers not only understand what AI can do, but also the moral and legal frameworks guiding its use. Students might debate scenarios, for instance, about an autonomous drone engaging a target without a direct human command, examining how DoD’s AI Ethics Principles (responsibility, equity, traceability, reliability, governability) should apply. By grappling with these issues in the classroom, future commanders and planners will be better prepared to make tough calls about AI employment in the field. They learn that embracing AI does not absolve them of accountability – on the contrary, it requires more educated oversight. The military’s emphasis on leadership with integrity extends into the AI era: an officer needs the knowledge to question an AI recommendation, recognize when the data might be flawed or the algorithm biased, and insist on appropriate human control measures. Thus, courses in ethics, law, and policy are evolving to cover AI, ensuring the warrior ethos and professional norms adapt to include stewardship of intelligent machines.

    Another critical aspect of defense education in the AI age is fostering interdisciplinary and interagency training. AI in national security isn’t confined to the Department of Defense alone – it spans the intelligence community, homeland security, defense industry, and academia. Recognizing this, PME institutions and training commands are increasing exchanges and joint learning opportunities. For example, the DoD has partnered with universities (like MIT and others) to offer specialized AI courses to military cohorts, and it convenes events such as the AI Partnership for Defense which bring together allied military officers and defense civilians to share AI lessons learned [22]. On the interagency front, one can envision combined training where military analysts and, say, CIA or NSA analysts learn side by side about applying AI to intelligence fusion – building networks of expertise that span organizational boundaries. Such cross-pollination is vital because the challenges of AI (from data sharing to ethics) often require a whole-of-government approach. A Naval officer who understands how the Department of Homeland Security uses AI for critical infrastructure protection, or an Air Force officer who grasps the FBI’s perspective on algorithmic bias, will be better equipped to collaborate during joint operations and crises.

    Crucially, faculty development and leader development programs are adapting to empower this educational shift. Instructors at war colleges and service schools are being encouraged to familiarize themselves with AI tools and concepts so they can mentor students effectively. U.S. Army War College faculty, for instance, documented their experience of gradually integrating AI into their teaching – highlighting that faculty comfort with AI is a prerequisite to student education. Within the operational forces, commanders are also pushing “digital literacy” initiatives down the ranks. A notable example is U.S. Special Operations Command, which recently had about 400 of its leaders complete a six-week MIT-affiliated course on AI and data analytics. The intent is to create a leadership cadre that not only understands the technology but “demands it,” actively pulling AI solutions into the field. This top-down and bottom-up approach to education – from generals to junior officers and enlisted technicians – will cultivate a culture where AI is seen as an essential tool in the arsenal. In summary, defense education is being reimagined for the information age: blending technical literacy, ethical grounding, and joint cooperation to produce military and intelligence professionals who can harness AI’s power responsibly and creatively in service of national security.

    Governance and Risk Management of Military AI

    As the U.S. military integrates AI into critical operations, robust governance and risk management frameworks are paramount to ensure these technologies remain strategic enablers and not liabilities. The Department of Defense has proactively set guardrails through high-level principles and policies. In 2020, the DoD adopted a set of Ethical Principles for AI, which articulate how AI systems should be developed and used in accordance with the military’s legal and ethical values. These five principles — Responsible, Equitable, Traceable, Reliable, and Governable — now guide all DoD AI projects. In practice, they mean that humans must remain accountable for AI decisions, AI outcomes should be as free from bias as possible, systems should be transparent and auditable, they must be rigorously tested for safety and effectiveness, and there must always be the ability to disengage or shut off an AI system that is behaving unexpectedly. For example, the “Responsible” principle explicitly states that DoD personnel will exercise appropriate levels of judgment and care when deploying AI and will remain answerable for its use. This institutionalizes a “human-in-the-loop” (or at least “on-the-loop”) mandate, ensuring that AI augments human decision-making rather than replaces it in any uncontrolled way.
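    The "human-in-the-loop" mandate described above can be illustrated with a minimal sketch (all names and fields here are invented for illustration, not an actual DoD system): the AI only recommends, the action proceeds solely on an explicit decision by a named human operator, and that decision is logged, reflecting the Responsible and Traceable principles.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Recommendation:
    target_id: str
    action: str
    confidence: float  # model confidence in [0, 1]

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, operator: str, rec: Recommendation, approved: bool):
        # Traceability: every human decision on an AI recommendation is logged.
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "operator": operator,
            "target": rec.target_id,
            "action": rec.action,
            "approved": approved,
        })

def execute_with_human_approval(rec: Recommendation, operator: str,
                                approve_fn, log: AuditLog) -> bool:
    """The AI only recommends; a named human decides, and is accountable."""
    approved = bool(approve_fn(rec))
    log.record(operator, rec, approved)
    return approved  # the action fires only on explicit human approval

# Usage: the operator declines a low-confidence cue.
log = AuditLog()
rec = Recommendation("track-042", "engage", confidence=0.61)
fired = execute_with_human_approval(rec, "CPT Doe",
                                    approve_fn=lambda r: r.confidence > 0.9,
                                    log=log)
# fired is False; both the decision and the deciding operator are in log.entries
```

The design point is that the approval function and the audit trail sit outside the model: accountability attaches to the operator, not the algorithm.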

    Implementing these principles requires concrete governance measures. The Pentagon’s Joint AI Center (now CDAO) has been charged as a focal point for coordinating AI ethics implementation, including standing up working groups to develop detailed guidelines and tools for compliance. One focus area is algorithmic transparency – making AI systems as explainable as possible to their human operators. The “Traceable” principle addresses this, mandating that AI technologies be developed such that relevant personnel possess an appropriate understanding of how they work, including insight into training data and logic. This is leading to investments in explainable AI research for defense applications, so that a commander can ask not just “What is the AI recommending?” but “Why is it recommending that?”. For instance, if an AI tool flags a particular vehicle as hostile, commanders want confidence in the basis for that judgment (sensor signatures, behavior patterns, etc.), rather than accepting a “black box” output. Explainability builds trust and helps humans and AI collaborate more effectively – a lesson learned from early deployments like Project Maven, where analysts had to validate AI-generated target cues. It also enables troubleshooting: if an AI system makes a questionable suggestion, engineers and operators can audit the decision process to identify potential biases or errors (aligning with the Equitable principle’s aim to minimize unintended bias).
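    One simple way to answer the operator's "why" is to surface the per-feature contributions behind a score rather than a bare label. The sketch below (feature names and weights are invented for illustration) does this for a linear classifier, so a user can see which cues drove a "hostile" call:

```python
# Hypothetical feature weights for a linear "hostility" score (illustrative).
WEIGHTS = {
    "radar_signature_match": 2.0,
    "speed_profile_military": 1.2,
    "in_declared_safe_zone": -3.0,
}

def score_with_explanation(features: dict) -> tuple[float, list]:
    """Return the score plus per-feature contributions, sorted by impact."""
    contributions = [
        (name, WEIGHTS[name] * value) for name, value in features.items()
    ]
    total = sum(c for _, c in contributions)
    # Sort so the operator sees the strongest drivers first.
    contributions.sort(key=lambda item: abs(item[1]), reverse=True)
    return total, contributions

score, why = score_with_explanation({
    "radar_signature_match": 1.0,
    "speed_profile_military": 1.0,
    "in_declared_safe_zone": 0.0,
})
# The operator sees not just the score but that the radar match dominates it.
```

Real defense systems use far more complex models, where attribution methods are an active research area; the point of the sketch is only that an explanation travels with the output.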

    Risk management of military AI systems spans technical, operational, and strategic levels [23]. Technically, one risk is the reliability and robustness of AI models. In battlefield conditions, data can be noisy, adversaries can attempt to deceive AI (through camouflage, decoys, or cyber means), and systems may encounter scenarios not covered in training. The DoD addresses this through extensive testing and evaluation regimes. Per the “Reliable” principle, each AI capability must have well-defined uses and be tested for safety and effectiveness within those use cases. For example, before an AI-driven target recognition system is fielded, it undergoes trials across different environments (desert, urban, jungle, etc.) to evaluate performance and failure modes. Recent conflicts have provided cautionary tales: simplified AI tools reportedly had mixed results in the Russia-Ukraine war, sometimes misidentifying objects (e.g., classifying heavy machinery as trees or falling for inflatable decoys) when faced with weather or camouflage conditions beyond their original training. Human analysts outperformed these nascent systems in complex scenarios, underscoring that current AI is far from infallible and must be used with human oversight. To mitigate such risks, DoD policy emphasizes continuous operator training and system tuning – AI models should be updated with new data, and users must understand the system’s limitations. Moreover, the “Governable” principle requires that AI systems be designed with the ability to detect and avoid unintended consequences, and crucially, to disengage or deactivate if they start to act anomalously. This is essentially an insistence on a “kill switch” or fallback control for autonomous systems, which is vital in weapons platforms to prevent accidents or escalation. In sum, engineering robust AI means planning for failures: building redundancy, fail-safes, and manual override options into any critical AI-enabled system. 
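    The kill-switch requirement of the "Governable" principle maps onto a familiar engineering pattern: a watchdog that monitors an autonomous component's outputs and disengages it, reverting to manual control, when outputs drift outside expected bounds. A minimal sketch (thresholds and names are assumptions for illustration):

```python
class AutonomyWatchdog:
    """Trips a kill switch after too many out-of-bounds outputs in a row."""

    def __init__(self, limits: tuple, max_violations: int = 3):
        self.low, self.high = limits
        self.max_violations = max_violations
        self.violations = 0
        self.engaged = True  # autonomy enabled until the watchdog trips

    def check(self, output: float) -> bool:
        """Return True if autonomy may continue acting on this output."""
        if self.low <= output <= self.high:
            self.violations = 0  # a healthy output resets the counter
        else:
            self.violations += 1
            if self.violations >= self.max_violations:
                self.engaged = False  # kill switch: revert to manual control
        return self.engaged

# Usage: three consecutive anomalous readings disengage the system,
# and it stays disengaged until a human deliberately re-enables it.
watchdog = AutonomyWatchdog(limits=(0.0, 100.0), max_violations=3)
readings = [42.0, 55.0, 500.0, -10.0, 999.0, 50.0]
states = [watchdog.check(r) for r in readings]
```

Note the asymmetry, which mirrors the policy intent: the machine can take itself offline, but only a human can put it back online.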

    On the operational and strategic risk front, DoD leaders are aware that AI could introduce new uncertainties even as it solves problems. One concern is the acceleration of decision cycles potentially leading to humans being outpaced. If an AI can identify and recommend engagement with a target in seconds, there’s a risk that command and control might not properly vet actions in time. The U.S. approach to this is “human-machine teaming” – using AI to speed up information processing, but still requiring a human decision at the trigger point for lethal force, consistent with DoD Directive 3000.09, which governs autonomous weapons. This aligns with broad expert consensus that human judgment must remain central: RAND researchers, for instance, note a “broad consensus regarding the need for human accountability” in the use of military AI, recommending that responsibility clearly rest with commanders and human control span the entire system lifecycle. Another risk is strategic instability: if one side’s AI gets an advantage, there’s pressure on adversaries to respond quickly (or even preemptively). The DoD is approaching this by coupling its pursuit of AI with confidence-building measures and international dialogue. The U.S. has publicly committed to the lawful, ethical use of AI in warfare and is engaging allies and partners to do likewise. By championing principled AI use, the U.S. hopes to set norms that reduce the risk of inadvertent escalation – for example, by agreeing that humans will supervise any AI that can initiate lethal action, or that early-warning AI systems will be designed to avoid false alarms.

    Additionally, governance involves accountability and oversight mechanisms within the military. Just as there are safety boards for accidents, there may be review boards for AI incidents or anomalies. The Defense Department is instituting processes to review AI programs for ethical compliance and is considering certification regimes (analogous to operational test & evaluation for hardware) for AI systems before deployment. Commanders are being taught that fielding an AI tool does not diminish their responsibility for outcomes; if an autonomous vehicle or a decision aid makes a mistake, they are expected to investigate and address it just as they would a human error. This is reinforced by the ethical principle that DoD personnel “remain responsible for the development, deployment, and use” of AI. In practical terms, that could mean developing doctrine and TTPs (tactics, techniques, and procedures) for AI use – e.g., specifying that a human must verify an AI-generated target before engagement, or that there be at least two human checkpoints for any fully automated process in live operations.

    In summary, U.S. defense planners are actively putting frameworks in place so that AI is used safely, ethically, and effectively. The Pentagon’s approach is one of controlled experimentation: push the envelope with AI to gain its advantages, but do so under strict human oversight, with constant testing, and guided by a strong ethical compass. This governance mindset reframes AI from a feared “black box” risk into a well-supervised partner for the warfighter. It acknowledges risks – technical glitches, enemy counter-AI tactics, legal ambiguities – and seeks to mitigate them through responsible design and policy. With these measures, the U.S. aims to reap the strategic benefits of AI (speed, scale, insight) while upholding the values and control that have long guided the use of advanced technologies in national security.

    Strategic Competition and Decision Superiority in the AI Era

    Artificial Intelligence has emerged as a central arena of strategic competition, much like nuclear technology or space exploration were in earlier eras. Today, the competition is perhaps most intense between the United States and its near-peer rival China, with profound implications for global security and decision superiority on future battlefields. China has explicitly prioritized AI in its national and military strategy, seeking to become the world leader in AI by 2030 and to transform the People’s Liberation Army (PLA) into a “world-class military” by mid-century, in part through what it calls the “intelligentization” of warfare [24]. A key facet of China’s approach is its policy of Military-Civil Fusion, which marshals the nation’s robust civilian tech sector in direct support of military AI development. Unlike the U.S., where private tech companies and the Pentagon cooperate but are separate, China’s centralized model blurs this line – private AI firms are effectively co-opted into serving PLA needs. This has allowed China to tap advanced research and commercial innovations at speed. In recent years, the PLA has established joint military-civilian AI laboratories, funded tech competitions to encourage dual-use AI innovations, and stood up dedicated units to integrate commercial tech into PLA operations. The results are telling: according to one study by Georgetown’s CSET, the PLA now procures the majority of its AI-related equipment from China’s private tech companies rather than traditional state-owned defense enterprises. In other words, China is harnessing the dynamism of its AI startup ecosystem under a top-down strategic directive – a combination that has yielded rapid progress in areas like facial recognition surveillance, autonomous drones, and AI-assisted command systems for the PLA.

    The United States, for its part, is determined not to cede its historical advantage in military technology and decision-making superiority. American defense officials have stated plainly that AI is critical to future military preeminence. A 2024 Army report noted that AI is the one technology that will largely determine which nation’s military holds the advantage in coming decades. This recognition has led the U.S. to craft its own strategy to win the “race” for military AI, albeit by leveraging America’s strengths: innovation, alliances, and a values-driven approach. The U.S. is pursuing what might be termed a “responsible offset” – seeking to out-innovate adversaries in AI while maintaining robust ethics and stability measures. Practically, this involves significant investments in R&D (the Defense Department requested over $1.8 billion for AI/ML in the 2024 budget), new organizational structures like the CDAO to unify efforts, and closer collaboration with the private sector. The Pentagon knows that many cutting-edge AI breakthroughs originate in companies like Google, Microsoft, OpenAI, or myriad startups. Unlike China’s state-driven fusion, the U.S. approach incentivizes cooperation through initiatives such as the Defense Innovation Unit (DIU) and AFWERX/Army Futures Command tech hubs, which aim to fast-track commercial AI tech into U.S. military use. A recent bold initiative is Deputy Secretary Kathleen Hicks’ “Replicator” program, announced in late 2023, which aims to field “multiple thousands” of AI-enabled autonomous systems across multiple domains (air, land, sea) within 18-24 months. Replicator’s goal is to leverage autonomy and AI at scale to counter the numerical advantages that China might deploy in a conflict (for example, swarms of inexpensive drones could act as a force multiplier to blunt a larger naval fleet or saturate an adversary’s air defenses). By rapidly scaling such capabilities, the U.S. 
seeks to ensure it can offset adversary advantages – much as it did with precision weapons in the past – and complicate any opponent’s war plans.

    Decision superiority – the ability to observe, orient, decide, and act faster and more effectively than an adversary (the OODA loop concept) – is a core focus of AI competition. AI has the potential to accelerate the OODA loop to unprecedented speeds. For the side that masters this, AI can provide a decisive edge in command and control. Imagine a future conflict scenario: AI algorithms instantly fuse multi-source intelligence (satellite imagery, electronic intercepts, social media, etc.), identify emerging threats, and present command with optimized courses of action, all in real time. The commander enabled by such AI support can make decisions inside the enemy’s decision cycle, forcing the adversary into a reactive stance. This is essentially what Project Maven and similar ISR AIs foreshadow – compressing a targeting process that once took hours into minutes or less. Faster decision-making, however, is only an advantage if paired with accurate and informed decision-making. Here lies a nuanced competition: it’s not just about acting quickly, but about acting wisely with AI-provided insight. The U.S. is thus investing in AI that improves not only speed but the quality of situational awareness – for instance, AI that can predict an adversary’s next moves or detect subtle patterns in adversary behavior that humans might miss. This could dramatically improve the U.S. military’s ability to anticipate and shape a confrontation rather than just react.

    For deterrence, the message that emanates is powerful: a military that can think and act faster across domains can credibly threaten to neutralize an opponent’s actions before they bear fruit. U.S. defense leaders believe integrating AI into the force will bolster deterrence by projecting confidence that America can “prevail on future battlefields” despite challenges. The flip side is that if the U.S. were perceived as lagging in AI, adversaries like China (or Russia) might be tempted to press advantages, thinking the U.S. unable to respond in time. Thus, maintaining a leadership position in AI is seen as critical to preventing conflict as much as winning one. Indeed, a technologically superior force equipped with AI decision-support and autonomy could deter aggression by making any attack plan against it too uncertain or likely to fail.

    That said, the AI arms race also carries deterrence dilemmas. One concern analysts note is that when both sides have high-speed, automated decision systems, there’s a risk of escalation if those systems lack sufficient human override. A minor incident could be misinterpreted by an AI as a full-blown attack requiring immediate response, leading to a rapid spiral – a scenario sometimes called “flash war.” Avoiding this requires careful strategy. The U.S. and other responsible powers will need to establish rules of the road for military AI, perhaps new agreements or at least tacit understandings (analogous to Cold War arms control in spirit, if not in formal treaty). Confidence-building measures, like transparency about certain defensive AI systems or hotlines to clarify ambiguities, could mitigate the risk that ultra-fast AI systems push humans out of the loop in crisis decision-making. In the competition with China, this means that even as the U.S. develops AI to maintain superiority, it also seeks dialogue on norms – for example, the Pentagon has indicated interest in talks about AI safety and crisis communications to reduce chances of an accidental clash due to AI misjudgment. Balancing competitive urgency with strategic stability is tricky but vital. The U.S. aims to win the AI race by demonstrating not only better technology but also stronger governance of that technology, thereby persuading allies and neutral countries to align with the U.S. vision of AI-enhanced security rather than China’s. As former Google CEO Eric Schmidt (who chaired the Defense Innovation Board) remarked, U.S. leadership in articulating ethical AI principles shows the world that democracies can adopt AI in defense responsibly. In the long run, this could translate into a coalition advantage – if U.S. allies trust American AI systems and agree on their use, it amplifies collective deterrence against aggressors who might use AI in destabilizing ways.

    To conclude this survey of the competitive landscape: AI is becoming a cornerstone of what strategists term the new “Revolution in Military Affairs.” It promises to reshape how wars are deterred, fought, and won. Both Washington and Beijing know that superiority in AI could mean faster and more precise operations, better coordinated forces, and more resilient systems – in short, an edge in almost every dimension of conflict. The United States, leveraging its open society and innovative economy, is striving to maintain its edge by integrating AI across defense while upholding the rule of law and international norms. China, with its state-driven approach, is rapidly challenging that edge. The outcome of this competition will significantly influence global power balances. Decision superiority in the next conflict may belong to whichever nation can most effectively blend human and artificial cognition into its way of war. For the U.S., the task is ensuring that it is our forces, educated and empowered by AI, that can observe first, understand first, decide first, and act decisively, thereby deterring conflict or ending it on favorable terms if it must be fought.

    Conclusion and Recommendations

    The exploration from computing to cognition – from the ENIAC of John von Neumann’s era to today’s AI – illustrates a clear thesis: artificial intelligence, managed correctly, is not a menacing “third offset” to be feared, but rather a strategic enabler that the United States can harness to enhance national security. Far from replacing the human element, AI can augment American defense capabilities in profound ways: accelerating decision-making, optimizing resource use, and uncovering insights in oceans of data that would overwhelm human analysts. To fully realize this potential, however, the U.S. must reframe its mindset and approaches. AI should be viewed not as a mysterious black box or a mere buzzword, but as a set of powerful tools – tools that require investment in people, sound governance, and visionary planning to integrate effectively. In short, as this paper has argued, the conversation needs to shift from “How might AI threaten us?” to “How can we smartly leverage AI to stay ahead of threats?” The following forward-looking recommendations are offered to key stakeholders in the defense and intelligence community to drive this shift:

    1. Professional Military Education (PME) Institutions – Build an AI-Ready Force: PME institutions should lead the way in cultivating a force that is literate in AI and comfortable with emerging technology. This means updating curricula continuously to include not just fundamentals of AI, but case studies of its use in warfare, ethical decision exercises, and practical training on AI-enabled systems. Military academies and ROTC programs can introduce cadets to AI through STEM courses and wargames featuring autonomous systems. Intermediate and senior service colleges (like Command and Staff Colleges and War Colleges) should require coursework on technology and innovation, ensuring that future battalion commanders and generals alike can champion data-driven approaches. Faculty development is critical – instructors need opportunities (and incentives) to stay current on tech trends, perhaps via sabbaticals with industry or AI research labs. PME schools can also establish partnerships with civilian universities for joint courses or certification programs in AI (similar to the NPS certificate described earlier). Beyond formal curricula, wargaming and exercises should incorporate AI elements: for example, a joint wargame where officers must employ AI tools for logistics or intelligence and deal with adversary AI capabilities in the scenario. By learning in a sandbox environment, leaders will gain intuition about AI’s strengths and pitfalls. Finally, PME institutions should instill a mindset of lifelong learning in technology – given the pace of AI advancement, one-off education isn’t enough. Officers and NCOs will need continuous refreshers, which could be delivered through online courses, micro-certifications, and periodic tech immersion programs throughout their careers. The outcome sought is a U.S. 
military ethos that values digital competency on par with marksmanship or tactical acumen, producing leaders who confidently wield AI-enabled capabilities as extensions of their command.
    2. Defense Planners and Policymakers – Integrate AI into Strategy and Force Design: For those in the Pentagon, Joint Staff, and combatant commands who shape requirements, doctrine, and budgets, the mandate is to fully integrate AI considerations into all levels of planning. At the strategic level, this means incorporating AI development goals into defense strategy documents and threat assessments. Planners should routinely ask: How does AI change the game in this mission area? and What must we do to stay ahead? For example, war planners should account for AI-driven enemy tactics and how U.S. forces will counter or exploit them. The deliberate planning process can include red-teaming with AI: use adversarial perspective AI models to simulate how a foe might use AI against us, and develop counters accordingly. In capability development, the Joint Capabilities Integration and Development System (JCIDS) should treat AI and data as critical enablers for every new platform or system. Requirements for a new aircraft or ship, for instance, should explicitly outline how it will leverage AI for maintenance, targeting, or autonomous functions. Resource allocation must back up these priorities – sustained R&D funding for military AI, including investments in test infrastructure (data libraries, simulation environments) and secure, scalable compute resources for the services. Defense planners should also emphasize open architecture and interoperability for AI systems so that different platforms and allies can share data and AI services seamlessly, avoiding stovepipes. Experimentation units (like the Army AI Task Force or Air Force’s Project Arc) should get robust support to prototype and field AI solutions quickly, with feedback loops to doctrine writers. Meanwhile, policy-makers need to refine and publish clear doctrines or concepts of operations (CONOPS) for AI-enabled warfare (e.g., how do we fight with human-machine teams? 
what is the doctrine for autonomous wingmen drones in an air campaign?). These guidelines will help front-line units incorporate AI tools into their SOPs in a disciplined way. Another key recommendation for defense planners is to continue engaging allies: include AI interoperability and data-sharing agreements in alliance talks (NATO, etc.), conduct combined exercises with AI components, and share best practices on ethics and safety. By shaping international standards proactively, the U.S. and its partners can collectively mitigate risks (like uncontrolled autonomous weapons) and present a united front in the face of adversaries’ AI use. In essence, planners must ensure that AI is woven into the fabric of force design and strategy, not treated as a niche or add-on – it should be as integrated as joint operations doctrine itself.
    3. Federal Intelligence Community Leadership – Leverage AI for Decision Advantage: For leaders in the intelligence agencies (CIA, NSA, DIA, NGA, etc.), AI offers an unprecedented opportunity to enhance analytic capabilities and strategic warning, but it requires bold action to adapt decades-old analytic processes. First, intelligence agencies should accelerate the adoption of AI and machine learning for processing the ever-growing volume of data (“big data”) in espionage and open-source intelligence. This includes deploying AI to automatically transcribe and translate foreign communications, flag anomalies in financial transactions or shipping data, generate summaries of vast social media feeds, and identify patterns in satellite imagery (NGA is already doing some of this with illegal fishing detection, for example). By automating low-level tasks, AI frees human analysts to focus on higher-level judgment and synthesis. Augmented analysis tools – like AI assistants that can answer natural language questions or test hypotheses against data – should become standard issue for analysts, with training on how to use them effectively. Intelligence community (IC) leaders also need to invest in talent: hiring data scientists and computational experts, and upskilling current analysts with data literacy (similar to the military’s efforts). Joint duty rotations between IC agencies and the DoD’s AI units (or even tech companies under appropriate safeguards) could cross-pollinate expertise.

      Moreover, the IC must develop frameworks for evaluating AI-derived intelligence. Analysts are trained in sourcing and skepticism; now they will need tradecraft for evaluating algorithmic outputs (e.g., understanding confidence levels, potential biases in training data, and error rates of AI models). IC agencies might create an “AI validation unit” that rigorously tests analytic algorithms and guards against false positives or adversary deception of our AI. Speaking of deception: intel leaders should assume that adversaries will try to mislead U.S. AI systems (through spoofing, deepfakes, etc.), so counter-deception techniques and deepfake detection become crucial new intelligence disciplines. A forward-looking recommendation is for the Director of National Intelligence (DNI) to champion a National Intelligence AI Strategy that parallels the DoD’s efforts – aligning all 18 IC elements on common standards for AI ethics, data-sharing (within the bounds of law), and rapid technology insertion. Such a strategy could establish centralized resources like a high-performance computing cloud and classified big data repositories accessible to all IC analysts, leveling the playing field so even smaller agencies can use advanced AI tools without massive organic infrastructure. Finally, intelligence leadership should integrate AI into warning and crisis response mechanisms. AI prediction models might help anticipate geopolitical instability or militarization by identifying subtle indicators far in advance. During fast-moving crises, AI decision-support could help senior officials explore scenarios (“If adversary does X, likely responses Y and Z”). However, these tools must be rigorously vetted and always placed under human supervision to avoid overreliance on machine prognostication. The IC’s ethos of considered judgment and avoidance of surprise can be well-served by AI, but only if embraced with the same diligence applied to other intel methods.
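    Such a validation step can be as simple as a triage rule: route any algorithmic output below a confidence floor, or from a model with a known elevated error rate, to a human analyst rather than directly into finished intelligence. A minimal sketch (the thresholds and record fields are assumptions for illustration):

```python
def triage_ai_outputs(outputs, confidence_floor=0.85, max_error_rate=0.05):
    """Split AI-derived items into auto-accepted vs. human-review queues."""
    auto_accepted, human_review = [], []
    for item in outputs:
        trusted = (item["confidence"] >= confidence_floor
                   and item["model_error_rate"] <= max_error_rate)
        (auto_accepted if trusted else human_review).append(item)
    return auto_accepted, human_review

outputs = [
    {"id": "rpt-1", "confidence": 0.97, "model_error_rate": 0.02},
    {"id": "rpt-2", "confidence": 0.60, "model_error_rate": 0.02},  # low confidence
    {"id": "rpt-3", "confidence": 0.95, "model_error_rate": 0.20},  # weak model
]
accepted, review = triage_ai_outputs(outputs)
# rpt-1 passes both checks; rpt-2 and rpt-3 are queued for analyst judgment.
```

Real tradecraft would add source provenance, adversarial-deception checks, and per-mission thresholds, but the shape is the same: machine confidence is an input to human judgment, not a substitute for it.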
    4. Cross-cutting Recommendation – Cultivate a Culture of Innovation and Adaptation: Across PME, defense planning, and intelligence analysis, a unifying recommendation is to foster a culture that prizes innovation, agility, and informed risk-taking with AI. The federal national security enterprise can draw lessons from the tech sector here: encourage pilot projects, allow “fast failure” and learning in controlled environments, and reward individuals who find creative AI applications to mission problems. Senior leaders should communicate a consistent vision that AI is a priority – not to replace warfighters or analysts, but to empower them. This involves addressing organizational inertia and fear: some personnel worry AI will make their roles obsolete or that mistakes with AI will be career-ending. Leaders must allay these fears by highlighting AI successes, sharing knowledge of AI limitations openly, and framing adoption as an imperative to stay ahead of adversaries like China (whose investments we cannot ignore). Initiatives like hackathons, AI challenge problems, or innovation competitions within agencies can spark bottom-up solutions – for example, an Army brigade S-2 (intelligence officer) might develop a machine learning model to predict insurgent attacks from incident data, and higher HQ can amplify and resource that idea if it shows promise. The DoD and IC should also streamline bureaucratic processes that hinder tech adoption (acquisition reform is beyond our scope, but rapidly acquiring and fielding software and AI updates is crucial). Modernizing infrastructure is part of culture too – ensuring deployed units have connectivity and computing to use AI tools, and analysts have access to data forward at the speed of relevance.

    In all these efforts, maintaining the American ethical high ground is essential. Reframing AI as an enabler also means communicating – to the force, the public, and the world – that the U.S. will use AI in alignment with democratic values and laws. This stance not only differentiates the U.S. from authoritarian competitors but also builds trust internally that the AI revolution will not run roughshod over moral considerations. It’s heartening that DoD leadership has embraced ethical AI principles and that military thinkers emphasize keeping humans in control. Carrying this onward, ethics training, legal oversight, and international agreements on AI in warfare will reinforce that AI adoption by the U.S. strengthens both our capabilities and our principles.

    Conclusion: “From Computing to Cognition” is more than a catchy phrase – it encapsulates the journey the U.S. defense enterprise must continue on. In the 20th century, those who exploited computing power gained a decisive edge; in the 21st, those who master AI will shape the future of security. The United States has the opportunity to lead this next revolution, just as it did the last, by embracing AI as a force multiplier across education, operations, and strategy. By investing in our people’s skills, establishing strong ethical and practical governance, and out-innovating our adversaries, we make certain that AI becomes a source of American strategic advantage. The recommendations above chart a path for military educators, defense planners, and intelligence professionals to collaboratively drive this transformation. The message is clear: AI is here to stay – and if we integrate it wisely, creatively, and responsibly, it will magnify the effectiveness of U.S. national security institutions while preserving the values that distinguish us on the world stage. In the final analysis, technology wars are won not by the machines, but by the humans who wield them best. The United States can and must be the nation that wields AI to sharpen our insight, quicken our decision-making, and strengthen our security, thereby turning a perceived risk into a strategic cornerstone for decades to come.
