In this vision, AI-enabled robots fight on behalf of states, winning territory and holding ground while insulating humans from the "dirty jobs" of the battlefield.
Proponents point to the effectiveness of AI-enabled drones in this decade's wars—from the Caucasus to Ukraine and the Middle East—as a precursor to this brave new era. Yet the strategic efficacy of these systems remains contested. Most of these ongoing conflicts have not reached a swift resolution or a stable end state; instead, they have devolved into the ugliest form of war—attrition. They remain exceedingly bloody and catastrophic, and none has evaded the difficult ethical questions surrounding the use of automated systems to take human life.
Perhaps the most promising use of AI systems lies in the realm of "cyber warfare." Since the emergence of generative AI platforms like ChatGPT, hacker groups have found in them a bonanza for developing cyberweapons. Generative AI has become a distinct force multiplier, allowing attackers to operate with greater speed, scale, and sophistication, and it has lowered the barrier to entry, enabling even less-skilled actors to launch advanced attacks. AI tools let hackers supercharge social engineering and phishing campaigns, eliminating the awkward phrasing and non-personalized templates that were once reliable indicators of an attack. They also allow attackers to create deepfaked photos and videos to compromise targets and bypass biometric security measures.
More importantly, AI is revolutionizing malware development. We are seeing the rise of polymorphic malware capable of rewriting its own code with every infection to evade detection. With the advent of Agentic Artificial Intelligence (AAI), which marks a shift from automated tools to autonomous agents, hackers can deploy AI agents that independently determine the next step in an operation. These agents can scan targeted systems, swiftly exploit vulnerabilities, and exfiltrate data before defenders can react. Given how quickly the field moved from generative to agentic AI, some speculate that the next step in the evolution of cyber operations will see AI engaging targets and conducting operations entirely without human intervention—systems fighting systems.
These developments, whether real or imagined, raise serious concerns for state and corporate security. A nightmare scenario involves an AI system autonomously choosing to attack a country's infrastructure, knocking networks out of service for extended periods and sowing disruption and confusion among the public. However, the apocalyptic "Terminator-like" scenario—in which an AI system seizes a state's defence capabilities to attack another state or commit national suicide—remains a theoretical possibility that still belongs to the realm of science fiction. It will likely remain there for the time being, for three key reasons.
First, the golden rule of the relationship between warfare and technology persists: every measure brings about a countermeasure. States and corporations are widely adopting AI systems as part of their cyber defence strategies, either to prevent attackers from gaining the upper hand or to make cyberattacks more expensive and thereby deter further attempts. For example, some AI-enabled defence systems create decoy environments to lure attackers, then use those engagements to analyse and neutralize the attacker's weapons.
Second, most known AI-enabled attacks have focused on phishing, ransomware, or corporate espionage. None have attempted to sabotage the critical infrastructure of a state, as such operations require capabilities that are largely beyond the reach of hacktivist groups. Even states are hesitant to risk using their cyber capabilities to attack another nation directly or to put these capabilities in the hands of surrogates or mercenaries.
Third, and most importantly, artificial intelligence—whether generative or agentic—has not fundamentally transformed the political nature of cyber operations. They still rarely amount to "armed attacks" or a legally recognized use of force, remaining instead in the realm of sabotage, espionage, and subversion. In 2013, Thomas Rid published his monograph Cyber War Will Not Take Place. Now widely considered a classic in conflict studies, the work contended that cyberattacks do not constitute "war" in the military sense because they fail to meet the criteria of violence, instrumentality, and attribution. To be considered war, Rid argues, an act must be violent, meaning it sheds blood or kills people; inflicting damage on infrastructure or military capabilities does not qualify.
Furthermore, the instrumentality of war refers to the use of violence to compel an adversary to submit to one's will. Rid contends that cyberattacks seldom possess the decisive impact necessary to force a nation to surrender or alter its policies. To date, no cyberattack has fundamentally shifted the strategic calculations of an opposing state or prompted it to adopt a more submissive stance.
Finally, in war, one must know the enemy in order to fight back and deter them. The cyber domain, however, is designed for anonymity. Attribution is a difficult feat: technically, it demands massive resources and time to tie an attack to a particular group with high certainty. States often hire hacker groups and proxies precisely to preserve deniability and avoid political risks and escalation. While AI can enhance attribution capabilities, acting on forensic evidence remains a political choice, and most state responses are limited to sanctions or countersubversion rather than military force.
To be sure, Rid's argument is not bulletproof—no argument is—and several scholars have countered it with more alarmist positions. Lucas Kello, for example, argued that warfare has changed beyond the traditional model Rid presumed, suggesting that cyber operations can serve as a standalone instrument of power capable of achieving political goals on their own. Earlier, Richard Clarke, the former White House advisor and well-known "Cyber Czar," warned of a "Cyber Pearl Harbor."
Nevertheless, Rid's argument remains valid, particularly in assessing the impact of new technology on ongoing global conflicts. Hard power capabilities and economic tools remain indispensable, as the raging wars in Ukraine and the Middle East and the tensions in the Indo-Pacific make clear; cyber operations, even empowered by AI, cannot replace them.
More importantly, technologies must be tied to strategy to be useful as military and political tools. This is not to exclude the possibility that cyber warfare could take place in the future, just as the science fiction of the past has materialized in our present. In any case, policymakers should be proactive yet cautious, tailoring their policy options to both the realities of the present and the possibilities of the future.