Autonomous Weapon Systems: The Military's Smartest Toys?

November 20, 2014

"We are standing at the cusp of a momentous upheaval in the character of warfare, brought about by the large-scale infusion of robotics into the armed forces."

If the enthusiasts are to be believed, we are standing at the cusp of a momentous upheaval in the character of warfare, brought about by the large-scale infusion of robotics into the armed forces. In the future, many proponents of the “robotics revolution” argue, a variety of military functions—from simple logistical tasks to the application of lethal force—will be performed by machines acting more or less autonomously, without any direct intervention by human operators. Human-rights groups and other critics agree that such a change might be in the offing—and argue that it must be urgently reined in before machines devoid of any sense of moral reasoning are given the capacity to make unaccountable, independent decisions that lead to the taking of human lives.

While it is important to consider the implications of autonomous weapon systems (AWS) from a variety of perspectives, moral and legal standpoints—forcefully articulated by activists often driven more by “righteous indignation” than by a penchant for detached analysis—currently dominate the debate. Meanwhile, their strategic consequences generally remain underexplored and are, at best, alluded to nebulously.

But what effect might such autonomous weapon systems have on military stability? Will these systems increase the likelihood that militarized rivalries lead to destabilizing arms competitions? Will they undermine the purpose of deterrence by creating incentives for actors involved in a crisis to strike first? And will they contribute to situations in which hostilities, once joined, quickly get out of hand?

If the impact of AWS on arms-race stability, crisis stability and the prospects of escalation is unproblematic, this would tend to support military rationales for developing and procuring such systems. If, on the other hand, their impact is found to be deleterious, this should provide decision makers with an added incentive to carefully weigh the potential advantages of such systems against their negative consequences for national security.

Autonomous Warfare: A Likely Prospect

Military forces that rely on armed robots to select and destroy certain types of targets without human intervention are no longer the stuff of science fiction. In fact, swarming anti-ship missiles that acquire and attack targets based on pre-launch input, but without any direct human involvement—such as the Soviet Union’s P-700 Granit—have been in service for decades. Offensive weapons that have been described as acting autonomously—such as the UK’s Brimstone anti-tank missile and Norway’s Joint Strike Missile—are also being fielded by the armed forces of Western nations. And while governments deny that they are working on armed platforms that will apply force without direct human oversight, sophisticated strike systems that incorporate significant features of autonomy are, in fact, being developed in several countries.

In the United States, the X-47B unmanned combat air system (UCAS) has been a definite step in this direction, even though the Navy is dodging the issue of autonomous deep strike for the time being. The UK’s Taranis is now said to be “merely” semi-autonomous, while the nEUROn developed by France, Greece, Italy, Spain, Sweden and Switzerland is explicitly designed to demonstrate an autonomous air-to-ground capability, as appears to be the case with Russia’s MiG Skat. While little is known about China’s Sharp Sword, it is unlikely to be far behind its competitors in conceptual terms.

In light of these developments, a future in which armed platforms execute some missions and attack some types of targets autonomously is certainly imaginable—perhaps even likely. What is beginning to take shape in the air will likely hold true of other operational environments as well. What, then, will be the strategic consequences of integrating such systems into advanced force structures, as far as military stability between states and potentially even between states and nonstate actors is concerned?

Wrecking the Balance

From swords made of iron, to the breech-loading rifle, to the Dreadnought-type battleship, to multiple independently targetable reentry vehicles (MIRVs), the introduction of novel military capabilities that provide one actor with a substantial edge has usually forced others to follow suit. The pressure for competitive force modernization, which is created by the adoption of new technologies and new doctrines for employing them, has sometimes resulted in self-sustaining arms races. These are commonly assumed to make war more likely, although international-relations research now tends towards more complex and less gratifying narratives. It is nonetheless important to ask whether the adoption of AWS could result in a destabilizing arms competition, and there are reasons to believe that it might.

First, if AWS should prove much more capable than human-controlled systems in important areas of warfare, advanced armed forces the world over may find it difficult to exclude them from their force structures for very long, even if they have serious misgivings about them. Pressure for adoption could result from the desire to gain an important edge over a competitor, from the fear of being preempted by others, or arise reactively, once the systems have proved themselves superior on the battlefield. This may well be the case in activities such as air-to-air combat, in which extremely rapid and consistent decisions and the removal of physical limits imposed by the human body may provide a decisive advantage. As Michael Byrnes contends in a highly topical paper, “a tactically-autonomous, machine-piloted aircraft […] will bring new and unmatched lethality to air-to-air combat.” Brought to its logical conclusion, he argues, “a single [such aircraft] with a few hundred rounds of ammunition and sufficient fuel is enough to wipe out an entire fleet” of manned aircraft.

If such visions should prove even partially accurate, the most advanced air forces could find themselves outmoded by an autonomous air-to-air system, much in the way that the Dreadnought relegated battleships that had been launched only years before to virtual obsolescence. While the mechanics would be quite different, AWS may also provide radical advantages in overcoming enemy defenses and executing air-to-ground, anti-ship and other missions in nonpermissive environments, with similar consequences. The resulting arms dynamics could have both qualitative and quantitative elements, even though the former are likely to dominate initially. They may pit offensive systems against other offensive systems, offensive against defensive, or both.

Modernization would probably occur all around, and might be only moderately competitive once a baseline autonomous capability has been established. More consequential would be the destabilizing effect upon individual military balances, in which AWS capabilities might be unevenly distributed to begin with, and in which several rounds of catch-up and advantage-seeking would be required to reestablish a stable balance at a higher level of capability. This would especially be true of constellations in which both sides are motivated by an underlying geostrategic rivalry, and have considerable economic and military potential that allows them to sustain the competition over time. While this could include other great- and medium-power constellations as well, it may apply to the United States and China, in particular.

Of course, there are many scenarios for the adoption of AWS, the most likely of which involves a gradual slide towards autonomy, in which manned, unmanned, semi-autonomous and eventually fully-autonomous systems coexist for many decades, and sudden obsolescence does not occur. What effects such a leisurely modernization process would have are difficult to fathom, but the potential for instability would likely remain.

The Stimulus to Strike

Irrespective of the pace at which the adoption of autonomous weapon systems would proceed, the introduction of such systems—particularly those with offensive missions—could have a significant impact on actors’ conduct in a crisis. While crisis behavior depends on a host of contextual factors, it is widely accepted that force structure (and posture) plays an important role in stabilizing, or further destabilizing, interstate relations at the brink of war. As a recent study has once again brought to the fore, forces that effectively support crisis diplomacy should be potent without being excessively vulnerable, and should pose a credible threat without instilling fears of a crippling “bolt from the blue.”

Could autonomous weapons contribute to the pressure to use force in a crisis in ways that manned or remotely controlled systems do not? At first glance, it would seem that while they are merely kept on alert in their bases, AWS do not provide any first-strike incentives that a human-controlled system of similar configuration and capability would not also provide. However, considering that autonomous weapons may be endowed with capabilities far beyond those of human-controlled systems in some areas, an actor may see a window of opportunity to disable the systems before they begin to act autonomously, and therefore become much more difficult to find and defeat. Knowing that its systems are vulnerable before they are activated, the other party may feel pressured to activate them early, which might induce the first actor to activate his systems as well, and so on. Another set of incentives to “jump the gun” would arise if autonomous systems possessed any specific vulnerabilities that place a premium on early kinetic or nonkinetic attack; e.g., if they are serviced by a few exposed pre-mission programming centers. Such vulnerabilities, however, are not inherent in the concept of AWS, and they are avoidable in principle.

Difficulties also arise when no overt windows of opportunity exist, but one or several actors begin to employ AWS in their crisis operations, be it in support of crisis-management efforts or as part of their preparations for war. In doing so, the states in question would be introducing into the crisis equation an element that is beyond their immediate control, but that nonetheless interacts with the human opponent’s strategic psychology. In effect, the artificial intelligence (AI) that governs the behavior of autonomous systems during their operational employment would become an additional actor participating in the crisis, though one that is tightly constrained by a set of algorithms and mission objectives. This may raise doubts in the human participant’s mind as to whether these systems pose a danger not just as an instrument governed by the opponent’s intentions, but independently of them. (To be fair, one could say the same about a powerful military organization like the Cold War–era Strategic Air Command, which also acted according to a logic of its own, and sometimes without effective supervision from its political masters.)

Additionally, because no loss of human life is involved, the threshold for the use of force against AWS may be lower than it is against manned systems, and the attacker may believe that he can get away with destroying them, thus triggering conflict in an act of miscalculated escalation. On the other hand, the fact that autonomous weapons are not fully under the control of a human agent can also be seen as introducing what Thomas Schelling called “a threat that leaves something to chance,” which could induce both sides to behave more responsibly for fear of losing control over a tense situation.

Dogs of War, Unchained

If a crisis does result in the use of force, escalation theory reminds us that much depends on how the initial stages of the conflict unfold. If escalation is immediate and dramatic, the crisis becomes irrecoverable and the conflict is unlikely to be contained. To avoid such an outcome, offensive weapons systems should be recallable and postured such that they need not be launched immediately. Moreover, at all stages of the conflict, they should be employed in ways that avoid inadvertent escalation, which occurs when conventional military actions unintentionally undermine the opponent’s nuclear deterrent. This requires accurate target discrimination, as well as a doctrine that avoids “patterns of damage or threat” to an actor’s strategic forces.

In all of these areas, AWS raise difficult questions. Recallability and loss of control, clearly, are major concerns. While strike systems along the lines of the X-47B UCAS could initially be employed under close human supervision, it is difficult to see how they could realize their full potential in those scenarios where they offer by far the greatest value added: intelligence, surveillance and reconnaissance (ISR) and strike missions deep inside well-defended territory, where communications will likely be degraded and the electronic emissions produced by keeping a human constantly in the loop could be a dead giveaway. For now, the Navy is skirting this and other bureaucratic-cultural issues by downgrading UCAS into a system that cannot really fulfill the role for which it was originally envisioned. The Air Force’s “optionally manned” long-range strike bomber (LRS-B) will face a similar dilemma in its uninhabited configuration.

While AWS would be inherently more recallable than ballistic missiles and, in fact, no less recallable than manned aircraft while they are in permissive airspace, the equation would change once they infiltrated denied zones. These systems would be among the first to cross into enemy territory, as few other assets would be survivable inside the envelope of a full-blown anti-access defense. Under an operational construct like the Joint Operational Access Concept, which currently represents the state of the art in access warfare and which envisions “striking enemy antiaccess/area-denial capabilities in depth,” the most survivable strike assets would have to penetrate deep into the defended zone and persist in enemy airspace for extended periods of time.

While executing their missions, they would be subjected to cyber and other nonkinetic attacks, and could be at least intermittently out of contact with their human supervisors over periods of twelve hours or more, depending on their unrefueled endurance. During these stints inside the defended zone, AWS might not be fully recallable or reprogrammable, even if the political situation changes, which presents a risk of undesirable escalation and could undermine political initiatives. (It should be noted that similar risks are routinely presented by submarine operations, with the potentially significant difference that these do not take place over the enemy’s home territory. The sinking of the Argentine cruiser ARA General Belgrano by HMS Conqueror during the 1982 Falklands War is a case in point.)

While they are scouting, disrupting and destroying key nodes in the enemy’s defenses, autonomous strike systems would also be presented with target-discrimination challenges of some magnitude. In correctly identifying legitimate targets for attack, AWS would have to rely on some kind of pre-mission input to which to compare their sensor data, and on algorithms that ensure positive identification and limit collateral damage. Knowing that this is the case, opponents would have every incentive to complicate the targeting process by employing cover, concealment and especially deception. This could include, inter alia, relocating important assets to busy urban settings or next to inadmissible targets, such as hydroelectric dams or nuclear-power stations; altering the appearance of weapons and installations to simulate illegitimate targets, and perhaps even altering illegitimate targets to simulate legitimate ones; the large-scale use of dummies and obscurants; and the full panoply of electronic deception measures. Even in the absence of such measures, discrimination can be a daunting task; to give just one example, the Chinese DF-21 medium-range ballistic missile exists in nuclear, conventional land-attack and anti-ship versions—all of which are deployed within the same organizational framework. In such cases, only reliable and up-to-date order-of-battle analysis might be able to ensure discrimination, and even then, fateful mistakes and threatening “patterns of damage” that lead to inadvertent escalation will remain a possibility.

Even If the Skies Fall Not: A Realist Case for Caution

The political decision for or against autonomous weapons will ultimately turn not on legal or moral issues, but on the answer to a very practical question: Are the advantages of removing human supervision at the point of attack so overriding that they justify taking the resultant risks? As is usually the case with novel and powerful military instruments, AWS promise to provide those who embrace them with a set of capabilities that, from a narrowly military-operational point of view, are not to be frowned upon. Just as typically, these advantages will come at a strategic cost. In the case of AWS, this is likely to include serious modernization pressures, which could prove destabilizing in some instances. While they need not provide any clear-cut first-move advantages in a crisis, it is also likely that they will touch upon issues of crisis management, perhaps in ways that are not well understood at present. Finally, there are scenarios in which the introduction of autonomous strike systems could result in a temporary loss of high-level control over operations, and unwanted escalation (conventional or nuclear).

None of these dangers are new or unique to AWS, and they will probably be present in future strategic rivalries, crises and conflicts, even if fully-autonomous weapons and platforms never leave the drawing board. Moreover, advocates may argue that these systems can even increase stability by ensuring access, strengthening deterrence and reducing critical vulnerabilities.

That said, policy makers should not let their hands be forced by these advocates, or by the “righteous indignation” of activists in search of the next class of weapons to ban. They should exercise prudence and caution in weighing the implications of autonomous weapons for military stability against the potential benefits of introducing these systems into an equation that is highly complex as it stands. This requires that, especially where nuclear weapons come into play, the burden of proof lie with the proponents of AWS, not with their critics. It also requires that mutual restraint be explored as a serious option. At the very least, it would appear that the strategic risks presented by these systems need to be studied much more thoroughly before current demonstrator programs are allowed to mature into operational systems. Those who are driving the competition would do well also to invest in the knowledge base that is required to understand the full implications of an AWS revolution—or, indeed, to avert it, if it is found to be an unattractive prospect after all.

Michael Carl Haas is a researcher with the Global Security team at the Center for Security Studies, ETH Zurich. His areas of interests include air and missile power, military innovation, and the proliferation of advanced conventional weapons.

Image: Flickr/The U.S. Army/CC by 2.0