
Risks and Benefits of Autonomous Weapon Systems: Perceptions among Future Australian Defence Force Officers

By Drs. Jai Galliott and Austin Wyatt


Abstract

The prospect of increasingly autonomous systems has seized the military imagination and rapidly generated an international debate surrounding the merits of a potential preemptive ban under international law. What has been missing to this point has been an in-depth consideration of how artificial intelligence, autonomous systems, and unmanned platforms would be perceived by the junior officers who will play a core role in their integration into future militaries. Drawing on a broad survey of officer cadets and midshipmen at the Australian Defence Force Academy conducted in 2019, this article provides an analysis of how perceived risks and benefits of autonomous weapon systems are influencing the willingness of these future defense leaders to deploy alongside them.

Introduction

The prospect of increasingly autonomous weapon systems (AWS) has seized the military imagination and featured prominently in strategic guidance, not just in the Australian Defence Force (ADF) but also from our allies,1 competitors,2 and nonstate actors.3 It is also becoming increasingly apparent that artificial intelligence (AI), trusted autonomous systems, and unmanned platforms will play a crucial role in the ADF’s capacity to maintain a credible deterrent capability edge over potential challengers in the region. However, there have been no concentrated, published efforts to determine how military end users would perceive such systems.

Existing studies examining public opinion toward lethal autonomous weapon systems (LAWS) have been limited in scope and focused primarily on civilians in the United States. At the time of writing, the only publicly available Australian research is also civilian-focused. Over the past two years, the Campaign to Stop Killer Robots has conducted two surveys of Australian civilians and identified that more than half of respondents opposed autonomous weapons.4 Overall, while these papers provide a useful baseline understanding, they remain focused on civilians rather than the ADF.

Indeed, the literature generally seems to assume that military personnel would be more likely to support the use of LAWS than the civilian population. While trust has been raised as an essential factor,5 this assumption has not yet been tested in an empirical public opinion study. Therefore, the purpose of this article, and the underlying study, was to test this assumption among officer cadets and midshipmen at the Australian Defence Force Academy (ADFA) to identify which perceived risks and benefits of AWS are most influential on the willingness of these future defense leaders to deploy as part of manned-unmanned teams (MUM-T).

This article is divided across four sections that outline the results of the underlying study and highlight the main takeaways for discussion. The first substantive section of this article establishes a baseline understanding of the extent to which the respondents were willing to deploy into a combat environment as part of a MUM-T that included potentially lethal robots with varying levels of autonomous functionality. The next two sections consider a range of potential benefits and risks respectively and outline which of these the respondent group considered important, informing an alternate, end user–based view of the key challenges to the effective integration of unmanned, AI-enabled, or autonomous systems into the future ADF. Finally, this article will discuss three core conclusions that can be drawn from this study before concluding with policy and doctrinal recommendations.

Regardless of whether a preemptive development ban is imposed on lethal variants under international law, the impact of increasingly autonomous unmanned systems will be felt most keenly by the junior officers charged with leading MUM-Ts in combat. However, there is currently a dearth of published research that engages directly with active military personnel or questions how the emerging generation of officers perceive increasingly autonomous platforms and systems. In response to this gap, the Values in Defence and Security Technology Group conducted a survey of more than 800 officer cadets and midshipmen at the ADFA, Australia’s premier tertiary military education institution. This article utilizes that dataset to inform an analysis of how the perceived risks and benefits of autonomous systems are influencing the willingness of these future defense leaders to deploy alongside them.

Prior Surveys of Perceptions toward Autonomous Weapon Systems

At the time of writing, this study is believed to be the largest survey of serving military personnel examining perceptions toward autonomous military technology. It was also the first survey of its kind to focus almost exclusively on Australian military respondents, as prior published studies have focused primarily on the United States.

Charli Carpenter conducted the first study of US public opinion toward AWS in 2013. More than half of the respondents in this study said that they opposed autonomous weapon systems (with 39 percent expressing strong opposition).6 Unfortunately, this initial study utilized leading and highly emotive terminology in its questions. This is a topic of which the general public still has little knowledge or understanding beyond the immediate association of robotic weapons with the Terminator movie franchise (although some may prefer Transformers). As Michael Horowitz’s subsequent study confirmed, the influence of contextualized questioning is particularly important with this topic. Despite this concern, Carpenter’s paper was an important first step in building our understanding of public attitudes toward this technology and is still widely referenced in academic literature and in working papers produced as part of the ongoing meetings of the Group of Governmental Experts on LAWS, convened by the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) in Geneva.

A subsequent online survey, conducted by the Open Roboethics Institute in November 2015, was the first to include respondents outside of the United States. The results of this survey were fairly clear, with 85 percent of respondents saying that LAWS should not be used offensively and 67 percent supporting a ban.7 The most common reason for opposing LAWS was that only humans should be allowed to make the decision to end life.8

Interestingly, in a 2016 study, Horowitz found that the baseline level of opposition to autonomous weapons dropped from 48 percent to 27 percent if autonomous weapons protected US soldiers and were more effective than remote-operated weapons.9 While Horowitz has not published a follow-on from this admittedly US-focused study, the key implication was that the manner in which autonomous systems are presented to the public is an important factor in whether they would be negatively received.

Most recently, the Campaign to Stop Killer Robots commissioned two large-scale but limited surveys, the first in 201710 and the second in 2019.11 These surveys found opposition to autonomous weapons rising, hitting 61 percent in the second survey.12 The most common reasoning among those who opposed killer robots was that “machines should not be allowed to kill” and a concern that AWS would be unaccountable.13 Of the 1,000 Australian respondents in 2019 (out of 18,795 total respondents), 16 percent were supportive or strongly supportive and 59 percent were opposed or strongly opposed. Interestingly, 25 percent of Australian respondents stated that they were unsure, the same rate as Canada and the United States and 8 percentage points higher than the survey average.14 This data was an important contribution, given the argument that LAWS violate the principle of humanity, offend the public conscience, and should thus be banned under the Martens Clause.15 However, the underlying surveys were quite limited in scope, with only those who indicated opposition being asked the survey’s second question. Furthermore, their value for informing policy beyond supporting a general call for a ban is questionable, given Horowitz’s findings that the composition of the question was influential when measuring public reaction to LAWS.16

The underlying survey for this article accounted for these shortcomings by adopting neutral language and utilizing a research design that asked why respondents held the expressed views. Among the core purposes of this article, therefore, is to contribute to the literature a detailed exploration of how a series of risk-benefit factors affects perceptions toward autonomous systems among the next generation of ADF leaders.

Research Design

Reviewing the steadily growing discourse surrounding the development of increasingly autonomous weapon systems supports three hypotheses about how these future military leaders would perceive the risks and benefits associated with deploying alongside “killer robots,” each of which is tested in this article. First, based on the above surveys, we could expect a majority of respondents to either oppose or strongly oppose the use of machines that are “allowed to kill” without direct human control. Second, given the results that Horowitz found, this cohort’s perception of autonomous weapon systems should shift markedly toward opposition between scenarios depending on how the system’s level of meaningful human control is described. Finally, given the clear focus in publicly available doctrine from the Five Eyes states, we hypothesized that military respondents would place the highest value on potential risks and benefits of autonomous systems that relate to improving force protection, reducing procurement costs, and replacing humans in dull, dirty, or dangerous tasks. Interrogating this hypothesis was a key factor in developing the questions on the importance of perceived risks and benefits.

The authors also acknowledge that this research design has two major limitations. The first is that, as this is the largest survey of military officers to date, we cannot draw on extant literature to inform an expectation of how this data will differ from public opinion among the civilian population. However, extant research on attitudes toward the use of armed, remotely operated unmanned aerial vehicles (UAVs) suggests that junior military leaders would have a greater level of understanding than the general public, but that this would not necessarily translate into a significantly higher level of support. In response to this gap in the literature, the underlying research instrument included a comparative scenario that presented respondents with hypothetical systems with varying levels of human control.

The second limitation is that this article’s focus on respondents from the ADF could raise legitimate questions about its generalizability. While acknowledging this concern, the authors make two contentions. The first is that, as the largest military-focused survey of its kind at the time of writing, the data itself offers a valuable insight upon which future studies of other militaries could be based. Second, the ADF is regarded as among the most capable and well-equipped militaries in the region, especially on a per capita basis. Furthermore, while a justifiable argument can be made that the ADF has sometimes proven a slow or inconsistent adopter of new innovations, it also has a history of successfully leveraging military technology to generate a sufficient competitive edge to maintain credible deterrence. Therefore, the attitudes expressed by these respondents could feasibly be used as a comparative basis for estimating servicemember perceptions in operationally and doctrinally similar militaries, both within the Five Eyes network and more generally among technologically advanced middle-power states.

Demographics

Before moving on to the substantive analysis and discussion, it is useful first to outline key features of the underlying dataset. This survey was conducted in early 2019 and, at the time of writing, is the most extensive study of military attitudes toward autonomous systems in terms of scale and detail. Reflecting their status as officer cadets and midshipmen, the respondents were almost exclusively young people (97.6 percent were between the ages of 18 and 24). Among the respondents, there was only limited female representation (26.8 percent), and more than 87 percent were born in Australia. Furthermore, while there was a roughly even distribution based on year of study, a plurality of respondents were from the Army (45 percent), with Royal Australian Air Force officer cadets and Royal Australian Navy midshipmen accounting for the remaining 33 percent and 22 percent respectively.

The demographic breakdown of respondents has two important implications for this article. The first, and most obvious, is that this data focuses the analysis on military personnel rather than the broader civilian population. This is admittedly a limitation of scope; however, focusing on the end users separates this article from existing research on attitudes toward autonomous systems, which has been almost exclusively focused on the civilian population. Secondly, the authors are cognizant that the focus on junior officers arguably limits the applicability of the results to current defense policy and procurement. The authors would instead argue that the emerging nature of autonomous systems (and AI more broadly) makes it critical that we understand how the decision makers of tomorrow understand the ethical, legal, practical, and operational potential, risks, and constraints of increasingly autonomous systems.

Willingness to Deploy Alongside Unmanned or Autonomous Systems

The first important takeaway from this study is a baseline understanding of the extent to which these young defense leaders would be comfortable, or not, deploying into a conflict zone as part of a MUM-T, also known as a human-machine team. The MUM-T concept has become prominent in the public and policy discourse surrounding autonomous and unmanned systems.17 The underlying assumption with MUM-Ts centers on the contention that keeping humans in or on the Observe, Orient, Decide, and Act (OODA) loop somewhat mitigates the ethical and legal issues with killer robots, as well as reducing the technological and financial barriers to deploying potentially lethal autonomous systems.18 This study aimed to interrogate the assumption that military officers would be comfortable deploying in MUM-Ts with autonomous systems. Therefore, respondents were asked about their willingness to deploy in a team “involving robots to achieve a combat mission,” where the system was given varying levels of autonomous operational capacity.19 The response data is illustrated in figure 1.



Figure 1. Willingness to deploy alongside autonomous systems

There are three main conclusions regarding military perceptions of autonomous systems that can be drawn from these initial data points. The first is that this data illustrated that a significant relationship exists between the perceived level of independence of the “robot” and willingness to deploy across the four MUM-T scenarios. Where the autonomous systems were either entirely under human control or were limited to preprogramed functions, the vast majority of respondents were willing or somewhat willing to deploy alongside them. This would cover a variety of currently deployed systems that, for example, provide landing assistance to human pilots. However, when the autonomous system could exercise “preprogramed decision making” in the use of force in predefined areas (which correlates with semi-autonomous weapon systems), there was a significant negative shift, although willing and somewhat willing respondents retained a slim majority (51.7 percent). Scenario three also marked a significant increase in the rate of uncertainty in responses, which rose to 16.6 percent from 6.8 percent in scenario two. In the case of scenario four, where the system would meet commonly used definitions of a LAWS, there was a considerable increase in respondents who would be unwilling to deploy alongside such systems, making this the only scenario in which a majority of respondents would not deploy. However, it is also important to note that the share of respondents who were “willing” remained similar, and above 10 percent, in both scenarios (15.8 percent and 13.2 percent respectively). Opposition to this level of autonomy is unsurprising given the findings of prior research, which admittedly focused on civilians; however, it does support a conclusion that, while a minority would currently be willing, the majority of this cohort harbors a discomfort with deploying alongside autonomous systems with the independent capability to apply force.
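The relationship described above can be illustrated with a standard test of independence. The following is a minimal sketch (not the authors’ published analysis) of a chi-square test on a scenario-by-response contingency table; the counts are hypothetical placeholders scaled to roughly match the percentages reported in the text, as the raw counts were not published in this article.

```python
# A minimal sketch of testing whether willingness to deploy depends on the
# described level of autonomy. NOTE: these counts are HYPOTHETICAL
# placeholders scaled to approximate the percentages reported in the text
# (total N ~ 818); the raw survey counts were not published here.
from scipy.stats import chi2_contingency

# Rows: scenarios 1-4; columns: willing side, unsure, unwilling side.
observed = [
    [767, 24, 27],    # scenario 1: human controls every function
    [714, 55, 49],    # scenario 2: preprogramed functions under human control
    [423, 136, 259],  # scenario 3: preprogramed use-of-force decision making
    [280, 140, 398],  # scenario 4: fully independent (LAWS-like)
]

chi2, p, dof, _ = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
# A small p-value indicates that willingness is not independent of the
# described autonomy level, consistent with the pattern in figure 1.
```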

Secondly, this data supports the assertion that question construction and discursive practice are particularly influential with autonomous systems, even for military officers. Note that 3.3 percent (27 respondents) would be either unwilling (7 respondents) or somewhat unwilling to deploy alongside a system that “need[s] a human operator [to] control every function,” and an additional 24 were uncertain. Where the system was under human control but could “independently perform some preprogramed functions,” twice as many were unwilling (2.1 percent, or 17 respondents) or somewhat unwilling (3.9 percent, or 32 respondents) to deploy in the MUM-T, and a further 55 respondents were uncertain. While a statistically minor segment of the cohort, these results provide an interesting illustration of the discursive effect in the case of autonomous systems and of an additional wariness toward machines utilizing potentially lethal force. Consider that, from a purely function-based perspective, these descriptions could apply to a variety of systems that are already in use with the ADF. It is unlikely that this cohort would be unwilling to deploy in a combat unit that utilized remote turrets (such as on the Bushmaster Protected Mobility Vehicle), automatic target identification (such as the Phalanx Close-In Weapon System), or autonomous navigation with human-controlled strike capability (such as the MQ-9 Reaper unmanned combat aerial vehicle [UCAV]). This further reinforces the need for detailed, fact-based training for military personnel to dispel remaining myths and address concerns among junior leaders regarding autonomy in military systems.
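As a consistency check, the count-percentage pairs quoted above can be back-solved for the implied cohort size; all three recover the “more than 800” respondents reported earlier.

```python
# Back-solving the implied cohort size N from the (count, percent) pairs
# quoted in this paragraph.
pairs = [(27, 3.3), (17, 2.1), (32, 3.9)]  # (respondents, reported percent)
for count, pct in pairs:
    print(f"{count} respondents at {pct}% implies N ~ {count / (pct / 100):.0f}")
# -> N ~ 818, 810, and 821 respectively, consistent with "more than 800"
```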

Finally, while this pattern of responses remained consistent, there was some interesting variation when the data was analyzed on the basis of parent service branch. For example, Navy midshipmen were notably more willing to deploy alongside autonomous systems in scenario two yet were more uncertain in scenarios three and four. Indeed, 41.9 percent of naval respondents were unwilling or somewhat unwilling in scenario four, compared to 48.2 percent (Army) and 50 percent (Air Force). Contrastingly, the Army respondents recorded the highest rates of outright unwillingness across all four scenarios; for example, 14 percent of Army respondents were unwilling to deploy in scenario three compared to only 7.8 percent of Navy midshipmen and 9.3 percent of Air Force officer cadets. The Air Force respondents were broadly consistent with their Army colleagues yet displayed less uncertainty in scenarios three and four. This variance, while interesting, cannot be explained solely by differences in organizational culture between the services, because this cohort consisted of trainee officers whose military experience had been chiefly tri-service at the time of the survey. Their distinct responses to these scenarios therefore suggest that other factors must be at play beyond the natural biases generated by their service branch’s weapon systems and mission. The logical next step in this research was thus to explore which potential benefits and risks of autonomous systems are most influential in building these perceptions among the next generation of defense leaders.

Perceived Benefits of Autonomous Systems

The second component of this study engaged directly with this question, asking what level of importance these junior officers placed on a range of identified risks and benefits associated with autonomous systems. In this section of the survey, respondents were asked to rate, on a Likert scale, how influential each of a list of benefits (fig. 2) was to their views on deploying alongside autonomous systems. The results of this component provide valuable insights for future training and familiarization practices as increasingly autonomous systems, as well as distinct platforms, are progressively integrated into the future ADF. While most respondents listed each of the 10 benefits as “somewhat important” or “important,” a closer look at the data reveals three takeaways worth highlighting.

Figure 2. Importance of perceived benefits of AWS
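As a methodological aside, per-benefit shares of the kind shown in figure 2 are typically produced by tabulating Likert responses per item. The sketch below assumes a tidy respondent-level table; the column names, category labels, and toy rows are illustrative assumptions, not the survey instrument’s actual coding or data.

```python
# A minimal sketch, assuming a tidy table of respondent-level Likert ratings.
# Column names, labels, and the toy rows are ILLUSTRATIVE assumptions.
import pandas as pd

df = pd.DataFrame({
    "benefit": ["Reduce harm to ADF personnel", "Reduced costs",
                "Reduce harm to ADF personnel", "Reduced costs"],
    "rating":  ["Important", "Somewhat important",
                "Important", "Somewhat unimportant"],
})

# Percentage share of each rating within each benefit.
shares = (df.groupby("benefit")["rating"]
            .value_counts(normalize=True)
            .mul(100).round(1)
            .unstack(fill_value=0))

# Rank benefits by the share of respondents rating them "Important".
print(shares.sort_values("Important", ascending=False))
```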

First, it is worth noting that the one significant exception to this pattern was when respondents were asked about the potential of autonomous systems to reduce harm or injury to enemy combatants. This factor was a notable outlier, with less than 12 percent listing it as a significant influence on their view of autonomous systems. This is particularly telling when it is contrasted against the other three harm reduction factors (which focused on the ADF, allied personnel, and civilians), which were clearly the most influential factors, being listed as important by 83–89 percent of respondents. The authors acknowledge that this data point could be interpreted with some skepticism, given the cohort’s status as young, inexperienced officer cadets and midshipmen; however, this same argument also highlights the core importance of identifying this discrepancy. These are soldiers, sailors, and airmen who will have command authority and oversight over increasingly autonomous systems in a future combat zone. The fact that the reduction of harm and injury to enemy combatants was so widely dismissed is a warning sign, especially when considering the expected importance of counterinsurgency and urban operations in the future operating environment, and this should prompt the provision of further targeted ethics training for these officer cadets and midshipmen.

Second, this data suggests that several of the benefits traditionally touted in favor of adopting autonomous systems are of less importance to the end user than expected. Aside from the risk of harm to enemy personnel, the least important potential benefits were reduced costs and new jobs and skill sets. These two factors were the only others that a significant number of respondents considered somewhat unimportant (15.5 percent and 11.7 percent respectively), and they were considered important by only 26.5 percent and 33.6 percent of respondents. Interestingly, these factors also had the highest rates of being selected as somewhat important. The results for the remaining variables were similar: while each was listed as unimportant by less than 7 percent of respondents, each was listed as important at an average rate of only 40 percent. This suggests that training and messaging around autonomous systems should focus on the potential to protect host-nation and partner forces, as well as to improve the accuracy and reliability of targeting to protect civilians more effectively from unintentional engagement.

Finally, the data from this question displayed more significant service branch variation than was seen in the previous question. Unsurprisingly, given their greater willingness to deploy alongside autonomous systems, Navy midshipmen were overall more likely to describe a benefit as important, while Air Force officer cadets recorded the fewest “important” responses yet the most “somewhat important.” Interestingly, Air Force respondents were half as likely to list harm to enemy combatants as unimportant and more likely to list this factor as somewhat important than either of the other services. Contrastingly, Army officers overall assigned significantly less importance to each benefit, with the notable exceptions of harm reduction to ADF personnel, allied forces, and civilians. For example, Army respondents were twice as likely as Air Force officer cadets to regard harm reduction to enemy combatants as unimportant, and 10.2 percentage points more likely than Navy midshipmen. A similar difference can be seen with reduced costs, which twice as many Army officer cadets viewed as unimportant compared to their peers. Overall, the benefits data reinforces the need for individual service branches to supplement central efforts to integrate autonomous systems with training and exercises that reflect the specific platforms and domains within which they operate.

Perceived Risks of Autonomous Systems

The final survey question discussed in this article focused on determining the influence of a series of 13 potential risks on the willingness of the respondents to deploy in MUM-Ts. This question provided valuable insight into which risks this cohort of future defense leaders considered most important—a perception that can guide future efforts to build trust among defense personnel and focus attention within the military context, rather than on the full range of concerns raised by prior civilian-focused studies. Overall, this data (figs. 3A and 3B) illustrated that respondents placed greater importance on operational risks—such as safety, accuracy, and loss of human control—than on the procurement and maintenance costs of autonomous systems or their potential to be organizationally disruptive within the ADF.

Figure 3A. Importance of perceived risks of AWS



Figure 3B. Importance of perceived risks of AWS

The risk perception data supported the hypothesis that respondents would place greater importance on the potential consequences of removing a weapon system from their direct control. While all identified risks were considered important or somewhat important by most respondents, potential safety and accuracy concerns were immediate outliers. Less than 2 percent of respondents considered these two variables unimportant or somewhat unimportant, and the number that were unsure was also negligible. Instead, we see that 83 percent of respondents placed high importance on safety, and over 86 percent did so for the accuracy of targeting and identification. Breaking down these figures by service branch reveals that Army officer cadets were more likely than their colleagues to deem both factors important, with the latter rating them “somewhat important” at a compensatory rate. The rationale for these allocations is immediately apparent when we consider that these officer cadets would be asked to deploy alongside autonomous systems in complex land-based battlespaces, potentially in a counterinsurgency or hybrid warfare context, and that they would already be cognizant of their responsibility to ensure the safety of their soldiers while abiding by the Laws of Armed Conflict.

Building on that thought, as future officers, these respondents were preparing for their first command, for example, an infantry platoon, an air defense unit, or an artillery battery in the case of Army officer cadets. It therefore makes sense that these respondents would also be concerned by potential accountability issues and loss of human control. However, the distinction between these risks is also worth noting: while 75.5 percent listed the latter as important and 13 percent as somewhat important, only 59.4 percent considered accountability issues important, with 29.1 percent considering this risk only somewhat important. Interestingly, 7.5 percent of Air Force officer cadets deemed accountability issues somewhat unimportant, compared to 2.8 percent of Navy midshipmen and 5.2 percent of Army officer cadets. Determining why more respondents deemed accountability issues less important than the loss of human control would be a valuable avenue for future research; however, on this data it is possible to contend that more respondents were concerned by the potential for autonomous systems to go rogue, so to speak, than by questions of military accountability (which, as officers, they must already consider).

Finally, on this aspect of their role as future defense leaders, it is interesting to note that the potential for autonomous systems to erode command authority and to affect unit cohesion was deemed important by only 53.8 percent and 45.4 percent of respondents respectively, and just over 9 percent were uncertain in both cases. Given the prevalence of concepts for incorporating AI into command-and-control processes across multiple militaries, this suggests that the future generation of ADF officers (who will be charged with incorporating and operating alongside such systems within operational command environments) would benefit from additional training, simulation, and war-gaming exercises to improve their understanding of the potential impacts and risks of integrating autonomous systems into the operational command cycle.

As with the benefits question, this data illustrates that these respondents placed less importance on the cost to build and maintain autonomous systems and on job displacement. What is distinct about this risk evaluation is that less importance, particularly among Air Force officer cadets, was also placed on potential challenges to ADF/service values and on psychological impacts. There is a great deal of literature on moral and psychological injury from serving in conflict and an emerging body examining why its prevalence is so high among drone pilots. It is, therefore, concerning that these risks were considered unimportant or somewhat unimportant by 18.4 percent and 11.8 percent of respondents respectively. In fact, the impact of autonomous systems on service values was considered unimportant at the highest rate of any identified risk and somewhat unimportant at the second-highest rate (behind job displacement). Furthermore, approximately 13 percent of respondents indicated that they were uncertain how to classify these risks in relation to autonomous systems. This is indicative of a potential lack of understanding of the ethical, moral, and psychological aspects of deploying in MUM-Ts among these future defense leaders that would need to be addressed prior to widespread integration of these technologies into the future force.

Discussion

Although prior literature has engaged directly with the importance of many of these perceived risks and benefits, these studies have generally been conceptual. At the time of writing, this is the only study to present the risks and benefits of potentially lethal robots to the officer cadets and midshipmen who will be responsible for the safe and effective operation of MUM-Ts. Considering this list of perceived factors, both positive and negative, through the lens of the intended end user revealed three core takeaways that could inform future defense doctrinal development and procurement.

The first core takeaway from this study was that there is a clear difference between the perceptions of this cohort and those of ADF leadership in terms of how vital the reduced development, procurement, and maintenance costs of autonomous systems are as a potential benefit over low-mass manned platforms. Reduced operational costs are regularly touted as a core factor in favor of pursuing increasingly autonomous systems.20 One would, therefore, expect this to be reflected in the views of the officer cadets and midshipmen. Instead, this study found that comparatively few respondents considered either cost or the potential for autonomous systems to disrupt traditional job roles as important factors in determining whether they would be willing to deploy as part of a MUM-T. Therefore, while the resource requirements to develop, procure, and deploy increasingly autonomous unmanned systems are important for defense planners, they are unlikely to be a useful focus for internal efforts to acclimatize soldiers to battlefield robots.

Second, this cohort indicated that the most influential factors in determining willingness to deploy with autonomous systems are their perceived safety, accuracy, and reliability. While the importance of trust in autonomous systems is well-documented,21 this study suggests that the ADF should integrate trust-building and autonomous system acclimatization exercises directly into the Academy Military Education and Training curriculum. Given the noted response variance based on parent service branch, an alternative could be to integrate such training into the Single Service Training components. This would have the added benefit of also accommodating non-ADFA officer cadets and midshipmen. Beyond the impact of such training on the junior officers themselves, it is also worth considering the importance of addressing the concerns highlighted in this study for the integration of autonomous systems into the core combat units of the ADF.

Prevailing wisdom holds that small-unit combat teams only work when the soldiers, sailors, or airmen trust their comrades and leaders, understand their role intimately, and are able to react to changing battlefield conditions in a consistent manner even under intense stress.22 The results of this study indicate that effective trust building and acclimatization at the small-unit level prior to combat deployment is vital, and they highlight the issue of junior enlisted soldiers being influenced by the views of their leaders (principally these officers, although also noncommissioned officers) toward unmanned platforms. If, for example, the lieutenant commanding an Australian Army rifle platoon is unwilling to deploy alongside a potentially lethal unmanned system that can use force based on preprogramed criteria,23 it is unlikely that their enlisted soldiers are going to be disposed to trust that platform in combat. Without that trust, the unit is, quite understandably, likely to ignore, minimize, or leave behind that piece of equipment regardless of doctrinal guidance.

Taking a step back from the tactical level, addressing the concerns raised by these officer cadets would also be a useful step toward improving the capacity of the ADF to build and maintain a capability edge in autonomous systems through a more effective, bottom-up innovation and diffusion cycle. Prior studies have demonstrated that bottom-up participation is a vital component of successful military innovation.24 The development of the Innovation and eXperimentation Group (IXG) is an apparent attempt to jump-start bottom-up innovation and experimentation in the Australian Army.25 While current officers commanding at the company and battalion levels are influential supporters of such efforts, for the IXG to be truly useful, it will require that junior officers take the initiative to experiment with the unmanned or autonomous systems under their command. The most effective way to equip junior officers for success in this endeavor would be to incorporate tailored war games and exercises into their initial training to both acclimatize emerging leaders to autonomous systems and to encourage tactical and operational experimentation once they reach their first command.

Finally, from an ethical standpoint, this study raises both positive and concerning implications for how junior military leaders perceive the impacts of autonomous systems on the battlespace. Beginning with the positive results, reductions of harm to civilians, ADF personnel, and allied contingents were almost universally considered to be important factors affecting respondents’ willingness to deploy alongside autonomous systems. There is also an immediately clear link here to the importance that was placed on the safety, accuracy, and reliability of autonomous systems. The argument that autonomous systems could reduce the potential of harm to friendly forces and civilians is similar to the justifications for the use of prior military technologies such as precision-guided munitions and armed remote-operated UAVs. Furthermore, these results also reflect the Australian Army’s Robotic and Autonomous Systems Strategy, the goals of which included using increasingly autonomous systems to enhance the capabilities of the soldier and reduce their physical and cognitive load, to augment their decision making, and to replace manned platforms in specific roles.26 Overall, these results suggest that future defense leaders will be amenable to arguments that autonomous and AI-enabled systems will allow for far more accurate targeting; remove ADF personnel from dull, dirty, or dangerous roles; and limit their exposure to combat.

It is, however, concerning that respondents placed a lower importance on adverse outcomes, such as the potential for autonomous systems to affect unit cohesion, to inflict additional stress or psychological damage, or to conflict with the values of the ADF. Given the emerging research on the rate of psychological injury among drone operators in the United States,27 it would be valuable for the ADF to consider how the use of potentially lethal robots interacts with its values and with how officers are taught ethical and lawful battlefield operations. While the advent of AWS may yet remove human soldiers from elements of warfare, we must be careful that the reduction in physical risk does not come at the cost of exposing soldiers, sailors, and airmen to additional risk of moral or psychological injury.

Recommendations for Further Research

This article raises four interesting avenues for additional research. The first is to conduct a follow-on study focused on noncommissioned officers (NCO) and long-term enlisted soldiers. This data would be a valuable analytical companion to this piece because it is these experienced soldiers who would advise junior officers in combat. Understanding how NCOs perceive the risks and benefits of autonomous systems would also be valuable from the perspective of norm generation and training because, as the senior soldiers in a given unit, they would have a significant socializing influence upon the tactical use of autonomous systems.

Similarly, the second line of future research would be to conduct a qualitative follow-up study with a representative sample of the original respondents to contextualize and further explore the implications raised in this article. Companion interviews or focus groups would inform a more detailed understanding of the link between these risk-benefit perceptions and willingness to deploy alongside autonomous systems in a manner that could inform the creation of targeted training or identify design factors that should be prioritized in first-generation autonomous systems. Finally, this would allow the researchers to undertake a more direct comparative analysis of each potential risk or benefit, which could inform far more specific recommendations for the ADF.

The third avenue for future research would be to refine the study’s focus by segregating respondents on the basis of service branch and by referring directly to capabilities and platforms that their respective services have identified as priorities for integrating autonomy. This would more clearly indicate the extent to which the equipment, culture, and battlefield function of the respondents’ service branches influence their perception of the risks and benefits of autonomous systems, and whether this explains the level of service branch difference seen in this study.

Finally, consider that this cohort will be increasingly unlikely to directly employ physical platforms on the frontlines as their careers progress and autonomous systems proliferate and diffuse. Therefore, the fourth avenue for future research would be to analyze whether these perceptions among junior military officers change when the autonomous system is integrated into their command-and-control processes, such as with an AI-enabled digital assistant for collating and prioritizing incoming signals intelligence for a battalion command post.
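To make this fourth avenue concrete, the sketch below illustrates the kind of command-post assistant described above as a simple priority-scoring rule over incoming reports. Every field, weight, and threshold here is an invented illustration, not a description of any existing or planned ADF system; crucially, a tool of this kind only orders the queue, while the decision to act on any report remains with the human commander.

```python
# A purely hypothetical sketch of an AI-enabled digital assistant that ranks
# incoming signals-intelligence reports for a battalion command post.
# All fields, weights, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class SigintReport:
    source_reliability: float    # 0.0-1.0, analyst-assigned
    relevance_to_mission: float  # 0.0-1.0, e.g., from a text classifier
    age_minutes: float

def priority(report: SigintReport) -> float:
    """Higher scores surface first: fresh, reliable, relevant reports win."""
    freshness = max(0.0, 1.0 - report.age_minutes / 120.0)  # decays over 2 hours
    return (0.4 * report.source_reliability
            + 0.4 * report.relevance_to_mission
            + 0.2 * freshness)

inbox = [SigintReport(0.9, 0.3, 10),
         SigintReport(0.6, 0.9, 45),
         SigintReport(0.8, 0.8, 200)]
for r in sorted(inbox, key=priority, reverse=True):
    print(f"{priority(r):.2f}  {r}")  # the human acts on (or ignores) the queue
```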

Conclusion

In conclusion, this article has challenged the assumption that junior leaders are inherently open to the use of autonomous systems and instead demonstrated that a significant majority would be unwilling to deploy alongside fully autonomous LAWS. This article has demonstrated that comparative willingness to deploy in MUM-Ts among this cohort is influenced by a range of concerns and incentives; however, it has also demonstrated that allowing a robot to use lethal force retains a discursive weight that influences a significant minority to claim that they would be uncomfortable deploying alongside robots that have comparable operational independence to systems that are already in use by the ADF.

This article identified that the most important factors influencing a respondent’s willingness to deploy in a MUM-T are the perceived safety, accuracy, and reliability of the autonomous system, and that the potential to reduce harm to civilians, allied forces, and ADF personnel is the most persuasive benefit. Contrastingly, this data suggests that the resource efficiencies of autonomous systems and their potential to disrupt the defense workforce are significantly less influential for these respondents than they are for strategic planners. Finally, this study highlighted a concerning lack of emphasis on the part of these respondents toward the potential negative emotional and psychological impacts of deploying a robotic weapon system under their responsibility, if not control.

Overall, this article makes two core recommendations based on the underlying data. The first is that autonomous system acclimatization training should be incorporated at all levels of the officer training process and that small-unit leadership tactics training should incorporate robotic units. This leads into the second recommendation: units at the company level and below would be well served by undertaking war games and exercises designed to encourage the Army’s soldiers, NCOs, and officers to experiment and innovate with autonomous systems. Where suitable platforms do not exist or are not readily available, units should be encouraged to run tabletop or proxied exercises for this purpose.

The ADF will only be able to secure and maintain a capability edge in the future if all elements of the military are encouraged to experiment and become comfortable with autonomous and AI-enabled systems. It is this bottom-up, constant experimentation that will keep the military innovating with sufficient speed and agility to regularly reestablish its regional comparative dominance in the deployment of autonomous systems.

Dr. Jai Galliott

Dr. Galliott is director of the Values in Defence and Security Technology Group, School of Engineering and Information Technology, within the University of New South Wales at the Australian Defence Force Academy, where he also sits on the Faculty and University Boards. He also holds appointments as a fellow of the Modern War Institute at West Point and of the Centre for Technology & Global Affairs in the Department of Politics and International Relations at the University of Oxford.



Dr. Austin Wyatt

Dr. Wyatt is a research associate in the Values in Defence and Security Technology Group at the University of New South Wales at the Australian Defence Force Academy. His research concerns autonomous weapons, with a particular emphasis on their disruptive effects in Southeast Asia.

Notes

Acknowledgments and Funding: The authors of this paper have received support from the Australian Government through the Trusted Autonomous Systems Defence Cooperative Research Centre; the Australian Department of Defence; the US Air Force Office of Scientific Research; and the Spitfire Foundation. The views expressed within this paper are those of the authors and do not necessarily represent those of any other party.

Ethics Clearance: Ethical clearance for the original study was provided by the Departments of Defence and Veterans’ Affairs Human Research Ethics Committee.

1 Austin Wyatt, “Charting Great Power Progress toward a Lethal Autonomous Weapon System Demonstration Point,” Defence Studies 20, no. 1 (2020): 1–20.

2 Robert O. Work and Greg Grant, Beating the Americans at Their Own Game: An Offset Strategy with Chinese Characteristics (Washington, DC: Center for a New American Security, 2019).

3 Michael C. Horowitz, “When Speed Kills: Lethal Autonomous Weapon Systems, Deterrence and Stability,” Journal of Strategic Studies 42, no. 6 (2019): 764–88.

4 Campaign to Stop Killer Robots, “Global Poll Shows 61% Oppose Killer Robots,” 2019, https://www.stopkillerrobots.org/.

5 Heather M. Roff and David Danks, “‘Trust but Verify’: The Difficulty of Trusting Autonomous Weapons Systems,” Journal of Military Ethics 17, no. 1 (2018): 2–20.

6 Charli Carpenter, “US Public Opinion on Autonomous Weapons,” Duck of Minerva Blog, 19 June 2013, https://web.archive.org/.

7 Open Roboethics Initiative, “The Ethics and Governance of Lethal Autonomous Weapons Systems: An International Public Opinion Poll” (Vancouver, Canada: Open Roboethics Initiative, 9 November 2015), http://www.openroboethics.org/.

8 Open Roboethics Initiative, “The Ethics and Governance of Lethal Autonomous Weapons Systems.”

9 Michael C. Horowitz, “Public Opinion and the Politics of the Killer Robots Debate,” Research and Politics 3, no. 1 (January–March 2016): 1–8, https://journals.sagepub.com/.

10 IPSOS, “Data for 2017 Campaign to Stop Killer Robots Survey” (2017).

11 IPSOS, “Six in Ten (61%) Respondents across 26 Countries Oppose the Use of Lethal Autonomous Weapons Systems” (news release, 22 January 2019), https://www.ipsos.com/.

12 IPSOS, “Six in Ten (61%) Respondents.”

13 Campaign to Stop Killer Robots, “Global Poll Shows 61% Oppose Killer Robots.”

14 IPSOS, “Six in Ten (61%) Respondents.”

15 This Hague Convention clause aims to offer some protection to individuals caught up in armed conflict even when there is no specific applicable rule of international humanitarian law.

16 Horowitz, “Public Opinion and the Politics of the Killer Robots Debate.”

17 Richard Lim, “These Are the Drones You Are Looking For: Manned–Unmanned Teaming and the U.S. Army,” National Security Watch 15, no. 4 (21 December 2015), https://www.ausa.org/.

18 Mick Ryan, Human-Machine Teaming for Future Ground Forces (Washington, DC: Center for Strategic and Budgetary Assessments, 2018), https://csbaonline.org/.

19 These four scenarios centered on “potentially lethal robots” that

1. need a human operator to control every function of the system;

2. human operators control, but which can independently perform some preprogramed functions;

3. can exercise preprogramed “decision making” in determining how to employ force in predefined areas without the need for direct human oversight; and

4. can imitate human-level decision making ability to create and complete its own tasks in any environment without the need for any human input, learning from its mistakes.

20 Austin Wyatt and Jai Galliott, “Closing the Capability Gap: ASEAN Military Modernization during the Dawn of Autonomous Weapon Systems,” Asian Security 16, no. 1 (2020): 53–72.

21 Roff and Danks, “‘Trust but Verify’.”

22 Roff and Danks, “‘Trust but Verify’.”

23 Such a system was covered under Scenario Three, in which 21.4 percent of Army Officer Cadets were somewhat unwilling, 14 percent were unwilling, and 16.4 percent were unsure. This means that 189 of 365 (almost 52 percent) of these future Army officers expressed discomfort at the prospect of deploying as part of a MUM-T that included such a system.

24 Adam Grissom, “The Future of Military Innovation Studies,” Journal of Strategic Studies 29, no. 5 (2006): 905–34.

25 Chief of Army, Army in Motion: Aide for Army’s Teams (Canberra: Australian Army, 2019), https://www.army.gov.au/.

26 Robin Smith, Robotic & Autonomous Systems Strategy (Canberra: Australian Army, October 2018), https://researchcentre.army.gov.au/.

27 Robert Sparrow, “Drones, Courage, and Military Culture,” in Routledge Handbook of Military Ethics, ed. George Lucas (Oxon: Routledge, 2015).

Disclaimer

The views and opinions expressed or implied in JIPA are those of the authors and should not be construed as carrying the official sanction of the Department of Defense, Department of the Air Force, Air Education and Training Command, Air University, or other agencies or departments of the US government or their international equivalents.