‘Ban the killer robots’ movement could backfire
Unfortunately, efforts to ban so-called killer robots have stigmatized much-needed research on autonomous robots that will be central to increasing economic productivity and quality of life over the next half century, but only if the technology is allowed to develop. Rather than allowing those predicting a techno-dystopia to dominate the debate, policymakers should vocally champion the benefits of autonomous robots — including in the military — and embrace policies designed to accelerate their development and deployment.
Autonomous robots will likely be one of the most important innovations of the coming century. With autonomous robots, factories will be able to increase productivity and better compete with low-cost rivals, mines will be able to improve safety, hospitals will be able to provide better care to patients, and service industries broadly will be able to dramatically cut costs. Substituting robots for human workers will lead to higher productivity, lower costs, higher wages, greater capabilities, and wider availability of services, all without reducing the total number of jobs, since demand expands across the economy in response to increasing supply. Indeed, this has been the case with virtually every major technological innovation, from the printing press to the steam shovel.
The military will also benefit, because substituting robots for soldiers on the battlefield will increase a military’s capabilities while substantially decreasing the risk to its personnel. It may even lead to a reduction in civilian casualties, as autonomous robots could be programmed to engage only known enemy combatants.
While some activists acknowledge the potential upside, they still call for banning these weapons. In 2012, a number of organizations came together to form the Campaign to Stop Killer Robots, a coalition seeking to “preemptively ban fully autonomous weapons,” and in 2015, the United Nations hosted its second meeting to consider a formal ban or other restrictions on the technology. The principal argument of detractors is that nothing short of a complete ban on autonomous weapons would stop an eventual arms race that would result in these weapons becoming available to everyone from Mexican drug kingpins to Afghan warlords. Moreover, they argue that allowing an autonomous robot to make life-and-death decisions “crosses a fundamental moral line,” would result in a lack of accountability for civilian deaths, and involves choices too ethically complex to delegate to a machine.
Undoubtedly, any military use of lethal autonomous weapons should first require a thorough review to ensure that their use complies with international ethical commitments and the rules of war. But that is true of virtually every military invention on the battlefield, from drones to biochemical weapons. As the International Committee of the Red Cross has stated, “[T]he crucial question does not seem to be whether new technologies are good or bad in themselves, but instead what are the circumstances of their use.” Indeed, the U.S. Department of Defense has already created preliminary guidelines for autonomous weapons, and further development of such guidelines will help calm fears, protect soldiers and civilians, and pave the way for continued development of robotics.
Incidentally, this may be much ado about nothing. In the future, the difference in effectiveness between an autonomous weapon and a semiautonomous weapon that requires human approval before taking lethal action might be so insignificant that military leaders will be content with the latter. It is simply too soon to know. But it does seem clear that a ban on autonomous weapons would not stop the development of technology that is only a line of code away from being fully autonomous.
Thus, the only real impact of banning killer robots today would be a potential decline in investment in research and development. Military investment has long been a key catalyst for developing and commercializing new technologies with important commercial uses, and robotics is likely to prove no different. The military is already investing in many applications of autonomous robots, such as autonomous pack mules to transport supplies and autonomous robotic medics to carry wounded soldiers to safety. These technologies will be put to use in nonmilitary environments as well. For example, the type of robot used to transport injured soldiers can also be used to lift and move civilian patients in hospitals, thereby addressing one of the leading causes of workplace injuries among nurses. The pursuit of lethal autonomous weapons will likely produce many other research advances in autonomous robots.
Policymakers should encourage military investment in autonomous robots, not only to improve national defense, but also to accelerate the development of autonomous robots for other sectors. The battle to ban autonomous weapons works against the critical goal of building autonomous robots that will improve the lives and livelihoods of the human race. Yet the anti-robot movement will only continue to pick up steam as activists tap into fears about everything from displaced workers to loss of human interaction. While doom-and-gloom predictions of Terminator-style robots wreaking havoc on humanity have captured the popular imagination, these stories are more appropriate for Hollywood movie scripts than serious policy debates. Policymakers need to be the voice of reason by pushing back against the hyperbole and instead supporting policies to advance this critical and transformative technology.
Daniel Castro is vice president of the Information Technology and Innovation Foundation, a think tank focusing on the intersection of technological innovation and public policy.