As the recent past has shown, the trend in modern warfare is towards asymmetric conflicts between guerrilla groups and the technologically superior armies of industrialized nations. Additionally, industrialized nations are increasingly researching and deploying robotic technology to aid their armed forces. “Robots as Weapons in Just Wars” by Marcus Schulzke and “Robots, Trust and War” by Thomas W. Simpson both examine the inevitable clash between robotics and modern modes of warfare.
Both authors argue, for differing reasons, that fully autonomous robots are counterproductive in counter-insurgency operations because they cannot effectively win the hearts and minds of the local population. Both analyses are overly pessimistic: armies consisting primarily of robot warriors can successfully wage counter-insurgency operations and win the hearts, minds, and respect of the local population. Simpson and Schulzke alike believe that autonomous robots, meaning robots capable of complicated thought processes, damage the “hearts and minds” approach to winning a counter-insurgency war.
Schulzke cites several examples of ways in which robots are inferior to human soldiers: humans “distribute food and medical care, and they build relations with community leaders” (Schulzke 304). Additionally, “Local populations can therefore interpret the presence of foreign soldiers as something positive or negative depending on how the soldier acts”. This analysis is a vast oversimplification; Schulzke focuses too narrowly on human soldiers. In reality, locals can interpret the presence of foreign influence generally, not merely that of soldiers.
Given language barriers, robots could be programmed to communicate with indigenous populations more effectively than humans can. Furthermore, robots could perform humanitarian functions, like distributing food and medical supplies, just as easily as human soldiers. I fail to see why locals could not observe the actions of robots and determine whether they are a positive or negative force in their community. Schulzke generalizes that robots create a negative impression on an indigenous population, seeming unfair and more hostile than human soldiers, yet he fails to provide any evidence to support these claims.
If robots are as bad as he claims, Schulzke should have been able to provide a specific example of their negative impact in a modern conflict like the war in Afghanistan. In a similar analysis, Simpson asks, “could those whom robot warriors fight amongst trust them” (Simpson 327). He defines trust as a dynamic interaction between two parties over a given matter; normative trust, his primary focus, is based on interactions between two groups and is not strictly the reliance of one group on another.
Additionally, trustworthiness must be rational, grounded in one group’s long-term self-interest, goodwill, or debt to another, and in “acting from the right motive”. Simpson’s argument does not preclude locals being able to trust robot warriors; neither of these definitions relies on any exclusively human characteristic. His own definitions create a loophole through which indigenous populations can rationally trust robot warriors. According to Simpson, motivation and competence are the primary components of trustworthiness.
Since the practical physical applications of robots are not debated here as they are in Schulzke’s essay, Simpson chooses to closely examine the motivation of robot warriors. He feels it would be irrational for a host population to trust robots unless the robots have character and are capable of acting from a motive. Lacking a motivating force eliminates robots’ ability to make practical judgments, and should therefore eliminate the locals’ trust. Consequently, when robots begin to act from internal motivation, people will begin to form relationships with and care for them.
I agree that once these criteria are met, robots will be able to earn the trust of locals and become an extremely effective counter-insurgency tool. Ironically, Simpson argues that if humans can view robot warriors as moral, then there will be no purpose in sending robots to fight in place of humans, because at that point “robot life is [not] any less valuable than a human life” (Simpson 333). Schulzke, correctly, never argues for the importance of an intelligent robot warrior’s “life” over a human’s; this is the defining difference between Schulzke and Simpson.
Assuming that a robotic “life” even parallels a human life is a foolish notion; every human life is unique and bears no resemblance to the mass-produced, service-oriented existence of robot soldiers. Simpson tries to counter Schulzke’s argument by saying that any “value-grounding property” humans possess should be ignored. This “value-grounding property” should definitely not be ignored. First and foremost, it is the very reason we are interested in sending robots in our stead.
War is a terrible experience for any person to endure, and it should be our goal as a moral society to eliminate it from the human experience. Even the most humanlike robots will never be human; being human can never be replicated technologically. It is very difficult to put a price on human life, but is it not obvious that its price should at least be greater than a robot’s? Simpson believes that robots’ expendability raises another interesting situation. He provides a wonderful example of an engaged couple signing a pre-nuptial contract, and goes on to explain how it relates to an army of robot warriors.
Simpson argues that a government sending an army of robot warriors has not thereby committed to seeing a situation through to the end. A robot army is a way to mitigate risk by minimizing the resources endangered in an operation. The local population may read a government’s decision to send robot warriors as a signal that it is not committed to successful completion, since robot warriors are an expendable asset. Human soldiers are an expendable asset as well, however; they simply carry a much higher price with their expendability.
It is illogical to assume that robot warriors, much less an entire robot army, will be inexpensive. Does investing a high monetary value in an operation not lend itself to creating a level of trust between two parties? The United States spent billions of dollars defending Kuwait and Saudi Arabia in the First Gulf War. The vast financial resources involved in that war demonstrated our government’s commitment and trustworthiness. As Schulzke says, “The US military already spends enormous sums of money on disposable weapons” (Schulzke 301).
Does this mean that our government has already made itself untrustworthy in the eyes of our allies? Furthermore, Simpson said initially that trustworthiness can come from self-interest, goodwill, and debt to others. Are goodwill and self-interest no longer reasons for a host population to trust a robot warrior army? Mitigating risk may give an indigenous population reason to be more hesitant to trust a robot army, but it certainly should not eliminate that army’s trustworthiness, as Simpson claims.
One common thread of both authors is their reluctance to wholeheartedly adopt robot warriors, based on the belief that robot warriors will make the choice to go to war much less costly and more appealing. I agree exactly with how Simpson phrases his stance: “So long as it is the enemy’s soldiers and civilians who do the dying, the moral cost of lost life seems to count for much less” (Simpson 336). Armies composed of robot warriors could realistically make a nation’s declaration of war look like an investment strategy.
Lowering the moral cost of war could be a dangerous proposition. If we continue to place great importance on human life, the moral cost of war will remain high and robots will remain a viable alternative to human soldiers. As Simpson and Schulzke agree, robot warriors and asymmetric conflicts are destined to cross paths. However, robot warriors can and will be an effective tool in counter-insurgency operations. Robot warriors will be able to win the hearts, minds, and respect of local populations through their actions and the motivations of the country they represent.
Robots will be able to perform humanitarian missions in addition to effectively fighting the enemy. One day, we will learn to love robots, but it will not simply be because we trust them. We will love them because they have spared us the pain and suffering of sending our own sons, daughters, husbands, and wives into battle. They will become an invaluable tool in our arsenal, not only because they are effective at waging war, but because they will reduce the human cost of war. We will love them, but we will never place a greater importance on their “lives” than we do on the lives of other human beings.