AI on the battlefield

NATO forces - fighting machine / Photo: Reuters

The fraught questions about morality in warfare have become even more complex as artificial intelligence comes into play.

Ethical issues in artificial intelligence (AI) are worrying quite a few experts in the field. A special committee formed to investigate the question, headed by Prof. Karine Nahon of the Herzliya Interdisciplinary Center, published its recommendations in late 2019. The report described the risks of irresponsible use of AI tools and the importance of taking ethical considerations into account when technological products are developed.

The report, however, dealt solely with the civilian aspects of the question. The committee deliberately ignored considerations related to the integration of AI in military applications, above all the value that is perhaps the most mysterious of all - human life. How important are ethical considerations in this discussion? A document published by the US Department of Defense in 2018 cited ethics, together with safety, as one of the areas in which the Department of Defense was taking the lead.

Last December, the Herzliya Interdisciplinary Center held a conference on AI and modern warfare. Microsoft Israel commercial legal counsel Adv. Ben Haklai outlined the boundaries of the discourse on the matter, which is only in its infancy. "Governments and research entities began discussing these matters in depth only in the past two years, but only now is the research starting to assume a concrete character," he said. The conference was part of a weeklong program in which the center hosted a number of delegations from Europe and the US in the framework of a clinic on international criminal and humanitarian law. Students from Israel and countries including the US, Italy, China, Poland, and Germany participated.

Difficult to investigate decision-making

It appears that almost every Israeli is personally familiar with the significance of ethical considerations in AI. When an Iron Dome battery "decides" not to launch an interceptor at a rocket fired from the Gaza Strip because the rocket is not going to hit an inhabited area, the system is making an ethical decision. The Iron Dome case, however, is relatively simple, both because the dilemma is human life versus property damage and because every battery also has soldiers supervising its activity. The system also has the intelligence and precision to predict the rocket's expected point of impact very accurately. The situation becomes a little more complex in the case of another system with autonomous capabilities used by the IDF - the Trophy tank protection system. Trophy operates right on the battlefront, and its operation is liable to endanger infantry moving nearby.
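To make the kind of decision rule involved concrete, here is a minimal illustrative sketch in Python of an intercept rule of the sort described above. It is not the actual Iron Dome logic; the data fields, confidence threshold, and fallback behavior are all assumptions made for illustration.

```python
# Illustrative sketch only: a simplified intercept rule of the kind described
# above. The threshold, the classification of impact areas, and the fallback
# behavior are assumptions, not the real system's logic.
from dataclasses import dataclass

@dataclass
class TrajectoryEstimate:
    predicted_impact: tuple[float, float]  # estimated landing coordinates
    impact_in_populated_area: bool         # derived from mapping data
    confidence: float                      # 0.0 - 1.0

def should_intercept(estimate: TrajectoryEstimate,
                     min_confidence: float = 0.9) -> bool:
    """Launch an interceptor only when the rocket is predicted, with high
    confidence, to land in a populated area; otherwise let it fall."""
    if estimate.confidence < min_confidence:
        # Prediction is uncertain: err on the side of intercepting.
        return True
    return estimate.impact_in_populated_area

# Example: a rocket predicted to land in open ground is not intercepted.
open_field = TrajectoryEstimate((34.5, 31.4), impact_in_populated_area=False,
                                confidence=0.97)
print(should_intercept(open_field))  # False
```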

How should weapons that will operate in even more complex surroundings be programmed, such as an unmanned vehicle used in operations in built-up areas? How do you program an automatic weapon like those placed on the border between North and South Korea - a weapon that has to detect the intentions of trespassers and "decide" whether to kill them or let them pass? There are no clear and absolute answers to these questions yet. This is exactly the reason for discussing the matter.

Haklai explained in his lecture how the various possible situations in a military context are becoming more complicated. Iron Dome is an example of a fairly simple situation in which a machine makes a decision, but human soldiers are able to intervene in the decision and reverse it.

The easiest situation is one in which the machine only recommends an action; only a person can decide whether to execute it. The most difficult and challenging situation, on the other hand, is one in which people are out of the picture, and the machine makes decisions by itself. This situation will become more and more relevant as technology makes completely autonomous actions by drones, unmanned aerial vehicles (UAVs), and other robots possible.
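The three situations Haklai describes are often labeled "human in the loop," "human on the loop," and "human out of the loop." A compact sketch, with names and structure that are mine rather than any military standard, shows how the human gate shrinks at each level:

```python
# Sketch of the three autonomy levels described above; the names and the
# boolean gates are illustrative, not a formal doctrine.
from enum import Enum, auto

class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = auto()   # machine only recommends; a person must approve
    HUMAN_ON_THE_LOOP = auto()   # machine acts, but a person can veto or reverse
    HUMAN_OUT_OF_LOOP = auto()   # machine decides and acts entirely on its own

def execute(action: str, level: AutonomyLevel,
            approved: bool = False, vetoed: bool = False) -> bool:
    """Return True if the action goes ahead under the given autonomy level."""
    if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
        return approved          # nothing happens without explicit approval
    if level is AutonomyLevel.HUMAN_ON_THE_LOOP:
        return not vetoed        # proceeds unless an operator intervenes
    return True                  # fully autonomous: no human gate at all
```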

The Samsung-made SGR-A1 robots being placed in the demilitarized zone between North and South Korea are to a large extent an example of just such a situation. The robot combines an infrared camera with a voice identification system and can choose between a warning action and shooting rubber bullets or other ammunition. It can act completely autonomously, but it still includes an option for a human operator to take command. Haklai explained that the SGR-A1 is a good example of the ethical dilemmas facing its programmers - "whether to program the machine to shoot at everything that moves, or have it first try to verify that a person is involved, and if a person is involved, whether he or she is raising his or her hands in surrender."
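A hypothetical sketch of the kind of escalation logic such a sentry robot would need illustrates the dilemma Haklai describes. This is not Samsung's implementation; the sensor fields, checks, and response names are assumptions made for illustration only.

```python
# Hypothetical escalation logic for a sentry robot; all fields and responses
# are assumptions for illustration, not the SGR-A1's actual behavior.
from dataclasses import dataclass

@dataclass
class Detection:
    is_human: bool                 # classifier output from the infrared camera
    hands_raised: bool             # posture check: surrender gesture
    responded_to_challenge: bool   # reply captured by the voice-recognition system
    operator_online: bool          # whether a human operator can take command

def choose_response(d: Detection) -> str:
    if d.operator_online:
        return "defer_to_operator"        # human-on-the-loop path
    if not d.is_human:
        return "ignore"                   # e.g. an animal tripping the sensor
    if d.hands_raised or d.responded_to_challenge:
        return "hold_fire_and_warn"       # treat as surrendering / compliant
    return "warning_then_rubber_bullets"  # escalate only as a last resort

print(choose_response(Detection(True, True, False, False)))  # hold_fire_and_warn
```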

According to Haklai, what is special about the discussion is that "in contrast to civilian applications, the party responsible for the consequences of using autonomous weapons is first of all the entire country, after which the question arises of who exactly in the chain of command is made responsible."

Responsibility is not the only aspect. In order to supervise the activity of such weapons, an ability to retroactively investigate decision-making is necessary. "All of the engineers will jump in and remind everyone that AI is a black box, meaning that the logic behind every decision taken by the machine cannot be understood, because it is based on an enormous quantity of data in which a human being can find no logic. The neural networks in deep learning systems in effect generate an infinite decision-making tree or diagram," Haklai explains. The problem begins with the fact that "most human societies will not accept a situation in which killing takes place without it being possible to explain why it took place."

Haklai compared the civilian challenges to the military ones through the example of the transition from an ordinary car to an autonomous one. While the public is willing to accept a reasonable rate of fatalities as part of the price of doing business, he says, "It will be less willing to accept a case in which a drone bombs a village without knowing the reason for it."

Haklai says that a future solution could lie in add-ons that decode the way AI decisions are made. "There are already startups working on algorithms that can provide a reasonable explanation for AI decision-making. Without this capability, it will be difficult to improve the original algorithms."
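One widely used explainability technique of the kind such startups build on is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops; a large drop means the decision leaned heavily on that feature. A minimal, library-agnostic sketch follows; the model and data are placeholders.

```python
# Permutation importance: how much does accuracy drop when a feature's
# information is destroyed by shuffling? Model and data are placeholders;
# any object with a .predict(X) method works.
import numpy as np

def permutation_importance(model, X: np.ndarray, y: np.ndarray,
                           n_repeats: int = 10) -> np.ndarray:
    """Return the mean accuracy drop per feature when that feature is shuffled."""
    baseline = (model.predict(X) == y).mean()
    rng = np.random.default_rng(0)
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            X_shuffled = X.copy()
            # Destroy feature j's information while keeping its distribution.
            X_shuffled[:, j] = rng.permutation(X_shuffled[:, j])
            acc = (model.predict(X_shuffled) == y).mean()
            drops[j] += (baseline - acc) / n_repeats
    return drops  # higher value = the decision depended more on that feature
```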

Integrating cybersecurity in development

All of these challenges are becoming even more difficult because of an inherent handicap facing developers of AI-based weapons systems. The quality of the insights that deep learning-based systems are capable of producing depends on the quality and quantity of the data that they can analyze, and on which they are trained. Haklai explains that the problem starts when there are not enough data. Mobileye's sensors are installed on millions of cars traveling roads all over the world, feeding countless data points about events into the system. These data enable the system to constantly learn and improve.

The same is true of the autonomous cars of companies like Google, which travel next to cars driven by people, gather data, and learn. On what data will AI weapons systems train? Will thousands of precious combat pilot hours be spent generating it? The solution found by the US Department of Defense is to cooperate with the private sector and use simulators, such as Microsoft's AirSim, to train algorithms under laboratory conditions as a substitute for field trials.
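AirSim is open source and exposes a Python API; a minimal sketch of a session that generates training frames in place of a field trial might look like the following. It assumes the `airsim` package and a running simulation; the camera name, flight parameters, and file handling are illustrative.

```python
# Minimal sketch: fly a simulated drone in AirSim and capture a camera frame
# that can be stored for later model training. Assumes the `airsim` package
# and a running simulator; parameters are illustrative.
import airsim

client = airsim.MultirotorClient()   # connect to the simulator
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

client.takeoffAsync().join()
client.moveToPositionAsync(0, 0, -20, 5).join()  # climb to 20 m (NED frame), 5 m/s

# Request a compressed scene image from camera "0".
responses = client.simGetImages([
    airsim.ImageRequest("0", airsim.ImageType.Scene, False, True)
])
with open("frame_000.png", "wb") as f:
    f.write(responses[0].image_data_uint8)

client.armDisarm(False)
client.enableApiControl(False)
```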

Finally, Haklai explains, when military needs are involved, the cyber aspects must be thoroughly understood at the development stage of AI-based systems. For example, Haklai asserts that tools for detecting the use of deepfake technologies, which can create video or audio clips that appear to be real, should be included. Unless these tools are built in from the beginning, Haklai says, it will be difficult for ordinary systems to understand that efforts are being made to deceive them.
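What such a built-in safeguard could look like is sketched below: media inputs pass an authenticity check before the main system is allowed to act on them. The `DeepfakeDetector` class and the threshold are hypothetical placeholders, not a real library.

```python
# Hypothetical ingestion gate: a clip is only passed downstream if a detector
# judges it unlikely to be synthetic. DeepfakeDetector is a placeholder.
class DeepfakeDetector:
    """Placeholder for a model trained to score how likely a clip is synthetic."""
    def synthetic_probability(self, media: bytes) -> float:
        raise NotImplementedError  # would wrap an actual detection model

def ingest(media: bytes, detector: DeepfakeDetector,
           max_synthetic_prob: float = 0.2) -> bool:
    """Accept the clip for downstream processing only if it passes the check."""
    score = detector.synthetic_probability(media)
    if score > max_synthetic_prob:
        # Flag for human review instead of feeding a possibly fabricated
        # clip into the decision-making pipeline.
        return False
    return True
```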

If that sounds complicated, it is only the beginning. These solutions are difficult to apply even within a single cultural framework, but what happens when the entire hierarchy of values on which they are based changes? Haklai gives the example of the difference between the US and China. What is reasonable in one society is not necessarily reasonable in a different society. "Extensive use of cameras for facial identification and a credit rating for every person are perceived as a reasonable way of properly managing a country. This difference in perceptions is definitely a matter of principle."

Commentary: Countries should not leave the stage to the technology giants

The involvement of companies like Microsoft in designing the rules for managing the world of technology is highlighted most strikingly by Microsoft's role in the development of AI tools in a military context. Algorithms must be trained on large quantities of data, but since the systems cannot be trained in real situations, it is necessary to use simulators.

Since private companies control the way the simulators are programmed, they are in effect involved in designing how future weapons will be used. This is not mere speculation. This ability to exert influence is consistent with the way the company thinks. Microsoft president Brad Smith warned in November 2019 that artificial intelligence was liable to become a weapon that is hard to counter, unless the technology giants build restrictions for preventing misuse into the technology.

It is very possible that Microsoft and other companies developing such simulators will construct them according to rules and restrictions set for them by the state, but there is also another possibility - that some of those principles of action will be the result of work and thought inside the corporations. "Corporate legislation" consists of rules that a corporation sets for itself, and almost every web surfer has personally encountered them. When Facebook blocks a user who published racist incitement, we understand the legal basis for the "punishment." What happens, however, when a user is blocked because he or she sent too many friend requests and violated the "community standards"? These are rules that Facebook enforces on its users, and they sometimes affect their lives more than state laws. "Something interesting is happening here. Legislation by corporations is becoming more important than legislation by governments," says a top lawyer at one of Israel's largest and most established law firms.

It is well known, and studied at law schools, that legislators respond and adapt to technological progress very slowly. This is especially true in the case of AI, among other things because of gaps in regulators' technological knowledge. Their inability to catch up means that advanced thinking about regulation and ethics takes place in the offices of the technology giants, instead of in the offices of regulators or committees of elected officeholders.

In the absence of a better discourse incorporating public representatives in addition to law and technology experts, the technology companies will be the ones shaping the rules for using these technologies. Where AI is concerned, a technology whose consequences for our lives are unimaginably far-reaching, it is all the more dangerous to let the cat guard the cream.

Published by Globes, Israel business news - en.globes.co.il - on February 9, 2020

© Copyright of Globes Publisher Itonut (1983) Ltd. 2020
