Drivers for the Deployment of Lethal Autonomous Weapons Systems

This November I spoke at the United Nations Group of Governmental Experts (GGE) meeting on Lethal Autonomous Weapons Systems (LAWS), which took place in Geneva, November 13–17.

Ambassador Amandeep Gill, the Chairman of the meeting, asked me to speak on the military effects panel and address the UN member nations on some of the drivers for the deployment of autonomous systems technologies, as well as asymmetric force application within this context.

The following represents the content of my address:

(Picture: Military effects panel speaking to the UN, November 14, 2017)

Drivers for the deployment of lethal autonomous weapons systems technologies can be categorized into three areas: Trust, Culture, and Availability.

Overview: Trust, Culture & Availability

First, we need to have trust in the technology, which includes having confidence in its ability to perform as expected within parameters outlined by humans, so that we can rely on it before deploying it.

Then we need to culturally accept the technology before using it. Culture in this sense does not refer to a particular country; rather, it refers to an organizational, institutional, or societal technology culture, particularly generational cultural differences.

After there is trust in the technology to operate as expected, and the cultural environments in which we operate accept its use, the technology still cannot be deployed unless it is technologically available.

— -> TRUST

Within the category of trust, there are also information security drivers such as confidentiality, integrity, and availability. Within the context of LAWS, confidentiality would mean limiting information and access to only authorized personnel. Integrity would apply to the data being acquired (especially data directly involved in critical decision-making infrastructure). Availability would refer to reliable access to the autonomous systems for monitoring or operational use. Authenticity is equally important and would apply to the authenticity of the machine learning environments which are used to create operational parameters and teach the autonomous system how to adapt and perform its instructions.

The security concern is that hackers can manipulate or alter data in a way that makes autonomous systems malfunction or not operate as expected. Even the potential doubt that a machine learning environment, or the data it has used, has been hacked would undermine confidence in the capacity of the autonomous system to perform as expected. In his 2016 keynote at the RSA Conference, Admiral Rogers (Commander, USCYBERCOM/NSA) said that one of his biggest fears is data manipulation, where data, instead of being stolen, is simply altered. This would mean that we couldn't even trust what we see on our screens. Or, in the case of lethal autonomous weapons systems, we couldn't trust that the algorithm or database has not been tampered with.

Four main areas where confidence could grow in Artificial Intelligence / Autonomous Systems:

  1. Cost reduction: through process optimization and efficient management of limited resources. Ex: Google DeepMind reduced Google's data center cooling bill by 40%, freeing up funds to invest in other areas.
  2. Reducing burden on the soldier and/or commander, both physically (lifting equipment, carrying fallen soldiers to safety, etc.) and intellectually (intelligence, reconnaissance, analysis, decision support, etc.).
  3. Successful training and testing (in synthetic and non-synthetic environments). Autonomous weapons systems should have the ability to: operate on the intent of the commander with minimal to no call-back for additional information or permissions; assimilate "knowledge transfer" from other autonomous systems' data, experience, decisions, etc. (speed can be a decisive advantage when operating in a high-tempo environment); and operate within defined moral bounds and limits assigned to the algorithm, complying with the rules of distinction and proportionality, while finding efficient solutions to achieve mission objectives. (Research shows that neither AI alone nor humans alone are as effective as human-AI teaming.)
  4. The AI/AS algorithms need to be explainable. The ability to explain, and confidently fix, an algorithm as needed will lead to greater trust in expected outcomes.


— -> CULTURE

Each generation will view technology a bit differently from the one before it. This cultural change will also play a role in driving the use of new technology.

Thinking of generational-cultural perceptions, as well as institutional cultural differences, the following food-for-thought comments relate to the "Evolution of Digital Technology Culture" as a contributing driver of the use of autonomous weapons systems:

  • There may be more acceptance of incorporating changing technology on the battlefield in an effort to achieve decisive advantage. Day-to-day technology may come to be seen as one with artificial intelligence, and as a justifiable, normal, and fair use of military technology to achieve mission success. Leading AI expert Andrew Ng sees artificial intelligence as the new electricity: every device will have no value without cognition, just as the devices we use and rely on today would have no value without electricity.
  • Moving forward, as digital natives become commanders, their perception of what types of weapons are acceptable in the battlefield may be different from senior military and political leaders today.
  • An artificial intelligence 9/11-style attack conducted by a terrorist organization could push culture and society to embrace machine learning as a defense against AI weapons, with a renewed drive to make lethal autonomous weapons systems fit within the accepted military technologies in use.


— -> AVAILABILITY

Disruptive technologies such as the internet, mobile phones, drones, 3D printing, etc., are becoming more available and accessible at a lower price point around the world. The availability of technology applies to everyone, whether nations who abide by international norms, organized non-state actors that do not, or individuals.

The 6 Ds of Exponential Technological Growth by Steven Kotler and Peter Diamandis also apply to artificial intelligence.

Technologies start on the exponential growth path by first becoming digitized; they then go through a deceptive stage, which leads into a disruptive stage. After the technology disrupts a market, it becomes demonetized and then dematerialized. The last step, after it has been dematerialized, is that it becomes democratized and accessible to anyone.

For example, books got on the exponential technology growth cycle when they became digitized into PDFs and e-books. However, digitization did not disrupt the existing (physical) bookselling market until there was enough critical mass: the culture of reading Kindle/e-books and the increase in personal computer ownership eventually disrupted the physical stores selling books. After the market was disrupted and bookstores closed, the cost of books declined and access to books became cheaper (demonetized). As reading moved from physical books to the cloud, books became dematerialized. Then, finally, anyone who had access to the internet had access to books for free, and books became democratized.

Many technologies, like artificial intelligence, are on this path.

- The democratization of technologies will also reach non-state political actors who do not abide by international norms, such as terrorists and criminals.

- In the end, technology won’t matter as much as creativity in its use, which will bring decisive advantage.

— -> Asymmetric Force Application

Trust, Culture, and Availability will also be relevant to non-state hostile actors, such as terrorists and criminals, as they consider deploying autonomous weapons systems.

However, for them the order will be Culture, Availability, and then Trust.

  1. Culture — in terms of the accepted institutional modus operandi of the non-state actor: what types of autonomous weapons use are acceptable for them to achieve their political objectives.
  2. Availability — in terms of what types of technologies non-state actors (e.g., terrorist groups) can get access to and deploy.
  3. Trust — in terms of the confidence they have in the autonomous system operating as they expect. Their considerations for second- and third-order effects will not be as comprehensive as those of nation-state actors.

The UN GGE on Lethal Autonomous Weapons Systems (LAWS) will meet again in 2018 to take the discussion a step further. As they meet to discuss the regulation of LAWS, it will be important to discuss the "point of autonomy" at which autonomy would be acceptable, and under what conditions.

— —

Dr. Lydia Kostopoulos' (@LKCYBER) work lies at the intersection of strategy, technology, education, and national security. Her professional experience spans three continents, several countries, and multi-cultural environments. She speaks and writes on disruptive technology convergence, innovation, tech ethics, and national security. She is an advisor to the AI Initiative at The Future Society at the Harvard Kennedy School, participates in NATO's Science for Peace and Security Program, is a member of the FBI's InfraGard Alliance, and during the Obama administration received the U.S. Presidential Volunteer Service Award for her pro bono work in cybersecurity.

Experimenter | Strategy & Innovation | Emerging Tech | National Security | Wellness Advocate | Story-telling Fashion | Art #ArtAboutAI →