Public Statement Submission to the Defense Innovation Board

[Those interested in submitting a statement can do so at this link.]

Public Statement Submission to the Defense Innovation Board (DIB): In response to the open call for public comments on:

“The Ethical and Responsible Use of Artificial Intelligence for the Department of Defense (DoD)”

April 2, 2019

Dear Defense Innovation Board,

I thank you for making an active effort to include the public in your exploratory investigation into guiding AI Principles that will serve as a frame of reference for the Secretary of Defense. This act of transparent inclusiveness, combined with an open and accessible online platform for submitting comments, is an example of co-creation between the civilian population and top military advisors for defense innovation on one of the most challenging ethical technological developments of our time. I hope other countries will look to this example and reach out to their citizens for their thoughts.

My comments below address the Defense Innovation Board’s objectives for the AI Principles,

“Ultimately, these AI Principles should demonstrate DoD’s commitment to deter war and use AI responsibly to ensure civil liberties and the rule of law are protected.”

with what I perceive to be some practical items for consideration:

1. Diversity in Technological Creation as an Imperative

2. Algorithmic Uncertainty as a Known Unknown

3. Technology Ethics as a Culture

1. Diversity in Technological Creation as an Imperative

2. Algorithmic Uncertainty as a Known Unknown

Artificial Intelligence and Weapons Systems:

At the United Nations Convention on Certain Conventional Weapons (CCW) meeting on Lethal Autonomous Weapons Systems, several words were used to describe confidence that artificial intelligence will perform as intended; confidence, reliability, and trust were the most popular.

The algorithm should be trusted to perform within a range of expected effects, despite whatever algorithmic imperfections it may have. The ethical questions will arise in managing the algorithmic risk from a values and legal perspective. Some due diligence aspects will involve algorithm stress testing (e.g., in an AI sandbox) and the use of generative adversarial networks to test its responses to various inputs, as sketched below.
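To make this concrete, here is a minimal, hypothetical sketch of what sandbox stress testing could look like. Random noise perturbation stands in for the adversarial inputs a GAN might generate, and the test checks that the algorithm's output stays within an expected tolerance of its baseline. All names (`toy_model`, `stress_test`, the thresholds) are illustrative assumptions, not any fielded system.

```python
# A minimal, hypothetical sketch of algorithm stress testing in a sandbox:
# perturb a known input with increasing noise and check that the model's
# outputs stay within an expected range. `toy_model` is a stand-in, not any
# particular system.
import numpy as np

def toy_model(x: np.ndarray) -> float:
    """Stand-in for a deployed algorithm: scores an input in [0, 1]."""
    return float(1.0 / (1.0 + np.exp(-x.mean())))

def stress_test(model, baseline: np.ndarray, noise_levels, trials=100, tolerance=0.15):
    """Report how far noisy variants of a known input drift from the baseline score."""
    base_score = model(baseline)
    rng = np.random.default_rng(0)
    results = {}
    for sigma in noise_levels:
        drifts = [abs(model(baseline + rng.normal(0, sigma, baseline.shape)) - base_score)
                  for _ in range(trials)]
        results[sigma] = {"max_drift": max(drifts),
                          "within_tolerance": max(drifts) <= tolerance}
    return results

if __name__ == "__main__":
    baseline_input = np.zeros(64)  # a known, well-understood input
    report = stress_test(toy_model, baseline_input, noise_levels=[0.1, 0.5, 2.0])
    for sigma, r in report.items():
        print(f"noise={sigma}: max drift={r['max_drift']:.3f}, ok={r['within_tolerance']}")
```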

Algorithmic Uncertainty as a New Part of Battle Damage Assessments:

If artificial intelligence is to be embedded in weapons systems, then battle damage assessments should include an element that captures some of the uncertainty and the potential range of collateral damage. Just as with other weapons systems, the operational planner would identify the AI-enabled weapons system's capabilities and its range of known limitations, in this case, limitations of the algorithm.
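As an illustration, a BDA record could carry an explicit algorithmic-uncertainty element alongside its conventional fields. The sketch below is a hypothetical data structure (Python 3.10+); the field names are assumptions for demonstration, not an existing DoD schema.

```python
# A hypothetical sketch of how algorithmic uncertainty might be recorded as
# part of a battle damage assessment (BDA). Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class AlgorithmicUncertainty:
    model_version: str
    confidence: float                               # model's self-reported confidence, 0.0-1.0
    known_limitations: list[str] = field(default_factory=list)

@dataclass
class BattleDamageAssessment:
    target_id: str
    observed_effect: str
    collateral_damage_range: tuple[float, float]    # low/high estimate
    uncertainty: AlgorithmicUncertainty | None = None  # present for AI-enabled systems

bda = BattleDamageAssessment(
    target_id="T-042",
    observed_effect="functional damage",
    collateral_damage_range=(0.0, 0.2),
    uncertainty=AlgorithmicUncertainty(
        model_version="atr-2.3",
        confidence=0.81,
        known_limitations=["degraded performance in low-visibility imagery"],
    ),
)
print(bda)
```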

Algorithmically Enabled Fires, Edge Computing and the Internet of Battlefield Things:

Artificial intelligence has the potential to converge with other technologies across war-fighting domains. As the emerging Internet of Battlefield Things becomes more prevalent in operations, with sensors feeding back information and edge computing autonomously executing low-level decision making in real time, there will be many spaces for ethical consideration, particularly in the areas of necessity, proportionality, and distinction.

AI as Decision Support Infrastructure:

There is a large gap between the vast amounts of information produced each day and the ability of human intelligence analysts to collect, process, and fuse it. AI presents itself as a desirable, cost-effective, and seemingly accurate alternative to augment the intelligence disciplines in a way that would produce actionable results orders of magnitude faster than human analysts. Designers of the algorithms should create a feature to indicate the degree of accuracy of the output. In this sense, algorithmically produced intelligence products would carry an “algorithm confidence” rating to help the human decision maker determine how best to use the analysis.
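A minimal sketch of what such an “algorithm confidence” rating could look like in practice follows, with a simple triage rule for the human decision maker. The thresholds and labels are assumptions for demonstration only.

```python
# A minimal sketch of an "algorithm confidence" rating attached to an
# algorithmically produced intelligence product, plus a simple, illustrative
# rule for how a human decision maker might triage it.
from dataclasses import dataclass

@dataclass
class IntelProduct:
    summary: str
    algorithm_confidence: float  # 0.0 (no confidence) to 1.0 (high confidence)

def triage(product: IntelProduct) -> str:
    """Suggest how a human analyst might treat the product based on its rating."""
    if product.algorithm_confidence >= 0.9:
        return "use with routine human review"
    if product.algorithm_confidence >= 0.6:
        return "corroborate with a second source before acting"
    return "treat as a lead only; requires full human analysis"

product = IntelProduct(summary="Possible vehicle staging at grid NV 1234 5678",
                       algorithm_confidence=0.72)
print(triage(product))  # -> corroborate with a second source before acting
```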

Autonomous Targeting:

Operational planners and doctrine developers should rethink the targeting process, Find, Fix, Track, Target, Engage, Assess (F2T2EA), with regard to cognitive weapons systems. F2T2EA is put under further ethical strain when we think about collaborative autonomous systems working in tandem via distributed maneuvers.

[DARPA’s Collaborative Operations in Denied Environments (CODE) program will be an advantage in the continuously contested multi-domain battle environment where decisive speed and agility in maneuver turn denied/contested spaces in favor of friendly forces. While the CODE program explores Unmanned Aircraft Systems, the algorithms produced will provide useful guidance for unmanned sea and space assets in future maneuvers featuring expanded collaborative autonomy.]
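To make the F2T2EA discussion concrete, the sketch below models the cycle as a simple state machine with an explicit human-authorization gate between Target and Engage. The gate placement is an illustrative assumption about where human accountability could be enforced in software, not a statement of existing doctrine.

```python
# A hypothetical sketch of the F2T2EA cycle as a state machine. The
# TARGET -> ENGAGE transition is gated on an explicit human decision.
from enum import Enum, auto

class Phase(Enum):
    FIND = auto()
    FIX = auto()
    TRACK = auto()
    TARGET = auto()
    ENGAGE = auto()
    ASSESS = auto()

TRANSITIONS = {
    Phase.FIND: Phase.FIX,
    Phase.FIX: Phase.TRACK,
    Phase.TRACK: Phase.TARGET,
    Phase.TARGET: Phase.ENGAGE,   # gated: requires human authorization
    Phase.ENGAGE: Phase.ASSESS,
}

def advance(phase: Phase, human_authorized: bool = False) -> Phase:
    """Move to the next phase; refuse TARGET -> ENGAGE without a human decision."""
    if phase is Phase.TARGET and not human_authorized:
        raise PermissionError("Engagement requires explicit human authorization")
    return TRANSITIONS[phase]

phase = Phase.FIND
for _ in range(3):                                # FIND -> FIX -> TRACK -> TARGET
    phase = advance(phase)
phase = advance(phase, human_authorized=True)     # TARGET -> ENGAGE, human in the loop
print(phase)  # Phase.ENGAGE
```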

These speeds are increasingly less human and more machine; this direction appears to be a byproduct of technological advancement rather than a political decision to take humans out of warfare. To keep humans accountable, a conscious effort needs to be made to have algorithmic explainability, technological supply chain transparency in maintenance, logs documenting all machine activity, and clear command responsibility. It would be worthwhile to explore the idea of algorithmic auditing to verify compliance with values and legal guidelines; a sketch of one such mechanism follows below. I see these elements as playing an important role in the DoD’s Third Offset strategy, which highlights five technological-operational components: (1) Deep-Learning Systems, (2) Human-Machine Collaboration, (3) Human-Machine Combat Teaming, (4) Assisted Human Operations, and (5) Network-Enabled, Cyber-Hardened Weapons.
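One hypothetical shape such auditing could take is a tamper-evident activity log: each machine action is hash-chained to the previous entry, so an auditor can detect after-the-fact edits. The entry fields (actor, action, authority) are illustrative assumptions, not an existing logging standard.

```python
# A hypothetical sketch of a tamper-evident machine activity log: each entry
# is hash-chained to the previous one, so modifying any past entry breaks
# every later hash during an audit.
import hashlib
import json
import time

class ActivityLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis hash

    def record(self, actor: str, action: str, authority: str):
        entry = {"ts": time.time(), "actor": actor, "action": action,
                 "authority": authority, "prev": self._last_hash}
        entry_hash = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append((entry, entry_hash))
        self._last_hash = entry_hash

    def audit(self) -> bool:
        """Re-walk the chain; any modified entry invalidates the log."""
        prev = "0" * 64
        for entry, stored_hash in self.entries:
            if entry["prev"] != prev:
                return False
            if hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest() != stored_hash:
                return False
            prev = stored_hash
        return True

log = ActivityLog()
log.record("uas-07", "reacquired track 3141", "mission order 22-104")
log.record("uas-07", "requested human authorization to engage", "ROE annex B")
print(log.audit())  # True while the log is intact
```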

DoD Hotline for Ethical Concerns in the Use of Technology:

Just as there are mechanisms for reporting fraud, waste, and abuse, it should be anticipated that some people may want to voice concerns about ethical aspects of the development of algorithmically enabled military technology or its use. Group-think in a work environment, or fear of retaliation for expressing ethical reservations, could prevent some from voicing their concerns. In these circumstances the DoD Hotline could serve as a channel for processing AI-related ethical violations or concerns.

3. Technology Ethics as a Culture

Military educational environments play an integral role in shaping our soldiers’ mindsets about the ethical parameters within which they are expected to operate. As a former employee of the National Defense University’s College of Information and Cyberspace, I have witnessed firsthand the impact military education has had on seasoned and experienced military officers, who left Ft. McNair with an expanded mindset and renewed resolve regarding the strategic and military problems at hand. Whether for cadets at the academies or officers at joint educational institutions, the JS J7 should strongly consider a new educational requirement around technology ethics that is woven into existing curricula in a holistic way. This will be more effective than imposing yet another stand-alone requirement on an already densely packed curriculum filled with existing requirements that cannot be removed or changed.

The tempo of conflict has been notably increasing, particularly in the cyber domain; once AI is more widely adopted, it will become an accelerant. This will inevitably create more opportunities for miscalculation, which is why ethical paradigms from times when conflict was slower may be strained during decision making at higher tempos. A form of “ethics at speed” tabletop exercise is one effective tool for exploring the ethical dilemmas that may surface. The baseline starting point is how to embed national values, Department of Defense guidelines, doctrine, and international agreements such as the Geneva Conventions and the Law of Armed Conflict (LOAC) into new algorithms (and other emerging technologies), specifically, which values we put in and at what parts of the technological development supply chain. The next important topic to tackle is how we prioritize our shared values in algorithms, and what algorithmic trade-offs we are willing to make in a fast-paced, dynamic operating environment with asymmetrical actors and commercial off-the-shelf technology.

Apart from educational environments, there is room to blend technology ethics into required DoD annual awareness training. I should stress “blend,” not “add,” so as not to reach levels of awareness fatigue.

Looking ahead, as technology becomes more seamless, with AI software operating at the speed of cyber and brain-machine interfaces allowing thought control of drones, it may become hard to make timely judgments to prevent unwanted action. Ultimately, just as in any other situation, it will be a human who is accountable and responsible for the unwanted action. Now is the right time to make deliberate efforts to shape institutional culture around DoD ethics and emerging technology.

If “Algorithms are opinions embedded in math” (Dr. Cathy O’Neil, author of ‘Weapons of Math Destruction’),

Then I would argue that weapons systems’ algorithms are national values, embedded in math, with lethal effects.

Thank you for opening up comments to the public on this very important matter. I will eagerly look forward to the final AI Principles the Defense Innovation Board puts forward for consideration by the Secretary of Defense.

Sincerely,

Lydia

Dr. Lydia Kostopoulos (@Lkcyber) consults on the intersection of people, strategy, technology, education, and national security. She addressed United Nations member states on the military effects panel at the Convention on Certain Conventional Weapons Group of Governmental Experts (GGE) meeting on Lethal Autonomous Weapons Systems (LAWS). Formerly the Director for Strategic Engagement at the College of Information and Cyberspace at the National Defense University, a Principal Consultant for PA, and a higher-education professor teaching national security at several universities, she has professional experience spanning three continents, several countries, and multicultural environments. She speaks and writes on disruptive technology convergence, innovation, tech ethics, and national security. She lectures at the National Defense University and the Joint Special Operations University, is a member of the IEEE-USA AI Policy Committee, participates in NATO’s Science for Peace and Security Program, and during the Obama administration received the U.S. Presidential Volunteer Service Award for her pro bono work in cybersecurity. In an effort to raise awareness of AI and ethics she is working on a reflectional art series [#ArtAboutAI] and a game about emerging technology and ethics called Sapien2.0.

