Public Statement Submission to the Defense Innovation Board

1. Diversity in Technological Creation as an Imperative

The former head of USCYBERCOM and the NSA, Admiral (ret.) Mike Rogers, said that his approach to finding people with innovative ideas and approaches was to look for people who didn't look like him, didn't share his background, and were in every way different from him. He saw competitive value in diversity of thought, experience, and perspective. We are in a fortunate position today to be able to look back and see the results of biased thinking in software and product design that stemmed directly from the homogeneity of the teams who created the technology. Artificial Intelligence is no different. There is far too much at stake with algorithms in the civilian world (e.g., medical decisions, predictive policing) and in the military world (e.g., lethal application of force or algorithmic targeting) for this technology to be created by a homogeneous team of people who share the same gender, race, religion, sexual orientation, and political affiliation. Efforts should be made to avoid this inside the DoD as well as among the external contractors who develop artificial intelligence for use by the DoD. This applies to all uses, from autonomous weapons systems to AI-empowered software for Global Force Management. Cognitive diversity in the teams that develop this technology will make it more robust and can help avoid miscalculation and unnecessary, unintentional escalation of tension.

2. Algorithmic Uncertainty as a Known Unknown

Across the world I find a common concern around ascribing human 'intelligence' characteristics and unrealistically high performance expectations to algorithms. While it is true that there are many things algorithms can do better than humans, it is still worth having an institutional culture that treats AI the way it treats cybersecurity: as a matter of expectation management. Cyber experts around the world have become comfortable saying that 'there is no such thing as 100% security', and this mindset means that much of today's cybersecurity practice is about managing risk. At this stage of artificial intelligence development, it would be ethically prudent to treat AI as an imperfect, constantly improving algorithm; some would argue it is perpetually in beta. If AI is seen as an imperfect algorithm that nonetheless has tremendous value to offer, then the focus should be on managing its risk. Accepting algorithmic uncertainty as a known unknown is a useful mindset that shapes how the discussion of the technology is framed.

3. Technology Ethics as a Culture

New and evolving technologies are rapidly changing the character of war at a pace that warrants military education on technology ethics.

Dr. Lydia Kostopoulos
