Is the US military an unlikely source of AI ethics wisdom?

The US Department of Defense has garnered severe criticism over AI on the battlefield, yet businesses could learn from its simple framework.
27 February 2020

United States Navy Northrop Grumman unmanned surveillance aircraft. Source: Shutterstock

Tesla CEO Elon Musk frequently warns of artificial intelligence’s (AI) potential to destroy human life as we know it, aiming his warnings at government agencies, business enterprises, and the media.

The technology is being championed by users in all sectors, including public services and defense, and is proving to be powerful, although academics and researchers believe it is still in its infancy.

Facebook CEO Mark Zuckerberg, whose company has pioneered AI use cases in many fields, also expressed concerns early on, when one of Facebook’s projects saw the technology develop a language of its own that staff couldn’t interpret.

Musk, Zuckerberg, and other technology leaders such as Microsoft’s Satya Nadella, Google’s Sundar Pichai, and Amazon’s Jeff Bezos all have strong views on the possible benefits that AI will deliver, but they agree that some form of ethical guidance is required.

Industry participants, academics, and regulators in the US, the EU, and other regions regularly debate and discuss the issue, and are slowly creating their own frameworks that lay out what “AI ethics” should really look like.

The US Department of Defense (DOD) is an institution that has garnered some of the most severe criticism in the debate over the use of AI on the battlefield. Yet it has just announced its own AI ethics framework, and although the framework is meant to guide military operations, businesses can probably take something from its simple layout.

The US DOD’s AI ethics framework comprises five basic principles, based on recommendations from the Defense Innovation Board:

  1. Responsible: Staff must take responsibility for the development, deployment, and use of AI capabilities
  2. Equitable: Deliberate steps must be taken to minimize unintended bias in AI capabilities
  3. Traceable: AI capabilities must be developed and deployed using transparent and auditable methodologies, data sources, and design procedures and documentation
  4. Reliable: AI capabilities must have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities must be continually tested
  5. Governable: Staff must ensure that it is always possible to detect and avoid unintended consequences

How the US DOD’s AI ethics could play out for enterprises

Enterprises have been using AI in interesting ways, but most uses are limited to crunching data, automating workflows, and delighting customers with smarter, faster, and more efficient services.

Among its peers, the banking and financial services industry seems to be using AI to push boundaries the furthest. In lending, for example, banks are experimenting with AI to improve credit checks and automate decision-making. Let’s see how the US DOD’s principles of AI ethics could help.

A lending model created by the bank’s staff would be trained on data that the bank has gathered in the past, then tested and deployed. If the model truly uses an AI engine, it would be expected to evolve over time and learn from the decisions it makes.
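As a minimal sketch of that train-test-deploy cycle (the dataset, column names, and features here are hypothetical, not drawn from any real bank), the initial training step might look something like this in Python:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical historical lending data; file and column names are illustrative only.
data = pd.read_csv("historical_loans.csv")
features = data[["income", "debt_to_income", "credit_history_months"]]
labels = data["repaid"]  # 1 = loan repaid, 0 = default

# Hold out a test set so the model is evaluated before it is deployed.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Test before deployment, in the spirit of the "Reliable" principle.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.3f}")
```

The held-out test is the point: a model should demonstrate its safety and effectiveness on data it has never seen before it is allowed to make real lending decisions.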

In the above case, the bank’s staff, bound by the US DOD’s rules, must take responsibility not only for the development of the model but also for its deployment, recognizing that decisions made by the bank could impact an entire community, or a state, at the very least.

To ensure that the model is equitable, the bank must check that there was no bias in the data it was fed when it was initially trained. Further, to keep the lending model functioning well, staff need to make sure that each iteration of the model remains transparent and auditable, keeping it traceable, and that unintended behavior can be detected and corrected, keeping it governable.
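As a hedged illustration of what such an equity check might look like in practice, staff could compare the model’s approval rates across groups before each new iteration ships. This is a simple demographic-parity style check; the group labels, audit data, and tolerance below are all hypothetical:

```python
import pandas as pd

def approval_rate_gap(decisions: pd.DataFrame, group_col: str) -> float:
    """Return the gap between the highest and lowest approval rates
    across groups -- a simple demographic-parity style check."""
    rates = decisions.groupby(group_col)["approved"].mean()
    return rates.max() - rates.min()

# Hypothetical audit log: one row per decision made by the model.
audit = pd.DataFrame({
    "approved": [1, 1, 1, 0, 0, 1, 0, 0],
    "region":   ["north", "north", "north", "north",
                 "south", "south", "south", "south"],
})

gap = approval_rate_gap(audit, "region")
if gap > 0.1:  # illustrative tolerance, not a regulatory standard
    print(f"Approval-rate gap of {gap:.0%} across regions -- review the model")
```

A recurring audit like this is one way to make the “equitable” and “governable” principles operational: the check is repeatable, its output is loggable, and a breach of the tolerance gives staff a concrete trigger to intervene.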

Finally, when the model makes a decision, the bank’s staff and the lending team must be able to rely on it; that reliability is ultimately what will deliver value to the bank’s operations and its customers. Truth be told, the US DOD’s AI ethics principles could simply be lifted and applied to enterprise use cases.

However, given the principles’ brevity and simplicity, enterprises must make the effort to think deeply, broadly, and clearly about how their AI projects can use them to accelerate development and scale deployment.

Ultimately, enterprises need to agree on and adopt a common AI ethics framework. Without it, innovation will slow down.