Why AI ethics should be your biggest concern

As a system that continuously learns from human-generated data, AI is prone to repeating our own past mistakes.
5 July 2019

UK police have taken flak for inaccurate facial recognition software. Source: Shutterstock

Be it a chatbot, a screening system, or warehouse robotics, employing an artificial intelligence (AI) system is becoming common practice today. 

Hovering alongside the technology’s rapid deployment is a question: as a system devoid of morality, one that learns from data by itself and doesn’t ‘know’ what is socially acceptable and what isn’t, can AI ever be trusted?

For some, the debate over AI ethics might seem abstract or overwrought given the technology’s current capabilities. Machines can’t think, but they can mimic what we do as humans, and there is already no shortage of examples of how this can go wrong.

If the ethical principles of AI technology are not considered, monitored, and addressed, the impact can be damaging both to the businesses using it and to wider societal trust in the technology.

Unethical AI can taint your image

Unethical AI can taint a company’s image by painting a controversial picture of its practices and culture to the public. AI is a trending topic in a connected world; news spreads fast, and so does word of ethical missteps.

Amazon was recently forced to shelve its AI candidate-screening technology after it inherited a gender bias from its training data. The ‘secret’ tool was supposed to rank candidates with a five-star rating system, much as shoppers rate products on its platform.

Scientists trained the machine using resumes submitted over a 10-year period. But owing to male dominance in the tech industry, and therefore in the data, the system taught itself that male candidates were preferable, even discounting graduates of two women-only schools from its selections.
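
To make the mechanism concrete, here is a minimal, hypothetical sketch of how a screening model can absorb bias from skewed historical labels. Every feature, label, and figure below is invented for illustration; this is not Amazon’s tool, just the general failure mode.

```python
# Toy illustration: a classifier trained on biased historical hiring
# decisions learns to penalize a gendered signal, without anyone
# programming it to. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Feature 0: years of experience (genuinely predictive in this toy world).
# Feature 1: 1 if the resume contains a token such as "women's"
# (e.g. a women's college or sports team), 0 otherwise.
experience = rng.normal(5, 2, n)
womens_token = rng.integers(0, 2, n)

# Historical labels: past hiring skewed against resumes with the token,
# independent of experience; the bias lives in the data itself.
logit = 0.8 * (experience - 5) - 1.5 * womens_token
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([experience, womens_token])
model = LogisticRegression().fit(X, hired)
print(f"weight on experience:      {model.coef_[0][0]:+.2f}")  # positive
print(f"weight on 'women's' token: {model.coef_[0][1]:+.2f}")  # strongly negative
```

Because the historical labels penalized the token regardless of merit, the model faithfully learns a negative weight for it; nothing in the code ‘decided’ to discriminate.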

Meanwhile, it reportedly took less than 24 hours for Microsoft’s Twitter-based chatbot Tay to be corrupted by the social platform. “The more you chat with Tay, the smarter it gets,” said the computing firm, before, as reported by The Verge, the ensuing interactions turned it into a “racist asshole”.

AI success is down to the data

These examples are evidence that the success of an AI initiative distills down to the data it is fed or has access to.

Ninety-three percent of UK and US organizations consider AI to be a business priority, a study by Vanson Bourne found. But data-related struggles are a top reason IBM’s clients have ceased or canceled AI projects, according to an executive at the firm.

For society to be able to accept AI across its proliferating guises and applications, the quality and integrity of data are everything. With the technology being deployed in every industry, the consequences of getting this wrong can be grave.

In the UK, the Metropolitan Police have been urged to suspend use of AI facial recognition software, deployed with the aim of identifying wanted criminals in public.

The technology has proven successful just one-fifth of the time, prompting comments that it “could lead to miscarriages of justice and wrongful arrests” and poses “massive issues for democracy.”
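
A short back-of-envelope calculation helps explain why such systems misfire so often: when genuine targets are rare in a crowd, even a matcher with seemingly good error rates produces mostly false alarms. The figures below are assumptions chosen for illustration, not the Met’s actual parameters.

```python
# Base-rate sketch: rare targets mean most alerts are false, even with
# decent per-face error rates. All numbers are assumed for illustration.
crowd = 100_000              # faces scanned at an event
watchlisted = 10             # wanted individuals actually present
true_positive_rate = 0.90    # chance a wanted face triggers an alert
false_positive_rate = 0.001  # chance an innocent face triggers an alert

true_alerts = watchlisted * true_positive_rate              # ~9
false_alerts = (crowd - watchlisted) * false_positive_rate  # ~100
precision = true_alerts / (true_alerts + false_alerts)
print(f"share of alerts that are correct: {precision:.0%}")  # roughly 8%
```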

Life-threatening consequences

In the case of self-driving cars, a market estimated to be worth US$7 trillion by 2050 (when close to all the world’s cars could be autonomous), protecting life in operation is paramount, and it is one of the key features touted by manufacturers.

AI is, of course, central to the development of autonomous vehicle technology. But, once again, the technology is only as good as the data it learns from. 

A study conducted by Georgia Tech on the detection models used by driverless cars, for example, found that lighter skin tones were detected significantly more reliably than darker ones. The problem was traced to the fact that the training dataset contained three times more light-skinned individuals than dark-skinned ones.
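
Imbalances like that are straightforward to surface before a model ships. The sketch below, with invented labels and outcomes standing in for real annotations, shows the kind of per-group audit that would flag both the 3:1 skew and the resulting detection gap.

```python
# Hypothetical dataset audit: compare group representation and per-group
# detection rates. Labels and outcomes are synthetic stand-ins.
from collections import Counter

labels = ["light"] * 3000 + ["dark"] * 1000  # skin-tone annotation per pedestrian
detected = [True] * 2700 + [False] * 300 + [True] * 750 + [False] * 250

totals = Counter(labels)
hits = Counter(group for group, ok in zip(labels, detected) if ok)
for group, n in totals.items():
    share = n / len(labels)
    recall = hits[group] / n
    print(f"{group}: {share:.0%} of dataset, detection rate {recall:.0%}")
# light: 75% of dataset, detection rate 90%
# dark: 25% of dataset, detection rate 75%
```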

With a number of self-driving-related fatalities having already occurred, and with consumers tending to regard AI technology across all industries ‘as one’, early ethical oversights could see carmakers face a battle to convince customers to make the leap.

Tackling AI ethics

To address issues like these, the European Commission (EC) launched an independent AI ethics group to ensure “trustworthy” AI development can be achieved. Last month, 42 countries signed up to the OECD’s common AI principles, and a plethora of other public and private groups are emerging.

In a recent article for TechHQ, Accenture’s Responsible AI Manager, Caryn Tan, wrote: “Both organizations and governments alike clearly recognize the importance of designing ethics into AI, there’s no doubt about that. 

“Despite all of this, there has been little headway in finding practical ways for organizations to tackle real-life ethical dilemmas and make decisions when faced with them.”

For companies developing or applying AI technology within the business, Tan suggested allocating a team to oversee the technology, one that can act as “firefighters”, spotting issues before they escalate.

“While data scientists wield the power of the code, and ultimately the outcome of the AI, others within the organization have critical input in guiding how the code is written,” said Tan. 

“Effective communication between data scientist and other team members is key to enabling interdisciplinary decision making.”