
Is there room for ethics in the ‘Wild West’ of AI?

13 Jun 18

Wherever the future of artificial intelligence (AI) is discussed, ethics is never far behind.

With AI the hottest topic at the 2018 SAS Users of NZ (SUNZ) conference, held yesterday at the Michael Fowler Centre in Wellington, the expert panel on ethics covered a range of fascinating and important questions.

SAS director of product management for cloud and platform technologies Mike Frost is one of the experts, taking his seat directly after delivering the opening keynote.

Matthew Spencer and Rohan Light complete the panel - the Ministry of Social Development's (MSD) chief analytics officer and its lead advisor for responsible information use, respectively.

Finally, Tenzing management consulting director Eugene Cash acts as moderator.

They decide on a working definition of ethics - ‘knowing the difference between what you have a right to do, and what is right to do,’ an apt one for the purposes of the discussion.

Spencer begins the conversation by talking about the MSD's attempts to guide ethical decision making with the Privacy, Human Rights and Ethics Framework (PHRaE).

The PHRaE is under development by the MSD to ensure that “early decisions to not progress initiatives, or to accept risk if value outweighs risk, can be made if risks cannot be mitigated,” according to the PHRaE information sheet.

This line of discussion leads to the controversial Google Duplex, which can pose as a human on a phone call and was debuted to wild applause, seemingly with little thought for the myriad ramifications the technology could have.

Cash asks the panellists: does this show that Silicon Valley is ‘ethically rudderless’?

“I think that this is pretty typical of some organisations,” Frost replies.

“They will try something and then, based on the reaction, they’ll withdraw and pull back and say ‘we had a right to do it, but maybe we weren’t right to do it’.

“Do I think that Google cares about the ethics of that? No, they’re like us - they’re trying to sell software… I don’t think that’s the right way, I think we should be more proactive rather than reactive… but right now it’s a bit of a wild west, and that’s how this self-governance materialises.”

Spencer points to another controversial use-case of AI as an example of how things can go wrong - the US courts using the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) system to aid judges in deciding whether a defendant would be likely to re-offend and then sentencing them accordingly.

He notes that in an MIT study on the efficacy of AI, humans had an AUR (a statistical measure of how well a decision-maker distinguishes between two possible outcomes) of .76, AI given the same task scored .82, and human and AI together managed .90.

Whether this is right or wrong is not the point here - what it shows, Spencer says, is that if we are going to use AI to aid important decisions, adding human intelligence to the mix is vitally important.
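For readers unfamiliar with the metric, the figures Spencer quotes most likely refer to AUC (area under the ROC curve), which scores how reliably a set of predictions ranks a true positive case above a negative one - 1.0 is perfect, 0.5 is no better than chance. The sketch below is purely illustrative: it assumes Python with scikit-learn, and the labels and risk scores are invented for the example, not drawn from the MIT study.

```python
# Illustrative only: how an AUC score (the "AUR" quoted above) is computed
# for a binary decision task. All data below is invented for this example.
from sklearn.metrics import roc_auc_score

# Hypothetical ground truth: 1 = outcome occurred, 0 = it did not
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]

# Hypothetical risk scores from a human assessor and from a model
human_scores = [0.8, 0.3, 0.6, 0.4, 0.5, 0.2, 0.9, 0.6, 0.7, 0.1]
model_scores = [0.7, 0.2, 0.8, 0.6, 0.4, 0.3, 0.9, 0.5, 0.8, 0.2]

# A crude "human plus AI" combination: average the two sets of scores
combined = [(h + m) / 2 for h, m in zip(human_scores, model_scores)]

for name, scores in [("human", human_scores),
                     ("model", model_scores),
                     ("combined", combined)]:
    print(f"{name:9s} AUC = {roc_auc_score(y_true, scores):.2f}")
```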

A bit of humanity is also the solution suggested for perhaps the biggest ethical concern for AI - bigotry.

If the data that we feed into an AI is inherently skewed toward or against a certain kind of person, the results will be just as skewed.

The key to avoiding AI echoing these biases is “human plus AI and constant calibration,” Light says.

“There should always be a human involved with the evolution of AI. If you don’t have that then the chance that it goes astray increases.”

Light also astutely notes the sadly common irony of four white men in suits discussing issues of bigotry and makes it clear that diversity is also an important ingredient in this recipe.

Frost suggests the possibility of a medicine-style ethics panel for data scientists, which would be responsible for reviewing uses of AI to ensure they stay within an ethical framework.

After half an hour, the discussion comes to an abrupt end with a dozen thoughts on ethics and AI only half explored.

The main takeaway is that there is still a long way to go.

The panel comes together in agreement that as we move forward, an ethical perspective needs to be an integral part of the development and implementation of AI - but they also recognise that ethics are slippery and culturally specific.

As Frost says, at the end of the day, “Every community will have to set their own standards for what is an appropriate use of technology.”

Correction - in the original posting of this story it was erroneously stated that the AUR data came from a study on COMPAS - Spencer clarified, "I referenced the COMPAS issue as an example of how things can go wrong. I also referenced, as a separate example, a study from MIT to illustrate how humans and machines may work together to offer a superior solution in some circumstances. The human factor in the loop allows for greater oversight."
