AI is at 'an ethical crossroad' and we need better guidelines, says report
Mon, 30th Jul 2018

The growing power of artificial intelligence is being hailed as part of the fourth industrial revolution, but how are ethics being used to shape the technology and its effect on people's lives?

A new report by Chartered Accountants Australia and New Zealand (CA ANZ), titled Machines can learn, but what will we teach them?, looks at the ethical considerations around AI and machine learning, and the possible implications for society, business, regulators, individuals, and the accounting profession.

“To date, the media focus on AI and machine learning has been characterised by two extremes. The first focuses on the tremendous benefits AI can deliver to humankind, freeing us from workplace drudgery and enabling us to actualise our higher order skills,” the report says.

“At the other extreme are warnings of robots coming to take over our jobs and a world of ‘big brother’ surveillance emerging where our every mood and move will be monitored and analysed, the ensuing information used to manipulate us in ways not of our choosing.”

The report says that businesses working with AI should develop codes of ethics or updated governance guidelines about what their AI programmes can and can't do.

This information should then be shared so customers can make informed decisions about where they take their business.

“In our world of fake news and privacy concerns, we are currently at an ethical crossroad where we need to determine the right direction for the development of machine learning and AI,” says CA ANZ business reform leader Karen McWilliams.

“By setting the right ethical framework now, we have an opportunity to design a new AI-enabled world, which could create a more inclusive global society and sustainable economy than exists today.”

While AI brings research and learning tools that could potentially enable breakthroughs and new insights into customer behaviour, the report suggests there are numerous downsides.

Those downsides include risks to data privacy and data security, as well as social re-engineering.

The report says there is also a “strong possibility that vast numbers of the current workforce, and current graduates, may find themselves made obsolete because AI applications can do their work faster and more accurately”.

“The ethical impacts, especially in terms of human well-being and intergenerational equity, insist that we pace AI to remain in step with the limits of our human understanding so that we do not blindly alter our human evolution in ways that harm ourselves and future generations.”

The report also suggests that AI algorithms should be designed so that they can be reviewed by a third party.

“The absence of transparency and a full understanding of how [AI] algorithms work creates significant ethical issues,” the CA ANZ report says.

It suggests that a global agreement on AI ethics is essential.

Earlier this year the AI Forum of New Zealand also released a paper that aims to build a national strategy to tackle mainstream AI challenges the country will face.

The Australian Government has also set aside $30 million for AI development, which includes developing an ethical framework.