
New study to delve into the murky world of AI’s effects on law, ethics and morality

23 Jan 17

Artificial intelligence (AI) is slowly working its way into New Zealand but what that means for our laws and policies is still a great unknown, says The New Zealand Law Foundation.

New technologies such as driverless cars, crime-prediction software and “AI lawyers” are challenging traditional laws around transport regulation, crime prevention and legal practice.

Now AI is being put under the microscope in a new ‘groundbreaking’ Law Foundation study, which will examine the legal, practical and ethical challenges these new technologies raise.

The three-year study, supported by a $400,000 grant from the Law Foundation, will be led by Associate Professor of Law Dr Colin Gavaghan at Otago University. It will look at AI implications under four broad topics: employment displacement; “machine morality”; responsibility and culpability; and transparency and scrutiny.

“The AI study is among the first to be funded under our ILAPP project. New technologies are rapidly transforming the way we live and work, and ILAPP will help ensure that New Zealand’s law and policy keeps up with the pace of change,” says Law Foundation executive director Lynda Hagen.

Another set of questions flows from the employment implications of AI. At least one US law firm now claims to have hired its first “AI lawyer”, which will research precedents and make recommendations in its bankruptcy practice.

“Is the replacement of a human lawyer by an AI lawyer more like making the lawyer redundant, or more like replacing one lawyer with another one? Some professions – lawyers, doctors, teachers – also have ethical and pastoral obligations. Are we confident that an AI worker will be able to perform those roles?” Gavaghan says.

In the world of crime, prediction technology such as PredPol - now widely used by police in the US - has been accused of reinforcing bad practices and racially biased policing. Courts are also using predictive software to assess the likelihood of reoffending.

“Also, because those parameters are often kept secret for commercial or other reasons, it can be hard to assess the basis for some AI-based decisions. This ‘inscrutability’ might make it harder to challenge those decisions, in the way we might challenge a decision made by a judge or a police officer,” Gavaghan says.

Gavaghan believes that driverless cars are also a contentious issue, as Mercedes recently revealed it would programme its cars to prioritise car occupants over pedestrians when an accident is about to happen.

“This is a tough ethical question. Mercedes has made a choice that is reassuring for its drivers and passengers, but are the rest of us OK with it? Human drivers faced with these situations have to make snap decisions, and we tend to cut them some slack as a result. But when programming driverless cars, we have the chance to set the rules calmly and in advance. The question is: what should those rules say?” Gavaghan asks.

Gavaghan will work alongside Associate Professor Ali Knott from the Department of Computer Science and Associate Professor James Maclaurin from the Department of Philosophy, as well as two post-doctoral researchers.

The Law Foundation’s Information Law and Policy Project (ILAPP) is a $2 million fund dedicated to developing New Zealand law and policy in the areas of IT, artificial intelligence, cybersecurity, data and information.
