Otago researchers are investigating the implications of the artificial intelligence revolution for law, life and work.
"Think of the 19th century when people had to walk in front of 'motorised vehicles' carrying a red flag. That wasn't because they went fast: it was because people didn't know what they could do and didn't know what control drivers had over them, so you just had someone saying get out of the way. That's about where we are with AI."
There is a certain irony when Associate Professor James Maclaurin (Philosophy) draws on the dawn of the motoring age to illustrate the technological sea change that artificial intelligence (AI) represents.
Artificial Intelligence and Law in New Zealand is a three-year, Law Foundation-funded project examining the possible law and public policy implications of AI innovations, ranging from crime-prediction software to autonomous vehicles to the automation of work.
The project team is led by Associate Professor Colin Gavaghan (Law), Associate Professor Alistair Knott (Computer Science) and Maclaurin, who brings expertise in ethics and the philosophy of science.
"Every powerful technology brings risk. So we want to maximise benefits and mitigate the risks using education, regulation and product standards," explains Maclaurin.
"Elon Musk points out that food, medicines and cars all have regulations and standards – but we have nothing for AI. Part of the reason we've got nothing is that it's a very difficult regulatory target. AI's development is fast and unpredictable, so we don't have a clear legal idea of what it is and what it can do."
In general, artificial intelligence technologies are those that can learn and adapt for themselves, but Maclaurin says one of the problems in characterising AI is that, in people, intelligence is really a grab bag of evolved cognitive capabilities. In AI it is often much less general, but much more powerful.
"Legal responsibility flows from moral responsibility. Could an AI be morally responsible for something that it does, especially when, in the case of autonomous vehicles, we want it to make choices out on the road?"
Maclaurin says one of the most interesting aspects of the project is the way it demands investigation into what it is in human beings that we might want to put in a machine – aspects such as moral responsibility.
"So what's moral responsibility in humans? Well, it's having knowledge that allows us to calculate the consequences of our actions. It's being empathetic: we know one of the reasons it's bad to run over people is because we can imagine what it would be like to be run over. It's also the skill of being able to quickly and efficiently make good moral judgements," he says.
"In human beings those things come as a package deal. But an autonomous vehicle knows cars and roads – and that's all it knows. It doesn't have empathy and we'd struggle to work out how to make it empathetic in the right way.
"Then we're left with a choice. Can an AI be responsible for an accident? Yes. But you can't punish it. What do you do? Imprison your car?"
The alternative is that nobody did anything wrong and no one is responsible, says Maclaurin. "That looks like a problem we need to solve."
The project is also delving into the use of AI overseas in areas such as predictive policing – deciding where to put police cars on a Friday night and whom to stop and search – and in judicial systems, where it is used to decide whom to release on parole.
Maclaurin says AI might be cheaper, but it needs to be better. "One thing we want is accuracy so the wrong person isn't let out on parole, but we also want fairness. We can't keep people in prison forever and we shouldn't keep people in prison for the wrong sort of reasons."
Another key area is the impact of robotics on work, with studies suggesting there might be a calamitous drop in employment or in the range of work people do.
"Disruptive technology is constantly changing the borders of jobs, making it spectacularly difficult to work out how much work and what sort of work might be available when this year's graduates reach middle age."
"There is no doubt we are in the midst of a new industrial revolution – and the last one was very dislocating. We got lots of great things out of it, but it was pretty miserable to be in the middle of it. So how do we avoid that bit and get to the better bits?"
Funding
- New Zealand Law Foundation