Plato would argue that self-driving cars commit murder when they intervene in a fatal accident

In the spring I met a person working at an advisory firm funded by Canada’s federal government who was coding how an autonomous car carrying a family of four should react when confronted with two potentially fatal traffic outcomes: crash into an oncoming car, or avoid the car and plummet off a cliff. He had no law degree, no degree in philosophy, and had never taken a course in ethics. So I asked him: “How do you code to kill humans without anchoring the justification in law or philosophy?” To which he replied: “They’re irrelevant.”
No, they are not.
Democracies are anchored in laws that rest on ethical principles derived from ancient philosophy and ancient religious values. If we create technology that makes decisions affecting humanity without grounding them in the rule of law or the principles on which our laws are founded, what is holding up our democracy? The answer may be: “not much” in an AI world.
I didn’t give philosophy much time in post-secondary school, but I have always been grateful that I was forced to study Plato and the Pythagorean philosophers for a number of years. It made me a better, more tolerant jurist and gave me a solid grounding in what justice means and why the rule of law mattered in the time of Plato and still matters today.
Let’s think about coding self-driving cars to plunge off a cliff, killing that family of four.
Plato said that a person who kills a person by the agency of others, or who contrives the death of a person and is the author of the deed in intention and design, is guilty of murder. Coding machines to terminate a person authors the deed, but does that make the coder guilty of murder? Plato recognized a distinction between letting a person die (legal) and making a person die (illegal and immoral). Self-driving cars will not let a person die in a potential accident; they will make the person die by intervening and controlling the process (the cliff versus the car crash).
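To make concrete what “coding to kill” looks like, here is a minimal sketch of the kind of decision logic the advisor described. Every name and number in it (Outcome, choose_outcome, the fatality estimates) is my own assumption for illustration; nothing here comes from a real vehicle’s codebase.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_fatalities: int  # a coder's estimate, fixed in advance

def choose_outcome(options: list[Outcome]) -> Outcome:
    """Return the outcome with the fewest expected fatalities.

    Note what this function does NOT do: it does not let anyone die.
    It actively selects who dies, and that selection was authored by
    a person long before the accident ever occurs.
    """
    return min(options, key=lambda o: o.expected_fatalities)

# The advisor's scenario. The fatality counts are invented for
# illustration: assume the head-on crash would also kill the
# oncoming driver, so the cliff "wins" on pure arithmetic.
crash = Outcome("crash into the oncoming car", expected_fatalities=5)
cliff = Outcome("avoid the car and plummet off the cliff", expected_fatalities=4)

decision = choose_outcome([crash, cliff])
print(f"Pre-committed choice: {decision.description}")
# -> Pre-committed choice: avoid the car and plummet off the cliff
```

Swap in any other utility function and the structure is the same: the deed is contrived in intention and design before the wheels ever turn.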
It’s not for me to say whether this is or is not murder, but Plato would likely argue that a self-driving car plunging off a cliff as a matter of coding is committing murder. It is for me to say, as all jurists should, that AI raises fundamental issues of law and that lawyers must be part of the decision-making process.
Self-driving cars are not the only context in which autonomous machines will cause death.
The US military has developed artificially intelligent micro-drones that act autonomously and are designed to work as a swarm, communicating and making decisions together, including adaptive formation flying. They will likely be used for surveillance missions and targeted assassinations in the future. Drones and robotic killing machines are expected to revolutionize warfare by conferring a first-mover advantage, allowing a country to inflict greater human and infrastructure destruction without the corresponding risk to its own people.
Russia is building an army of “killer robots”. The Russian Chief of General Staff has disclosed that Russia is developing a roboticized unit capable of independently conducting military operations, and Russia’s United Instrument Manufacturing Corporation has developed software that it says can be installed on any robotic system to let it decide to kill on its own, such as carrying out an attack on enemy artillery without human intervention. It uses AI to locate a human on a battlefield (or in a living room) and eliminate him or her.
And then there will also very likely be rogue coders who code AI to be harmful, and hackers who break into AI systems and re-code them for destructive ends: to harm humanity, destroy infrastructure, or locate a target and kill a person. A politician hooked up to an automated medicine dispenser in a hospital, for example, could be killed remotely. We don’t have laws for murder by AI [1].
Professor Frederick Kile, who worked as an engineer on NASA’s Apollo mission, was one of the first to suggest that in AI we pay attention to the great works of antiquity to ensure that we have external criteria against which to measure what is ethical as we code our way to the next world order [2]. How Catholic medical pacifism views a machine-engineered death, for example, may be important to consider.
In the Laws, Plato wrote that there is no justice when jurists are silent. Indeed, lawyers, lawmakers and judges should insist that, as AI progresses, we are included in order to preserve the rule of law and to re-address the centuries-old questions that occupied the ancient philosophers. But it is not just jurists who have a say: in law, the state has an unqualified interest in the preservation and protection of human life, and it too must be part of the dialogue on how, if and when machines are authorized to end a human life.
Postscript:
In August 2017, the self-driving car in the tweet below blew up on a road in China. The passenger managed to jump out, but who would be liable if he had been killed in the flames?
"Self-driving" car erupts into a fireball on a street in Nanjing, China. The blaze is believed to be caused by spontaneous combustion pic.twitter.com/KFF2kepTYk
— China Xinhua News (@XHNews) August 20, 2017
[1] Christine Duhaime, Digital Finance Institute, “The Promises and Perils of AI and the Law”, United Nations UNCITRAL, 2016.
[2] Frederick Kile, “Artificial intelligence and society: a furtive transformation”, AI & Society 28 (2013): 107–115.