Ticketing Hal: Uber and the Question of Algorithmic Personhood

By Tony Garvan

Today marks a pretty big day for self-driving cars, and for machine learning more generally. Bigger, perhaps, than the day in 2005 when a self-driving car first completed the DARPA Grand Challenge through the Mojave Desert. Because today, as the Guardian reports, a self-driving car violated traffic law in San Francisco, and it became clear that the world is not ready, philosophically or legally, for the reality of self-driving cars and the advances they represent.

Sure it’s cute, but can you sue it?

The lead is a little buried in the Guardian article:

Asked how the San Francisco police department would respond to a self-driving Uber running a red-light, officer Giselle Talkoff said: “I don’t even know. I guess we could pull them over.”

This is not a small question. This is, in fact, a huge question. How do you ticket a car with no driver? If I am an engineer writing an algorithm for a self-driving car, could I (or my company) be sued if it runs a red light? Do I have blood on my hands if it kills someone? If I get drunk and kill someone behind the wheel, it is manslaughter; will there be similarly harsh penalties for deploying poorly tested code in a self-driving car?

As a lover of philosophy and machine learning, I have been thinking about these questions for years, and they always seemed playfully irrelevant, as if they could only matter in some far-off future world. Well, as of today the question is not only relevant, it is *urgent*. It needs to be resolved *yesterday*. Philosophers and lawyers have dawdled too long, and now the march of technological progress has forced the questions to be answered by police, on the ground, on a whim.

I realize that this is, unfortunately, how the law works: it kind of bumbles through the most profound questions in the courts. I just want to draw attention to the fact that this is a *big deal* relating to many aspects of our future relationship with the algorithms we create. What if an algorithmically generated recipe poisons someone? How do you cover malpractice for a robotic surgeon? Can an algorithm commit war crimes?

We have made incredible progress replacing some of our human capabilities (seeing, hearing, driving) with technology, but is capability the same thing as responsibility? If so, how do you punish an algorithm? If not, where does the blame go, and where is the retribution? Does it fall on the companies that create these algorithms? Will there be a special algorithmic business insurance, and how does criminal prosecution work when the only humans involved in a decision were statisticians doing their jobs?

Like it or not, part of the central role of the justice system is to provide a healthy outlet for revenge. If an algorithm kills my son, will it be enough for me to have the problem fixed in a software update?

We have a lot of questions today. But they are all really the same question, the question of algorithmic personhood: are algorithms distinct from their creators? We are not ready for the answer.

December 2016
© 2022, Anthony Garvan