Liability in the Self-Driving World

One more non-IP post, and then we’ll get back on track.

How do you cut 2,500 jobs and call it something that "could have enormous potential benefits for Canada"? GM answered that today, when it announced it is closing several car plants, including one in Oshawa, Ontario. The "enormous potential benefits" arise because the closings will free up $8 billion that GM can allocate to electric and autonomous vehicles. And maybe some of those will be built in Canada.

But PR deflection aside, self-driving cars are an interesting legal topic. When machines take over for humans, what happens when they screw up? We know what happens when a driver today departs from the rules of the road and the expected standard of care. But what about when the problem is potentially a mix of bad regulation, bad programming, bad supervision, and people not following the law?

In Tempe, Arizona, on March 18, 2018, a pedestrian was struck and killed by an Uber self-driving test vehicle while walking her bicycle across a dark street. Some pointed to Arizona's encouragement of autonomous vehicle testing within its borders through its promise of an easy regulatory environment. UC Davis law professor Elizabeth Joh called the accident "another example of tech experimentation outpacing thoughtful regulation[.]"

On the other hand, the Uber vehicle had a safety driver, who was supposed to take over if necessary. Video showed her looking down instead of at the road. She told the National Transportation Safety Board (NTSB) that she was monitoring the system and, specifically, was not using her phone. Later, police said the safety driver was actually streaming an episode of The Voice at the time of the crash.

On the other, other hand, the NTSB found that the car’s detection system “saw” the pedestrian six seconds before hitting her, and yet failed to stop. And, disturbingly, that might have been the software operating as intended:

As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that emergency braking was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
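To make that sequence concrete, here is a minimal sketch in Python of the decision flow the NTSB excerpt describes. It is purely illustrative, not Uber's actual software: the Detection class, the respond() function, the timeline values, and the two configuration flags are all hypothetical. What it shows is how a system can correctly detect a hazard and still neither brake nor warn anyone, because emergency braking is switched off under computer control and no operator alert exists.

# Illustrative sketch only (not Uber's actual code); it mirrors the decision
# flow in the NTSB excerpt above. All names and values here are hypothetical.

from dataclasses import dataclass

@dataclass
class Detection:
    seconds_to_impact: float
    classification: str        # "unknown object", "vehicle", "bicycle", ...
    emergency_braking_needed: bool

# Hypothetical timeline loosely following the NTSB description.
timeline = [
    Detection(6.0, "unknown object", False),
    Detection(4.0, "vehicle", False),
    Detection(2.5, "bicycle", False),
    Detection(1.3, "bicycle", True),   # system decides emergency braking is needed
]

EMERGENCY_BRAKING_ENABLED = False      # disabled while under computer control
ALERTS_OPERATOR = False                # system not designed to alert the operator

def respond(d: Detection) -> str:
    """Return the action this hypothetical system takes for one detection."""
    if not d.emergency_braking_needed:
        return "keep driving"
    if EMERGENCY_BRAKING_ENABLED:
        return "emergency brake"
    if ALERTS_OPERATOR:
        return "alert the safety driver"
    return "rely on the safety driver to notice and intervene"   # the gap at issue

for d in timeline:
    print(f"{d.seconds_to_impact}s before impact: {d.classification} -> {respond(d)}")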

And finally, it should be noted that the pedestrian was apparently crossing the street illegally, outside a marked crosswalk.

So what were the consequences of this colossal failure? In the wake of the accident, Arizona saw no need to change its regulations, but it did suspend Uber’s ability to operate automated vehicles. A report suggested that, at that point, Arizona’s Self-Driving Oversight Committee had not formally met in over a year.

About two weeks after the incident, Uber settled with the victim’s family for an undisclosed sum. Uber had recently reduced the number of human operators in its Arizona self-driving cars from two to one (the second one had been logging events and training the system). Uber is now seeking to return to the road, first in Pennsylvania, committing to two safety drivers and enabling automated braking.

As for the safety driver, Tempe police have investigated, but as of earlier this month prosecutors had yet to decide whether to bring vehicular manslaughter charges against her.

Automated vehicles could and should dramatically improve road safety and efficiency. But they could also be a menace. And they're coming sooner rather than later, so creating a proper regulatory and legal environment to welcome them seems like the least we should do.

 
