Early this year, an autonomous vehicle (AV) killed a pedestrian. The vehicle included a “safety driver,” and blame has been placed on the safety driver for not being sufficiently aware to take control in a timely manner. All AVs being tested on public roads carry one human driver, referred to as a safety driver. In California, companies testing AVs are required to report the number of disengagements: the instances where the human driver has taken over control of the vehicle. Ideally, disengagements should be zero, or at least trending towards zero.
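To make the "trending towards zero" idea concrete, a normalized metric such as disengagements per 1,000 miles can be computed from the reported counts. A minimal sketch, using purely hypothetical numbers (not real DMV report data):

```python
# Hypothetical annual disengagement reports for one AV company.
# The counts and mileage below are illustrative assumptions, not real data.
reports = [
    {"year": 2016, "disengagements": 124, "miles": 20_000},
    {"year": 2017, "disengagements": 63, "miles": 35_000},
    {"year": 2018, "disengagements": 48, "miles": 60_000},
]

for r in reports:
    # Normalize by mileage so years with more test driving are comparable.
    rate = r["disengagements"] / r["miles"] * 1000
    print(f"{r['year']}: {rate:.2f} disengagements per 1,000 miles")
```

Normalizing by miles driven matters: a raw disengagement count can fall simply because a company drove less that year, while the per-mile rate reflects whether the system itself is improving.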
Clearly there is a need for human oversight, yet simply putting someone in the vehicle and calling that person a safety driver is not sufficient to address the risks of an autonomous vehicle at the current state of the technology. It does not take very sophisticated safety analysis to understand the potential risks associated with a safety-critical human component, and some possible solutions to mitigate those risks.
Let us start.
RISK INSIGHT 1: What are the causes of current vehicle accidents? According to NHTSA, the number one cause of accidents falls under the generic category of driver distraction.
RISK INSIGHT 2: Are there examples of other safety technologies inducing unintended driver behavior? Cruise control is one such feature, where speed is controlled by the vehicle while the driver still maintains steering control. According to one study, even this partial automation of vehicle control led to a demonstrated decline in the driver’s attention and driving behavior.
RISK INSIGHT 3: What are some situations close to this one (a safety driver in an AV)? One is driver education, where student drivers (usually teenagers) control a vehicle while the instructor sits in the front passenger seat. In this case, there is an extra set of brake and accelerator pedals for the instructor. In some cases, there is a second rear-view mirror and, less commonly, a second steering wheel. Each driving session with a student typically lasts 30-45 minutes. I could not find any specific data on accidents involving student driver education vehicles, other than that they seem to be rare. Instructors must be certified by passing a simple training program.
So now let us examine the AV safety driver program as it has been running in practice. Generally it is one person who is responsible for taking over driving if the situation warrants. This person sits in an AV for many hours, observing the vehicle drive around and even conducting some tasks necessary for assessing the performance of the vehicle. A simple thought experiment: I have to sit in a car for up to 8 hours, ready at any moment to take control of the vehicle if the situation warrants it. Maybe early on, I will overreact and take control in situations where there is no safety issue. But after some time, I will be less vigilant. This brings me to another point in our safety analysis.
RISK INSIGHT 4: What issues could arise for a driver who is monitoring a vehicle that rarely crashes? In this case, we are assuming that the driver is not distracted. There is research showing that when humans are asked to spot rare events, they tend to become less successful at spotting them. This line of study came about when it was found that TSA agents missed very dangerous but rare items, like guns, while easily detecting illegal but harmless objects like water bottles, which are plentiful. So as AVs get even better, it is possible that the human driver will become less effective at recognizing dangerous situations and taking over vehicle control, absent some mitigating action.
RISK INSIGHT 5: How does the transition from AV to human control happen? There are two basic methods by which the transition from the AV controlling the vehicle to the human driver controlling it could occur. One is that the human driver relies on their own judgment to take control of the vehicle. The second relies on the AV system detecting an anomalous situation and alerting the driver to take control. The former places a more severe requirement on the human driver. Of course, in a recent fatal Tesla crash, where the vehicle was operating in Autopilot mode, the driver was alerted several times to take control but ignored those alerts.
Now let’s review the situation.
There is a human driver sitting in a car for long periods of time without needing to drive. According to RISK INSIGHT 1, they are essentially a distracted driver. During their shift, there may be no disengagements, and so over time their ability to spot dangerous situations, according to RISK INSIGHT 4, may degrade. Also, since the vehicle is running autonomously most of the time, according to RISK INSIGHT 2, their driving behavior may degrade. So clearly, simply placing a human driver into an AV does little to manage the safety risks of current AV public road testing.
With all these points, how to mitigate the risks?
First, do not call it a safety driver. Instead, call it an ON-DEMAND HUMAN DRIVER (ODHD). This very clearly identifies the role the human is expected to play. We need to make the ODHD function properly as part of the safety-critical system of the vehicle. Next, we suggest some very simple guidelines. First, shorten the time (work shift) an ODHD is responsible for monitoring the vehicle.
Second, to address the distraction issue, randomly require the ODHD to take control (which makes the situation closer to that of RISK INSIGHT 3). Measure the time to take control. Now you have data on how well an ODHD responds to disengagements, and you also keep the ODHD alert. This way you can monitor the performance of this safety-critical function and assess whether further changes are needed.
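The drill idea above can be sketched in a few lines. This is a minimal illustration, not a real vehicle integration: the `take_control` callback and its returned reaction time are assumptions standing in for whatever hook into the drive-by-wire system an actual AV platform would provide.

```python
import random

def run_shift(shift_minutes, drills_per_shift, take_control, rng=None):
    """Schedule random takeover drills during a shift and log reaction times.

    take_control: callback that signals the ODHD, waits for the human to
    take control, and returns the measured reaction time in seconds.
    (This callback is a hypothetical stand-in for a real vehicle hook.)
    """
    rng = rng or random.Random()
    # Pick random, unpredictable drill moments within the shift.
    drill_times = sorted(rng.uniform(0, shift_minutes) for _ in range(drills_per_shift))
    log = []
    for t in drill_times:
        reaction_s = take_control()
        log.append({"minute_into_shift": round(t, 1), "reaction_s": reaction_s})
    return log

# Usage with a simulated ODHD who always responds in 1.5 seconds:
log = run_shift(shift_minutes=120, drills_per_shift=3,
                take_control=lambda: 1.5, rng=random.Random(7))
print(log)
```

The log is exactly the kind of performance record the guideline calls for: per-shift reaction times that can be trended over weeks to detect vigilance decay before it becomes a safety problem.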
Though the ultimate aim of AV suppliers is to minimize disengagements, there is a lot of value in still forcing disengagements to happen. Such learnings from ODHDs on fully autonomous vehicle test runs can help inform the general risk of commercializing vehicles with different levels of autonomy, where the human driver is basically an ODHD with varying degrees of demand.