By the end of this week, potentially thousands of Tesla owners will be testing out the automaker’s newest version of its “Full Self-Driving” beta software, version 10.0.1, on public roads, even as regulators and federal officials investigate the safety of the system after a few high-profile crashes.
A new study from the Massachusetts Institute of Technology lends credence to the idea that the FSD system, which despite its name is not an autonomous system but rather an advanced driver assistance system (ADAS), may not be that safe. Researchers studying glance data from 290 human-initiated Autopilot disengagement epochs found that drivers may become inattentive when using partially automated driving systems.
“Visual behavior patterns change before and after [Autopilot] disengagement,” the study reads. “Before disengagement, drivers looked less on road and focused more on non-driving related areas compared to after the transition to manual driving. The higher proportion of off-road glances before disengagement to manual driving were not compensated by longer glances ahead.”
Tesla CEO Elon Musk has said that not everyone who has paid for the FSD software will be able to access the beta version, which promises more automated driving functions. First, Tesla will use telemetry data to capture personal driving metrics over a seven-day period to ensure drivers remain sufficiently attentive. The data may also be used to power a new safety rating page that tracks the owner's vehicle and is linked to their insurance.
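Tesla hasn't published how those seven days of telemetry get turned into a rating, but conceptually it is a rolling aggregation of per-day driving events into a score. A minimal sketch in Python, with hypothetical field names and a made-up penalty formula:

```python
from dataclasses import dataclass

@dataclass
class DailyTelemetry:
    """One day of driving metrics. Field names are hypothetical --
    Tesla has not published its actual telemetry schema."""
    miles: float
    hard_braking_events: int
    forward_collision_warnings: int
    aggressive_turning_events: int

def safety_score(week: list[DailyTelemetry]) -> float:
    """Toy rolling score: penalize safety events per mile driven.
    The real scoring formula is not public; this is illustrative only."""
    miles = sum(d.miles for d in week) or 1.0
    events = sum(
        d.hard_braking_events
        + d.forward_collision_warnings
        + d.aggressive_turning_events
        for d in week
    )
    return max(0.0, 100.0 - 100.0 * events / miles)
```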
The MIT study provides evidence that drivers may not be using Tesla's Autopilot (AP) as recommended. Because AP includes safety features like traffic-aware cruise control and autosteering, drivers become less attentive and take their hands off the wheel more often. The researchers found this type of behavior may result from misunderstanding what the AP features can do and what their limitations are, a misunderstanding that is reinforced when the system performs well. Drivers whose tasks are automated for them may naturally become bored after attempting to sustain visual and physical alertness, which researchers say only creates further inattentiveness.
For the report, titled “A model for naturalistic glance behavior around Tesla Autopilot disengagements,” researchers followed Tesla Model S and X owners during their daily routines for periods of a year or more throughout the greater Boston area. The vehicles were equipped with the Real-time Intelligent Driving Environment Recording (RIDER) data acquisition system, which continuously collects data from the CAN bus, a GPS and three 720p video cameras. These sensors provide information like vehicle kinematics, driver interaction with the vehicle controls, mileage, location, and the driver’s posture, face and the view in front of the vehicle. MIT collected nearly 500,000 miles’ worth of data.
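The study doesn't publish RIDER's data schema, but the streams it describes suggest what one synchronized sample might look like. A hypothetical sketch in Python, with field names invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class RiderFrame:
    """One synchronized sample from a RIDER-style data acquisition rig.
    The actual RIDER schema is not public; these fields simply mirror
    the streams the study describes (CAN bus, GPS, three cameras)."""
    timestamp_s: float
    # CAN bus: vehicle kinematics and driver inputs
    speed_mps: float
    steering_angle_deg: float
    autopilot_engaged: bool
    # GPS
    lat: float
    lon: float
    # Camera frame references: driver's face, cabin/posture, forward road view
    face_cam_frame: str
    cabin_cam_frame: str
    road_cam_frame: str
```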
The point of this study is not to shame Tesla, but rather to advocate for driver attention management systems that can give drivers feedback in real time or adapt automation functionality to suit a driver’s level of attention. Currently, Autopilot uses a hands-on-wheel sensing system to monitor driver engagement, but it doesn’t monitor driver attention via eye or head-tracking.
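The paper doesn't prescribe an implementation, but an attention management system of the kind the researchers advocate is, at its core, an escalation loop driven by gaze tracking. A minimal sketch in Python, with made-up thresholds and placeholder callbacks standing in for a real gaze tracker and vehicle interface:

```python
import time

# Hypothetical thresholds -- real systems tune these to speed and context.
WARN_AFTER_S = 2.0      # off-road glance length that triggers an alert
RESTRICT_AFTER_S = 5.0  # sustained inattention that limits automation

def attention_loop(gaze_is_on_road, alert_driver, limit_automation):
    """Minimal sketch of camera-based attention management: escalate from
    an audible alert to reduced automation as off-road glances lengthen.
    The three callbacks are stand-ins for a real gaze tracker and HMI."""
    off_road_since = None
    while True:
        now = time.monotonic()
        if gaze_is_on_road():
            off_road_since = None            # attention restored; reset
        else:
            off_road_since = off_road_since or now
            elapsed = now - off_road_since
            if elapsed >= RESTRICT_AFTER_S:
                limit_automation()           # e.g., slow down, demand takeover
            elif elapsed >= WARN_AFTER_S:
                alert_driver()               # e.g., chime or dashboard warning
        time.sleep(0.1)                      # ~10 Hz polling, for illustration
```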
The researchers behind the study have developed a model for glance behavior, “based on naturalistic data, that can help understand the characteristics of shifts in driver attention under automation and support the development of solutions to ensure that drivers remain sufficiently engaged in the driving tasks.” This would not only help driver monitoring systems address “atypical” glances, but could also serve as a benchmark for studying the safety effects of automation on a driver’s behavior.
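The paper's model is statistical, but the benchmark idea can be illustrated with a much cruder statistic: compare the proportion of off-road glance time before a disengagement (under automation) with the proportion after it (manual driving). A simplified Python sketch, not the paper's actual method:

```python
def off_road_proportion(glances: list[tuple[str, float]]) -> float:
    """Fraction of time spent glancing off-road. Each glance is a
    (region, duration_s) pair; 'road' marks on-road glances."""
    total = sum(d for _, d in glances) or 1.0
    off = sum(d for region, d in glances if region != "road")
    return off / total

def attention_shift(before: list, after: list) -> float:
    """Crude benchmark statistic: how much more the driver looks off-road
    under automation (before disengagement) than in manual driving
    (after). The paper fits a full glance model; this is a stand-in."""
    return off_road_proportion(before) - off_road_proportion(after)

# Example epoch: driver glances down at the center stack under Autopilot.
before = [("road", 1.5), ("center_stack", 2.0), ("road", 0.8)]
after = [("road", 3.4), ("mirror", 0.4), ("road", 2.1)]
print(f"off-road shift: {attention_shift(before, after):+.2f}")
```

A positive shift means the driver spent a larger share of time looking off-road while automation was active, the pattern the study observed.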
Companies like Seeing Machines and Smart Eye already work with automakers like General Motors, Mercedes-Benz and reportedly Ford, not only to bring camera-based driver monitoring systems to cars with ADAS, but also to address problems caused by drunk or impaired driving. The technology exists. The question is, will Tesla use it?