Last month, a video went viral that showed a San Francisco police officer, at night, stopping a car that didn’t have its headlights on. Except that this was no ordinary car. As the cop approaches the vehicle, someone off-camera shouts, “Ain’t nobody in it!” The car, operated by Cruise, a subsidiary of General Motors, is completely empty. Just as the cop turns back to his colleague, the robotaxi pulls away, crossing an intersection before stopping again. For two minutes the video shows the police officers wandering around the car, trying to work out what to do.
The confusion is certainly amusing: an everyday encounter with something that would have seemed magical just a decade ago. But as these vehicles become more common, the question of how we know who, or what, is driving will become increasingly serious.
It will soon become easy for self-driving cars to hide in plain sight. The rooftop lidar sensors that currently mark many of them out are likely to become smaller. Mercedes vehicles with the new, partially automated Drive Pilot system, which carries its lidar sensors behind the car’s front grille, are already indistinguishable to the naked eye from ordinary human-operated vehicles.
Is this a good thing? As part of our Driverless Futures project at University College London, my colleagues and I recently concluded the largest and most comprehensive survey of citizens’ attitudes to self-driving vehicles and the rules of the road. One of the questions we decided to ask, after conducting more than 50 in-depth interviews with experts, was whether autonomous cars should be labeled. The consensus from our sample of 4,800 UK citizens is clear: 87% agreed with the statement “It must be clear to other road users if a vehicle is driving itself” (just 4% disagreed, with the rest unsure).
We sent the same survey to a smaller group of experts. They were less convinced: 44% agreed and 28% disagreed that a vehicle’s status should be advertised. The question isn’t straightforward. There are valid arguments on both sides.
We could argue that, on principle, humans should know when they are interacting with robots. That was the argument put forth in 2017, in a report commissioned by the UK’s Engineering and Physical Sciences Research Council. “Robots are manufactured artefacts,” it said. “They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.” If self-driving cars are genuinely being tested on public roads, then other road users could be considered subjects in that experiment and should give something like informed consent. Another argument in favor of labeling, this one practical, is that, as with a car operated by a student driver, it is safer to give a wide berth to a vehicle that may not behave like one driven by a well-practiced human.
There are arguments against labeling too. A label could be seen as an abdication of innovators’ responsibilities, implying that others should acknowledge and accommodate a self-driving vehicle. And it could be argued that a new label, without a clear shared sense of the technology’s limits, would only add confusion to roads that are already replete with distractions.
From a scientific perspective, labels also affect data collection. If a self-driving car is learning to drive and other road users know it, they may behave differently, tainting the data it gathers. Something like that seemed to be on the mind of a Volvo executive who told a reporter in 2016 that “just to be on the safe side,” the company would be using unmarked cars for its proposed self-driving trial on UK roads. “I’m pretty sure that people will challenge them if they are marked by doing really harsh braking in front of a self-driving car or putting themselves in the way,” he said.
On balance, the arguments for labeling, at least in the short term, are more persuasive. This debate is about more than just self-driving cars. It cuts to the heart of the question of how novel technologies should be regulated. The developers of emerging technologies, who often portray them as disruptive and world-changing at first, are apt to paint them as merely incremental and unproblematic once regulators come knocking. But novel technologies do not just fit right into the world as it is. They reshape worlds. If we are to realize their benefits and make good decisions about their risks, we need to be honest about them.
To better understand and manage the deployment of autonomous cars, we need to dispel the myth that computers will drive just like humans, but better. Management professor Ajay Agrawal, for example, has argued that self-driving cars basically just do what drivers do, but more efficiently: “Humans have data coming in through the sensors—the cameras on our face and the microphones on the sides of our heads—and the data comes in, we process the data with our monkey brains and then we take actions and our actions are very limited: we can turn left, we can turn right, we can brake, we can accelerate.”
That’s not how people move on the road, nor how self-driving cars work. Humans drive in conversation with others. When we drive, we know that others on the road are not passive objects to be avoided, but active agents we have to interact with, and who we hope share our understanding of the rules of the road. Self-driving cars, on the other hand, negotiate the road in a completely different way, in most cases depending on some combination of high-definition digital maps, GPS, and lidar sensors. Planes and birds both fly, but it would be a bad idea to design a plane as if it were just an upgraded bird.
An engineer might argue that what matters is what a vehicle does on the road. But others will want to know who or what is in control. This becomes particularly important in situations like pedestrian crossings, which often rely on two-way communication. A pedestrian may make eye contact with a driver to make sure they have been seen. A driver may reassure a pedestrian by waving them across. In the absence of such signals, these interactions may need to be redesigned. Traffic lights may reduce the uncertainty, for example, or a self-driving car may need to be told exactly how long to wait before proceeding. And pedestrians will need to know what those new rules are.
Until now it has largely been left to self-driving car companies to decide whether and how their vehicles advertise themselves. This lack of standardization will create confusion and jeopardize public trust. When we’re walking across a street or negotiating a narrow road with another driver, we need to know what we are dealing with. These interactions work because we have a shared sense of expectations and mutual responsibilities. Clear, standardized labels would be a first step toward acknowledging that we are facing something novel on the road. Even though the technology is still in its infancy, clarity and transparency are well overdue.
Jack Stilgoe is a professor of science and technology policy at University College London.