The Robotaxi Problem
San Francisco's new driverless taxis and the pitfalls of uncritical futurism.
I have a memory from sometime in the early 2000s of watching the Science Channel. That itself isn't a unique memory (I was the kind of kid who was really into that sort of thing), but this one stands out to me because its subject has made its way back into the news. The show, whose name I can't remember, was about self-driving cars. At the time, the technology lived, outside of science fiction, in the space of futurist-flavored curiosity: real, but still experimental. Those early prototypes caught my attention, with their bulky suites of electronics and the characteristic spinning LiDAR unit welded to the roof of an otherwise normal-looking car. I remember thinking "Wow, cool!" along with some version of the response those programs were probably designed to provoke: "The future is now."
Now, in 2023, that future is finally here, and I can’t find it in me to be as excited as that younger version of myself would’ve wanted. I imagine I might’ve been, had it arrived when I was still young enough to believe that new technology inevitably makes the world a better place.
I'll preface this by saying that I'm not some sort of technophobe. The technologies that make a self-driving car possible have been finding their way into consumer vehicles for a while now, and for the most part, features like lane departure warnings, blind spot monitors, and automated parking have made the roads safer. Put together into vehicles without drivers, though, they create something I'm a lot more skeptical of, especially given the way it's currently being rolled out.
In San Francisco, Cruise (a subsidiary of General Motors) and Waymo (a subsidiary of Alphabet, Google's parent company) were granted permission to operate paid, 24/7 robotaxi services. Since then, reports of malfunctions, including an incident in which a Cruise robotaxi collided with a fire truck en route to an emergency, have raised safety concerns among the public.
Of course, there will likely come a time when these vehicles are, as the companies market them, demonstrably safer than human drivers. Whether that happens sooner or later, though, it's clear to me that Cruise, Waymo, Zoox (Amazon's entry into this new industry), and all the other companies out there with often-nonsensical names are using public streets to conduct their tech experiments.
Ultimately, though, my objection to the way this technology is being applied comes not from concerns about safety, or even from questions about regulation and the ethics of allowing these kinds of public tests, but from the overwhelming sense that, once again, a significant leap in technology is being used in a way that creates new problems and exacerbates existing ones rather than solving any.
Even in some theoretical future where robotaxis and self-driving personal vehicles are safer than human drivers and work as advertised, that wouldn't necessarily be a good thing for society at large. Automating cars does nothing to address the environmental costs of car dependency (whether the cars are electric or not), the way car dependency perpetuates financial and social inequality, or the negative effects on health and community that such a system creates. It may even make those problems worse by further entrenching car dependency and drawing resources away from public transportation.
I still think about that phrase, "the future is now." It can mean something positive: that the great challenges of our time are close to being put behind us. It can also carry a darker connotation, if we accept that not everything "futuristic" is good. The self-driving car and the robotaxi are a lot like another staple of science-fiction futures: the flying car. It's a sensational idea that conjures images of a techno-utopia, but in reality it's just a bad idea made worse.