Driverless cars remain on a slow but steady march toward widespread deployment. This week proved as much.
On Tuesday, Google spinoff Waymo became the first to obtain a driverless testing permit from the California Department of Motor Vehicles (DMV). A 40-strong fleet of fully autonomous Chrysler Pacifica minivans — overseen by remote operators — will drive day and night on city streets, rural roads, and highways around Mountain View, Sunnyvale, Los Altos, Los Altos Hills, and Palo Alto.
The day before, Volkswagen, Intel’s Mobileye division, and car distributor Champion Motors unveiled a plan to launch a commercial autonomous taxi service in Israel next year.
Baidu, not to be outdone, announced several collaborations with automakers on autonomous vehicle technologies on Wednesday at its Baidu World conference in Beijing. It’s embarking on a two-year project to test self-driving vehicles on Chinese roads, and it’s working with Volvo to produce self-driving electric cars for the Chinese market. Lastly, it said it intends to soon deploy Level 4 autonomous cars (vehicles that, per the Society of Automotive Engineers’ definitions, can drive themselves without human oversight, but only under specific conditions and in specific locations) manufactured by Chinese state-owned FAW Group.
It’s exciting — if expected — technological progress. But I’d be lying if I said the lack of accompanying regulation didn’t give me pause. I don’t count myself among the 60 percent of people who told the Brookings Institution they “weren’t inclined” to ride in self-driving cars, but it’s my belief that such technological leaps are — if unguided by principles — fraught with ethical peril.
Pekka Ala-Pietilä, the former Nokia president and tech entrepreneur who is overseeing the European Union’s efforts to develop guiding AI principles, shares that sentiment.
“[We have to] make sure that we do regulate when it’s the right time,” he told Politico this week. “Ethics and competitiveness are intertwined, they’re dovetailed.”
Unfortunately, in the U.S., legislation remains stalled, at least at the Congressional level. More than a year ago, the House unanimously passed the SELF DRIVE Act, which would create a regulatory framework for autonomous vehicles. It has yet to be taken up by the Senate, which this summer tabled a separate bill, the AV START Act, that made its way through committee in November 2017.
Automakers aren’t the ones voicing opposition — on the contrary. GM CEO Mary Barra recently called on Congress to provide a path to deployment for OEMs and manufacturers, and in June, Waymo, Uber, Ford, and others formed the Partnership for Transportation Innovation and Opportunity (PTIO), which seeks to “foster awareness” of driverless vehicle technologies. Rather, regulators and advocacy groups are standing in the way. And to be fair, they’re not unjustified in doing so.
In March, Uber suspended testing of its autonomous Volvo XC90 fleet after one of its cars struck and killed a pedestrian in Tempe, Arizona. Separately, Tesla’s Autopilot driver-assistance system has been blamed for a number of fender benders, including one earlier this year in which a Tesla Model S collided with a parked Culver City fire truck. (Tesla stopped offering “full self-driving capability” on select new models in early October.)
David Friedman, former acting administrator of the National Highway Traffic Safety Administration (NHTSA) and vice president at Consumer Reports, said recently that Congress should direct the NHTSA to implement privacy protections, minimum performance standards, and accessibility rules for self-driving cars, trucks, SUVs, and crossovers.
And Senator Dianne Feinstein (D-CA) said bills such as the AV START Act threaten to loosen the rules on self-driving cars before researchers have had adequate time to study their impact. The RAND Corporation, for one, estimates autonomous cars will have to rack up 11 billion miles before we’ll have reliable statistics on their safety.
“Until new safety standards are put in place, the interim framework must provide the same level of safety as current standards,” Feinstein and a handful of other senior Democratic Senators wrote in a letter to the Senate Commerce Committee in March. “Self-driving cars should be no more likely to crash than cars currently do, and should provide no less protection to occupants or pedestrians in the event of a crash.”
That’s not to suggest U.S. driverless vehicle policy is at a complete standstill.
In early October, the Department of Transportation, through NHTSA, issued the third iteration of its voluntary guidelines on the development and deployment of driverless car technology: Automated Vehicles 3.0. In it, the agency proposes new safety standards “to accommodate automated vehicle technologies and the possibility of setting exceptions to certain standards … that are relevant only when human drivers are present.”
And in March, President Donald Trump signed into law a $1.3 trillion spending bill that earmarks $100 million for projects that “test the feasibility and safety” of autonomous cars.
But the changes aren’t coming fast enough. And with some analysts predicting as many as 10 million cars with some form of autonomy will be on the road by 2020, that’s dangerous.
Max Tegmark, a professor at the Massachusetts Institute of Technology and cofounder of the Future of Life Institute (FLI), said it best in an interview earlier this year:
“You begin to realize how amazing the opportunities are with AI if you do it right, and how much of a bummer it would be if we screw it up … Technology isn’t bad and technology isn’t good; technology is an amplifier of our ability to do stuff. And the more powerful it is, the more good we can do and the more bad we can do.”