Murphy’s Law says that “anything that can go wrong will go wrong,” and it’s finally caught up with autonomous vehicles. On Sunday night, March 18, an autonomous car operated by Uber, with an emergency backup driver behind the wheel, struck and killed a woman on a street in Tempe, Ariz. Hours after the crash, Uber announced it was suspending all tests of its autonomous vehicles in Pittsburgh, Phoenix, San Francisco, and Toronto.
This was believed to be the first pedestrian death associated with self-driving technology. The accident reminds us that self-driving technology is still in the experimental stage, and people are still trying to figure out how to regulate it. Uber had a previous accident with an autonomous vehicle, but it was determined the other driver was at fault.
Uber, Waymo (formerly Google’s self-driving car project), and other tech companies and automakers have expanded testing of their self-driving vehicles in cities around the country. Waymo says its cars will be safer than regular cars because they take easily distracted humans out of the driving equation. But although the technology is about a decade old, it is only now beginning to encounter the unpredictable real-world situations that human drivers face.
Testing of autonomous cars has so far involved only a small number of vehicles. If these vehicles were mass-produced like other cars are today, how would they be tested? Automakers certainly couldn’t send each vehicle out for a road test; in mass production, testing would have to take no more than a couple of minutes per vehicle. Complicating the situation is the complexity of these cars, which depend on special sensors and computer control to work properly. For example, how could you test that a vehicle will avoid hitting a pedestrian at night, as in the accident in Tempe?
The federal government is also trying to get involved. A Senate bill would free autonomous-car makers from some existing safety standards and preempt states from creating their own vehicle safety laws. Similar legislation has already passed the House. The Senate version has passed a committee vote but hasn’t reached a full floor vote.
“This tragic incident makes clear that autonomous vehicle technology has a long way to go before it is truly safe for the passengers, pedestrians, and drivers who share America’s roads,” said Senator Richard Blumenthal (D) of Connecticut.
What Exactly Happened?
The Uber car, a Volvo XC90 sport utility vehicle outfitted with the company’s sensing system, was in autonomous mode with a human safety driver at the wheel but carrying no passengers when it struck Elaine Herzberg, a 49-year-old woman, on Sunday around 10 p.m.
A Tempe police spokesman said that a preliminary investigation showed the vehicle was moving at around 40 miles per hour when it struck Ms. Herzberg, who was walking with her bicycle on the street. He said it did not appear that the car had slowed down before impact, and that the Uber safety driver had shown no signs of impairment. The weather was clear and dry.
Tempe, with its dry weather and wide roads, was considered an ideal place to test autonomous vehicles. In 2015, Arizona officials declared the state a regulation-free zone in order to attract testing operations from companies like Uber, Waymo, and Lyft.
The state initially agreed to testing of autonomous vehicles that had safety drivers at the wheel, ready to take over in an emergency. That mandate was later relaxed to allow testing of unmanned self-driving cars, on the argument that a “business-friendly and low regulatory environment” had helped the state’s economy.
In California, where testing without a backup driver was just weeks away from being permitted, a spokeswoman for the state Department of Motor Vehicles said officials were gathering more information about the Tempe crash.
The National Transportation Safety Board said it was sending a team of four investigators to examine “the vehicle’s interaction with the environment, other vehicles and vulnerable road users such as pedestrians and bicyclists.”
Most testing of driverless cars occurs with a safety driver in the front seat who is available to take over if something goes wrong. However, it’s not easy to take control of a vehicle going 40 mph.
Waymo, which has been testing autonomous vehicles on public roads since 2009, when the effort was Google’s self-driving car project, has said its cars have driven more than 5 million miles, while Uber’s cars have covered 3 million miles. Between December 2016 and November 2017, Waymo’s self-driving cars drove about 350,000 miles, and human drivers retook the wheel 63 times; these handoffs are known as “disengagements.” Uber hasn’t been testing its self-driving cars in California long enough to be required to release its disengagement numbers.
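Those figures imply a rough rate of human takeovers per mile driven. As a back-of-the-envelope sketch (not an official Waymo metric, just arithmetic on the numbers reported above):

```python
# Rough disengagement rate from the reported figures:
# ~350,000 autonomous miles, 63 human takeovers (Dec. 2016 - Nov. 2017).
miles_driven = 350_000
disengagements = 63

miles_per_disengagement = miles_driven / disengagements
print(f"~{miles_per_disengagement:,.0f} miles between disengagements")
```

That works out to roughly one human intervention every 5,500 miles or so, which is the kind of number regulators look at when comparing testing programs.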
Researchers working on autonomous technology have struggled with how to teach the systems to adjust for unpredictable human driving or behavior. Still, most researchers believe self-driving cars will ultimately be safer than their human counterparts. Unfortunately, they can’t suggest a date when this will be the norm.
In another accident, in May 2016, the driver of a Tesla using Autopilot, the company’s driver-assistance feature, died on a state highway in Florida when his car crashed into a tractor-trailer that was crossing the road. Federal regulators later ruled that no defect in the system caused the accident. And the vehicle was not completely autonomous.
An October 2015 article from the American Bar Association was entitled “Are we there yet? The legal aspects of driverless cars.” The article is just as relevant today as it was then. It described a webinar, “Driverless Cars in the Fast Lane: Legality, Safety, and Liability on the Road Ahead,” sponsored by the ABA Section of Science & Technology Law. Stephen Wu, counsel at the Silicon Valley Law Group and a founding member of SciTech’s Artificial Intelligence and Robotics Committee, said, “Right now we stand on the precipice of a change in transportation technology that we haven’t seen since the transition from the horse and buggy to the automobile.” Because of this, legal issues will arise from the transition from the vehicles we drive today to future vehicle technologies.
Thomas Leu, corporate counsel at Google Inc., said the company is investing its time and money in the technology because of safety concerns.
“Driving is actually a very dangerous activity,” said Leu, noting that 1.2 million people are killed in traffic crashes every year around the world, 33,000 of them in the U.S. alone. He said traffic crashes are the leading cause of death for people between the ages of 5 and 34, and that human error causes over 90% of accidents.
“Self-driving cars—they don’t fall asleep, they don’t get drunk, they don’t get distracted by text messages or phone calls,” added Leu. “So we think that developing this technology really gives us a chance to dramatically reduce car incidents that are caused by human error.”
Beyond safety, other advantages of the technology include increased mobility for the elderly, the disabled, and others who can’t currently drive. Leu also pointed to benefits such as reduced fuel consumption, lower greenhouse gas emissions, better land use in cities through the elimination of parking garages, convenience, and saved time.
Although driverless cars—also referred to as autonomous vehicles—are being designed to save lives with many safety features, there are also legal aspects to consider.
J. Christian Gerdes, associate professor of mechanical engineering at Stanford University, said that automated vehicles require actuators (steering, braking, throttle, gear shifting), sensors (radar, camera, ultrasonic, LiDAR), and logic (algorithms that turn sensor input into actions).
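Gerdes’s three-part breakdown maps onto the classic sense-then-act control loop. The sketch below is purely illustrative; the class and function names are hypothetical and do not correspond to any real vehicle API:

```python
from dataclasses import dataclass, field

# Hypothetical types illustrating Gerdes's breakdown: sensors feed
# logic, and logic commands actuators. Not a real vehicle interface.

@dataclass
class SensorFrame:
    radar: list = field(default_factory=list)   # distances to objects, meters
    camera: object = None                       # image frame (placeholder)
    lidar: list = field(default_factory=list)   # 3-D point cloud (placeholder)

@dataclass
class ActuatorCommand:
    steering: float   # wheel angle, radians
    braking: float    # 0.0 (none) to 1.0 (full)
    throttle: float   # 0.0 to 1.0

def logic(frame: SensorFrame) -> ActuatorCommand:
    """Algorithms that turn sensor input into actuator commands."""
    # A deliberately trivial policy: brake hard if radar reports
    # anything closer than 10 m, otherwise maintain gentle throttle.
    nearest = min(frame.radar, default=float("inf"))
    if nearest < 10.0:
        return ActuatorCommand(steering=0.0, braking=1.0, throttle=0.0)
    return ActuatorCommand(steering=0.0, braking=0.0, throttle=0.3)
```

The hard part, as the article goes on to explain, is not this plumbing but the perception and decision-making inside `logic`: real systems must interpret noisy sensor data and handle situations no simple rule anticipates.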
The Persistent Perception Problem
Cars today already have the actuators needed for automation, and a lot of progress has been made on sensors. But despite tremendous development over the last few years, Gerdes said, these vehicles still “see the world differently than our eyes and brains.”
Gerdes, a pioneer in the field of autonomous driving, said, “There are still open issues here.”
A vehicle’s basic logic levels and the ability to make real-world decisions are extremely important, he said. For example, if a child runs after a ball in the street, a car would have to make some “real-world” decisions about what to do.
“So from the very simple examples of lane following to very real-world decision-making, this logic example is extremely important,” said Gerdes.
According to the webinar panelists, driverless cars will inevitably crash and the issue of product liability will be a concern. Currently, some state and federal laws aren’t written to accommodate driverless vehicles. The use of such vehicles will span various areas of the law, including torts, insurance, privacy, data security, transportation, and communications administrative law.
Bryant Walker Smith, assistant professor of law at the University of South Carolina, said New York is the only state that requires one hand on the wheel at all times.
“States as a whole have laws that could complicate automated driving, but few that would actually outright prohibit it,” said Smith.
Gerdes added that over the next few years, the market will see technology advances in conventional vehicles and in automated low-speed transportation. In Switzerland, Singapore, and the United Kingdom, fully electric driverless shuttles are already in use. Audi recently announced its Traffic Jam Assistant, which would pilot the car through traffic jams “while you turn your attention to other things,” he noted.
Ultimately, what will likely determine the speed with which we see driverless cars in the mass market will be safety issues, noted Gerdes.
“I think the limiting factor is really going to be safety,” said Gerdes. “I’m not aware of anyone who isn’t dedicated to making the system safe, but the question, really, is how safe?”
According to Gerdes, automated vehicles can avoid some of the “really bad behaviors” seen on the road today, such as texting distraction or being impaired while driving. “So, in some ways it’s very easy to design a car that can overcome the bad behaviors of humans, but it’s not as easy to design a car that can handle the behaviors of the best humans,” added Gerdes.
Engineers spend a lot of time with race-car drivers because they are good at controlling the vehicle “to the limits of its capabilities,” he said. “They do it to be fast, but we do it to be safe.”
He added, “It’s pretty tricky to drive the vehicle up to its limits and be as safe as the world’s best drivers.”
Beyond safety, legal issues also come into play with autonomous vehicles, for example in situations where a human driver would cross a double yellow line to pass a car. Would the autonomous vehicle break the law to do this, or wait all day behind a vehicle blocking the road? These are questions that engineers ponder. Other questions center on the car’s responsibility to the law and to other people in the flow of traffic.
Gerdes said the desires for safety and legality sometimes conflict, which is a continuing issue for vehicle designers.
“I think that the decisions made in the legal field may in fact frame the problem that engineers have to solve,” said Gerdes. “Are we trying to solve ethical considerations because of a conflict between human desires and the law, or in fact will these issues be solved in the law and make the engineering much simpler?”