
Self-driving cars - Could they cause more problems than they solve?

James Watkins - Law on the Web

01 May 2015
Cars and Motoring

Self-driving cars, long held to be the province of science fiction, are edging closer and closer towards commercial viability. Google’s self-driving cars have already racked up hundreds of thousands of driving hours in their home state of California.

However, while automated vehicles are technically viable, it is unclear how the law will allow them to be used. Before they can become commonplace in the UK, lawmakers have some tough questions to address.

How automated will they be allowed to be?

Under the Geneva Convention on Road Traffic, from which the UK and many other countries derive many of their rules, any vehicle must have a driver who “shall at all times be able to control their vehicles”. Unless a way can be found around this, all automated cars would most likely be required to have a human sitting behind the wheel while the car is driving, even if they are not actually doing anything.

This would mean that one feature of Google’s self-driving car, a function that would allow users to call the car to come and pick them up, would never be legally viable. This would also require the “driver” to be in a condition to drive, so the dream of having your car drive you home from the pub after an evening of hard drinking may not quite be on the horizon.

Ultimately, decisions will need to be made about what sort of obligations the human “driver” has while their car is driving. If they still have to pay attention to everything that is going on around them, instead of relaxing behind the wheel with a nice book or something, they may wonder if having a car that drives for them is worth the trouble.

Who would be liable for an offence?

Even in a world where unthinking and unfeeling automatons dominate our roads, accidents will still happen, and driving offences could still be committed. If an automated car is involved in a crash, is it fair for the human in the car to be held liable? Or should the manufacturer of the car be the one in trouble?

Google believes that it, rather than the person in the car, should be held responsible if one of its cars breaks the law.

“What we've been saying to the folks in the DMV [Department of Motor Vehicles], even in public session, for unmanned vehicles, we think the ticket should go to the company,” said Ron Medford, safety director for the Google self-driving car programme. “Because the decisions are not being made by the individual.”

Fortunately for the authorities, this has not been an issue yet, as Californian police have yet to charge a self-driving car with a road traffic offence. If manufacturers do start being held responsible for driving offences over here, many of the laws themselves could require some rewriting.

For example, the Road Traffic Act 1988 defines the offence of dangerous driving as follows: “a person who drives a mechanically propelled vehicle dangerously on a road or other public place is guilty of an offence”. Unless the police could find some other way to charge the manufacturers of a self-driving car, this and many other laws would need to be rewritten.

Even if you reword or reinterpret this so that the term “person” includes “computer”, legislation is still rife with language that doesn’t really make sense when applied to the actions of a machine.

For example, one definition of careless driving is “driving without reasonable consideration for other persons”. How do you define “reasonable consideration” when referring to algorithms followed by a computer? As this article notes: “The laws are full of terms like ‘prudent’ and ‘reasonable’ that make sense for humans, but become frustratingly vague once you’re trying to convert them to code.”
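To see the problem in concrete terms, here is a deliberately over-simplified sketch – hypothetical Python with an invented four-second threshold, not anything drawn from real vehicle software – of what “reasonable consideration” might have to become once it is written as code:

```python
# A purely hypothetical illustration of the vagueness problem described above:
# "reasonable consideration" has to be turned into hard numbers somewhere,
# and every number is an arbitrary line that a programmer has drawn.

def is_gap_reasonable(gap_to_oncoming_car_m: float, oncoming_speed_mps: float) -> bool:
    """Decide whether pulling out in front of an oncoming car shows
    'reasonable consideration' for its driver.

    The 4-second threshold below is invented for illustration. A human
    driver never consciously applies one, but a computer needs an explicit
    cut-off, and the law gives no guidance on what it should be.
    """
    if oncoming_speed_mps <= 0:
        return True  # nothing approaching, so no one to inconvenience
    time_to_arrival_s = gap_to_oncoming_car_m / oncoming_speed_mps
    return time_to_arrival_s > 4.0  # is 4.0s "reasonable"? 3.9s? Who decides?


if __name__ == "__main__":
    # 60 m away at 13 m/s (roughly 30 mph): arrives in about 4.6 s, so "reasonable".
    print(is_gap_reasonable(60.0, 13.0))  # True
    # 45 m away at the same speed: about 3.5 s, so "unreasonable" by this cut-off.
    print(is_gap_reasonable(45.0, 13.0))  # False
```

Wherever the line is drawn, the number is an engineering choice rather than anything the law actually specifies – which is exactly the vagueness the quote above complains about.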

There is also the issue of enforcement – after all, you can’t really dish out penalty points to the manufacturer of a car, unless you were to issue them with some kind of “manufacturer’s licence”, making it possible for them to tot up points and eventually have all of their cars disqualified from driving for a year.

It would probably be more practical to just fine them.

Ethical issues

While robot cars can’t be relied on to avoid every accident, they could conceivably reduce the amount of death and destruction that results – after all, a computer does not panic (unless programmed to do so), and can make decisions much more quickly than a human.

However, this could create situations where the car will be unable to avoid a collision, but may be able to choose what it hits. This is a scenario proposed by research scientist Noah J. Goodall – if a vehicle had a choice between hitting a cyclist with a helmet and a cyclist without a helmet, it would logically make sense to avoid the cyclist without a helmet, as they are less protected from harm. The same logic would apply if the car aimed for a larger vehicle with a better safety rating rather than a smaller, less safe one.
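To make that logic concrete, here is a deliberately simplified sketch of such a “least-harm” choice – the target list and harm scores are invented purely for illustration, and none of this reflects how any real manufacturer’s software works:

```python
# A deliberately simplified sketch of the kind of "least-harm" choice
# Goodall describes. The harm estimates are invented for illustration only.

POTENTIAL_TARGETS = [
    {"name": "cyclist with helmet",    "estimated_harm": 7.0},
    {"name": "cyclist without helmet", "estimated_harm": 9.0},
    {"name": "SUV with 5-star rating", "estimated_harm": 2.0},
    {"name": "older small hatchback",  "estimated_harm": 4.0},
]

def choose_collision_target(targets):
    """If a collision is unavoidable, pick the target with the lowest
    estimated harm. Being better protected makes you the 'logical'
    thing to hit under this rule."""
    return min(targets, key=lambda t: t["estimated_harm"])

if __name__ == "__main__":
    target = choose_collision_target(POTENTIAL_TARGETS)
    print(f"Car steers towards: {target['name']}")  # the best-protected option
```

Note that the car ends up steering towards the best-protected road user precisely because they are the best protected.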

However, if robot cars were programmed to make these sorts of decisions, it could have a host of unintended consequences. Would cyclists avoid wearing helmets if it made them more desirable targets for an automated vehicle? Would people stop driving larger, safer vehicles? How would the manufacturers of safer cars feel if they discovered that automated vehicles were being programmed to be more likely to hit their cars?

No one can expect a human to make a complex decision like this perfectly during the split seconds they have before a crash. However, when an automated car is doing this, it isn’t really making a decision at all – it is merely obeying its programming, carrying out a decision its programmers made long in advance. Those programmers have plenty of time to mull over (and live with) these kinds of decisions, and as such, the decisions will come under far more scrutiny.

As Patrick Lin, director of the Ethics & Emerging Sciences Group at California Polytechnic State University, points out (in an article alarmingly titled “THE ROBOT CAR OF TOMORROW MAY JUST BE PROGRAMMED TO HIT YOU”), these questions take on a more troubling dimension if the vehicle is involved in a fatal collision. “Programmers have all the time in the world to get it right. It’s the difference between premeditated murder and involuntary manslaughter.”

Of course, car accidents are rare, and they would presumably become rarer still as automated vehicles take over the roads. However, even if all of these scenarios remain hypothetical, programmers will still have to answer the questions.

If these legal issues have dampened your enthusiasm for self-driving cars, you can at least take solace in the fact that Slovakian firm AeroMobil may have finally cracked the flying car.



