Autonomous Vehicles – A 21st Century Moral Dilemma? – Tom Harrington

 

It’s been just over a year since the first pedestrian was struck and killed by a self-driving Uber car in Tempe, Arizona. Since then, a great deal has been learned about the algorithms that drive autonomous cars. We’ve learned that self-driving cars are probably better than human drivers at maintaining safe speeds and following distances, and that researchers are working on teaching autonomous cars to make moral choices. We’ve also learned about serious problems, such as algorithms that are better at detecting light-skinned pedestrians and are therefore more likely to hit pedestrians with darker skin tones. Research suggests that the artificial intelligence community has not done enough to correct the biases currently embedded in its systems. We know this because racial bias in algorithms is not a new problem, nor one exclusive to autonomous vehicles; it is a very long-standing one. There is a lot of promise in autonomous vehicles, but we have a long way to go to learn the right lessons and make this a safe, accessible and fair technology.

 

The Uber crash wasn’t just a tragedy: the failure to see a pedestrian in low light was a basic, avoidable error for a self-driving car. Autonomous cars should be able to do much more before they are allowed on the open road, even in tests. Just as pharmaceutical companies must put their drugs through a series of trials proving they are effective at treating the symptoms or conditions they are intended to, the massive technology companies should be required to test their systems thoroughly, ethically and morally, before self-driving cars serve, or endanger, the public at large.

 

Introduction

On occasions whilst driving, we’ve all seen the makeshift memorials by the roadside with crosses, flowers and little stuffed teddy bears. Clearly, a tragic road traffic fatality has occurred here, and many drivers and road safety campaigners will dub the scene an ‘accident black spot’ even though it may be a straight stretch of wide open road. But could this tragic loss of life have been prevented? All too often, road users are killed or seriously injured on our roads from various causes, but predominantly from driver error. What if a self-driving car could eliminate driving risk altogether, avoiding accidents entirely: a scientific and technological utopia? Imagine a whole new driving experience: driving through life with greater exhilaration, confidence and connection to the world around you. Cars that park themselves, watch what’s happening around you and step in to keep you out of trouble. Now imagine a near future with cars that can actually learn from one another, and electric vehicles that recharge as they drive along, no strings attached. Soon, your car will be able to take the stress out of driving and leave only the joy. It will pick you up, navigate heavy traffic and find a parking space all on its own. And at the push of a button, it will give you back control of the steering. It will even be able to communicate with other cars and pedestrians. Testing of advanced autonomous driving is happening on public roads today, which means this revolutionary driving experience is within reach. [1] The self-driving car, as one designer describes it, “never gets distracted, never gets drunk and always does the right thing”. While the concept may sound outlandish, the technology is very real, and developers are gearing up to present it to consumers. These so-called “robot cars” build on many features already found in modern cars, such as cruise control, self-parking and emergency braking. Developers of the technology and vehicle manufacturers argue that these cars will significantly reduce traffic accidents. They also claim that drivers can look forward to decreased traffic and fuel emissions.

The reality of “robot cars” excites many, while terrifying others. Regardless of one’s view on the technology, one thing is clear: driverless cars will permanently change the driving world. [2]

 

The American Wonder

Decades before Google started outfitting Lexus SUVs with sensors and self-driving software, the driverless car du jour was an otherwise ordinary Pontiac. In the 1920s and 30s, a driverless car was more commonly known as a ‘Phantom Auto’, and demonstrations of the technology drew thousands of spectators in cities across the US. [3] The idea of a vehicle that doesn’t need a driver is not new; in reality, it started with the invention of the car, and in 1925 the first prototype, called the American Wonder, was presented to the world. Francis Houdina (not to be confused with the great Houdini. Ed.), an American inventor, US army engineer and owner of Houdina Radio Control, invented a radio-controlled car that could start its engine, sound the horn and even change gears without human assistance. He fitted an antenna to the tonneau [4] of the vehicle, which was operated from a second car that followed, carrying a transmitter. The radio signals drove small electric motors that directed every movement of the car. Achen Motors, a car distributor in Milwaukee, used Houdina’s invention under the name Phantom Auto and demonstrated it on the streets of Milwaukee in December 1926. The inventor even drove the car down 5th Avenue in Manhattan without touching the steering wheel. As Houdina’s invention gained acclaim, the electrical engineer’s fame collided with that of the equally uncanny, mystifying magician Harry Houdini. Houdini was none too pleased that Houdina’s name was strikingly similar to his own; Houdina was even receiving some of Harry’s mail. In July 1925, Harry and his secretary, Oscar Teale, visited the offices of Houdina Radio Control and an argument ensued. Houdini damaged some furniture and an electric chandelier, accusing the company of unlawfully using his name. Houdina said that there had never been any intention on his part to capitalize on the name of Harry Houdini. A summons for disorderly conduct was issued against Houdini, but the charges were dropped when George Young, the company’s manager, failed to appear in court. [5]

(Even if the matter had gone to court, Harry’s attempt to prevent Houdina using his own name was bound to fail. Copyright law is actually very restrictive and does not apply to items such as names: a person’s name cannot be protected under copyright law, which protects original works of authorship such as paintings, books, screenplays and musical compositions. Ed.)

 

The ‘Moral Machine’

When a driver slams on the brakes to avoid hitting a pedestrian who is crossing the road illegally, he or she makes a moral decision that shifts risk from the pedestrian to the people in the car. Soon, self-driving cars might have to make such ethical judgements on their own, but settling on a universal moral code for the vehicles could prove a difficult and thorny task, suggests a survey of 2.3 million people from around the world. The largest-ever survey of machine ethics, published in Nature, [6] found that many of the moral principles that guide a driver’s decisions vary by country. For example, in a scenario in which some combination of pedestrians and passengers will die in a collision, people from relatively prosperous countries with strong institutions were less likely to spare a pedestrian who stepped out into traffic illegally. According to Iyad Rahwan, a computer scientist at the Massachusetts Institute of Technology in Cambridge and co-author of the study:

 

“People who think about machine ethics make it sound like you can come up with a perfect set of rules for robots, and what we show here is that there are no universal rules”.

 

The survey, called the Moral Machine, laid out thirteen scenarios in which someone’s death was inevitable. Respondents were asked to choose whom to spare in situations that involved a mix of variables: young or old, rich or poor, more people or fewer. People rarely encounter such stark moral dilemmas, and some critics question whether the scenarios posed in the survey are relevant to the ethical and practical questions surrounding driverless cars. But the study’s authors say that the scenarios stand in for the subtle moral decisions that drivers make every day. They argue that the findings reveal cultural nuances that governments and manufacturers of self-driving cars must take into account if they want the vehicles to gain public acceptance.
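To make the setup concrete, here is a minimal sketch, in Python, of how one such scenario and a respondent’s choice might be encoded. The class names, fields and tallying scheme are invented for illustration; this is not the Moral Machine’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical encoding of one Moral Machine-style dilemma (illustrative only).
# Each Group describes the people who die if the car takes that option.
@dataclass
class Group:
    count: int           # how many people are in the group
    ages: str            # e.g. "young", "old", "mixed"
    role: str            # "pedestrian" or "passenger"
    lawful: bool = True  # were pedestrians crossing legally?

@dataclass
class Scenario:
    stay: Group    # who dies if the car stays on course
    swerve: Group  # who dies if the car swerves

# One of the survey's trade-offs: sacrifice one jaywalking pedestrian
# or several passengers?
dilemma = Scenario(
    stay=Group(count=1, ages="young", role="pedestrian", lawful=False),
    swerve=Group(count=4, ages="mixed", role="passenger"),
)

def record_choice(scenario: Scenario, action: str, tallies: dict) -> None:
    """Tally the role of the group the respondent chose to spare.

    Choosing an action kills that action's group and spares the other,
    so aggregated tallies reveal which attributes respondents protect."""
    spared = scenario.swerve if action == "stay" else scenario.stay
    tallies[spared.role] = tallies.get(spared.role, 0) + spared.count

tallies: dict = {}
record_choice(dilemma, action="stay", tallies=tallies)  # sacrifice the pedestrian
print(tallies)  # {'passenger': 4}
```

Aggregated over millions of such answers, and grouped by the respondents’ countries, tallies of this kind are what let the study compare moral preferences across cultures.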

 

Trolley Test

Google, Tesla and other major companies aim to make driverless cars a reality, which they suggest could reduce accidents caused by human error.

However, fatal accidents that autonomous vehicles have already experienced, such as the deadly collision of a self-driving Uber car with a pedestrian, suggest they will have to navigate not only roads but also the dilemmas posed by accidents with unavoidable deaths. This fatality occurred in Tempe, Arizona, on 18 March 2018, while an Uber self-driving vehicle was being tested. The vehicle struck and killed Elaine Herzberg as she walked her bike across the street. This was the first time a self-driving car killed a pedestrian, and it raises questions about the morals and ethics of developing and testing these cars. It is also reported that Uber self-driving cars are already getting into many scrapes on the streets of Pittsburgh. For example, should a driverless car hit a pregnant woman or swerve into a wall and kill its four passengers? One famous thought experiment that seems perfectly suited to addressing this challenge is the trolley problem, devised by British philosopher Philippa Foot and so often used to evaluate utilitarianism that it has become a cliché. The original scenario had you imagine you were driving a trolley whose brakes had failed, and you had to choose whether to divert the runaway tram onto a track where one victim would die or another where five would. The many variations of this problem can help reveal whom people think should live or die. [7] In the classic version, a trolley is hurtling down the track towards five men stuck in its path. You can pull a lever to divert it onto another set of tracks, but if you do, you’ll kill a single man trapped in the trolley’s new path. Your dilemma: do you kill the one to save five? A utilitarian [8] approach states that whichever action allows the greatest number of people to live is the moral one. This perspective underlies the commonly referenced “fat man” case, which asks whether you would push a fat man off a bridge to stop a trolley in its path and prevent it from running over five people. (The scenario involves a “fat man” to eliminate the possibility of self-sacrifice: your weight wouldn’t stop the trolley, but his would.) The utilitarian answer is that the moral decision is to sacrifice the heaviest man; you’d still be killing one to save five. [9]

 

Kill One or Five?

Autonomous vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils, such as crashing into pedestrians or sacrificing themselves and their passengers to save those pedestrians. Defining the algorithms [10] that will help AVs make these moral decisions is a formidable technological challenge. Participants in six Amazon Mechanical Turk studies [11] approved of utilitarian AVs, i.e. ones that sacrifice their passengers for the greater good, and would like others to buy them, but they themselves would prefer to ride in AVs that protect their passengers at all costs. The study participants also disapproved of enforcing utilitarian regulations for AVs. Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of safer technology. If your car were in a lethal collision, would you prefer it to kill one innocent bystander or five? By posing variations of this problem online to volunteers nearly 40 million times, scientists now have insights into how moral preferences regarding such dilemmas vary across the globe, and their new findings may help guide how driverless cars act in the future. [12] Though the trolley problem seems far-fetched, AVs will be unable to avoid comparable scenarios. If a car is in a situation where any action will put either the car’s passengers or someone else in danger, say there’s a truck crash ahead and the only options are to swerve into a motorbike or off a cliff, then how should the car be programmed to respond? Rather than pontificating on this, a group of philosophers has taken a more practical approach and is building algorithms to solve the problem. Nicholas Evans, a philosophy professor at the University of Massachusetts Lowell, is working alongside two other philosophers and an engineer to write algorithms based on various ethical theories. Their work will allow them to create various trolley problem scenarios and show how an AV would respond according to the ethical or moral theory it follows. To do this, Evans and his team are turning ethical theories into a language that can be read by computers. Utilitarian philosophers, for example, believe all lives have equal moral weight, so an algorithm based on this theory would assign the same value to the car’s passengers as to pedestrians. There are others who believe that you have a perfect duty to protect yourself from harm. According to Evans: [13]

 

“We might think that the driver has some extra moral value and so, in some cases, the car is allowed to protect the driver even if it costs some people their lives or puts other people at risk”.

 

As long as the car isn’t programmed to intentionally harm others, some ethicists would consider it acceptable for the vehicle to swerve defensively to avoid a crash, even if this puts a pedestrian’s life at risk. Most people place a much higher value on their own lives, and those of their loved ones, than car manufacturers and juries do. At least one economist has proposed a “pay-to-play” model for decision-making by autonomous cars, with people who buy more expensive cars getting more self-protection than those who buy bare-bones self-driving cars. That offends basic principles of fairness, since most people won’t be able to afford the cars with better protection. But it speaks to a basic belief we hold: that people in their own cars have a right to be saved, and maybe even to be saved first.
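What might “turning an ethical theory into a language that can be read by computers” look like in practice? A minimal sketch follows, in Python; the action set, roles and weight values are invented for illustration and are not Evans’s team’s actual code. The point is that the contested question, whose life counts for how much, becomes an explicit parameter.

```python
# Hypothetical sketch of an unavoidable-crash chooser parameterised by an
# ethical theory (illustrative only; not the researchers' actual algorithm).
# Each theory assigns a moral weight to each role; the car picks the
# action whose endangered group carries the least total weight.

# Each possible action maps to the people it endangers.
ACTIONS = {
    "stay_on_course": ["pedestrian", "pedestrian", "pedestrian"],
    "swerve": ["driver", "passenger"],
}

# Utilitarianism: every life carries equal moral weight.
UTILITARIAN = {"driver": 1.0, "passenger": 1.0, "pedestrian": 1.0}

# A duty-of-self-protection view: the occupants carry extra weight,
# so the car may protect them even at some risk to others.
SELF_PROTECTIVE = {"driver": 2.0, "passenger": 1.5, "pedestrian": 1.0}

def choose_action(actions: dict, weights: dict) -> str:
    """Return the action that minimises the weighted harm it causes."""
    def harm(people: list) -> float:
        return sum(weights[p] for p in people)
    return min(actions, key=lambda a: harm(actions[a]))

print(choose_action(ACTIONS, UTILITARIAN))      # "swerve": 2.0 < 3.0
print(choose_action(ACTIONS, SELF_PROTECTIVE))  # "stay_on_course": 3.0 < 3.5
```

Under this framing, switching moral theories is just swapping in a different weight table; the hard part, as the Moral Machine survey suggests, is that there is no universally agreed table to swap in.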

 

Pros & Cons (Pros)

In a world where cars drive guided by sensors and automated systems, there will be no traffic jams, no accidents, and no one will ever suffer from road rage in traffic. Moreover, because every car will be able to plan ahead, there will be no need for speed limits, traffic signs or any other system designed for human drivers. Your commute will take less time, and passengers will get to enjoy the ride. Cars will also find their own parking places and come and pick you up when summoned. Automated cars also represent a step in the right direction when it comes to protecting the environment and cleaning the air in crowded cities. The biggest safety advantage of a driverless car is that a robot is not a human: it’s programmed to obey all the rules of the road, won’t speed and can’t be distracted by a text message flickering on a mobile phone. And, hypothetically at least, AVs can also detect what humans can’t, especially at night in low-light conditions, and react more quickly to avoid a collision. And right now, for too many senior citizens and people with disabilities, driving isn’t an option; AVs could change their lives. [14] If you believe the hype, you’ll be expecting to see them on many roads soon. That doesn’t mean the technology isn’t there. It is. And the vehicles have an impressive safety record compared to the carnage wrought by human-driven vehicles. Their excellent performance is largely enabled by machine-learning algorithms that have been trained on a great deal of data about roads, junctions, street furniture and so on.

So, what will happen with driverless cars? The answer is that they will eventually arrive, initially licensed for carefully circumscribed uses, possibly on designated urban streets and separated from humans driving old-style cars. In that sense, the cars will be more like the driverless trams that take you to the departure gates in places like Stansted Airport or Singapore’s Jewel Changi Airport.

 

Cons

Before we reach a rate of zero traffic accidents, people will still die in car crashes. The current technology still has difficulty understanding the environment, and the technology in most cars is big, bulky and quite expensive. Also, in some crashes, the software in self-driving cars encountered scenes or objects that it didn’t recognise. Given that the real world is full of things that car sensors and software have never seen before, accidents like the Uber crash, in which a pedestrian died, will inevitably continue. [15] Then there is the legal conundrum: who is responsible when a driverless car gets into a crash and causes damage or loss of life? So far, all such accidents have been settled out of court, so legal systems all over the world, and especially in the USA, have yet to be tested by such a case. The outcome of such a test case in a court of law will inevitably be contentious. There is also the problem of a lack of suitable road infrastructure and of charging points for electric vehicles. Heavy rain can affect roof-mounted sensors, and snow can cover cameras, depriving the vehicle of vital sensor input.

 

Further Concerns

With autonomous vehicles, there could be concerns about advertising: could cars be programmed to drive past certain shops? Liability: who is responsible if the car is programmed to put someone at risk? Social issues: drinking could increase once drink-driving isn’t a concern. And privacy: an autonomous vehicle is basically ‘Big Brother’ on wheels. If self-driving cars increase road safety and fewer people die on the highway, will this lead to fewer organ transplants? There are also important privacy questions about the data that a self-driving car’s computer collects and stores, including GPS data and visual images from the car’s cameras. It appears to be too soon to regulate this technology; AV systems should develop further so that more informed decisions can be made about how to regulate them. In recent times, companies including Google, Tesla and Uber have made significant progress in the development of self-driving cars. Currently, much of the testing of these vehicles has taken place in states like California and Arizona, where weather conditions are more favourable.

It appears inevitable that companies will soon wish to conduct tests of automated vehicles on many of Europe’s roads, as well. Self-driving cars will probably be expensive when they first enter the market, so only well-off people will own one. Also, poorer and less educated people already die in car crashes more often than rich and educated people.

 

Conclusion

To date, it appears the “jury is still out” on whether autonomous cars will work as expected. It is difficult to overstate the automobile’s impact on the modern world. Yet each year, harrowing statistics remind drivers that this incredible advancement has not come without risks. Globally, 1.2 million people die in automobile accidents every year. These accidents and their devastating effects are so common that eliminating them seems hardly possible; yet proponents of a new technology contend that this possibility is quickly becoming a reality. This technology, formally known as “autonomous driving technology”, allows a vehicle to operate completely on its own without any manual assistance from a human. However, there’s a big difference between trying out a new feature on an iPhone and playing with technology in a car that is travelling at 100 km/h on a public road. The only conclusion we can draw is that self-driving cars still have plenty of challenges to face before they will be completely safe to use on all public roads. Still, we look forward to a future in which all the horrible traffic problems we face today will be nothing more than stories to entertain our grandchildren.

For many people, the big question around self-driving cars is: when will the technology be ready? In other words, when will AVs be able to operate safely on their own? From ushering in an era of decreased car ownership to narrowing streets and eliminating parking lots, automated cars promise to change our towns and cities dramatically. One argument is that cars that can drive themselves will ultimately mean more cars on our streets, causing increased congestion and more traffic to be managed. We are led to believe that this technology will surely reign supreme and that it is here to stay. It will also no longer be necessary to put lives at risk each time you unwrap your breakfast roll or open a can of soft drink while being driven by a self-driving car. There’s a lot of promise in autonomous vehicles, but we have a long way to go to learn the right lessons and make this a safe and accessible technology. And will self-driving cars ever be able to make moral choices, as in the trolley problem: to kill one innocent bystander or five, how will the vehicle be programmed to respond? Driverless cars are like a scientific experiment where we don’t know all the answers yet. And will technological limitations remain, along with legal hurdles?

Will there be more futuristic, unpredictable and sweeping changes to our way of living as the wheel slips completely away from our hands? As for us “monkeys”, we just have to wait and see.

 

 Tom Harrington LL B F Inst. MTD (April 2019)

 

[1] Charles C. Choi. The Moral Dilemmas of Self-Driving Cars. Inside Science, 25 October 2018. www.insidescience.org

[2] Danielle Lenth. Paving the Way for Autonomous Vehicles. McGeorge Law Review, Vol. 44, Issue 3, Article 29 (2013), p. 570.

[3] Adrienne LaFrance. Your Grandmother’s Driverless Car. The Atlantic, 29 June 2016.

[4] The tonneau is the open rear passenger or cargo area of a vehicle such as a car, truck or pick-up; a tonneau cover protects whatever is inside it, e.g. passengers or cargo.

[5] Revolvy – Houdina Radio Control. https://revolvy.com

[6] Amy Maxmen. Self-driving car dilemmas reveal that moral choices are not universal. Nature (International Journal of Science). www.nature.com

[7] Amy Maxmen. Self-driving car dilemmas reveal that moral choices are not universal. Nature (International Journal of Science). www.nature.com

[8] Utilitarian: relating to utilitarianism, the ethical doctrine that the morally right action is the one that produces the greatest good for the greatest number.

[9] Olivia Goldhill. Test How Moral (or Immoral) You Are With This Philosophy Quiz. 3 February 2018. https://qz.com

[10] Algorithm: a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.

[11] Winter Mason & Siddharth Suri. Conducting Behavioral Research on Amazon’s Mechanical Turk. 30 June 2011. https://link.springer.com

[12] Charles C. Choi. The Moral Dilemmas of Self-Driving Cars. Inside Science, 25 October 2018. www.insidescience.org

[13] Katharine Webster. Philosophy Prof. Wins NSF Grant on Ethics of Self-Driving Cars. University of Massachusetts Lowell (US), 8 August 2018. https://www.uml.edu

[14] Are self-driving cars safe for our cities? https://curbed.com

[15] John Naughton. The crucial flaw of self-driving cars. The Guardian, 15 July 2018. http://theguardian.com
