It is estimated that over 90% of vehicular accidents are caused by human error and inattention (Eugensson, Brännström, Frasher, Rothoff, Solyom & Robertsson, 2013; Goodall, 2014b), and with the gathering momentum of autonomous vehicle (AV) technology, we are on the cusp of eliminating a large number of the fatalities associated with personal transport. While advances in machine vision and learning are propelling the industry forward, the field of machine ethics still lags behind (Powers, 2011). With each technological advance, however, we get closer to an inevitability: our vehicles will soon be making ethical decisions on our behalf.

This paper discusses whether autonomous vehicles should always swerve around children, even if that means hitting other people. To understand the complexities behind a seemingly simple question, we must look more holistically at the state of decision-making technologies and borrow ‘value of life’ quantification metrics from the healthcare and insurance fields. First, though, it is important to look more generally at how humans make ethical and moral decisions, using abstract thought experiments and modelling.

The ‘Trolley Problem’ & Ethical Models

The trolley problem, first introduced by Philippa Foot in 1967, is a remarkably powerful thought experiment for understanding moral dilemmas: a runaway train trolley is careening down a track toward a group of five people stuck on the track. You are standing by a switch and can flip it to divert the trolley onto a parallel track, on which a single person is also stuck. Would you choose to kill the one to save the five?

Now imagine being on a footbridge watching the trolley speed toward the group stuck on the tracks. Next to you is a large man, whose weight would be sufficient to stop the trolley. Would you push the fat man onto the track to save the five?

The majority of participants in the thought exercise deem the flipping of the switch permissible but strongly disapprove of pushing the fat man. Human moral judgement seems to be sensitive to harm caused as a means versus harm caused as a side-effect (Shallow, Iliev & Medin, 2011, p.593).

Understanding how morals (an individual’s own set of principles) are balanced against societal ethics (external rules of conduct) allows us to build models to explore decision-making behaviours.

Ethical Models

Ethical models can be broadly placed within two categories: normative and descriptive. Normative ethics describes what individuals ought to do within a given situation, rather than what they would prefer to do. Descriptive (or experimental) ethics, on the other hand, looks at what a particular group believes.

Normative ethical models include utilitarianism and deontology. Utilitarianism is strictly consequentialist: an action is good if it achieves the maximum utility in a given situation, where utility is considered globally and all actors are regarded as equivalent, regardless of context. Deontology focuses on the ‘goodness’ of the actions themselves rather than the resultant outcomes – almost a direct opposite to consequentialism – and is well summarised by Immanuel Kant’s Categorical Imperative: act only according to that maxim whereby you can, at the same time, will that it should become a universal law.

In context, a utilitarian would see the flipping of the switch and the pushing of the fat man as morally equivalent – and even as the “right” answer – since sacrificing the one for the five provides the greatest net cumulative benefit regardless of the manner in which that outcome is achieved. A deontological view, however, would see the flipping of the switch as permissible but not the pushing of the fat man, as it judges the actions rather than the outcome.

A descriptive ethical approach would be represented as a distribution rather than a set of rules or duties to be followed. Johansson-Stenman and Martinsson (2008) built an ethical preferences model to describe ‘value of life’ sentiment within and toward different demographics. For example, they found that most people value the lives of children over those of adults, and particularly over those of the elderly. So while descriptive ethical models can be useful in understanding what individuals believe, they can also produce results where a decision would be socially acceptable but morally wrong (Goodall, 2014a).[1]

Framing and Choice Context Effects

Beyond these ethical models, formally analysing the trolley problem also yields several observations that are useful in conjunction with them. The first is our susceptibility to positive framing: focusing on lives saved due to intervention, rather than lives lost, leads to greater approval of a given action (Shallow et al., 2011). This is important because it highlights just how fragile many of our moral and ethical decisions are – there can be no difference in the action or outcome of an event, yet the language used to describe it will alter our moral judgement.

Shallow, Iliev and Medin (2011) also discuss the complexities that arise when moving away from a binary moral decision, known as choice context effects. These include the “similarity effect”, where introducing a third, non-dominating option close to one alternative can increase the likelihood that its competitor is chosen. Closely related is the “compromise effect”, where the introduction of a third choice makes an original choice appear to be a compromise, thereby increasing its approval.

While both of these make sense intuitively, they can create a counter-intuitive conclusion. Let’s expand our trolley dilemma into a trilemma: the trolley is careening down the track toward five people, with an option to flip a switch to divert it onto a parallel track with two people, or a third option to push the fat man off the bridge to stop the trolley short. The introduction of this third option actually makes the flipping of the switch more desirable (Shallow et al., 2011, p.595).[2]

The Problem with Me and other Real-world Complexities

Assessing a trolley problem with utilitarian impartiality is easy while the respondent remains an independent observer. In the real world, the respondent will invariably be part of the situation – either as a passenger within the AV or as a pedestrian. Unsurprisingly, studies that look at how human decision-making changes in such circumstances show a significant level of self-interest toward ourselves and our families (Bonnefon, Shariff & Rahwan, 2015, p.6).

An AV you are travelling in with your family is crossing a bridge when an oncoming school bus suddenly loses control and veers into the lane ahead of you. What should the vehicle do? Three possibilities might include:

  A. Drive off the bridge, almost certainly killing its passengers but saving the school bus.
  B. Stay in lane, risking a heavy head-on collision with a high (but lower than A) chance of hurting the AV’s passengers, while also injuring the passengers within the bus.
  C. Attempt a high-risk manoeuvre, rapidly accelerating to avoid the bus, thereby breaking the law (exceeding the speed limit) and potentially injuring the passengers of the car in front, who would otherwise have been unaffected by the incident.

A utilitarian view would be simple: option A provides the maximum cumulative net benefit.[3] However, this entirely ignores the context of the passengers in the autonomous vehicle – deontologically, both B and C may be preferable options, as they attempt to minimise risk to the passengers of both the AV and the bus.

Note, however, how vague the language is within B and C; such fuzzy concepts of perceived risk and injury blur an already murky model within which our AVs must operate. Furthermore, C would require the AV to knowingly break the law by speeding. While this is undoubtedly a socially acceptable time to break the law, further scenarios would need to be defined to allow an AV to knowingly break the law to achieve a greater good.

If a utilitarian model is more likely to be used in early AV decision-making, how comfortable would consumers be with a vehicle programmed to self-sacrifice? A recent study by Bonnefon, Shariff and Rahwan (2015) ran several surveys, each with between 200 and 400 respondents, on attitudes toward a multitude of scenarios that varied according to the number of pedestrians saved by the action of an AV and whether the vehicle’s passenger was an anonymous character or the respondents themselves. Surprisingly, respondents approved of a utilitarian sacrifice of the passenger even when it meant their own self-sacrifice, and were even prepared to accept legal enforcement provided it applied only to AVs rather than to human drivers – implying they trusted machines to make impartial, utilitarian decisions more than humans. However, this all came with a large caveat: respondents would much rather that others drove such AVs than drive one themselves (p.8).

This is a classic social dilemma: while most people agree with what should be done for the greater good, it is in everyone’s self-interest not to do it themselves – a real-world parallel of the tension between the abstract normative and descriptive ethical systems. So could we use descriptive ethics modelling to inform a more normative, deontological basis upon which AVs could act?

The Value of Life

One aim of such a descriptive ethical model would be to derive a ‘value of life’ that AVs could use when making their decisions. While this may seem unsavoury to many, it is a well-researched field within the healthcare and insurance professions. Quality-adjusted life-years (QALYs) are used throughout healthcare to understand the impact of medical intervention: one QALY equates to one year of perfect health, so a cost-effectiveness calculation can be performed over the course of a treatment (Johansson-Stenman and Martinsson, 2008, p.740).[4]
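
To make the arithmetic concrete, the short Python sketch below computes a cost-per-QALY figure and compares it against the NICE threshold mentioned in footnote [4]. The treatment cost and quality weighting are invented purely for illustration.

    # Minimal sketch: cost-effectiveness expressed as cost per QALY gained.
    # The treatment figures below are illustrative assumptions, not real data.
    def cost_per_qaly(total_cost_gbp, qalys_gained):
        """Return the cost of each quality-adjusted life-year gained."""
        return total_cost_gbp / qalys_gained

    # A hypothetical treatment costing £45,000 that adds 3 years at 0.8 quality.
    ratio = cost_per_qaly(45_000, 3 * 0.8)
    print(f"£{ratio:,.0f} per QALY")  # £18,750 per QALY

    # NICE's often-quoted threshold (see footnote [4]) is £20,000-30,000 per QALY.
    print("cost-effective" if ratio <= 20_000 else "needs further justification")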

Due to the inevitable health issues that arise in old age, QALY measurements naturally favour the young, who will live more, and healthier, years. Survey respondents suggested that children be given a higher priority, but also that those responsible for their own ill health (such as smokers and drug abusers) be given a lower priority (Johansson-Stenman and Martinsson, 2008, p.741).

Johansson-Stenman and Martinsson (2008) built a theoretical model based upon consequentialist ethical preferences to calculate the individual social marginal rate of substitution (SMRS), allowing them to express the relative value of saving the life of a person in group a compared with that of a person in group b.

They surveyed nearly 1,500 people across Sweden on a variety of scenarios, allowing them to map the SMRS for pedestrians and car drivers across a wide range of ages. Across all demographics, the SMRS between 10-year-old pedestrians and 70-year-old car drivers was 4.646; that is, respondents equated saving 4.646 70-year-olds with saving one 10-year-old. Unsurprisingly, this number dropped to 3.470 for respondents aged 57 or over with no children and rose to 5.909 for respondents aged under 57 with children – in other words, a degree of self-interest or self-serving bias was present within the findings (Johansson-Stenman and Martinsson, 2008, Table 2).
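
To illustrate how such a ratio might be applied – and this is a hypothetical application, not something proposed in the Johansson-Stenman and Martinsson paper – the sketch below treats the reported SMRS as a relative weight when comparing two outcomes of an unavoidable collision:

    # Hypothetical use of the reported SMRS values as relative 'value of life'
    # weights: 1.0 for a 70-year-old driver, 4.646 for a 10-year-old pedestrian
    # (Johansson-Stenman and Martinsson, 2008, Table 2).
    SMRS_WEIGHTS = {
        "pedestrian_age_10": 4.646,
        "driver_age_70": 1.0,
    }

    def weighted_harm(casualties):
        """Sum the relative weights of everyone harmed in a given outcome."""
        return sum(SMRS_WEIGHTS[group] for group in casualties)

    # Outcome A harms one child pedestrian; outcome B harms four elderly drivers.
    outcome_a = ["pedestrian_age_10"]
    outcome_b = ["driver_age_70"] * 4

    # With these weights, harming the one 10-year-old (4.646) is still judged
    # worse than harming four 70-year-olds (4.0); the ratio only tips at ~4.65.
    print(weighted_harm(outcome_a), weighted_harm(outcome_b))  # 4.646 4.0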

However, they also found that there was an intrinsic value to a life – it was not just life-years that mattered but also the number of lives saved. For example, the data showed that one saved life corresponded to between 10 and 20 life-years; put another way, saving two individuals with 30 years left is roughly equivalent to saving three individuals with 15 years left[5] (Johansson-Stenman and Martinsson, 2008, p.747).
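
Footnote [5] spells out the arithmetic; a compact way to express the same idea is a group value of the form n × (remaining life-years + intrinsic value). The sketch below assumes an intrinsic value of 15 life-years, the rough midpoint of the reported 10–20 range, purely for illustration:

    # Sketch of a 'lives plus life-years' value function, assuming an intrinsic
    # value of 15 life-years per life saved (midpoint of the reported 10-20
    # range; see footnote [5]).
    INTRINSIC_LIFE_YEARS = 15

    def group_value(people_saved, years_left_each):
        """Each saved life counts for its remaining life-years plus a fixed
        intrinsic value, so both lives and life-years matter."""
        return people_saved * (years_left_each + INTRINSIC_LIFE_YEARS)

    print(group_value(2, 30))  # 90: two people with 30 years left
    print(group_value(3, 15))  # 90: three people with 15 years left -- equivalent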

Importantly, the Johansson-Stenman and Martinsson model looked at decisions and life-values that were independent of the respondents themselves. The work by Bonnefon et al. factored in this variable, as well as whether it was a human driver or an AV making the decision. They discovered that while there was a definite bias toward self-preservation, respondents approved of AVs making a self-sacrifice decision to the same extent as a human driver, and in all their studies only the number of pedestrians saved had a statistically significant effect on the outcome[6] (Bonnefon et al., 2015, p.6).

Although this represents only two studies, a mix of experimental ethical models could be used to justify a normative set of rules that values both the number of lives and life-years when evaluating an “unavoidable harm” collision (UHC), while maintaining a degree of socially acceptable utilitarian self-sacrifice. This could strike the difficult balance that AV manufacturers will need to achieve between behaving consistently, avoiding public outrage and not discouraging buyers.

However, this premise is effectively a “best of both worlds” situation, in which a decision is made with the impartiality of a rules-based machine and the observational, intuited reasoning of a human. While the human mind struggles to make careful, considered decisions under pressure (such as during a UHC), would we expect AVs to factor in ever more variables when making equivalent decisions? The number and ages of the passengers in both vehicles, the relative speed of collision, the types of vehicles, cyclists or pedestrians involved and their relative safety features, even subtleties such as the clothes a pedestrian is wearing[7], may all factor into what would be a deontologically “right” answer.

Has our technology matured enough to allow AVs to manage this level of reasoning?

Implementing Ethics in Autonomous Vehicles

A huge number of different technologies need to come together to create AVs; however, for the purposes of our discussion we can broadly classify the pertinent areas into sensors, decision-making and vehicle-to-vehicle (V2V) communication[8]. The AV uses its sensors to detect the world around it, makes decisions about its behaviour and then informs surrounding vehicles of its intentions. For the purposes of this paper, we will assume that sensors and machine vision processing are approaching (or exceeding) human parity in detecting the number and relative ages of any pedestrians in view under a variety of conditions. In this section, we will review two possible implementations of the ethical models discussed.
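
As a structural sketch only – every class and method name below is a placeholder invented for illustration, not part of any real AV software stack – the three areas fit together as a simple sense–decide–communicate loop:

    # Illustrative-only skeleton of the sense -> decide -> communicate loop
    # described above; every name here is a placeholder, not a real AV API.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Perception:
        pedestrian_ages: List[int] = field(default_factory=list)  # estimated ages in view
        nearby_vehicles: List[str] = field(default_factory=list)  # other detected road users

    class AutonomousVehicle:
        def sense(self) -> Perception:
            """Fuse camera/lidar/radar input into a world model (stubbed here)."""
            return Perception(pedestrian_ages=[10, 35], nearby_vehicles=["bus"])

        def decide(self, world: Perception) -> str:
            """Choose a manoeuvre; this is where an ethical model would sit."""
            return "stay_in_lane"

        def communicate(self, intention: str) -> None:
            """Broadcast the chosen intention to surrounding vehicles over V2V."""
            print(f"V2V broadcast: {intention}")

    av = AutonomousVehicle()
    av.communicate(av.decide(av.sense()))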

Normative models map to a rational approach, where a machine is programmed to follow a set of pre-defined rules. A descriptive or experimental model maps onto an artificial intelligence (AI) approach, using a machine learning algorithm such as a neural network to learn from training data how to act for itself.

Asimov’s Automobile and Kant’s Car

Both approaches come with significant pros and cons. The rational approach lends itself well to well-defined tasks that require transparent and repeatable outcomes, and it maps well both to consequentialism, by maximising a particular outcome, and to a deontological model, by adhering to a set of rules.
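
A minimal sketch of what such a rational approach might look like in code is given below, reusing the bridge scenario from earlier. The candidate manoeuvres, harm scores and rules are invented purely for illustration and are not a proposal for how an AV should actually weigh outcomes:

    # Sketch of a purely rational (rule-based) decision procedure. The harm
    # estimates and rules are assumptions made only for illustration.
    options = {
        "A_drive_off_bridge": {"expected_harm": 4, "breaks_law": False},
        "B_stay_in_lane":     {"expected_harm": 3, "breaks_law": False},
        "C_accelerate_past":  {"expected_harm": 1, "breaks_law": True},
    }

    def deontological_filter(candidates):
        """Deontological rule: discard any option that breaks the law."""
        return {name: o for name, o in candidates.items() if not o["breaks_law"]}

    def consequentialist_choice(candidates):
        """Consequentialist rule: pick the option with the lowest expected harm."""
        return min(candidates, key=lambda name: candidates[name]["expected_harm"])

    # The law-breaking (but lowest-harm) option C is removed first, then the
    # least harmful remaining option is chosen.
    print(consequentialist_choice(deontological_filter(options)))  # B_stay_in_lane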

However, as we have seen in the previous section, there are significant shortcomings in both of these ethical models: consequentialism leads to counter-intuitive judgements, and deontology struggles with incompleteness – articulating the complex and nuanced nature of moral judgements invariably leaves scenarios undefined. Machines adhere to their programming in an absolute and literal fashion, making it very unlikely that a purely rational approach would ensure AVs acted in accordance with our ethical model in all situations. Indeed, although Asimov’s Three Laws of Robotics are commonly cited as an example of a rational approach to machine ethics, he used them as a literary device to show the absurdity of such an approach (Goodall, 2014a, p.61)!

The rational approach also raises an issue of moral authority: who would decide these rules? Would we rely on individual car manufacturers, national governments or international treaties to decide how AVs react in these situations? Individual manufacturers would be liable to abuse their position (e.g. a factory-installed option that removed the ‘self-sacrificial’ logic), national legislation would only work for nations with no borders, and international treaties would have to contend with significant cultural and technological differences between the signing parties.

An artificial intelligence approach, however, would use a machine-learning algorithm such as a neural network to avoid many of these issues. Neural networks are probabilistic models inspired by biological neurons, popularised by their ability to learn complex and undefined functions.

Neural networks are typically trained by presenting a series of training inputs along with the desired outcomes and allowing a learning algorithm such as back-propagation to correct errors within the network. With enough training, a neural network will learn to classify its training data correctly and, more importantly, will also be able to classify brand-new inputs in a similar fashion. Furthermore, a neural network’s output is not restricted to a single discrete outcome; for example, it could produce a real number between 0 and 1 indicating its assessment of the “moral correctness” of the inputs it has just received.
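
To make the mechanics concrete, the sketch below trains a tiny two-layer network with back-propagation on a handful of made-up scenario encodings. The features, labels and resulting ‘moral correctness’ scores are entirely invented for illustration and carry no ethical authority:

    # Toy two-layer network trained with back-propagation to emit a 0-1
    # "moral correctness" score. The scenario encodings and labels below are
    # fabricated purely to show the mechanics; they carry no ethical weight.
    import numpy as np

    rng = np.random.default_rng(0)

    # Features: [proportion harmed by acting, proportion saved by acting].
    X = np.array([[0.1, 0.5], [0.5, 0.1], [0.0, 0.3], [0.4, 0.0]])
    y = np.array([[1.0], [0.0], [1.0], [0.0]])  # 1 = action judged acceptable

    W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))
    W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):                        # back-propagation loop
        hidden = sigmoid(X @ W1 + b1)            # hidden-layer activations
        out = sigmoid(hidden @ W2 + b2)          # network's 0-1 score
        d_out = (out - y) * out * (1 - out)      # gradient of squared error
        d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
        W2 -= 0.5 * hidden.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_hidden
        b1 -= 0.5 * d_hidden.sum(axis=0)

    # A brand-new scenario receives a score between 0 and 1.
    new_case = np.array([[0.2, 0.6]])
    print(sigmoid(sigmoid(new_case @ W1 + b1) @ W2 + b2))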

Despite this incredibly promising start, artificial intelligence approaches such as neural networks also have significant downsides. In the same way that humans are susceptible to effects such as positive framing, neural networks are very sensitive to how they are taught – which training cases were presented and who decided what the desired outcomes should be. While neural networks would help solve the problem of deontological ambiguity in our moral reasoning, we would still face a moral authority question when designing the training data. One rebuttal would be to train a neural network on real-world data, watching humans react in a series of scenarios. Under these circumstances, the network would risk emulating human behaviour and morals rather than our beliefs and societal ethics (Goodall, 2014a, p.62). Creating a joint training set from real-world reactions and experimental ethics models such as those created by Bonnefon or Johansson-Stenman could teach the network to balance the normative with the descriptive, the consequentialist with the deontological and the moral with the ethical – but even this leads to one final quandary: transparency.

A neural network trained to understand all of these nuances would act as a black box – an output would be produced from a given set of inputs, with no insight into how the result was reached. As an algorithm, it would be possible to unwind the calculations the network performed, but this would do little more than verify that the network was acting correctly upon its training set.

This lack of transparency means it is very difficult to inspect or certify an intelligent system – an important task, particularly for one that will make life-or-death decisions on behalf of its owner. For example, a manufacturer could train a network to act according to a standards-based ethical training set but also surreptitiously train it with additional data to avoid cars of the same make, reducing product liability or boosting its safety reputation. A regulatory or standards body would be unable to detect or verify this without access to the complete training set.[9]

This opaqueness means that although the artificial intelligence approach currently provides much of the technology required to implement a complicated ethical model, its inherent lack of introspection should prevent widespread adoption for now. Goodall (2014a) proposes building upon the work of Powers (2011), a key advocate of adaptive incrementalism within machine ethics. Goodall’s three-stage approach begins with a rational system based upon existing mechanisms such as QALYs; a hybrid implementation would then use an artificial intelligence approach constrained by rational boundary conditions, mitigating the concerns of a fully AI-based approach until the technology has developed to the point where it can explain its own actions in natural language (Goodall, 2014a, p.63).
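
A rough sketch of what such a hybrid stage might look like is given below. The boundary rules and the stubbed learned scorer are assumptions made purely for illustration; they are not Goodall’s specification:

    # Sketch of a hybrid decision step: a learned scorer ranks candidate
    # manoeuvres, but only within hard, human-readable boundary conditions.
    # Both the rules and the stubbed scorer are illustrative assumptions.
    def learned_score(option):
        """Stand-in for a trained model's 0-1 'moral correctness' output."""
        return {"swerve_off_bridge": 0.9,
                "stay_in_lane": 0.6,
                "accelerate_past": 0.7}[option["name"]]

    def within_boundaries(option):
        """Rational boundary conditions that the learned score may not override:
        never redirect harm at an uninvolved party, and cap expected fatalities."""
        return not option["targets_bystander"] and option["expected_fatalities"] <= 1

    candidates = [
        {"name": "swerve_off_bridge", "expected_fatalities": 4, "targets_bystander": False},
        {"name": "stay_in_lane",      "expected_fatalities": 1, "targets_bystander": False},
        {"name": "accelerate_past",   "expected_fatalities": 0, "targets_bystander": True},
    ]

    # Filter by the hard rules first, then let the learned score rank the rest.
    allowed = [c for c in candidates if within_boundaries(c)]
    print(max(allowed, key=learned_score)["name"])  # stay_in_lane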

Product Liability

Machine learning algorithms currently offer limited accountability, which also leads to questions around product liability – how can we prove who is at fault if we cannot determine what the machine did and why? With adaptive incrementalism, however, the most common liability issues could be obviated.

AVs are themselves a product of adaptive incrementalism, having their roots in cruise control, anti-lock brakes and electronic stability control (ESC). All of these technologies removed human decision-making from key elements of driving without significant liability concerns.

More modern technologies such as Volvo’s City Safety, Mercedes-Benz’ Distronic Plus, Tesla’s Autopilot and the plethora of automatic parking features show that many manufacturers are taking tentative but incremental steps in releasing these sorts of features to market.

Villasenor (2014) performs a thorough analysis of existing US federal and state tort and contractual frameworks in relation to product liability in AVs. He believes that the current legislation around negligence, strict liability, manufacturing/design defects, failure to warn, misrepresentation and breach of warranty (pp.7-12) would cover the majority of issues that would initially arise from mass-market AV usage, with the potential exceptions of liability insurance and situations where control is partially divided between an AV and a human driver (p.13).

Goodall (2014b) also raises the important issue of an affirmative duty of care. Assume an AV is allowing a child to cross in front of it when it detects a distracted human driver approaching from behind at a speed that will cause a collision. Does the AV move into an adjacent lane, saving its passenger but potentially killing the child? Under US common law, there is no duty to act unless there is a special relationship with the victim (such as parent and child). Though legal, many would find this behaviour morally unacceptable, and as the Bonnefon et al. survey showed, most are willing to accept a degree of self-sacrificial behaviour – although regulation would be needed to ensure manufacturers did not hide excessive self-protection tendencies within their machine learning models (p.6).

These issues aside, the progression of announcements during 2015 within the field of AV liability is also testament to the incremental approach. In May 2015, Tesla announced that its Autopilot feature would require drivers to use the indicator to initiate an automatic lane change or overtake manoeuvre, in order to mitigate questions around liability (Smith, 2015). Only a few months later, in October, Volvo’s CEO announced that the manufacturer would accept full liability for its AVs, as did Mercedes-Benz and Google, although Audi and General Motors both publicly stated they wanted to wait until legislative guidance was clearer (Korosec, 2015; Vijayenthiran, 2015).

Should Autonomous Vehicles Avoid Children at All Costs?

What started as a simple question has evolved into a meandering discussion covering abstract philosophical analogies of moral dilemmas, ethical modelling, machine learning algorithms and product liability. To answer the question posed, we must break it down into three component parts:

  • Can we assert an ethical right to value one life above another?
  • Are we confident that an autonomous vehicle can make a moral decision on our behalf?
  • Is it both ethical and legally defensible to choose who to kill in an “unavoidable harm” situation?

The work that Johansson-Stenman and Martinsson conducted clearly showed an inversely proportional relationship between ‘value of life’ and age, and the Bonnefon et al. surveys suggested an acquiescence toward self-sacrifice in certain situations. Both papers, however, pointed to a general social acceptance rather than any proof of a normative ethical framework, and such age-based weighting also contradicts ethical guidelines such as the Institute of Electrical and Electronics Engineers Code of Ethics, which prohibits age discrimination (IEEE, 1963; Goodall, 2014b). While other industries do assign numeric values such as QALYs, this is invariably done with significantly more, and more carefully collected, data than an AV could ascertain in the split-second before a UHC.

Although an AV could not reliably gather sufficient information to make a QALY calculation, machines are considerably more adept than humans at analysing sensor data quickly and making impartial decisions. Technologies such as neural networks are capable of making complex decisions that people are willing to accept (Bonnefon et al., 2015), although questions remain around the transparency, and therefore the accountability, of such systems.

Finally, in UHCs it is incredibly rare that a human driver will choose to kill one pedestrian over another; it is an unfortunate, tragic consequence of their actions. An AV with a rational or AI-based decision engine will also need to avoid such choices: rather than deciding to kill one party over another, it must constantly attempt to minimise the overall loss of life and injury. As the infamous and grisly case of R v Dudley and Stephens (1884) showed, necessity is not a valid defence for murder.

So, should an autonomous vehicle avoid children at all costs, even if it means hitting other people?

No.

While saving a child at all costs is socially acceptable, it rests on deontological[10] ethics and would require an artificial intelligence model too complex to allow for sufficient accountability. Bonnefon et al. (2015) also observed that while most agree AVs should take a utilitarian approach to UHCs, they simultaneously prefer that others drive them first; this disparity between societal ethics and individual morals could narrow as adoption increases.

Personal transport is one of the most pervasive technologies in the modern world, and as such autonomous vehicles will shift cultural, ethical, legislative and technological norms over the coming decades. We cannot design for every random event, nor can we teach machines our moral and ethical reasoning without expecting reciprocal accountability. We can, however, expect adaptive incrementalism to ensure the technology bootstraps and catalyses ever-maturing attitudes toward machine ethics and their legislative ramifications.

Works Cited

Bonnefon, J., Shariff, A. & Rahwan, I. (2015) Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars? arXiv:1510.03346.

Eugensson, A., Brännström, M., Frasher, D., Rothoff, M., Solyom, S. & Robertsson, A. (2013) Environmental, Safety, Legal and Societal Implications of Autonomous Driving Systems. 23rd International Technical Conference on the Enhanced Safety of Vehicles (ESV).

Goodall, N. (2014a) Ethical Decision Making During Automated Vehicle Crashes. Transportation Research Record: Journal of the Transportation Research Board. Volume 2424. pp. 58–65.

Goodall, N. (2014b) Vehicle Automation and the Duty to Act. 21st World Congress on Intelligent Transport Systems. September, 2014.

Johansson-Stenman, O. & Martinsson, P. (2008) Are some lives more valuable? An ethical preferences approach. Journal of Health Economics, 27, pp. 739-752.

Korosec, K. (2015) Volvo CEO: We will accept all liability when our cars are in autonomous mode. Fortune. Published on 7th October, 2015. Available at: http://fortune.com/2015/10/07/volvo-liability-self-driving-cars/. Last accessed 27th December, 2015.

National Institute for Health and Care Excellence. (2013) How NICE measures value for money in relation to public health interventions. Available at: http://publications.nice.org.uk/how-nice-measures-value-for-money-in-relation-to-public-health-interventions-lgb10b/nices-approach-to-economic-analysis-for-public-health-interventions. Last accessed 27th December, 2015.

Powers, T. (2011) Incremental Machine Ethics. IEEE Robotics & Automation Magazine 03/2011; 18(1):51-58.

The Queen v Dudley and Stephens (1884) 14 Q.B.D. 273.

Shallow, C., Iliev, R. & Medin, D. (2011) Trolley Problems in Context. Judgment and Decision Making, Vol. 6, No. 7, October 2011, pp. 593-601.

Smith, B. (2015) Tesla and Liability. The Center for Internet and Society. Published 20th May, 2015. Available at: http://cyberlaw.stanford.edu/blog/2015/05/tesla-and-liability. Last accessed 27th December, 2015.

Vijayenthiran, V. (2015) Volvo, Mercedes And Google Accept Liability For Autonomous Cars. Motor Authority. Published 12th October, 2015. Available at: http://www.motorauthority.com/news/1100422_volvo-mercedes-and-google-accept-liability-for-autonomous-cars. Last accessed 26th December, 2015.

Villasenor, J. (2014) Products Liability and Driverless Cars: Issues and Guiding Principles for Legislation. The Brookings Institution. April, 2014.

Footnotes

[1] For example, imagine a liberalising but historically religious country. Premarital sex, divorce and abortion may be socially acceptable but still considered morally wrong.

[2] Shallow et al. do propose an interesting solution to this problem around joint versus discrete evaluation of the decision choices and a resultant deontological shift toward outcome utility (p.595).

[3] That said, a utilitarian view would only be simple if we had more knowledge of the number of occupants within the bus. A school bus will elicit a certain emotional response from a human driver, but if the incident occurred at midday or at 8pm, a human driver might assume there were no children on board.

[4] As an example, the UK’s National Institute for Health and Care Excellence advises the National Health Service on drugs and other treatments and uses a “£ per QALY” measurement to determine cost-effectiveness. Any drug that performs at above £20,000-30,000 per QALY would not be deemed cost-effective (NICE, 2013).

[5] Assuming we take ‘between 10 and 20 life-years’ to equal 15 years, then 2 x (30 years left + 15 life-value) = 3 x (15 years left + 15 life-value) = 90 life-years.

[6] Covariates were age, sex and religiosity of the respondent, rather than demographics of the pedestrians to be saved.

[7] For example, a pair of teachers wearing high-vis jackets may be crossing the street first, occluding from view a large number of small children behind them.

[8] While outside the scope of this paper, the area of V2V communication will help reduce fatalities even further by sharing information amongst vehicles in a local vicinity. A car ahead could warn surrounding vehicles of a child running into the road, parked cars could warn of pedestrians passing between them or even a group of vehicles acting as a self-sacrificial swarm to absorb the impact of a human-driven car to save a pedestrian’s life while spreading the risk to their own passengers.

[9] While it would be possible to detect if a certain set of training data led to the neural network in question, it would be impossible to reverse-engineer what the training data would have been that led to the creation of a particular network’s weights.

[10] It seems straightforward at first glance, but imagine scenarios such as choosing between a 17-year-old child and an 18-year-old adult; or between one 6-year-old child and a family of four with two young teenage children; or between a 10-year-old child and a parent faced with a self-sacrificial decision while a baby was also in the car.
