Engineering Ethics Risk and Liability in Engineering: Main Ideas in This Chapter
Engineering Ethics
CHAPTER 07
Risk and Liability in Engineering
• For engineers and risk experts, risk is the product of the likelihood and magnitude of
harm.
• Engineers and risk experts have traditionally identified harms and benefits with factors
that are relatively easily quantified, such as economic losses and loss of life.
• In a new version of the way engineers and risk experts deal with risk, the ‘‘capabilities’’
approach focuses on the effects of risks and disasters on the capabilities of people to
live the kinds of lives they value.
• The public conceptualizes risk in a different way from engineers and risk experts, taking
account of such factors as free and informed consent to risk and whether risk is justly
distributed.
• Government regulators have a still different approach to risk because they place more
weight on avoiding harm to the public than on producing good.
• Engineers have techniques for estimating the causes and likelihood of harm, but their
effectiveness is limited.
• Engineers must protect themselves from unjust liability for harm while also
protecting the public from risk.
7.1 INTRODUCTION
The concern for safety is a common one for engineers. How should engineers deal with issues of
safety and risk, especially when they involve possible liability for harm?
Engineering necessarily involves risk, and innovation usually increases the risks. One cannot
avoid risk simply by remaining with tried and true designs, but innovation creates technologies
in which the risks are not fully understood, thereby increasing the chance of failure. Without
innovation, there is no progress. A bridge or building is constructed with new materials or with a
new design. New machines are created and new compounds synthesized, always without full
knowledge of their long-term effects on humans or the environment. Even new hazards can be
found in products, processes, and chemicals that were once thought to be safe. Thus, risk is
inherent in engineering. The relationship of safety to risk is an inverse one: the more risk we
accept in an engineering project, the less safe it becomes, and a project with absolutely no risk
would be absolutely safe. So safety and risk are intimately connected. Concern for safety pervades
engineering practice. One of the most common concepts in engineering practice is the notion of
‘‘factors of safety.’’
We begin this chapter by considering three different approaches to risk and safety, all of which
are important in determining public policy regarding risk. Then we examine more directly the
issues of risk communication and public policy concerning risk, including one example of public
policy regarding risk—building codes. Next, we consider the difficulties in both estimating and
preventing risk from the engineering perspective, including the problem of self-deception.
Finally, we discuss some of the legal issues surrounding risk, including protecting engineers
from undue liability and the differing approaches of tort law and criminal law to risk.
7.2 The Engineer's Approach to Risk
To assess a risk, an engineer must first identify it. To identify a risk, an engineer must first know
what a risk is. The usual engineering definition of risk is ‘‘a compound measure of the
probability and magnitude of adverse effect.’’ That is, risk is composed of two elements: the
likelihood of an adverse effect or harm and the magnitude of that adverse effect or harm. By
compound is meant the product. Risk, therefore, is the product of the likelihood and the
magnitude of harm. A relatively slight harm that is highly likely can therefore constitute a
greater risk than a relatively large harm that is far less likely.
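The definition above can be sketched in a few lines of code. This is an illustrative sketch only: the probabilities and magnitudes are invented for the example and do not come from the text.

```python
def risk(probability, magnitude):
    """Risk as the compound (product) of the likelihood and magnitude of harm."""
    return probability * magnitude

# A slight harm that is highly likely vs. a large harm that is far less likely.
frequent_minor = risk(probability=0.5, magnitude=10)    # e.g., minor injuries
rare_major = risk(probability=0.001, magnitude=1_000)   # e.g., catastrophic failure

# 5.0 vs. 1.0: the slight-but-likely harm is the greater risk.
print(frequent_minor, rare_major)
```

Note that on this measure the frequent minor harm outranks the rare catastrophe, which is exactly the point the text makes about comparing risks.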
We can define harm as an invasion or limitation of a person’s freedom or well-being. Engineers
have traditionally thought of harms in terms of things that can be relatively easily quantified,
namely as impairments of our physical and economic well-being. Faulty design of a building
can cause it to collapse, resulting in economic loss to the owner and perhaps death for the
inhabitants. Faulty design of a chemical plant can cause accidents and economic disaster. These
harms are then measured in terms of the numbers of lives lost, the cost of rebuilding or repairing
buildings and highways, and so forth.
Engineers and other experts on risk often believe that the public is confused about risk,
sometimes because the public does not have the correct factual information about the likelihood
of certain harms. A 1992 National Public Radio story on the Environmental Protection Agency
(EPA) began with a quote from EPA official Linda Fisher that illustrated the risk expert’s
criticism of public understanding of risk:
A lot of our priorities are set by public opinion, and the public quite often is more worried about
things that they perceive to cause greater risks than things that really cause risks. Our priorities
often times are set through Congress ... and those [decisions] may or may not reflect real risk.
They may reflect people’s opinions of risk or the Congressmen’s opinions of risk.
Every time Fisher refers to ‘‘risk’’ or ‘‘real risk,’’ we can substitute ‘‘probability of death or injury.’’ Fisher
believes that whereas both members of the U.S. Congress and ordinary laypeople may be
confused about risk, the experts know what it is. Risk is something that can be objectively
measured—namely, the product of the likelihood and the magnitude of harm.
Utilitarianism and Acceptable Risk
The engineering conception of risk focuses on the factual issues of the probability and magnitude
of harm and contains no implicit evaluation of whether a risk is morally acceptable. In order to
determine whether a risk is morally acceptable, engineers and risk experts usually look to
utilitarianism. This position holds, it will be remembered, that the answer to any moral question
is to be found by determining the course of action that maximizes well-being. Given the earlier
definition of risk as the product of the probability and the consequences of harm, we can state the
risk expert’s criterion of acceptable risk in the following way:
An acceptable risk is one in which the product of the probability and magnitude of the harm is
equaled or exceeded by the product of the probability and magnitude of the benefit, and there is
no other option where the product of the probability and magnitude of the benefit is substantially
greater.
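The criterion just stated can be expressed as a small decision rule. This is a hedged sketch, not the risk expert's actual procedure: the option names and numbers are hypothetical, and because the text leaves "substantially greater" unquantified, the `margin` factor is an assumption introduced purely for illustration.

```python
def expected(probability, magnitude):
    """Expected value: the product of probability and magnitude."""
    return probability * magnitude

def acceptable(option, alternatives, margin=1.5):
    """Acceptable if expected benefit >= expected harm, and no alternative
    offers a substantially greater expected benefit (here: by factor `margin`,
    an illustrative assumption)."""
    harm = expected(*option["harm"])
    benefit = expected(*option["benefit"])
    if benefit < harm:
        return False
    return all(expected(*alt["benefit"]) <= margin * benefit for alt in alternatives)

# Hypothetical designs: (probability, magnitude) pairs for harm and benefit.
design_a = {"harm": (0.01, 100), "benefit": (0.9, 50)}   # expected harm 1.0, benefit 45.0
design_b = {"harm": (0.02, 100), "benefit": (0.9, 55)}   # expected benefit 49.5

print(acceptable(design_a, [design_b]))  # True: 49.5 is not "substantially" greater than 45.0
```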
One way of implementing this account of acceptable risk is by means of an adaptation of cost–
benefit analysis. As we have seen, utilitarians sometimes find cost–benefit analysis to be a
useful tool in assessing risk. In applying this method to risk, the technique is often called risk–
benefit analysis because the ‘‘cost’’ is measured in terms of the risk of deaths, injuries, or other
harms associated with a given course of action. For simplicity, however, we shall continue to use
the term cost–benefit analysis. The utilitarian approach to risk embodied in risk–benefit analysis
has undoubted advantages in terms of clarity, elegance, and susceptibility to numerical
interpretation. Nevertheless, there are some limitations that must be kept in mind.
• First, it may not be possible to anticipate all of the effects associated with each option.
Insofar as this cannot be done, the cost–benefit method will yield an unreliable result.
• Second, it is not always easy to translate all of the risks and benefits into monetary terms.
One way to put a monetary value on safety is to observe how much more people are
willing to pay for a safer car or other safer products. Unfortunately, there are various
problems with this approach. In a country in which there are few jobs, a person might be
willing to take a risky job he or she would not be willing to take if more jobs were
available. Furthermore, wealthy people are probably willing to pay more for a safer car
than poorer citizens, so willingness to pay reflects ability to pay as much as the value
placed on safety.
• Third, cost–benefit analysis in its usual applications makes no allowance for the
distribution of costs and benefits. Suppose more overall utility could be produced by
exposing workers in a plant to serious risk of sickness and death. As long as the good of
the majority outweighs the costs associated with the suffering and death of the workers,
the risk is justified. Yet most of us would probably find that an unacceptable account of
acceptable risk.
• Fourth, cost–benefit analysis gives no place to informed consent to the risks imposed
by technology. We shall see in our discussion of the lay approach to risk that most people
think informed consent is one of the most important features of justified risk.
Despite these limitations, cost–benefit analysis has a legitimate place in risk evaluation. When no
serious threats to individual rights are involved, cost–benefit analysis may be decisive. In
addition, cost–benefit analysis is systematic, offers a degree of objectivity, and provides a way of
comparing risks, benefits, and cost by the use of a common measure—namely, monetary cost.
7.3 Expanding the Engineering Account of Risk: The Capabilities Approach to Identifying
Harm and Benefit
As we have pointed out, engineers, in identifying risks and assessing acceptable risk, have
traditionally identified harm with factors that are relatively easily quantified, such as economic
losses and the number of lives lost. There are, however, four main limitations with this rather
narrow way of identifying harm. First, often only the immediately apparent or focal
consequences of a hazard are included, such as the number of fatalities or the number of homes
without electricity. However, hazards can have auxiliary consequences, or broader and more
indirect harms to society.
Second, both natural and engineering hazards might create opportunities, which should be
accounted for in the aftermath of a disaster. Focusing solely on the negative impacts and not
including these benefits may lead to overestimating the negative societal consequences of a
hazard. Third, there remains a need for an accurate, uniform, and consistent metric to quantify
the consequences (harms or benefits) from a hazard. For example, there is no satisfactory method
for quantifying the non-fatal physical or psychological harms to individuals or the indirect
impact of hazards on society. The challenge of quantification is difficult and complex, especially
when auxiliary consequences and opportunities are included in the assessment. Fourth, current
techniques do not demonstrate the connection between specific harms or losses, such as the loss
of one’s home, and the diminishment of individual or societal well-being and quality of life. Yet
it is surely this larger question of the effect on quality of life that is ultimately at issue when
considering risk.
In their work on economic development, economist Amartya Sen and philosopher Martha
Nussbaum have derived a notion of ‘‘capabilities’’ that the two scholars believe may be the basis
of a more adequate way of measuring the harms (and sometimes the benefits) of disasters,
including engineering disasters. Philosopher Colleen Murphy and engineer Paolo Gardoni have
developed a capabilities-based approach to risk analysis, which focuses on the effect of disasters
on overall human well-being. Well-being is defined in terms of individual capabilities, or ‘‘the
ability of people to lead the kind of life they have reason to value.’’
From the capabilities standpoint, a risk is the probability that individuals’ capabilities might be
reduced due to some hazard. In determining a risk, the first step is to identify the important
capabilities that might be damaged by a disaster. Then, in order to quantify the ways in which the
capabilities might be damaged, we must find some ‘‘indicators’’ that are correlated with the
capabilities. According to its advocates, there are four primary benefits of using the capabilities-
based approach in identifying the societal impact of a hazard. First, capabilities capture the
adverse effects and opportunities of hazards beyond the consequences traditionally considered.
Second, since capabilities are constitutive aspects of individual well-being, this approach focuses
our attention on what should be our primary concern in assessing the societal impact of a hazard.
Third, the capabilities-based approach offers a more accurate way to measure the actual impact
of a hazard on individuals’ well-being.
Fourth, rather than considering diverse consequences, which increase the difficulty of
quantification, the capabilities-based approach requires considering a few properly selected
capabilities.
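The two-step procedure described above, identifying important capabilities and then quantifying them through correlated indicators, can be sketched roughly as follows. The capability names, indicator names, and the simple-mean aggregation rule are all hypothetical stand-ins; Murphy and Gardoni's actual formulation is more sophisticated.

```python
# Hypothetical post-disaster indicator values (fractions of normal levels).
indicators = {
    "being sheltered": {"habitable_housing_rate": 0.92, "shelter_access_rate": 0.98},
    "being mobile":    {"roads_open_fraction": 0.75, "transit_running_fraction": 0.60},
    "being healthy":   {"hospital_capacity_fraction": 0.80},
}

def capability_level(inds):
    """Aggregate the indicators correlated with one capability.
    A simple mean is used here purely as an illustrative aggregation rule."""
    return sum(inds.values()) / len(inds)

for capability, inds in indicators.items():
    print(f"{capability}: {capability_level(inds):.2f}")
```

The point of the sketch is only that a few well-chosen capabilities, each tracked by measurable indicators, can stand in for the otherwise unmanageable diversity of a hazard's consequences.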
In addition to identifying more accurately and completely the impact of a hazard, its advocates
believe the capabilities-based approach provides a principled foundation for judging the
acceptability and tolerability of risks. Judgments of the acceptability of risks are made in terms
of the impact of potential hazards on the capabilities of individuals. Thus, according to the
capabilities approach, a risk is acceptable if the probability is sufficiently small that a hazard
will reduce individuals’ capabilities below the minimum level of capabilities attainment that
is acceptable in principle.
The ‘‘in principle’’ qualification captures the idea that, ideally, we do not want individuals to fall
below a certain level. We might not be able to ensure this, however, especially immediately after
a devastating disaster. In practice, then, it can be tolerable for individuals to temporarily fall
below the acceptable threshold after a disaster, as long as this situation is reversible and
temporary and the probability that capabilities will fall below a tolerability threshold is
sufficiently small. Capabilities can be a little lower, temporarily, as long as no permanent
damage is caused and people do not fall below an absolute minimum.
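The distinction between acceptability and tolerability drawn above can be summarized in a small classifier. This is a minimal sketch under stated assumptions: the two numeric thresholds are invented for illustration, and the approach itself would derive them from the capabilities analysis rather than fix them a priori.

```python
ACCEPTABLE = 0.7  # minimum capability level acceptable in principle (hypothetical value)
TOLERABLE = 0.4   # absolute minimum, not to be crossed even temporarily (hypothetical value)

def judge(level, temporary_and_reversible):
    """Classify a post-disaster capability level using the text's
    acceptability/tolerability distinction."""
    if level >= ACCEPTABLE:
        return "acceptable"
    if level >= TOLERABLE and temporary_and_reversible:
        # A short-term, reversible shortfall after a disaster is tolerable.
        return "tolerable"
    return "unacceptable"

print(judge(0.85, False))  # acceptable
print(judge(0.55, True))   # tolerable: below the ideal threshold, but temporary
print(judge(0.55, False))  # unacceptable: the shortfall is not reversible
```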