The Risk of Using Risk Matrices
Figure: RM-based risk-management workflow. Step 1. Define Risk Criteria; Step 2. Define Risk Events; Step 3b. Consequence Estimation; Step 4. Plot in Risk Matrix ("Risk Profile"); Step 5. Risk Prioritization & Mitigation Plan.
Standards. Among the standards that are commonly used in the O&G industry are API, NORSOK, and ISO. All of these standards recommend RMs as an element of risk management. This section summarizes how each of these standards supports RMs.

API. API RP 581 (2008) recommends RMs customarily for its risk-based-inspection (RBI) technology. RBI is a method to optimize inspection planning by generating a risk ranking for equipment and processes and, thus, prioritization for inspection of the right equipment at the right time. API RP 581 specifies how to calculate the likelihoods and consequences to be used in the RMs. The specification is a function of the equipment that is being analyzed. The probability and consequence of a failure are calculated by use of several factors. API RP 581 asserts that "Presenting the results in a risk matrix is an effective way of showing the distribution of risks for different components in a process unit without numerical values."

NORSOK. The NORSOK (2002) standards were developed by the Norwegian petroleum industry to "ensure adequate safety, value adding and cost effectiveness for petroleum industry developments and operations. Furthermore, NORSOK standards are as far as possible intended to replace oil company specifications and serve as references in the authority's regulations." NORSOK recommends the use of RMs for most of their risk-analysis illustrations. The RMs used by NORSOK are less rigid than those of API RBI because the NORSOK RMs can be customized for many problem contexts (the RM template is not standardized). NORSOK S-012, an HSE document related to the construction of petroleum infrastructure, uses an RM that has three consequence axes—occupational injury, environment, and material/production cost—with a single probability axis for all three consequence axes.

ISO. ISO standards ISO 31000 (2009) and ISO/IEC 31010 (2009) influence risk-management practices not only in the O&G industry but in many others. In ISO 31000, the RM is known as a probability/consequence matrix. In ISO/IEC 31010, there is a table that summarizes the applicability of tools used for risk assessment. ISO claims that the RM is a "strongly applicable" tool for risk identification and risk analysis and is "applicable" for risk evaluation. As with the NORSOK standard, ISO does not standardize the number of colors, the coloring scheme (risk-acceptance determination), or the size of range for each category. ISO praises RMs for their convenience, ease of use, and quick results. However, ISO also lists limitations of RMs, including some of their inconsistencies, to which we now turn.

Deficiencies of RMs
Several flaws are inherent to RMs. Some of them can be corrected, whereas others seem more problematic. For example, we will show that the ranking produced by an RM depends upon arbitrary choices regarding its design, such as whether one chooses to use an increasing or decreasing scale for the scores. As we discuss these flaws, we also survey the SPE literature to identify the extent to which these flaws are being made in practical applications.

To locate SPE papers that address or demonstrate the use of RMs, we searched the OnePetro database using the terms "risk matrix" and "risk matrices." This returned 527 papers. Then, we removed 120 papers published before the year 2000, to make sure our study is focused upon current practice. We next reviewed the remaining 407 papers and selected those that promote the use of RMs as a "best practice" and actually demonstrate RMs in the paper, leaving 68 papers. We further eliminated papers that presented the same example. In total, we considered a set of 30 papers covering a variety of practice areas (e.g., HSE, hazard analysis, and inspection). We believe that this sampling of papers represents the current RM practice in the O&G industry. We did not find any SPE papers documenting the known pitfalls of the use of RMs. The 30 papers we consider in this paper are given in Appendix A.

Known Deficiencies of RMs
Several deficiencies of RMs have been identified by other authors.

Risk-Acceptance Inconsistency. RMs are used to identify, rank, and prioritize possible outcomes so that scarce resources can be directed toward the most-beneficial areas. Thus, RMs must reliably categorize the possible outcomes into green, yellow, and red regions. Cox (2008) suggested we should conform to three axioms and one rule when designing RMs to ensure that the EL in the green region is consistently smaller than the EL in the red region. Cox (2008) also clarifies that the main purpose of the yellow region is to separate the green region and the red region in the RMs, not to categorize the outcomes. He argues that the RM is inconsistent if the EL in the yellow region can be larger than in any of the red cells or smaller than in any of the green cells. Nevertheless, the practice in O&G is to use the yellow region to denote an outcome with a medium risk. Every SPE paper we reviewed implements this practice and also violates at least one of the axioms or the rule proposed by Cox (2008), leading to inconsistencies in the RMs.

Fig. 3 shows an example RM with many outcomes. This example shows that there are two groups of outcomes. The first group is the outcome with medium-high probability and medium-high consequence (e.g., severe losses, well-control issues), and the second group is the outcome with low probability but very high consequence (e.g., blowout). In Fig. 3, the first group of outcomes is illustrated in the red cells whereas the second group is in the yellow cell. The numbers shown in some of the cells represent the probability, consequence, and EL, respectively, where EL is calculated as probability multiplied by consequence. This example shows the inconsistency between EL and color practice in RMs, where all outcomes in the red cells have a lower EL compared with the outcome in the yellow cell. Assuming that we wish to rank outcomes on the basis of expected loss, we would prioritize the outcome in the yellow cell compared with the outcomes in the red cells, which is the opposite of the ranking provided by the color regions in the RM. Clearly, the use of the RM would in this case lead us to focus our risk-mitigation actions on the outcome that does not have the highest EL. This type of structure is evident in eight of the papers we reviewed.
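To make the inconsistency concrete, the short sketch below (a Python illustration, not code from any surveyed paper) computes the expected loss for the three case-example outcomes, using the probabilities and consequences that appear later in Table 7 and the coloring described for Fig. 3, and ranks the outcomes by EL.

```python
# Sketch: expected loss vs. RM color for the case-example outcomes
# (probabilities and consequences as in Table 7; colors as described for Fig. 3).
outcomes = [
    # (name, probability, consequence in USD million, RM color)
    ("Severe Losses", 0.40, 3.0, "red"),
    ("Well Control", 0.10, 12.5, "red"),
    ("Blowout", 0.05, 50.0, "yellow"),
]

# Expected loss (EL) = probability x consequence.
ranked = sorted(outcomes, key=lambda o: o[1] * o[2], reverse=True)

for name, p, c, color in ranked:
    print(f"{name:13s}  EL = {p * c:4.2f} USD million  (matrix color: {color})")

# Blowout (yellow cell) has the highest EL (2.50), yet both red-cell outcomes
# (EL = 1.25 and 1.20) would be prioritized ahead of it by the RM colors.
```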
Fig. 4—Plot of the probability and consequence values of the outcomes in the case example (probability, 0 to 45%, vs. consequence, USD 0 to 100 million; outcomes shown: Severe Losses, Well Control, and Blowout).

Range Compression. Cox (2008) described range compression in RMs as a flaw that "assigns identical ratings to quantitatively very different risk." Hubbard (2009) also focused extensively on this problem.

Range compression is unavoidable when consequences and probabilities are converted into scores. The distance between risks in the RM using scores (mimicking an expected-loss calculation) does not reflect the actual distance between risks (specifically, the difference in their expected loss).

In our case example shown in Fig. 1, blowout and well control are considered to have the same risk (both are yellow). However, this occurs only because of the ranges that were used and the arbitrary decision to have the "catastrophic" category include all consequences greater than USD 20 million. Fig. 4 more accurately represents these outcomes. A blowout could be many orders of magnitude worse than a loss of well control. Yet, the RM does not emphasize this in a way that we think is likely to lead to high-quality risk-mitigation actions. To the contrary, the sense that we get from Fig. 1 is that a blowout is not significantly different (if any different) from a loss of well control—they are both "yellow" risks. The use of the scoring mechanism embedded in RMs compresses the range of outcomes and, thus, miscommunicates the relative magnitude of both consequences and probabilities. The failure of the RM to convey this distinction seems to undermine its commonly stated benefit of improved communication. This example demonstrates the range compression inherent in RMs, which necessarily affected all the surveyed SPE papers. The next section will introduce the "lie factor" (LF) that we use to quantify the degree of range compression.
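The compression can be illustrated with a few lines of code. In the sketch below, only the USD 20-million "catastrophic" cutoff comes from the text; the lower category boundaries are assumptions chosen for illustration.

```python
# Sketch (illustrative thresholds; only the USD 20-million "catastrophic"
# cutoff is taken from the text) showing how scoring compresses ranges.
import bisect

# Assumed upper bounds (USD million) of consequence categories 1 through 4;
# anything above the last bound falls in category 5 ("catastrophic").
upper_bounds = [0.5, 2.0, 8.0, 20.0]

def consequence_score(loss_mm_usd: float) -> int:
    """Map a monetary loss to a 1-5 category score."""
    return bisect.bisect_left(upper_bounds, loss_mm_usd) + 1

for loss in (25, 50, 500, 5000):
    print(f"USD {loss:>5} million -> score {consequence_score(loss)}")

# All four losses map to score 5: a USD 25-million loss and a USD 5-billion
# loss receive identical ratings, which is the range compression Cox (2008)
# describes.
```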
Centering Bias. Centering bias refers to the tendency of people to avoid extreme values or statements when presented with a choice. For example, if a score range is from 1 to 5, most people will select a value from 2 to 4. Hubbard (2009) analyzed this in the case of information-technology projects. He found that 75% of the chosen scores were either 3 or 4. This further compacts the scale of RMs, exacerbating range compression. Smith et al. (2009) came to the same conclusions from investigating risk management in the airline industry.

Is this bias also affecting risk-management decisions in the O&G industry? Unfortunately, there is no open-source O&G database that can be used to address this question. However, six of the reviewed SPE papers presented their data in sufficient detail to investigate whether the centering bias seems to be occurring. Each of the six papers uses an RM with more than 15 outcomes. Fig. 5 shows the percentage of the outcomes that fell into the middle consequence and probability scores. For example, paper SPE 142854 used a 5×5 RM; hence, the probability ratings ranged from 1 to 5. Paper SPE 142854 has 24 outcomes, out of which 18 have a probability rating of 2, 3, or 4 (which we will denote as "centered"), and the remaining six outcomes have a probability rating of 5. Hence, 75% of the probability scores were centered.

For the six papers combined, 83% of the probability scores were centered, which confirms Hubbard (2009). However, only 52% of the consequence scores were centered, which is less than that found in Hubbard (2009). A closer inspection shows that in four out of the six papers, 90% of either probability or consequence scores were centered.
Fig. 5—Percentage of centered probability and consequence scores for each paper (vertical axis: Percentage, 70 to 100%; horizontal axis: Paper Number).

TABLE 3—VARIATIONS IN CATEGORY DEFINITIONS WITHIN THE SAME INDICES IN SOME OF THE SPE PAPERS SURVEYED

Paper Number | Rating | Qualitative Definition | Quantitative Definition
SPE 146845 | Frequent | Several times a year in one location | Occurrence > 1/year
SPE 127254 | Frequent | Expected to occur several times during lifespan of a unit | Occurrence > 1/year
SPE 162500 | Frequent | Happens several times per year in same location or operation | Occurrence > 0.1/year
SPE 123457 | Frequent | Has occurred in the organization in the last 12 months | –
SPE 61149 | Frequent | Possibility of repeated incidents | –
SPE 146845 | Probable | Several times per year in a company | 1/year > Occurrence > 0.1/year
SPE 127254 | Probable | Expected to occur more than once during lifespan of a unit | 1/year > Occurrence > 0.03/year
SPE 162500 | Probable | Happens several times per year in specific group company | 0.1/year > Occurrence > 0.01/year
SPE 123457 | Probable | Has occurred in the organization in the last 5 years or has occurred in the industry in the last 2 years | –
SPE 158115 | Probable | Not certain, but additional factor(s) likely result in incident | –
SPE 61149 | Probable | Possibility of isolated incident | –

Category-Definition Bias. Budescu et al. (2009) concluded that providing guidelines on probability values and phrases is not sufficient to obtain quality probability assessments. For example, when guidelines specified that "very likely" should indicate a probability greater than 0.9, study participants still assigned probabilities in the 0.43 to 0.99 range when they encountered the phrase "very likely." He argued that this creates the "illusion of communication" rather than real communication. If a specific definition of scores or categories is not effective in helping experts to be consistent in their communication, then the use of only qualitative definitions would likely result in even more confusion. Windschitl and Weber (1999) showed that the interpretation of phrases conveying a probability depends on context and personal preferences (e.g., perception of the consequence value). Although most research on this topic has focused on probability-related words, consequence-related words such as "severe," "major," or "catastrophic" would also seem likely to foster confusion and miscommunication.

We reviewed the 30 SPE papers on the scoring method used. The papers were then classified into qualitative, semiqualitative, and quantitative categories.9 Most of the scores (97%) were qualitative or semiqualitative. However, these papers included no discussion indicating that the authors are aware of category-definition bias or any suggestions for how it might be counteracted.

9 Qualitative refers to RMs in which none of the definitions of probability and consequence categories provide numerical values. Semiqualitative refers to RMs in which some of the definitions of probability and consequence categories provide numerical values. Quantitative refers to RMs in which definitions of all probability and consequence categories provide numerical values.

Category-definition bias is also clearly seen between papers. For example, paper SPE 142854 considered "improbable" as "virtually improbable and unrealistic." In contrast, paper SPE 158114 defined "improbable" as "would require a rare combination of factors to cause an incident." These definitions clearly have different meanings, which will lead to inconsistent risk assessments. This bias is also seen in the quantitative RMs. Paper SPE 127254 categorized "frequent" as "more than 1 occurrence per year," but paper SPE 162500 categorized "frequent" as "more than 1 occurrence in 10 years." This clearly shows inconsistency between members of the same industry. Table 3 summarizes the variations in definitions within the same indices in some of the SPE papers surveyed.

Given these gross inconsistencies, how can we accept the claim that RMs improve communication? As we show here, RMs that are actually being used in the industry are likely to foster miscommunication and misunderstanding, rather than improve communication. This miscommunication will result in misallocation of resources and the acceptance of suboptimal levels of risk.

Identification of Previously Unrecognized Deficiencies
This section discusses three RM flaws that had not been previously identified. We demonstrate that these flaws cannot be overcome and that RMs will likely produce arbitrary recommendations.

Ranking is Arbitrary. Ranking Reversal. Lacking standards for how to use scores in RMs, two common practices have evolved: ascending scores or descending scores. The example in Fig. 1 uses ascending scores, in which a higher score indicates a higher probability or more serious consequence. Using descending scores, a lower score indicates a higher probability or more serious consequence. These practices are contrasted in Fig. 6.

Fig. 6—RMs with ascending and descending scoring systems.

A glance at Fig. 6 might give the impression that ascending or descending scores would produce the same risk ranking of outcomes. However, Table 4 shows for each ordering the resulting risk scores and ranking of the outcomes shown in Fig. 6. With the use of ascending scores, severe losses will be prioritized for risk mitigation. However, with the use of descending scores, blowout will be prioritized for risk mitigation.
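The reversal is easy to reproduce. The sketch below uses hypothetical outcomes A, B, and C with assumed 1-to-5 category scores (these are not the values of Fig. 6 or Table 4); it shows only that the top-ranked outcome can change when the direction of the scale is flipped.

```python
# Sketch with hypothetical outcomes and assumed score assignments (not the
# values in Fig. 6 or Table 4): the same outcomes scored on 1-5 ascending
# scales and on the equivalent descending scales.
N = 5  # assumed number of categories per axis

# Ascending category scores (higher = more likely / more severe).
ascending = {"A": (5, 2), "B": (3, 4), "C": (1, 5)}

# Equivalent descending scores (lower = more likely / more severe).
descending = {k: (N + 1 - p, N + 1 - c) for k, (p, c) in ascending.items()}

# Common practice: risk score = probability score x consequence score.
# With ascending scores the HIGHEST product is top priority; with descending
# scores the LOWEST product is top priority.
rank_asc = sorted(ascending, key=lambda k: ascending[k][0] * ascending[k][1],
                  reverse=True)
rank_desc = sorted(descending, key=lambda k: descending[k][0] * descending[k][1])

print("ascending ranking :", rank_asc)   # ['B', 'A', 'C']
print("descending ranking:", rank_desc)  # ['A', 'C', 'B']
# The top-priority outcome changes from B to A simply because the axis labels
# were flipped, which is the ranking reversal described above.
```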
The typical industry RM given in Pritchard et al. (2010) used descending ordering. However, both ascending and descending scoring systems have been cited in the SPE literature, and there is no scientific basis for either method. In the 30 SPE papers surveyed, five use the descending scoring system, and the rest use the ascending scoring system. This behavior demonstrates that RM rankings are arbitrary; whether something is ranked first or last, for example, depends on whether one creates an increasing or a decreasing scale. How can a methodology that exhibits such a gross deficiency be considered an industry best practice? Would such a method stand up to scrutiny in a court of law? Imagine an engineer defending their risk-management plan by noting it was developed by use of an RM, when the lawyer points out that simply changing the scale would have resulted in a different plan. What other engineering best practices produce different designs simply by changing the scale or the units?

Instability Because of Categorization. RMs categorize consequence and probability values, but there are no well-established rules for how to do the categorization. Morgan et al. (2000) recommended testing different categories because no single category breakdown is suitable for every consequence variable and probability within a given situation.

Following this recommendation, we tried to find the best categories for the RM in Fig. 1 by examining the sensitivity of the risk ranking to changes in category definitions. To ease this analysis, we introduced a multiplier n that determines the range for each category. We retained the ranges for the first category for both consequence and probability. For the categories that are not at the endpoints of the axes, n will determine the start value and end value of the range. For example, with n = 2, the second probability category in Fig. 1 has a value range from 0.01 to 0.02 (0.01 to 0.01 × n). For the category at the end of the axis, n will affect only the start value of the range, which must not exceed unity (n = 3.15) for the probability axis and must not exceed USD 20 million (n = 3.6) for the consequence axis. Tables 5 and 6 show the probability and consequence ranges, respectively, for n = 2 and n = 3.

We vary the multiplier and observe the effect on risk ranking for both ascending and descending scores. While varying the multiplier for one axis, the ranges in the other axis are kept at a default value (Fig. 3) and constant. Because Table 1 gives the consequence value in ranges, we use the midpoint10 consequence value within the range for each outcome, as shown in Table 7. Given a single consequence value for each outcome, the categorization-instability analysis can be performed. Figs. 7 and 8 show how the risk ranking is affected by changes in n.

TABLE 7—CASE FOR CATEGORIZATION INSTABILITY ANALYSIS

Outcome | Consequence (USD Million) | Probability
Severe Losses | 3 | 40%
Well Control | 12.5 | 10%
Blowout | 50 | 5%

10 For the practicality of the analysis, we assume that for the blowout consequence, the ratio of the range's high value to low value is the same as for Category 5 (high value = 4 × low value). Thus, the range is USD 20 to 80 million, and the middle value is USD 50 million. No matter which value is chosen to represent the high-end consequence, the instability remains and is equally severe.
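A minimal sketch of this sensitivity analysis is shown below. It assumes five probability categories whose boundaries grow geometrically from a first cutoff of 0.01 (mirroring the 0.01-to-0.01 × n example above) and assumes fixed consequence scores of 2, 4, and 5; the probabilities are those of Table 7.

```python
# Sketch of the categorization-instability idea: probability-category
# boundaries grow geometrically from an assumed first cutoff of 0.01 with
# multiplier n; the 5-category scale and the consequence scores (2, 4, 5)
# are assumptions, while the probabilities come from Table 7.
import bisect

outcomes = {"Severe Losses": 0.40, "Well Control": 0.10, "Blowout": 0.05}
consequence_score = {"Severe Losses": 2, "Well Control": 4, "Blowout": 5}

def probability_score(p: float, n: float, first_cutoff: float = 0.01) -> int:
    """Score 1-5 from category boundaries first_cutoff * n**i, i = 0..3."""
    bounds = [first_cutoff * n ** i for i in range(4)]
    return min(bisect.bisect_left(bounds, p) + 1, 5)

for n in (1.5, 2.0, 3.0):
    risk = {name: probability_score(p, n) * consequence_score[name]
            for name, p in outcomes.items()}
    ranking = sorted(risk, key=risk.get, reverse=True)
    print(f"n = {n}: risk scores {risk} -> ranking {ranking}")

# The top-ranked outcome moves from Blowout (n = 1.5) to a tie between Well
# Control and Blowout (n = 2.0) to Well Control (n = 3.0): the prioritization
# is a function of the arbitrary multiplier n.
```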
Fig. 7—Risk ranking of Severe Losses (SL), Well Control (WC), and Blowout (BO) vs. the probability multiplier n, for ascending and descending scores (annotated examples: n = 2.20, SL = 1 or 2, WC = 3, BO = 1 or 2; n = 2.25, SL = 1, WC = 2, BO = 3; n = 2.55, SL = 2 or 3, WC = 2 or 3, BO = 1).

Figs. 7 and 8 indicate that, except where consequence is in ascending order, the risk prioritization is a function of n. This is problematic because the resulting risk ranking is unstable in the sense that a small change in the choice of ranges, which is again arbitrary, can lead to a large change in risk prioritization. Thus, we again see that the guidance provided by RMs is arbitrary, being determined by arbitrary design choices that have no scientific basis.

For each SPE paper that used at least one quantitative scale, Table 8 shows the percentage of the domain for Categories 1 through 4, with Category 5 being excluded because it was often unbounded. The left-hand table is for the frequency and the right-hand table is for the consequence. For example, the probability categories for paper SPE 142854, in ascending order, cover 0.001, 0.1, 0.9, and 99% of the domain. The consequence categories for paper SPE 142854, in ascending order, cover 0.1, 0.9, 9, and 90% of the domain.

That categories cover different amounts of the total range is clearly a significant distortion. In addition to this, the size of the categories varies widely across papers. For example, in the papers we surveyed, Category 3 on the likelihood axis spans 0.9 to 18% of the total range.

Fig. 8—Risk ranking of Severe Losses (SL), Well Control (WC), and Blowout (BO) vs. the consequence multiplier n (consequence axis USD 0 to 80 million), for ascending and descending scores (annotated examples: n = 3.2, SL = 3, WC = 1, BO = 2; n = 3.4, SL = 2, WC = 3, BO = 1).

Table 8—Percentage of the range covered by each rating category, tabulated per paper for frequency and for consequence (columns: Paper Number, Rating, Percentage of Range).

Relative Distance is Distorted. Lie Factor. According to Table 7, the consequence of a blowout is four times that of well control (50/12.5). However, the ratio of their scores in the RM is only 1.2 (6/5). The difference in how risk is portrayed in the RM vs. the expected values can be quantified by use of the LF.

The LF was coined by Tufte and Graves-Morris (1983) to describe graphical representations of data that deviate from the principle that "the representation of numbers, as physically measured on the surface of the graphic itself, should be directly proportional to the quantities represented" (Tufte and Graves-Morris 1983). This maxim seems intuitive, but it is difficult to apply to data that follow an exponential relationship, for example. Such cases often use log plots, in which the same transformation is applied to all the data. However, RMs can distort the information they convey at different rates within the same graphic.
Slightly modifying the Tufte and Graves-Morris (1983) definition, we define the LF as

LF_m,n = ΔV_m,n / ΔS_m,n, .......................... (1)

where ΔV_m,n = |V_n − V_m| / V_m, ΔS_m,n = |S_n − S_m| / S_m, and n > m.

The LF is thus calculated as the change in value (of probability or consequence) over the m and n categories divided by the change in score over the m and n categories. In calculating the LF, we use the midpoint across the value and probability ranges within each category.

From Fig. 1, the score of the consequence axis at m = 3 is S = 3 and at n = 4 is S = 4. By use of the midpoint value for each category, LF_3,4 = (|3,000 − 625|/625)/(|4 − 3|/3) = 11.4. The interpretation of this is that the increase in the underlying consequence values is 11.4 times larger than an increase in the score.
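As a check on Eq. 1, the following sketch reproduces the LF_3,4 = 11.4 example with the category midpoints and scores quoted above.

```python
# Direct implementation of Eq. 1 for the worked example above; the category
# midpoints (625 and 3,000) and scores (3 and 4) are the values quoted in the
# text for the Fig. 1 consequence axis.
def lie_factor(v_m: float, v_n: float, s_m: float, s_n: float) -> float:
    """LF_{m,n} = (|V_n - V_m| / V_m) / (|S_n - S_m| / S_m), with n > m."""
    delta_v = abs(v_n - v_m) / v_m
    delta_s = abs(s_n - s_m) / s_m
    return delta_v / delta_s

print(round(lie_factor(v_m=625, v_n=3000, s_m=3, s_n=4), 1))  # 11.4
```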
Only nine of the 30 papers reviewed included enough quantitative information for the LF to be calculated. We define the LF for an RM as the average of the LFs for all categories. An alternative definition might be the maximum LF for any category. Table 9 shows the result of our average-LF calculation. All reviewed RMs use infinity as the upper bound on the consequence axes. This gives infinite LFs. However, in summarizing the LF for the reviewed papers in Table 9, we have chosen to use the second-largest category as the upper limit for the consequences. This obviously understates the actual LFs in the reviewed papers.

All nine papers have an LF greater than unity along at least one axis. Paper SPE 142854, for example, has an LF of 96 on the consequence axis and 5,935 on the probability axis.

TABLE 9—LF FOR NINE SPE PAPERS (AVERAGE OF EACH CATEGORY)

Paper Number | LF of Consequence | LF of Probability
SPE 142854 | 96 | 5,935
SPE 86838 | 30 | –
SPE 98852 | 745 | 245
SPE 121094 | 5 | –
SPE 74080 | 94 | –
SPE 123861 | 28 | 113
SPE 162500 | 85 | 389
SPE 98423 | 16 | –
IPTC 14946 | 1 | 3

Many proponents of RMs extol their visual appeal and resulting alignment and clarity in understanding and communication. However, the commonly used scoring system distorts the scales and removes the proportionality in the input data. How can it be argued that a method that distorts the information underlying an engineering decision in nonuniform and uncontrolled ways is an industry best practice? The burden of proof is squarely on the shoulders of those who would recommend the use of such methods to prove that these obvious inconsistencies do not impair decision making, much less improve it, as is often claimed.

A Consistent Approach to Risk Management
The motivation for writing this paper was to point out the gross inconsistencies and arbitrariness embedded in RMs. Given these problems, it seems clear to us that RMs should not be used for decisions of any consequence. Our pointing out that RMs produce arbitrary rankings does not require us to provide another method in their place, any more than we would be required to suggest new medical treatments to argue against the once popular practice of bloodletting. The arbitrariness of RMs is not conditional on whether or not other alternatives exist. Nevertheless, the question is bound to be raised, and thus this section provides a brief set of references to what we consider to be a consistent approach to risk management.

Risk management is fundamentally about decision making. The objective of the risk-management process is to identify, assess, rank, and inform management decisions to mitigate risks. Risks can only be managed through our decisions, and the risk-management objectives are best achieved with processes and tools that support high-quality decision making in complex and uncertain situations.

For centuries, people have speculated on how to improve decision making, and a formal approach to decision and risk analysis can be traced through the works of Bayes and Price (1763), Laplace (1902), Ramsey (1931), De Finetti (1931, 1937), von Neumann and Morgenstern (1944), Bernoulli (1954), and Savage (1954). Over the last several decades, important supporting fields
TABLE A-1—30 SPE PAPERS AND (SOME OF) THEIR INHERENT FLAWS
Paper  Year  Author(s)  Risk-Acceptance Inconsistency  Category-Definition Bias  Centering Bias  Scoring System
Corrosion 2000 2000 Reynolds, J.T. Yes Yes Not available Ascending
SPE 61149 2000 Piper and Carlon Yes Yes Not available Descending
SPE 66516 2001 Berg, F.R. Yes Yes Not available Ascending
SPE 73892 2002 McCulloch Yes Yes Not available –
SPE 73897 2002 Smith et al. Yes Yes Yes Ascending
SPE 74080 2002 Zainuddin et al. Yes Yes Yes Descending
SPE 85299 2003 Coakley et al. Yes Yes Not available Ascending
SPE 86838 2004 Theriau et al. Yes Yes Not available Descending
SPE 98566 2006 Campbell and Tate Yes Yes Not available Ascending
SPE 98852 2006 Alkendi Yes Yes Not available Ascending
SPE 98679 2006 Clare and Armstrong Yes Yes Not available Ascending
SPE 98423 2006 Valeur and Clowers Yes Yes Not available Ascending
SPE 108279 2007 Poedjono et al. Yes Yes Not available Ascending
SPE 108853 2007 McDermott Yes Yes Not available Ascending
SPE 105319 2007 Samad et al. Yes Yes Not available Ascending
OTC 18912 2007 Truchon et al. Yes Yes Yes Descending
SPE 111549 2008 Kinsella et al. Yes Yes Not available Ascending
SPE 121094 2009 Poedjono et al. Yes Yes Not available Ascending
SPE 123457 2009 Lee Yes Yes Not available Ascending
SPE 123861 2009 Leistad and Bradley Yes No Not available Ascending
SPE 111769 2009 Jones and Bruney Yes Yes Not available Descending
SPE 137630 2010 Samad et al. Yes Yes Not available Ascending
SPE 127254 2010 Da Silva et al. Yes Yes Not available Ascending
IPTC 14434 2011 Al-Mitin et al. Yes Yes Not available Ascending
IPTC 14946 2011 Areeniyom Yes Yes Not available Ascending
SPE 146845 2011 Petrone et al. Yes Yes Yes Ascending
SPE 158114 2012 Bower-White Yes Yes Not available Ascending
SPE 162500 2012 Bensahraoui and Macwan Yes Yes Yes Ascending
SPE 142854 2012 Dethlefs and Chastain Yes Yes Yes Ascending
SPE 161547 2012 Duguay et al. Yes Yes Not available Ascending
Philip Thomas is a PhD candidate in petroleum investment and decision analysis at the University of Stavanger and is advised by R.B. Bratvold. He is interested in the applications of decision analysis and real-options analysis in the O&G industry. Thomas holds a master's degree in petroleum engineering from the University of Stavanger and a bachelor's degree in petroleum engineering from Bandung Institute of Technology, Indonesia.

Reidar B. Bratvold is a professor of petroleum investment and decision analysis at the University of Stavanger and at the Norwegian University of Science and Technology in Trondheim, Norway. His research interests include decision analysis, valuation of risky projects, portfolio analysis, real-option valuation, and behavioral challenges in decision making. Before entering academia, Bratvold spent 15 years in the industry in various technical and management roles. He is a coauthor of the SPE Primer Making Good Decisions. Bratvold is an associate editor for SPE Economics & Management and has twice served as an SPE Distinguished Lecturer. He is a fellow and board member in the Society of Decision Professionals and was made a member of the Norwegian Academy of Technological Sciences for his work in petroleum investment and decision analysis. Bratvold holds a PhD degree in petroleum engineering and a master's degree in mathematics, both from Stanford University, and obtained business and management-science education from INSEAD and Stanford University.

J. Eric Bickel is an assistant professor in both the Graduate Program in Operations Research/Industrial Engineering (Department of Mechanical Engineering) and the Department of Petroleum and Geosystems Engineering at the University of Texas at Austin. In addition, he is a fellow with the Center for Petroleum Asset Risk Management. Bickel's research interests include the theory and practice of decision analysis and its application in the O&G industry. Before returning to academia, he was a Senior Engagement Manager for Strategic Decisions Group. Bickel holds a master's degree and a PhD degree from the Department of Engineering-Economic Systems at Stanford University.