The Open FAIR Body of Knowledge, Version 2
SECURITY SERIES
The Open Group Standard
The Open Group is a global consortium that enables the achievement of business objectives
through technology standards. Our diverse membership of more than 800 organizations includes
customers, systems and solutions suppliers, tools vendors, integrators, academics, and
consultants across multiple industries.
The mission of The Open Group is to drive the creation of Boundaryless Information Flow™
achieved by:
Working with customers to capture, understand, and address current and emerging
requirements, establish policies, and share best practices
Working with suppliers, consortia, and standards bodies to develop consensus and
facilitate interoperability, to evolve and integrate specifications and open source
technologies
Offering a comprehensive set of services to enhance the operational efficiency of
consortia
Developing and operating the industry’s premier certification service and encouraging
procurement of certified products
The Open Group publishes a wide range of technical documentation, most of which is focused
on development of Standards and Guides, but which also includes white papers, technical
studies, certification and testing documentation, and business titles. Full details and a catalog are
available at www.opengroup.org/library.
This Document
This document is The Open Group Standard for Risk Analysis (O-RA), Version 2.0.1. It has
been developed and approved by The Open Group.
This document provides a set of standards for various aspects of information security risk
analysis. It is a companion document to the Risk Taxonomy (O-RT) Standard, Version 3.0.1.
The intended audience for this document includes anyone who needs to understand and/or
analyze a risk condition. This includes, but is not limited to:
Information security and risk management professionals
Auditors and regulators
Technology professionals
Management
This document is one of several publications from The Open Group dealing with risk
management. Other publications include:
Risk Taxonomy (O-RT) Standard, Version 3.0.1
The Open Group Standard (C20B, November 2021)
This document defines a taxonomy for the factors that drive information security risk. It
was first published in January 2009, and has been revised as a result of feedback from
practitioners using the standard and continued development of the Open FAIR™
taxonomy.
Requirements for Risk Assessment Methodologies
The Open Group Guide (G081, January 2009)
This document identifies and describes the key characteristics that make up any effective
risk assessment methodology, thus providing a common set of criteria for evaluating any
given risk assessment methodology against a clearly defined common set of essential
requirements. In this way, it explains what features to look for when evaluating the
capabilities of any given methodology, and the value those features represent.
Open FAIR™ – ISO/IEC 27005 Cookbook
The Open Group Guide (C103, November 2010)
This document describes in detail how to apply the Open FAIR methodology to ISO/IEC
27002:2005. The Cookbook part of this document enables risk technology practitioners to
follow by example how to apply FAIR to other frameworks of their choice.
The Open FAIR™ – NIST Cybersecurity Framework Cookbook
The Open Group Guide (G167, October 2016)
This document describes in detail how to apply the Open FAIR factor analysis for
information risk methodology to the NIST Framework for Improving Critical
Infrastructure Cybersecurity (NIST Cybersecurity Framework).
The Open FAIR™ Risk Analysis Process Guide
The Open Group Guide (G180, January 2018)
This document offers some best practices for performing an Open FAIR risk analysis: it
aims to help risk analysts understand how to apply the Open FAIR risk analysis
methodology.
How to Put Open FAIR™ Risk Analysis Into Action: A Cost-Benefit Analysis of
Connecting Home Dialysis Machines Online to Hospitals in Norway
The Open Group White Paper (W176, May 2017)
This document offers an Open FAIR analysis of security and privacy risks and compares
those risks to the likely benefits of connecting home dialysis machines online to hospitals.
This document includes changes to the O-RA Standard that have evolved since the original
document was published. These changes came about as a result of feedback from practitioners
using the standard:
The “Confidence Level in the Most Likely Value” as a parameter to model estimates
conceptualized in the previous version of the O-RA Standard is discontinued and replaced
by the choice of distribution that would determine it
The quantitative example that utilized a qualitative scale has been removed
Open FAIR terms and definitions have been clarified
The Loss Scenario is decomposed and explained utilizing accompanying figures,
including guidance on selecting the distribution to use and Risk Factor to model
The NIST CSF five functions are incorporated
Microsoft and Excel are registered trademarks of Microsoft Corporation in the United States
and/or other countries.
All other brands, company, and product names are used for identification purposes only and may
be trademarks that are the sole property of their respective owners.
Special thanks go to Douglas Hubbard, author of How to Measure Anything: Finding the Value
of Intangibles in Business (see Referenced Documents). Many of the ideas in the calibration
section of this standard were originally described by him in this important book.
The Open Group gratefully acknowledges the contribution of the following people in the
development of earlier versions of this document:
Jack Jones
Michael Legary
James Middleton
Steve Tabacek
Chad Weinman
(Please note that the links below are good at the time of writing but cannot be guaranteed for the
future.)
Framework for Improving Critical Infrastructure Cybersecurity, Version 1.1, April 2018,
published by the National Institute of Standards and Technology (NIST); refer to:
www.nist.gov/cyberframework/framework
How to Measure Anything: Finding the Value of Intangibles in Business, 3rd Edition,
Douglas W. Hubbard, April 2014, published by John Wiley & Sons
Risk Taxonomy (O-RT) Standard, Version 3.0.1, The Open Group Standard (C20B),
November 2021, published by The Open Group; refer to:
www.opengroup.org/library/c20b
The Open FAIR™ Risk Analysis Process Guide, The Open Group Guide (G180), January
2018, published by The Open Group; refer to: www.opengroup.org/library/g180
1.1 Objective
The objective of the Risk Analysis (O-RA) Standard is to enable risk analysts to perform
effective information security risk analysis using the Open FAIR™ framework. When coupled
with the Risk Taxonomy (O-RT) Standard, it provides risk analysts with the specific processes
necessary to perform effective risk analysis.
This document should be used with the companion O-RT Standard to:
Educate information security, risk, and audit professionals
Establish a common language for the information security and risk management
profession
Introduce rigor and consistency into analysis, which sets the stage for more effective risk
modeling
Explain the basis for risk analysis conclusions
Strengthen existing risk assessment and analysis methods
Create new risk assessment and analysis methods
Evaluate the efficacy of risk assessment and analysis methods
Establish metric standards and data sources
1.2 Overview
This document is intended to be used with the O-RT Standard, which defines the Open FAIR
taxonomy for the factors that drive information security risk. Together, these two standards
comprise a body of knowledge in the area of quantitative risk analysis.
Although the terms “risk” and “risk management” mean different things to different people, this
document is intended to be applied toward the problem of managing the frequency and
magnitude of loss that arise from a threat (whether human, animal, or natural event). In other
words, managing “how often bad things happen, and how bad they are when they occur”.
Although the concepts and standards within this document were not developed with the intention
of being applied toward other types of risk, experience has demonstrated that they can be
effectively applied to other risk scenarios. For example, they have been successfully applied in
managing the likelihood and consequence of adverse events associated with project management
or finance, in legal risk, and by statistical consultants in cases where probable impact is a
concern (e.g., introducing a non-native species into an ecosystem).
1.5 Terminology
For the purposes of this document, the following terminology definitions apply:
May: Describes a feature or behavior that is optional. To avoid ambiguity, the opposite of
“may” is expressed as “need not”, instead of “may not”.
For the Open FAIR Glossary, see the definitions in The Open Group Standard for Risk
Taxonomy (O-RT), Version 3.0.1; accessible at: www.opengroup.org/library/c20b. Merriam-Webster’s Collegiate Dictionary should be referenced for terms not defined in this section.
The first two elements above can be summarized as “scoping” the analysis – scoping is the
process of identifying a countable, easily understandable Loss Event and risk scenario statement.
Practitioners must recognize that time spent in scoping is crucial to performing effective
analyses. In fact, carefully scoping an analysis reduces time spent on the analysis due to better
clarification of data requirements and less time spent troubleshooting and revising the analysis.
More information on scoping the analysis appears in the Open FAIR Risk Analysis Process
Guide (see Referenced Documents).
Risk-related data is never perfect. In other words, there will always be some amount of
uncertainty in the values being used. As a result, measurements and estimates of risk factors
should faithfully reflect the quality of data being used. The most common approach to achieving
this is to use ranges and/or distributions rather than discrete values as inputs; by using ranges
and/or distributions for measurements and estimates of risk factors, an Open FAIR risk analysis
reflects that there is uncertainty about the future and that the data is always
imperfect/incomplete. Therefore, Open FAIR analysts use Monte Carlo or other stochastic
methods to calculate results.
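To illustrate, the following minimal sketch (Python with NumPy; all ranges are illustrative assumptions, not values taken from this standard) propagates ranged estimates through a Monte Carlo simulation instead of multiplying single point values:

    import numpy as np

    rng = np.random.default_rng(seed=1)
    N = 100_000  # number of simulated futures

    # Calibrated range for Loss Event Frequency (events/year): min, most likely, max.
    lef = rng.triangular(left=1, mode=3, right=10, size=N)

    # Calibrated range for Loss Magnitude per event (dollars).
    lm = rng.triangular(left=5_000, mode=20_000, right=250_000, size=N)

    annualized_loss = lef * lm

    # Report a range rather than a single number, reflecting input uncertainty.
    p5, p50, p95 = np.percentile(annualized_loss, [5, 50, 95])
    print(f"Annualized loss exposure: 5th={p5:,.0f} median={p50:,.0f} 95th={p95:,.0f}")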
[Figure: Risk Management comprises Risk Assessment (Risk Identification, Risk Analysis, and Risk Evaluation), Risk Treatment, and Risk Monitoring]
To measure risk, analysts incorporate into models what they believe they know about the future
(certainty) and what they do not or cannot know (uncertainty) to estimate what will actually
happen, and to estimate the range of outcomes of that future. The O-RT Standard describes the
risk factors and structure of a model that estimates the likelihood of foreseeable losses due to
information system-related events. Model providers build models using those risk factors as
defined in the taxonomy to make accurate estimates of future outcomes.
All models at some level are “wrong”. Models, whether based upon human judgment and
experience alone, scientific theory, mathematical equations, artificial intelligence, or a
combination of all these, are limited in their ability to specifically and exactly predict the future;
the future outcome cannot be determined with complete precision before it actually occurs and is
observed.
The best an analyst can do is estimate risk factors and apply them through an accurate model,
and even then, the model’s estimate of probable future loss may not include the actual future
outcome when it occurs; the model or the estimates may have been inaccurate. However,
modeling an uncertain future is the best humanity can do. The analyst must use the best
available, current information to make informed probabilistic statements about what they believe
may occur. (Indeed, human brains do this every moment.)
Analysts must identify and apply the knowledge they have and, at the same time, incorporate the
uncertainty of what is not known – or not knowable. Some of that unknown may be discoverable
through additional research. Other uncertainty, the uncertainty of a strictly unpredictable future
outcome, cannot be discovered. Analysts – using all the information they have available – apply
what measurements of risk factors are known, estimate risk factors that are unknown, and use
risk models to estimate the frequency and magnitude of foreseeable future outcomes.
The precision of an estimate is its range. Estimates that are too precise – that is, with a range that
is too narrow, ignoring or minimizing the expression of uncertainty – can mislead decision-
makers into thinking that there is more rigor in the risk analysis than there is. Using distributions
and ranges increases the probability that an estimate is accurate. To increase the probability that
an estimate is accurate, reduce its precision.
An example of an estimate that is precise but inaccurate would be an estimate that the wingspan
of a Boeing™ 787 is exactly 107 feet. An example of an estimate that is accurate but not
usefully precise would be an estimate that the wingspan of a Boeing 787 is between 1 foot and
1,000 feet.
An estimate is usefully precise when more precision would not improve or change the decision
made by stakeholders. To extend the airplane wingspan example, if its wingspan is estimated to
be between 10 feet and 300 feet, and if the objective of the estimation exercise is to inform the
builder of a hangar how large it should be to cost-effectively house the plane, then the estimate is
likely accurate, but it is not usefully precise to support an economically efficient hangar
construction project.
To provide high-quality estimates that stakeholders can use to make effective decisions, analysts
must first ensure they are accurate. To improve the usefulness of the estimate, analysts shall
provide as much precision as the research, data, and reasoning of that estimate can support,
while keeping the estimate accurate.
Risk analysis results cannot be more precise than the input values. The number of significant
digits resulting from multiplication is the same as the number of significant digits in the least
precise input (i.e., the result should carry no more significant digits than that input). In other words, analysts should not
represent more precision in an analysis than the data going into it can support.
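As a minimal illustration of the significant-digits rule, the helper below (a sketch assumed for this text, not part of the standard) rounds a computed result to the precision its least precise input supports:

    import math

    def round_sig(value: float, sig_digits: int) -> float:
        """Round a value to the given number of significant digits."""
        if value == 0:
            return 0.0
        exponent = math.floor(math.log10(abs(value)))
        return round(value, sig_digits - 1 - exponent)

    # Inputs known only to two significant digits cap the result at two:
    annual_events = 12          # assumed input, ~2 significant digits
    cost_per_event = 3_500.0    # assumed input, ~2 significant digits
    print(round_sig(annual_events * cost_per_event, 2))  # 42000.0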
Highly subjective measurements of risk factors are those which are strongly influenced by
personal feelings, interpretations, prejudice, or a simple lack of subject matter knowledge.
Highly objective risk measurements of risk factors are those which are not influenced by
personal feelings, interpretations, or prejudice, but which are supported by facts, observations,
and evidence upon which impartial observers would agree. The analyst should make
measurements as objective as is practical and useful to the analysis at hand.
To increase the objectivity of this risk measurement, the analyst has two primary approaches.
The first is to gather more data to help inform the risk estimate. The second is to better
understand how the estimates are derived – in other words, to better define and apply the factors
that make up or influence the estimates. The precise definitions and relationships provided in the
O-RT Standard help to inform this understanding.
Because the future is uncertain and estimation models are imperfect representations of the
complex natural world, a modeled estimate will be inaccurate some of the time: estimates have a
probability of being accurate. That probability can be increased at the expense of the estimate’s
precision. In other words, increasing an estimate’s range increases the probability of it being
accurate but comes at the expense of reduced precision of that estimate.
At some point, although accurate, an imprecise estimate is no longer useful. The practical
tradeoff made in this document is that estimates should be accurate 90% of the time. This
probability of accuracy makes a tradeoff between the probability of the estimate being accurate
and improved precision.
To accomplish this accuracy-precision tradeoff, the analyst should have a degree of belief that
the materialized, observed future outcome will not be below the estimate’s minimum more than
one time out of 20, or 5% of the time. Similarly, the analyst should have a degree of belief that
the materialized, observed future outcome will not be above the estimate’s maximum value more
than one time out of 20, or again 5% of the time.
The most likely value is the peak of the distribution: the mode of a discrete distribution or,
analogously, the peak of a continuous one. In the context of inputs into the
Open FAIR model, when making an estimate for a factor of a risk model, the most likely value
represents the value within the range believed to have the highest probability of being the true
value when observed in the future.
If the risk analyst knows nothing about an estimate aside from its accurate minimum and
maximum values, they would choose the uniform distribution – such a choice indicates complete
uncertainty about the likelihood of the modeled, estimated future outcome aside from its
minimum and maximum range values. If the risk analyst knows that a modeled potential loss has
the potential of a “fat tail”, they would choose a log-normal statistical distribution. Similarly, the
analyst may know that the Threat Event Frequency is best modeled through a Poisson
distribution.2
Note: The Open FAIR model is agnostic on what distribution is “right”. Instead, analysts
shall use their best knowledge of that risk factor and what additional information they
know about it to select the most appropriate distribution. The Open FAIR model is also
agnostic on any required distribution choices that a risk model or calculating tool
provides to the risk analyst. From an Open FAIR standard conformance or compliance
standpoint, there are no minimum requirements placed upon tool or model suppliers to
include or exclude any statistical distribution available for the analysts to use in
modeling any risk factor.
2 Poisson distribution: https://en.wikipedia.org/wiki/Poisson_distribution.
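To make these distribution choices concrete, a brief sketch follows (Python with NumPy; every parameter value is an illustrative assumption and, per the Note above, none of these choices is mandated by this standard):

    import numpy as np

    rng = np.random.default_rng(seed=7)
    N = 100_000

    # Only an accurate min/max is known: uniform expresses complete
    # uncertainty between the range values.
    some_factor = rng.uniform(low=0.1, high=0.9, size=N)

    # A potential loss with a "fat tail": log-normal, parameterized by the
    # mean and standard deviation of the underlying normal (assumed values).
    potential_loss = rng.lognormal(mean=np.log(50_000), sigma=1.0, size=N)

    # Threat Event Frequency as a count of events per year: Poisson.
    tef = rng.poisson(lam=4.0, size=N)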
Starting with these clearly absurd values leads to more accurate estimates, reduces the likelihood
of an inaccurate estimate, and makes it more possible to narrow in on a more realistic range of
min/max values.
In information security risk analysis, a similar broad question might be, “How much risk does
this firm have around lost laptops and Personally Identifiable Information (PII)?” To decompose
this into components that can be more easily dealt with (and for which there is data to support a
risk analysis), the analyst can ask themselves questions such as, “How many laptops have we
historically lost each year? How much PII is being stored on laptops by employees? What costs
do organizations similar to ours experience when they lose PII?”
Analysts almost always have to work with the information they have, not the information they
want. Analysts have several approaches they can take to make calibrated, accurate estimates
with missing or incomplete information. The main point is this: analysts have more information
than they think, but they must be creative in how to discover and use what they have. Enrico
Fermi developed techniques to do this, and the literature around “Fermi Problems” is referenced
in the example below.
Suppose an analyst had to estimate the number of plays attributed to William Shakespeare. An
analyst who had a copy of the complete works at home and had appeared in a handful of
Shakespeare plays may not need to decompose that value into its factors. Based upon that unique
knowledge and experience, the analyst could estimate Shakespeare’s lifetime production
between “20 to 40 plays” with relative ease.
Someone else, however, without access to that knowledge and experience and who has no idea
of Shakespeare’s lifetime achievement could treat “the number of plays attributed to
Shakespeare” as a Fermi Problem and derive it from the sub-factors “number of years of
productivity” and “number of plays per year”. Without direct knowledge of Shakespeare but
with some knowledge of Elizabethan times and the productivity of playwrights, a productive
lifetime of “10 to 30 years” and a playwright productivity of “one to three plays per year” could
be reasonably estimated and, when combined, would lead to an estimated lifetime achievement
of between 10 to 90 plays. For this analyst, who has not trodden the boards as Benvolio,
decomposing a difficult-to-estimate value into easier-to-estimate factors for which more
data/rationale was readily available, deriving an accurate estimate becomes possible. If that
The same is true when estimating risk factors in Open FAIR models. If one factor is difficult to
estimate given the available information, the analyst should research information that informs
accurate estimates of its sub-factors, ultimately producing an accurate estimate of the factor in
question.
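The Shakespeare decomposition can be reproduced as a small simulation (a sketch; the uniform ranges are the same illustrative estimates used in the text, and the uniform distributions simply express that nothing is known beyond the min/max):

    import numpy as np

    rng = np.random.default_rng(seed=42)
    N = 100_000

    # Easier-to-estimate sub-factors, using the ranges from the text.
    productive_years = rng.uniform(10, 30, size=N)  # years of productivity
    plays_per_year = rng.uniform(1, 3, size=N)      # plays written per year

    # Combine the sub-factors to derive the difficult-to-estimate value.
    lifetime_plays = productive_years * plays_per_year

    p5, p95 = np.percentile(lifetime_plays, [5, 95])
    print(f"Estimated lifetime production: {p5:.0f} to {p95:.0f} plays")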
With an initial absurd range for the value, the next step is to narrow the range to more accurately
estimate the actual values so that the analyst is confident that the actual value will fall within the
range 90% of the time (a 90% confidence interval).
Douglas Hubbard (see Referenced Documents) uses the analogy of a wheel to help narrow the
range. The analyst is offered a choice between two scenarios:
1. They will receive $1,000 if the actual value falls within their predicted range.
2. They will spin a wheel with 90% of its area painted black and 10% painted red, and will
receive $1,000 if the wheel stops in the black.
The wheel implements a 90% confidence interval, and the desired goal is that the analyst has no
preference between the two methods. An analyst who prefers the wheel is not confident that their
estimate represents a 90% confidence interval for the value, which demands that estimate be
revised; in fact, this analyst must have a degree of confidence less than 90% if they prefer the
wheel, and they should make their estimated range wider. The confidence interval can be
tightened by asking the analyst to make the same choice for each bound separately: the actual
value should fall above the estimate’s minimum 95% of the time, and below its maximum 95% of the time.
3 Monte Carlo method: https://en.wikipedia.org/wiki/Monte_Carlo_method.
The methodology standardized here is a basic one covering a foundational series of stages to
define, scope, decompose, and quantitatively evaluate potential Loss Events in an information
system.
Stage 1: Identify the Loss Scenario (Scope the Analysis)
Stage 2: Evaluate the Loss Event Frequency
Stage 3: Evaluate the Loss Magnitude (LM)
Stage 4: Derive and Articulate Risk
Stage 5: Model the Effect of Controls
Throughout the risk analysis process, the risk analyst shall document key assumptions and the
rationale for estimates used. Well-documented assumptions should include the reasoning for the
assumption as well as any sources that contributed to them. A well-documented rationale should
state the source of all estimates – the source may be systems (e.g., logs), groups (e.g., incident
response), or industry data.
By documenting the key assumptions and rationale used, the analyst will be better able to defend
the analysis and explain the results to decision-makers. This also allows analysts to compare
approaches if results differ. Documenting the key assumptions and rationale used adds to the
integrity of the analysis because analysts can demonstrate where they found data and why certain
data was used for estimates in the analysis.
The Loss Scenario is the story of the loss, expressed as a sentence:
A Threat Agent breaches or impairs an information Asset that causes an observable Loss
Event that has direct economic consequences (Primary Loss) and may have economic
consequences initiated by reactions from others (Secondary Loss).
To complete the Loss Scenario, the analyst must identify and define the Primary Stakeholder, the
Asset, the Threat Agent/Community, the Threat Event, and the Loss Event.
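As a scoping aid, the elements of a Loss Scenario can be captured in a simple record; the sketch below is one possible structure (the field names are assumptions of this example, not terms mandated by the standard), and it doubles as documentation of key assumptions:

    from dataclasses import dataclass, field

    @dataclass
    class LossScenario:
        primary_stakeholder: str   # who bears the loss
        asset: str                 # what is breached or impaired
        threat_community: str      # who or what acts against the Asset
        threat_event: str          # the attempted breach or impairment
        loss_event: str            # the observable loss
        assumptions: list = field(default_factory=list)  # documented rationale

    scenario = LossScenario(
        primary_stakeholder="The firm",
        asset="PII stored on employee laptops",
        threat_community="Opportunistic thieves",
        threat_event="Theft of an unattended laptop",
        loss_event="Disclosure of PII from a stolen, unencrypted laptop",
        assumptions=["Laptop loss counts taken from asset-management records"],
    )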
By examining the nature of the organization (e.g., the industry it is in) and the conditions
surrounding the Asset (e.g., an HR executive’s office), the analyst can begin to classify the
overall Threat Agent population into one or more Threat Communities that might reasonably
apply. Including every conceivable Threat Community in the analysis is likely not a good use of
time; instead, the analyst should identify the most probable Threat Communities for the Loss
Scenario.
To further define a Threat Community, the analyst can build a “profile” or list of common
characteristics associated with a given Threat Community. The Open FAIR model does not
define a standard list of attributes to evaluate for each and every Threat Community. Rather,
each organization should create a list of Threat Community attributes that can be reused across
multiple risk analyses to ensure internal consistency. Common characteristics include:
Motive
Objective
Access Method
Personal Risk Tolerance
Desired Visibility
Sponsorship
For each Threat Event type, the analyst can ask, “Who or what could potentially harm the Asset
in this way?”. However, distinguishing between probable and merely possible Threat Events is
crucial. By considering the nature of the Threat Communities relative to the industry,
organization, and Asset, the analyst selects the reasonable, probable Loss Scenarios and avoids
the speculative, highly improbable ones.
In many cases, a final consideration regarding the definition of a Threat Event under analysis is
to identify the “threat vector”. The threat vector represents the path and/or method used by the
Threat Agent to breach/impair the Asset, and each threat vector may have a different frequency
and different control levels. For example, an attacker seeking to gain access to sensitive
corporate information may try any of a number of vectors, such as technical attacks, leveraging
human targets, etc.
Every Loss Scenario must have a direct consequence evaluated as the economic cost directly
associated by the observed confidentiality, integrity, or availability loss of the Asset – this is the
Primary Loss. The impact of that observable Primary Loss then must be evaluated in economic
terms, measured in dollars, pounds, euros, yen, yuan, etc. All losses in the Open FAIR model are
measured in these economic terms.
From that Primary Loss, there is a probability that Secondary Stakeholders will react, resulting
in additional losses to the Primary Stakeholder – an additional loss resulting from the actions of
Secondary Stakeholders is the Secondary Loss. For example, regulators, customers, or the media
may react to the initial information system loss and initiate their own actions against Primary
Stakeholder Assets, usually financial Assets.
Performing a single analysis that encompasses more than one Threat Agent/Community or Asset
is generally acceptable, but careful consideration should be given if those multiple Threat
Agents/Communities have significantly different Threat Capabilities, attack the Asset through
different threat vectors, or act against the Asset at significantly different frequencies. When any
of these conditions occur, analysts should create multiple Loss Scenarios and analyze them
separately.
Finally, analysts should avoid combining different Asset types into a single analysis, for this
often makes modeling Loss Event Frequency and Loss Magnitude harder than it would be if the
Loss Scenarios were separated. For example, a Threat Agent who steals a laptop out of an
employee’s car has both the laptop and the data on it. Combining the Loss Magnitude of both the
laptop and data together may be much harder to do accurately than doing two analyses, one for
the laptop hardware that needs to be replaced, and one for the loss associated with the data on
the laptop.
Analyzing several focused Loss Scenarios often takes less time and is more efficient than trying
to make estimates for more complex scenarios.
The analysis is scoped after defining the Loss Scenario and all key, relevant assumptions have
been made and documented.
Note: In any risk analysis, regardless of method, the analyst must make assumptions to fill in
for missing or incomplete information. Analysts must clearly document their key
assumptions to ensure that all those who review the analysis understand the basis for
the values used and can assess whether the assumptions are reasonable for the
analysis.
In scoping the Loss Scenario, the Open FAIR risk factors are assumed to be independent and
identically distributed.
Figure 3 shows the decomposition of a Loss Scenario and can be understood, from left to right,
as showing the chain of events, beginning with a Threat Agent contacting an Asset and ending
with the Loss Event(s). It will be decomposed further to show the Loss Event Frequency and
Loss Magnitude components.
[Figure 3: Decomposition of a Loss Scenario, showing the chain from Threat Event to Loss Event and the six forms of loss: Productivity, Response, Replacement, Fines & Judgments, Competitive Advantage, and Reputation]
At the highest level, to understand the probable future loss associated with a given Loss
Scenario, the analyst needs an estimate of how many times the Loss Scenario is likely to occur
over a given timeframe – this is the Loss Event Frequency (LEF).
When estimating Loss Event Frequency, the analyst must choose which risk factors in the Open
FAIR taxonomy to estimate. For example, is it better to estimate Loss Event Frequency directly,
or to derive that risk factor from its lower-level sub-factors by estimating Threat Event Frequency
and Vulnerability? Should the analyst go even further down in the Open FAIR taxonomy,
decomposing those two risk factors into their sub-factors Contact Frequency (CF), Probability of
Action (PoA), Threat Capability (TCap), and Resistance Strength (RS) to derive all the higher
factors?
Analyses should be performed using the risk factors that have the highest quality of data to
support accurate and usefully precise calibrated estimates. When possible, analysts should
estimate factors at the highest level possible in the Open FAIR taxonomy. However, when an
accurate estimate is not usefully precise and information is available that informs accurate and
usefully precise estimates of lower-level risk factors, the analyst should decompose the
higher-level risk factor into its component sub-factors and estimate them.
Analysts should not assume that they must always derive Loss Event Frequency from estimates
of its lower-level factors, nor that it is always advantageous to do so. In fact,
estimating factors lower in the model can involve increasing levels of abstraction and difficulty
in many scenarios without improving the quality of the analysis.
Utilizing a top-down approach and working higher in the taxonomy offers increased efficiency
and, when there is historical data supporting an estimate at the Loss Event Frequency level, can
result in a more objective analysis. By leveraging a top-down approach, the analyst tries to
accurately and sufficiently precisely estimate Loss Event Frequency, only decomposing it into
its sub-factors if useful or necessary to serve the purpose of the analysis. The guiding principle is
this: select risk factors that represent the simplest model possible that is accurate and usefully
precise.
[Figure: The top-down approach: Risk decomposes into Loss Event Frequency, which decomposes into Threat Event Frequency and Vulnerability]
The purpose of the analysis will also determine the level of abstraction. For instance, if the
analyst is evaluating several different Control options to determine which option is most
effective from a risk reduction perspective, then deriving Vulnerability by analyzing Resistance
Strength and Threat Capability may be most useful. In this way, the analyst can estimate the
change in Resistance Strength due to the evaluated Control option.
If the analyst is unable to estimate the Loss Event Frequency directly (e.g., if there is no data on
prior Loss Events), if factors (such as Controls) have changed, if there is better/more objective
data about Threat Events than Loss Events, or if the purpose of the analysis requires it, the
analyst should step down one layer and work to estimate Threat Event Frequency (TEF) and
Vulnerability (Vuln) to derive the Loss Event Frequency.
A Threat Event Frequency estimate reflects how many times the Threat Agent will attempt to
breach or impair the Asset’s confidentiality, integrity, and/or availability. A Threat Event
Frequency estimate would be based upon how frequently contact between the Threat Agent and
the Asset occurs (the Contact Frequency) and the probability that the Threat Agent would act
against the Asset (the Probability of Action).
[Figure: Threat Event Frequency decomposes into Contact Frequency and Probability of Action]
For a Loss Event to occur, a Threat Agent must first contact an Asset. After contacting the Asset,
there is then a Probability of Action for whether the Threat Agent will then try to cause a Threat
Event. However, if the Threat Agent does not act against the Asset despite making contact, no
Threat Event occurs, so not every Contact Event results in a Threat Event.
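This relationship can be sketched as a simulation (all values are illustrative assumptions; the binomial step encodes that each Contact Event independently becomes a Threat Event with probability equal to the Probability of Action):

    import numpy as np

    rng = np.random.default_rng(seed=3)
    N = 100_000

    # Contact Frequency: contacts per year (assumed Poisson with mean 52).
    contact_events = rng.poisson(lam=52.0, size=N)

    # Probability of Action (assumed): most contacts do not become attacks.
    poa = 0.05

    # Each contact independently becomes a Threat Event with probability PoA.
    threat_events = rng.binomial(n=contact_events, p=poa)
    print(f"Mean Threat Event Frequency: {threat_events.mean():.2f} events/year")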
The probability that the Threat Agent would act is driven by three primary factors that affect
how the Threat Agent perceives the benefits of acting against the asset versus the costs of
conducting the act against the asset:
Perceived value of the Asset to them (based upon their motives – financial gain, revenge,
etc.)
Perceived level of effort required (i.e., how vulnerable the Asset appears to be)
Perceived risk of being caught and suffering unacceptable consequences
Probability of Action is influenced by what the Threat Agent perceives or believes, and it is
modeled in the Open FAIR framework as an economic analysis of costs and benefits of a
successful attack to the Threat Agent. A Threat Agent would have a lower Probability of Action
if the perceived payoff of a successful attack fell, the level of effort rose, or the consequences to
the Threat Agent rose. Assuming all other things are equal, examples of reducing the Probability
of Action include:
Changing PII policies – if Threat Agents regularly contact a database and discover that the
organization has changed its policies to reduce how much PII is stored in one location,
their perceived value of the Asset has been reduced because they will be unable to obtain
as much benefit from a successful attack
Encrypting databases – if after contacting a database Threat Agents discover it is
encrypted, the perceived level of effort to capture useful information from the database
has risen compared to an unencrypted database, so they will reduce their attempts to
penetrate it
Installing new surveillance cameras – if Threat Agents passing by an ATM discover new
surveillance cameras installed around it and believe that their probability of being caught
has risen, they will act against the ATM less often
Vulnerability, or its synonym susceptibility, is the probability that a Threat Event results in a
Loss Event.
[Figure 6: Vulnerability, decomposed into Threat Capability and Resistance Strength]
If the analyst has data on Threat Events and Loss Events in a given timeframe, it is possible to
estimate Vulnerability directly by comparing the number of successful Loss Events to the total
Threat Events.
If the analyst lacks this data, Vulnerability can be estimated at the lower level by considering
how the Threat Agent’s knowledge, skills, and resources (Threat Capability) compare to the
Control environment around the Asset and the Primary Stakeholder’s ability to resist the Threat
Agent’s actions (Resistance Strength).
At the lower level, Vulnerability (Vuln) is the conditional probability that the Threat Capability
(TCap) is greater than the Resistance Strength (RS).
Threat Agents vary in their capability: some are relatively easy to resist while others can
penetrate the most heavily protected Asset. The Open FAIR model reflects the variability of a
Threat Agent’s resources, skill, organization, and persistence as its Threat Capability. Threat
Capability is measured as the percentile value representing the Threat Agent’s relative position
on the distribution of potential attackers.
Attackers exist on a continuum of skills and resources, ranging from attackers with little skill,
little experience, and a low level of determination at one end to attackers who are highly skilled,
experienced, and determined at the other. In performing an Open
FAIR risk analysis, the analyst defines a minimum likely capability for the threat, a maximum
likely value, and a most likely value. These represent the minimum level of skills expected for
an attacker to have, the maximum level of skills an attacker might have, and the skill level of the
most likely attacker. This skill level is relative to the threat vector used; for example, an attacker
may be highly skilled at utilizing a particular threat vector but not at all skilled at utilizing a
different threat vector.
To resist the Threat Agent’s capability, the Asset has Controls and protections that contribute
to the Asset’s Resistance Strength. Resistance Strength is
the strength of a Control as compared to the probable level of force that a Threat Agent is
capable of applying against an Asset, and it is measured as the percentile value representing the
highest Threat Capability against which it could be successfully defended.
Note: Both Threat Capability and Resistance Strength are expressed in a range to account for
increasing levels of abstraction and uncertainty involved with using the Threat
Capability continuum.
The probability that a Threat Event will result in a Loss Event – or the percentage of Threat
Events over a given timeframe that will result in Loss Events – is variable. Not all Assets of a
given type are equally protected. For example, some Assets, such as user bank accounts, have
stronger passwords than others, so an attack that fails against one user’s bank account with a
comparatively strong password may succeed against another user’s account with a comparatively
weaker password.
After estimating the ranges for Threat Capability and Resistance Strength, the analyst will use
Monte Carlo analysis to compare a random sample from Threat Capability with a random
sample from Resistance Strength and derive Vulnerability, the probability that the Threat
Capability exceeds the Resistance Strength in any given threat action of the type being analyzed.
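A minimal sketch of that Monte Carlo derivation follows (the triangular parameters are illustrative calibrated estimates on the percentile continuum, not values from this standard):

    import numpy as np

    rng = np.random.default_rng(seed=11)
    N = 100_000

    # Threat Capability and Resistance Strength as percentiles on the
    # attacker-capability continuum (min, most likely, max are assumed).
    tcap = rng.triangular(left=20, mode=50, right=90, size=N)
    rs = rng.triangular(left=40, mode=60, right=85, size=N)

    # Vulnerability: probability that TCap exceeds RS in a given threat action.
    vuln = np.mean(tcap > rs)
    print(f"Vulnerability = {vuln:.2f}")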
[Figure: The event chain from Contact Event to Threat Event (governed by Probability of Action) to Loss Event (governed by Vulnerability, Pr(TCap > RS)), with the corresponding Threat Event Frequency and Loss Event Frequency]
By leveraging a top-down approach, the analyst will first try to estimate Loss Event Frequency
itself if it is possible to make a defensible estimate, only decomposing it into its factors if useful
or necessary for the purpose of the analysis or if the type and quality of data are better/more
objective at the lower levels.
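When the analyst does step down a level, the two sub-factors recombine directly, as in this sketch (values are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(seed=5)
    N = 100_000

    # Threat Event Frequency per year (assumed) and derived Vulnerability.
    threat_events = rng.poisson(lam=6.0, size=N)
    vuln = 0.3  # probability a Threat Event becomes a Loss Event (assumed)

    # Each Threat Event independently becomes a Loss Event with probability Vuln.
    loss_events = rng.binomial(n=threat_events, p=vuln)
    print(f"Mean Loss Event Frequency: {loss_events.mean():.2f} events/year")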
[Figure: Loss Magnitude comprises Primary Loss Magnitude and Secondary Loss Magnitude]
Any of the six Open FAIR loss forms (productivity, response, replacement, fines and judgments,
competitive advantage, and reputation) could appear as either a Primary Loss or a Secondary
Loss, and the loss forms are modeled as being statistically independent of each other. Experience
has shown that productivity and replacement costs are more commonly seen as Primary Losses,
whereas fines and judgments, competitive advantage, and reputation damage loss are more
commonly seen as Secondary Losses. Response costs are commonly seen as both Primary
and Secondary Losses.
In estimating the Primary Loss Magnitude, the analyst estimates what is expected to happen
(the most likely value) versus the best and worst cases (the minimum and maximum values). If the
analyst chooses to evaluate the worst-case proposition, they must also reflect the (generally)
much lower frequency of such an outcome.
In determining which forms of loss (e.g., productivity, replacement, response) may apply to the
Primary Loss Magnitude, the analyst should consult the organization’s subject matter
experts who typically respond to or manage adverse events. This is especially useful when
analyzing Loss Events that have not occurred in the past. These discussions around the
types of organizational involvement and loss when a given Loss Event materializes help to
identify which loss forms apply and to inform accurate estimates of their magnitude.
The Primary Loss Magnitude (PLM) is equal to the sum of those loss forms that are the direct
consequence of the Loss Event and cause losses to the Primary Stakeholder.
After establishing which Secondary Stakeholders are relevant, the analyst should estimate the
Secondary Loss Event Frequency.
The Secondary Loss Event Frequency (SLEF) is the conditional probability that a Primary Loss
will result in a Secondary Loss; it is estimated/expressed as a probability, not as events/year.
The next step is to estimate the Secondary Loss Magnitude for each loss form resulting from the
reactions of Secondary Stakeholders. These shall be estimated as losses to the Primary
Stakeholder, not as the impact to the Secondary Stakeholders who react to that impact and in
turn cause a loss to the Primary Stakeholder.
The Secondary Loss Magnitude (SLM) is equal to the sum of those loss forms resulting from the
reactions of Secondary Stakeholder(s) that cause additional loss(es) to the Primary Stakeholder.
Any of the six loss forms can appear as either a Primary Loss or a Secondary Loss, and Loss
Magnitude is equal to the sum of the Primary Loss Magnitude and the Secondary Loss
Magnitude. Primary Loss Magnitude is the direct consequence of the Primary Loss Event.
For a Secondary Loss to occur, there must have been a Primary Loss that caused Secondary
Stakeholders to react and cause additional losses to the Primary Stakeholder. Therefore, the
Secondary Loss Event Frequency is the conditional probability that a Primary Loss will result in
a Secondary Loss. Secondary Loss Magnitude is the sum of additional losses to the Primary
Stakeholder resulting from Secondary Stakeholders’ reactions.
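These relationships can be combined in a short sketch (all values are illustrative assumptions; the Secondary Loss Event Frequency enters as a conditional probability per Primary Loss Event, as defined above):

    import numpy as np

    rng = np.random.default_rng(seed=13)
    N = 100_000

    # Primary Loss Magnitude per Loss Event (assumed triangular range).
    plm = rng.triangular(left=20_000, mode=60_000, right=400_000, size=N)

    # Secondary Loss Event Frequency: conditional probability, not events/year.
    slef = 0.25  # assumed

    # Secondary Loss Magnitude, modeled with a fat tail (assumed log-normal).
    slm = rng.lognormal(mean=np.log(150_000), sigma=1.2, size=N)

    # Secondary losses occur only when Secondary Stakeholders react.
    secondary_occurs = rng.random(N) < slef
    loss_magnitude = plm + np.where(secondary_occurs, slm, 0.0)

    p5, p95 = np.percentile(loss_magnitude, [5, 95])
    print(f"Loss Magnitude per event: {p5:,.0f} to {p95:,.0f}")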
Analysts should provide data that is useful for the decision the stakeholder is trying to make. In
other words, analysts must select and present results that are fit for the purpose of the analysis.
Depending upon that purpose, analysts have to decide whether presenting one number that
summarizes the distribution of the Monte Carlo results or presenting the distribution itself best
serves the stakeholder’s needs.
There are many single number summaries available beyond those representing risk. Analysts, for
example, may choose any of the above to discuss Loss Event Frequency by itself, Single Loss
Magnitude, Single Primary Loss Magnitude, Single Secondary Loss Magnitude, etc., depending
upon the purpose of the analysis and the decisions stakeholders are making that require analysis
to support. Analysts have many choices over what summary information they can provide and
need to select the right results as appropriate to the decision that analysis supports.
4 Tornado chart: https://en.wikipedia.org/wiki/Tornado_diagram.
At a basic level, the Open FAIR model categorizes Controls by how they affect risk:
1. Avoidance Controls affect the frequency and/or probability of Threat Agents establishing
contact with Assets.
2. Deterrent Controls affect the probability that a Contact Event becomes a Threat Event.
3. Vulnerability Controls affect the probability that a Threat Event will result in a Loss
Event (the probability that Threat Capability will overcome Resistance Strength), usually
by changing the Asset’s Resistance Strength.
4. Responsive Controls affect the Loss Magnitude, either by limiting Primary Losses,
limiting the frequency of Secondary Loss Events, or limiting the magnitude of Secondary
Loss Events.
Note: The Open FAIR model does not include a specific “detective control” category
because detective controls can play a role in each of the categories listed above and
thus do not form a distinct Control category. For example, system logging and monitoring
can in some circumstances be a deterrent by increasing a potential Threat Agent’s
perception of the likelihood of being caught. At the same time, logging and monitoring
can inform an organization that an event is underway, allowing it to intervene before
loss materializes. Even if intervention is not timely enough to prevent loss from
occurring, early detection can allow an organization to respond quickly enough to
minimize the magnitude of loss.
Figure 10 identifies where these Control categories play a role within the taxonomy.
[Figure 10: Control categories within the taxonomy: Avoidance Controls and Deterrent Controls act beneath Threat Event Frequency, Vulnerability Controls act on Vulnerability, and Responsive Controls act on Loss Magnitude]
Avoidance Controls affect the frequency and/or likelihood that a Threat Agent will come into
contact with an Asset. An Avoidance Control successfully reducing Contact Frequency will
translate into a lower Threat Event Frequency, causing a reduced Loss Event Frequency, and
lessened exposure to risk. In other words, Avoidance Controls limit the Threat Agent’s ability to
contact the Asset in the first place. Therefore, Avoidance Controls are a Loss Prevention
Control.
As with any control, the effect may not be absolute. For example, firewalls usually are
configured to permit certain types of traffic, which means that Threat Events may still occur
against Assets behind the firewall. Nonetheless, firewalls also almost invariably reduce the
frequency of Threat Events by shielding against certain types of traffic.
When considering the effect of Avoidance Controls in an analysis, the analyst can measure or
estimate the reduction in Contact Frequency specific to the Threat Agent/Community under
consideration.
Deterrent Controls reduce the probability that a Threat Agent will act against the Asset in a
manner that may result in loss – the Probability of Action. In other words, Deterrent Controls are
designed to make it less probable that, given a Contact Event, the Threat Agent will launch a
Threat Event. As with Avoidance Controls, this effect flows up the taxonomy to affect Threat
Event Frequency, Loss Event Frequency, and exposure to risk. Therefore, Deterrent Controls are
also a Loss Prevention Control.
For a Deterrent Control to impact the Threat Agent’s Probability of Action, the Threat Agent
must be aware of the Control. Deterrent Controls are designed to impact the Threat Agent’s
perceived risk in carrying out the Threat Event, the perceived level of effort required to impact
the Asset, and/or the perceived value of acting against the Asset.
Vulnerability Controls reduce the probability that a Threat Agent’s action will result in a Loss
Event. In a scenario where the context is a malicious action, Vulnerability Controls generally
focus on increasing the difficulty a Threat Agent faces in their attempts to breach or otherwise
impair an Asset. In a scenario where the context is non-malicious (e.g., human error),
Vulnerability Controls often focus on reducing complexity and/or difficulty faced by personnel
to reduce the probability that their actions will result in harm. In other words, Vulnerability
Controls improve the Resistance Strength of the Asset. Vulnerability Controls are also a Loss
Prevention Control.
Note: Vulnerability Controls are sometimes referred to as “resistive controls”, but this term
tends to exclusively connote controls against malicious acts.
The effects of Vulnerability Controls are reflected in estimates of either Resistance Strength or
Vulnerability, depending on the level of abstraction of the analysis.
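For example, the effect of a proposed Vulnerability Control can be sketched by raising the Resistance Strength estimate and re-deriving Vulnerability (all distribution parameters below are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(seed=17)
    N = 100_000

    tcap = rng.triangular(left=20, mode=50, right=90, size=N)

    # Current-state vs. proposed-state Resistance Strength percentiles.
    rs_current = rng.triangular(left=30, mode=50, right=75, size=N)
    rs_improved = rng.triangular(left=45, mode=65, right=90, size=N)

    vuln_before = np.mean(tcap > rs_current)
    vuln_after = np.mean(tcap > rs_improved)
    print(f"Vulnerability: {vuln_before:.2f} -> {vuln_after:.2f}")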
Responsive Controls are designed to reduce the magnitude of loss that results from a Loss Event.
Therefore, Responsive Controls are a Loss Mitigation Control.
Measurements and estimates regarding the effect of Responsive Controls are applied in the Loss
Magnitude branch of the taxonomy and are reflected as lower monetary Loss Magnitudes.
In closing, for a control to affect risk, it must affect one or more risk factors defined in the Open
FAIR taxonomy. Analysts must evaluate all applicable current-state controls and their overall
effectiveness. Data on these is often available by reviewing the following:
The Open FAIR Avoidance, Deterrent, and Vulnerability Control categories all map to the
Protect function, the function that prevents losses from occurring in the first place.
The Open FAIR Responsive Control category spans across the Detect, Respond, and Recover
functions, those functions that take place once a loss begins to occur.
Security practitioners can map Controls to the NIST CSF five functions to show how those
Controls affect risk.
While the analyst is responsible for helping the organization understand “How much risk do we
have?”, risk managers and decision-makers must answer different questions: “How does current-state
risk compare with what we are willing to accept, and what should be done about it?”
5 NIST CSF five functions: https://www.nist.gov/cyberframework/online-learning/five-functions.
Doing something about risk consists of implementing controls that either reduce the Loss Event
Frequency (prevent the Loss Scenario from occurring as often) or reduce the Loss Magnitude of
the Loss Event once it has occurred (mitigate the severity of the loss). Analysts model the effects
of these controls by how they affect one or more of the Open FAIR risk factors.
Figure 12 combines the decomposition of the Loss Scenario with the Open FAIR Controls and
Categories as well as the NIST CSF color scheme.
[Figure 12: Decomposing an Open FAIR Loss Scenario, including the Open FAIR Control Categories and the NIST CSF Five Functions]
Recall that well-documented assumptions should include the reasoning for the assumption as
well as any sources that contributed to them. Well-documented rationale should state the source
of all estimates. The source may be systems (e.g., logs), groups (e.g., incident response), or
industry data.
Good sources of data are those that are more objective than subjective in nature. Objective data
is often more defensible and credible than opinion-based data. While any analyst would prefer
objective estimates, sometimes good data is missing. When that happens, it is not the time
to try to “hide” the gap through poorly documented rationale; instead, a credible analyst
should document the missing data and the rationale for what was assumed in its place.
“The information value curve is usually steepest in the beginning. The first 100 samples reduce
uncertainty much more than the second 100.”
To the risk analyst, this means that there is a diminishing return associated with gathering more
data to support a single model or risk factor estimate. Deeper investigation of a risk factor can
come at an increased cost to the analyst, and the analyst should be aware of this cost/benefit
tradeoff.
There is also a diminishing return to estimating lower levels of the Open FAIR taxonomy. Risk
analysts should estimate the risk factors that have the highest quality of data to support accurate
and usefully precise analyses. When possible, analysts should estimate factors at the highest
levels of the Open FAIR taxonomy. However, when an accurate estimate is not usefully precise
and information is available that informs accurate and usefully precise estimates of lower-level
risk factors, the analyst should decompose the higher-level risk factor into its component sub-
factors and estimate them.
Although an organization may have a high capacity for loss, its management team may not allow
the organization to expose itself to losses exceeding a threshold substantially lower than that
capacity. That subjective preference and management mandate is the tolerance for loss.
Although there often is a strong correlation between the (objective) capacity for loss and the
(subjective) tolerance for loss, there can be significant differences if executive management is
personally loss averse. For example, a financial institution may have substantial reserves and a
resilient market presence, yet it may act highly loss-averse because of management’s personal
business practices, customer demands, market norms, or regulatory requirements.
Ultimately, it is the combination of capacity for loss and tolerance for loss that determines how
an organization perceives and responds to risk.
The Fragile qualifier is used to represent conditions where the Loss Event Frequency is low in
spite of a high Threat Event Frequency, but only because a single preventative Control exists. In
other words, while the level of risk is low, it is qualified as fragile because the low risk level is
based on a single point of failure. For example, if a single control were all that kept malware-
related losses low, then that could be said to be a fragile condition.
The Unstable qualifier is used to represent conditions where the Loss Event Frequency is low
solely because Threat Event Frequency is low. In other words, no preventative controls exist to
mitigate the frequency of Loss Events. An example might be the risk associated with rogue
database administrators. For most organizations, this is a low Loss Event Frequency condition
but only because it is an inherently low Threat Event Frequency scenario. Should the Threat
Community conditions change, Loss Event Frequency would rise due to the absence of any
effective Vulnerability Control.
These qualifiers are intended to compensate for the fact that in some scenarios, if a decision-
maker only looked at the low Loss Event Frequency, they may be lulled into a sense of security
that is not warranted.
These translations should be guided by scales that have been approved by management.
It is inappropriate for risk analysts to define and use qualitative scales that represent their
tolerance for loss or their personal interpretation of what they believe to be the organization’s
tolerance for loss. The challenge, of course, is that management may not be readily available to
formally define risk scales. In this circumstance, the analyst may define a scale they believe is
accurate for the organization and then have the scale reviewed by management for approval.
Table 1: An Example Scale Translating Quantitative Values to Qualitative Labels
Using the example scale shown in Table 1, if an analysis resulted in an average annualized loss
exposure of $4.5M, that could be interpreted as high risk.
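Because the body of Table 1 is not reproduced here, the sketch below uses hypothetical thresholds (consistent with the $4.5M example, but purely an assumption of this illustration; each organization must have management approve its own scale):

    def qualitative_label(annualized_loss_exposure: float) -> str:
        """Translate a quantitative result using a management-approved scale.

        These thresholds are hypothetical examples only; each organization
        must define its own scale and have management approve it.
        """
        if annualized_loss_exposure >= 10_000_000:
            return "Severe"
        if annualized_loss_exposure >= 1_000_000:
            return "High"
        if annualized_loss_exposure >= 100_000:
            return "Moderate"
        return "Low"

    print(qualitative_label(4_500_000))  # "High", matching the example above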
6.7 Troubleshooting
When analysts or stakeholders disagree on the results of an analysis or on one of its components,
there are three recommended techniques for managing the disagreement.
The first technique is to revisit the scoping or rationale within an analysis and determine whether
an assumption has been made that differs from those made by other analysts or stakeholders. If
such a difference is found, it is often easily resolved. If rationale and assumptions are not
documented, the risk analyst cannot troubleshoot or defend the analysis when disagreements arise.
The second technique is to leverage the Open FAIR taxonomy. The taxonomy breaks down the
factors that drive risk. For example, if a disagreement exists regarding estimates made at the
Loss Event Frequency level, stepping down to a lower level of abstraction may allow both sides
to find agreement, and the higher-level estimate can then be derived from the agreed lower-level factors.
The third recommended technique is to perform two or more analyses to encompass the
disagreement. As an example, if one analyst believes the Threat Event Frequency is at least once
a year while a second analyst believes the Threat Event Frequency is less frequent, they can
perform analyses using both figures and observe whether there is a significant deviation in the
overall results.
Often, the majority of disagreements will be resolved after approaching the problem using the
first two techniques.
[Figure: Definition enables Measurement, Measurement enables Comparison, Comparison enables Informed Decisions, and Informed Decisions enable More Effective Management]
To date, the information security profession has been hamstrung by several challenges, not the
least of which is inconsistent nomenclature. For example, in some references, software flaws/faults
that could be exploited will be called a “threat”, while in other references these same software
faults will be referred to as a “risk”, and yet other references will refer to them as
“vulnerabilities”. Besides the confusion that can result, this inconsistency makes it difficult if not
impossible to normalize data and develop good metrics.
A related challenge stems from mathematical equations for risk that are either incomplete or
illogical. For example, one commonly cited equation for risk states that:

Risk = Threat × Vulnerability × Impact
These issues are a major reason why the information security profession has consistently
struggled to find and maintain "a seat at the table" with the other organizational functions
(e.g., finance, marketing). Furthermore, while few people are likely to
become excited with the prospect of yet another set of definitions amongst the many that already
exist, the capabilities that result from a well-designed foundational taxonomy are significant.
Likewise, in order for our profession to evolve significantly, it is imperative that we operate with
a common, logical, and effective understanding of our fundamental problem space. The O-RT
Standard seeks to fill the current void and set the stage for the security profession’s maturation
and growth.
Note: Any attempt to describe the natural world is destined to be incomplete and imprecise to
some degree due to the simple fact that human understanding of the world is, and
always will be, limited. Furthermore, the act of breaking down and categorizing a
complex problem requires that black and white lines be drawn where, in reality, the
world tends to be shades of gray. Nonetheless, this is exactly what critical analysis
methods and science have done for millennia, resulting in a vastly improved
ability to understand the world around us, evolve, and accomplish objectives
previously believed to be unattainable.
This document is a current effort at providing the foundational understanding that is necessary
for similar evolution and accomplishment in managing information risk. Without this
foundation, our profession will continue to rely too heavily on practitioner intuition which,
although critically important, is often strongly affected by bias, myth, and commercial or
personal agenda.
2 Definitions
2.1 Action
2.2 Asset
2.3 Contact Event
2.4 Contact Frequency (CF)
2.5 Control
2.6 FAIR
2.7 Loss Event
2.8 Loss Event Frequency (LEF)
2.9 Loss Flow
2.10 Loss Magnitude (LM)
2.11 Loss Scenario
2.12 Primary Stakeholder
2.13 Probability of Action (PoA)
2.14 Resistance Strength (RS)
2.15 Risk
2.16 Risk Analysis
2.17 Risk Assessment
2.18 Risk Factors
2.19 Risk Management
2.20 Secondary Stakeholder
2.21 Threat
2.22 Threat Agent
2.23 Threat Capability (TCap)
2.24 Threat Community
2.25 Threat Event
2.26 Threat Event Frequency (TEF)
2.27 Vulnerability (Vuln)
This Document
This document is The Open Group Standard for Risk Taxonomy (O-RT), Version 3.0.1. It has
been developed and approved by The Open Group.
This document provides a standard definition and taxonomy for information security risk, as
well as information regarding how to use the taxonomy. It is a companion document to the Risk
Analysis (O-RA) Standard, Version 2.0.1.
The intended audience for this document includes anyone who needs to understand and/or
analyze a risk condition. This includes, but is not limited to:
Information security and risk management professionals
Auditors and regulators
Technology professionals
Note that this document is not limited to application in the information security space. It can, in
fact, be applied to any risk scenario. This agnostic characteristic enables the O-RT Standard, and
the companion O-RA Standard, to be used as a foundation for normalizing the results of risk
analyses across varied risk domains.
This document is one of several publications from The Open Group dealing with risk
management. Other publications include:
Risk Analysis (O-RA) Standard, Version 2.0.1
The Open Group Standard (C20A, November 2021)
This document provides a set of standards for various aspects of information security risk
analysis. It was first published in October 2013, and has been revised as a result of
feedback from practitioners using the standard and continued development of the Open
FAIR™ taxonomy.
Requirements for Risk Assessment Methodologies
The Open Group Guide (G081, January 2009)
This document identifies and describes the key characteristics that make up any effective
risk assessment methodology, thus providing a common set of criteria for evaluating any
given risk assessment methodology against a clearly defined common set of essential
requirements. In this way, it explains what features to look for when evaluating the
capabilities of any given methodology, and the value those features represent.
Open FAIR™ – ISO/IEC 27005 Cookbook
The Open Group Guide (C103, November 2010)
This document describes in detail how to apply the Open FAIR methodology to ISO/IEC
27002:2005. The Cookbook part of this document enables risk technology practitioners to
follow by example how to apply FAIR to other frameworks of their choice.
The Open FAIR™ – NIST Cybersecurity Framework Cookbook
The Open Group Guide (G167, October 2016)
This document describes in detail how to apply the Open FAIR factor analysis for
information risk methodology to the NIST Framework for Improving Critical
Infrastructure Cybersecurity (NIST Cybersecurity Framework).
The Open FAIR™ Risk Analysis Process Guide
The Open Group Guide (G180, January 2018)
This document offers some best practices for performing an Open FAIR risk analysis: it
aims to help risk analysts understand how to apply the Open FAIR risk analysis
methodology.
How to Put Open FAIR™ Risk Analysis Into Action: A Cost-Benefit Analysis of
Connecting Home Dialysis Machines Online to Hospitals in Norway
The Open Group White Paper (W176, May 2017)
This document offers an Open FAIR analysis of security and privacy risks and compares
those risks to the likely benefits of connecting home dialysis machines online to hospitals.
This document includes changes to the O-RT Standard that have evolved since the original
document was published. These changes came about as a result of feedback from practitioners
using the standard:
“Accomplish Assigned Mission” as a possible action taken by a Threat Agent was
removed – it could apply to any of the other actions and be a component of any of them,
but it is not its own action
The external loss factor “Detection” was changed to “External Party Detection” and the
organizational loss factor “Due Diligence” was changed to “Reasonable Care”
The quantitative example that utilized a qualitative scale has been removed
Open FAIR terms and definitions were clarified
Microsoft and Excel are registered trademarks of Microsoft Corporation in the United States
and/or other countries.
All other brands, company, and product names are used for identification purposes only and may
be trademarks that are the sole property of their respective owners.
The Open Group gratefully acknowledges the contribution of members of The Open Group
Security Forum, and the following people in the development of earlier versions of this
document:
Alex Hutton
Jack Jones
(Please note that the links below are valid at the time of writing but cannot be guaranteed for the
future.)
ISO Guide 73:2009, Risk Management – Vocabulary, November 2009; refer to:
https://www.iso.org/standard/44651.html
ISO 31000:2018, Risk Management – Guidelines, February 2018; refer to:
https://www.iso.org/standard/65694.html
Risk Analysis (O-RA) Standard, Version 2.0.1, The Open Group Standard (C20A),
November 2021, published by The Open Group; refer to:
www.opengroup.org/library/c20a
1.1 Objective
The objective of the Risk Taxonomy (O-RT) Standard is to provide a single logical and rational
taxonomical framework for anyone who needs to understand and/or analyze information security
risk.
1.2 Overview
This document provides a taxonomy describing the factors that drive risk – their definitions and
relationships. Each factor that drives risk is identified and defined. Furthermore, the
relationships between factors are described so that mathematical functions can be defined and
used to perform quantitative calculations.
This document is limited to describing the factors that drive risk and their relationships to one
another. Measurement scales and specific assessment methodologies are not included because
there are a variety of possible approaches to those aspects of risk analysis, with some approaches
being better suited than others to specific risk problems and analysis objectives.
This document does not address how to assess or analyze risk; refer to the separate standard for
performing risk analysis, the Risk Analysis (O-RA) Standard (see Referenced Documents). This
document also does not cover those elements of risk management that pertain to strategic and
tactical risk decisions and execution.
Risk analysts can choose to make their measurements and/or estimates at any level of abstraction
within the taxonomy. For example, rather than measure Contact Frequency, the analyst could
move up a layer of abstraction and instead measure Threat Event Frequency. This choice may be
driven by the nature or volume of data that is available, or the time available to perform the
analysis (i.e., analyses at deeper layers of abstraction take longer).
Although the terms “risk” and “risk management” mean different things to different people, this
document is intended to be applied toward the problem of managing the frequency and
magnitude of loss that arises from a threat (whether human, animal, or natural event). In other
words, managing “how often bad things happen, and how bad they are when they occur”.
In the overall context of risk management, it is important to appreciate that the business
objective in performing risk assessments is to identify and estimate levels of exposure to the
likelihood of loss, so that business managers can make informed business decisions on how to
manage those risks of loss: by accepting each risk, by mitigating it through investment in
appropriate internal protective measures judged sufficient to lower the potential loss to an
acceptable level, or by investing in external indemnity. Critical to enabling good business
decision-making therefore is to use risk assessment methods which give objective, meaningful,
consistent results.
You can't effectively and consistently manage what you can't measure,
and you can't measure what you haven't defined.
The problem here is that a variety of definitions exist, but the risk management community has
not yet adopted a consistent definition for even the most fundamental terms in its vocabulary;
e.g., threat, vulnerability, even risk itself. Without a sound common understanding of what risk
is, what the factors are that drive risk, and a standard use of the terms used to describe it, risk
analysts cannot be effective in delivering meaningful, comparable risk assessment results. This
document provides the necessary foundation vocabulary, based on a fundamental analysis of
what risk is, and then shows how to apply it to produce the objective, meaningful, and consistent
results that business managers need.
1.3 Conformance
Refer to The Open Group website for conformance requirements for this document.
1.5 Terminology
For the purposes of this document, the following terminology definitions apply:
May Describes a feature or behavior that is optional. To avoid ambiguity, the opposite of
“may” is expressed as “need not”, instead of “may not”.
For the purposes of this standard, the following terms and definitions apply. Merriam-Webster's
Collegiate Dictionary (https://www.merriam-webster.com/) should be referenced for terms not
defined in this section.
2.1 Action
An act taken against an Asset by a Threat Agent. Requires first that contact occurs between the
Asset and Threat Agent.
2.2 Asset
The information, information system, or information system component that is breached or
impaired by the Threat Agent in a manner whereby its value is diminished or the act introduces
liability to the Primary Stakeholder.
2.5 Control
Any person, policy, process, or technology that has the potential to reduce the Loss Event
Frequency (LEF) – Loss Prevention Controls – and/or Loss Magnitude (LM) – Loss Mitigation
Controls.
2.6 FAIR
Factor Analysis of Information Risk.
2.15 Risk
The probable frequency and probable magnitude of future loss.
2.21 Threat
Anything that is capable of acting in a manner resulting in harm to an Asset and/or organization;
for example, acts of God (weather, geological events, etc.), malicious actors, errors, failures.
Per ISO Guide 73:2009 (see Referenced Documents), risk management refers to the
“coordinated activities to direct and control an organization with regard to risk”. A risk
assessment is the “overall process of risk identification, risk analysis, and risk evaluation”, and
risk analysis refers to the “process to comprehend the nature of risk and determine the level of
risk”.
(Figure: Risk management per ISO Guide 73:2009 – risk assessment comprises risk identification, risk analysis, and risk evaluation; risk management additionally encompasses risk treatment and risk monitoring.)
This document expands on this to add that all risk assessment approaches should include:
An effort to clearly identify and characterize the assets, threats, controls, and impact/loss
elements at play within the risk scenario being assessed
An understanding of the organizational context for the analysis; i.e., what is at stake from
an organizational perspective, particularly with regard to the organization’s leadership
perspective
Measurement and/or estimation of the various risk factors
Calculation of risk
Communication of the risk results to decision-makers in a form that is meaningful and
useful
An Open FAIR risk analysis fits the Risk Analysis component described by ISO Guide 73:2009
and allows decision-makers to prioritize their consistently defined and identified risks; crucially,
it also adds the communication component, which distinguishes between the risk assessment
approach defined above and the risk analysis approach defined in the O-RA Standard (see
Referenced Documents).
This concept can be illustrated in what is referred to as a “risk management stack”, showing the
relationship between these elements; see Figure 2.
Effective Management
↑
Well-Informed Decisions
↑
Effective Comparisons
↑
Meaningful Measurements
↑
Accurate Risk Model

Figure 2: The Risk Management Stack
As with similar relational constructs, it becomes immediately apparent that failures at lower
levels of the stack cripple the ability to achieve effectiveness at higher levels.
(Figure 3: The high-level Open FAIR taxonomy – Risk decomposes into Loss Event Frequency and Loss Magnitude.)
Figure 3 is not comprehensive, as deeper layers of abstraction exist that are not shown. Some of
these deeper layers are discussed further on in this document, and theoretically, the layers of
abstraction may continue indefinitely, much like the layers of abstraction that exist in the
understanding of physical matter (e.g., molecules, atoms, particles). The deeper layers of
abstraction can be useful for understanding but are not always necessary to perform effective
analyses.
Moreover, the factors within the Loss Event Frequency side of the taxonomy have relatively
clean and clear cause-and-effect relationships with one another, which simplifies calculation.
Factors within the Loss Magnitude side of the taxonomy, however, have much more complicated
relationships that defy simple calculation. As a result, Loss Magnitude measurements and
estimates generally are aggregated by loss type (e.g., $xxx of productivity loss, plus $yyy of
legal fines and judgments).
The Open FAIR risk factors are assumed to be independent and identically distributed.
Note: This document defines risk as resulting in a loss; it does not consider speculative risk
that may generate either a loss or a gain (e.g., as presented in ISO 31000, see
Referenced Documents).
A risk measurement is an estimate of the likelihood and impact of adverse events (losses).
Measuring risk is never a prediction of whether some adverse event will occur – such as an
earthquake will destroy a data center causing a loss of $1M within the next year – but instead is
an estimate of the likelihood or probability of that adverse event happening within the year and
the range of economic damage from that event.
Risk measurements are accurate if the actual result (how much earthquake damage occurred
within a year) measured at the end of the year lies within the range of the estimate. A risk
measurement of earthquake risk would be a reasoned, defensible analysis whose results are
something like, “There is between a 10% to 20% probability of an earthquake damaging the data
center with damage of between $1,000 and $5,000,000”. That estimate will either be found
accurate or inaccurate after the next year, five years, ten years, or twenty years when losses from
the earthquake are observed.
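For instance, the earthquake statement above can be expressed as a simple Monte Carlo estimate. The following Python sketch is illustrative only: the 10% to 20% probability and the $1,000 to $5,000,000 range come from the example, while the distribution shapes are assumptions.

import numpy as np

rng = np.random.default_rng(seed=7)
TRIALS = 100_000  # simulated years

p_event = rng.uniform(0.10, 0.20, TRIALS)        # annual probability of a damaging quake
event_occurs = rng.random(TRIALS) < p_event
# Assumed lognormal damage, spanning roughly $1,000 to $5,000,000 when a quake occurs.
damage = rng.lognormal(np.log(70_000), 1.6, TRIALS)
annual_loss = np.where(event_occurs, damage, 0.0)

print(f"Mean annualized loss:  ${annual_loss.mean():,.0f}")
print(f"95th percentile loss:  ${np.percentile(annual_loss, 95):,.0f}")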
Analyzing risk requires differentiating between possibility and probability. Possibility can be
thought of as binary: something is possible, or it is not. Probability, however, is a continuum that
addresses the area between certainty and impossibility. Because risk is invariably a matter of
future events, there is always some amount of uncertainty, which means executives cannot
choose or prioritize effectively based upon statements of possibility. Effective risk decision-
making can only occur when information about probabilities is provided.
Moreover, risk analyses should not be considered predictions of the future. The word
“prediction” implies a level of certainty that rarely exists in the real world, and does not help
people understand the probabilistic nature of analysis. For decision-makers, even though it is
impossible to know what any given roll of two dice will produce, knowing that the probability of
a particular outcome is 1-in-36 is valuable information.
With this as a starting point, the first two risk factors are loss frequency and magnitude of loss.
In this document, these are referred to as Loss Event Frequency and Loss Magnitude,
respectively.
(Figure 4: Risk – decomposed into Loss Event Frequency and Loss Magnitude.)
For a Loss Event to occur, a Threat Agent has to act upon an Asset, such that loss results, which
leads to the next two factors: Threat Event Frequency and Vulnerability.
(Figure: Loss Event Frequency – decomposed into Threat Event Frequency and Vulnerability.)
Note: The Loss Event Frequency of an event that can only occur once is specified as a
probability within a specific timeframe (event X is 10% likely to occur over the next Y
months) because almost any event is possible, and if it is possible, given enough time,
the event will occur.
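One common way to move between a rate and a probability, assuming events arrive independently at a constant rate (a Poisson assumption, which this document does not mandate), is:

P(at least one event in time T) = 1 - e^{-\lambda T}

For example, a Loss Event Frequency of λ = 0.1 events per year implies roughly a 9.5% probability of at least one Loss Event in the next year (1 − e^(−0.1) ≈ 0.095).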
Threat Event Frequency is the probable frequency, within a given timeframe, that a Threat
Agent will act against an Asset. The only difference between this definition and the definition
for Loss Event Frequency above is that the definition for Threat Event Frequency does not
include whether Threat Agent actions are successful. In other words, Threat Agents may act
against Assets but be unsuccessful in
breaching or otherwise impairing the Asset. A common example of a malicious Threat Event
(where harm or abuse is intended) would be the hacker who unsuccessfully attacks a web server.
Such an attack would be considered a Threat Event, but not a Loss Event. An example of a non-
malicious Threat Event would be a data center technician tripping over a system’s power cord.
The act of tripping would be the Threat Event, but a Loss Event would only occur if the
technician unplugged the cord (or, depending on the scenario under analysis, if the technician
were injured).
This definition also provides the two factors that drive Threat Event Frequency: Contact
Frequency and Probability of Action.
(Figure: Threat Event Frequency – decomposed into Contact Frequency and Probability of Action.)
Contact Frequency is the probable frequency, within a given timeframe, that a Threat Agent will
come into contact with an Asset. A Threat Agent coming into contact with the Asset is referred
to as a Contact Event.
Contact can be physical or “logical” (e.g., over the network). Regardless of contact mode, three
types of contact can occur:
Random – the Threat Agent “stumbles upon” the Asset during the course of unfocused or
undirected activity
Regular – contact occurs because of the regular actions of the Threat Agent; for example,
if a cleaning crew regularly comes by at 5:15pm, leaving cash on top of the desk during
that timeframe sets the stage for contact
Intentional – the Threat Agent seeks out specific targets
Each of these types of contact is driven by various factors. A useful analogy is to consider a
container of fluid containing two types of suspended particles – threat particles and asset
particles. The probability of contact between members of these two sets of particles is driven by
various factors, including:
Size (surface area) of the particles
The number of particles
Volume of the container
How active the particles are
Viscosity of the fluid
Whether particles are attracted to one another in some fashion, etc.
Probability of Action is the probability that a Threat Agent will act against an Asset once contact
occurs.
The probability that an intentional act will take place – in other words, whether a Threat Agent
will deliberately contact an Asset – is driven by three primary factors:
Value – the Threat Agent’s perceived value proposition from performing the act
Level of effort – the Threat Agent’s expectation of how much effort it will take to
accomplish the act
Risk of detection/capture – the probability of negative consequences to the Threat
Agent; for example, the probability of getting caught and suffering unacceptable
consequences for acting maliciously
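A minimal illustrative sketch of how these two factors can be combined, assuming contact and the decision to act are independent and that point estimates are acceptable (in practice, Open FAIR analyses typically use distributions rather than point values):

# Threat Event Frequency (TEF) from Contact Frequency (CF) and Probability
# of Action (PoA). All parameter values are illustrative assumptions.
contact_frequency = 12.0       # expected Contact Events per year
probability_of_action = 0.25   # probability that a Contact Event becomes a Threat Event

threat_event_frequency = contact_frequency * probability_of_action
print(f"TEF ~ {threat_event_frequency:.1f} Threat Events per year")  # -> 3.0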
In stricter terms, Vulnerability is the conditional probability of a Loss Event given a Threat
Event. This definition is equivalent to saying that Vulnerability is the probability that the force
the Threat Agent applies (the Threat Capability) against the Asset in a specific Loss Scenario
exceeds the Resistance Strength of the Controls protecting that Asset.
This means that there are two ways to estimate Vulnerability, and each way arrives at the same
result:
If Threat Event Frequency and Loss Event Frequency data is available or estimated
directly, Vulnerability can be observed (or estimated) as the fraction of Threat Events that
become Loss Events
Vulnerability can also be derived from knowing or estimating the Threat Capability and
the Asset’s Resistance Strength to that Threat Capability and then estimating or simulating
the probability that the Threat Capability exceeds Resistance Strength
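The following Python sketch illustrates both estimation routes under assumed inputs; the percentile scales and observed counts are hypothetical, not prescribed by this document.

import numpy as np

rng = np.random.default_rng(seed=42)
TRIALS = 100_000

# Route 2: derive Vulnerability as P(TCap > RS) from assumed percentile distributions.
tcap = rng.normal(50, 15, TRIALS)   # Threat Capability percentile of the Threat Community
rs = rng.normal(65, 10, TRIALS)     # Resistance Strength percentile of the Controls
vuln_derived = float(np.mean(tcap > rs))

# Route 1: observe (or estimate) the fraction of Threat Events that became Loss Events.
threat_events, loss_events = 120, 25   # hypothetical observed counts
vuln_observed = loss_events / threat_events

print(f"Derived Vulnerability, P(TCap > RS): {vuln_derived:.2f}")
print(f"Observed Vulnerability, LEF/TEF:     {vuln_observed:.2f}")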
(Figure 7: Vulnerability – shown within the taxonomy under Loss Event Frequency, alongside Threat Event Frequency.)
Vulnerability is always relative to the type of force and vector involved: the Vulnerability of an
information Asset depends upon the Loss Scenario being analyzed. As an analogy to an
information Asset’s Vulnerability to some specific Loss Scenario, the tensile strength of a rope
is pertinent only if the Threat Agent’s force is a weight applied along the length of the rope.
Tensile strength does not generally apply to a scenario where the Threat Agent is fire, chemical
erosion, etc. Likewise, a computer anti-virus product does not reduce the Vulnerability of a
payment system from an internal employee seeking to perpetrate fraud. Open FAIR risk
analysts, therefore, evaluate Vulnerability in the context of specific threat types facing the
information Asset and Control types protecting the Asset.
Threat Capability is the probable level of force (as embodied by the time, resources, and
technological capability) that a Threat Agent is capable of applying against an Asset.
Attackers exist on a continuum of skills and resources, ranging from attackers with little skill,
little experience, and a low level of determination at one end, to highly skilled, experienced, and
determined attackers at the other. The Threat Capability continuum
describes attackers as existing at various percentiles, where the 25th percentile of Threat Agents
are less skilled and able than the 50th percentile of Threat Agents who are less skilled and able
than the 99th percentile of Threat Agents.
Threat Agents within a single Threat Community will not have the same capabilities. Therefore,
the probability of the most capable Threat Agent acting against an Asset is something less than
100%.
Information security professionals and risk analysts often struggle with the notion of treating
Threat Agent capability as a percentile within a Threat Community, which implies there is only
some probability, perhaps a remote one, that the most capable Threat Agent is the one attacking
the Asset. Many analysts and decision-makers instead tend to gravitate toward focusing on the
worst case, but focusing solely on the worst case is to think in terms of possibility rather than
probability.
Some Threat Agents may be proficient in applying one type of force but incompetent at others.
For example, a network engineer may be proficient at applying technological forms of attack but
relatively incapable of executing complex accounting fraud.
Resistance Strength is the strength of a Control as compared to the probable level of force (as
embodied by the time, resources, and technological capability; measured as a percentile) that a
Threat Agent is capable of applying against an Asset.
Attackers exist on a continuum of skills and resources, and Resistance Strength measures the
strength of a Control as compared to that percentile of attackers, specifically measuring the
percentile of attackers that the Asset’s Control(s) can be expected to resist.
As an analogy to the strength of an information security control, a rope’s tensile strength rating
provides an indication of how much force it is capable of resisting. The baseline measure
(Resistance Strength) for this rating is Pounds per Square Inch (PSI), which is determined by the
rope’s design and construction. This Resistance Strength rating does not change when the rope is
put to use. Regardless of whether there is a 10-pound weight on the end of the 500-PSI rope or a
2,000-pound weight, the Resistance Strength does not change.
Information security controls, however, do not have a baseline scale for force that is as well-
defined as PSI. Password strength is a simple example of how to approach this. It is possible to
estimate that a password eight characters long, comprised of a mixture of upper and lowercase
letters, numbers, and special characters, will resist the cracking attempts of some percentage of
the general Threat Agent population. Therefore, password Resistance Strength can be
represented as this percentile. (Recall that Resistance Strength is relative to a particular type of
force – in this case, cracking.)
Vulnerability is determined by comparing the Resistance Strength against the capability of the
specific Threat Community under analysis and assessing the probability that the Threat
Capability will exceed the Resistance Strength. For example, password Resistance Strength may
be estimated at the 80th percentile, yet the Threat Community within a scenario might be
estimated to have better than average capabilities, such as in the 90th percentile range.
Table 1: Loss Event Frequency Factors

Loss Event Frequency – Probable number of economic losses within a given time period. Unit of measure: events per unit time (e.g., events per year), or the probability of a single Loss Event in a given timeframe (e.g., a 20% chance within the next year).

Threat Event Frequency – Probable number of Threat Agent attempts at creating a loss within a given time period. Unit of measure: events per unit time (e.g., events per year), or the probability of a single Threat Event in a given timeframe (e.g., a 20% chance within the next year).

Contact Frequency – Probable number of times a Threat Agent contacts the stakeholder's Asset within a given time period. Unit of measure: events per unit time (e.g., events per year), or the probability of a single Contact Event in a given timeframe (e.g., a 20% chance within the next year).

Probability of Action – Probability that a Contact Event becomes a Threat Event. Unit of measure: probability (between 0 and 1, or expressed as a percentage between 0% and 100%).
Loss Magnitude is the probable magnitude of economic loss resulting from a Loss Event
(measured in units of currency). Loss Magnitude is expressed as a distribution of losses, not a
single value for loss, and is always evaluated from the perspective of the Primary Stakeholder,
the party that bears the economic loss from the Loss Event.
Historically, data regarding Loss Magnitude have been scarce. Many organizations still do not
measure losses when events occur, and those that do often limit their analyses to the "easy stuff".

Because Loss Magnitude can be difficult to estimate, analysts frequently exclude analyzing it,
assess only possible, speculative worst-case outcomes, or model losses with tools that are
deceptively precise. Excluding Loss Magnitude from an analysis means the analyst is not
analyzing risk: risk always has a loss component. Citing worst-case possibilities alone removes
the probability element from the analysis – by definition, risk is a combination of the probability
of a loss along with its magnitude. Computational modeling tools that present risk results with a
precision that cannot be supported by the precision of the modeled risk factors give decision-
makers an inflated sense of the certainty inherent in the analysis.
In general, the majority of losses associated with information systems are small, but there is still
the remote chance of a large loss, usually described by a Loss Magnitude distribution having a
“fat tail”, as shown in Figure 8.
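The following illustrative Python sketch shows this fat-tailed behavior with an assumed lognormal Loss Magnitude distribution (the choice of distribution and its parameters are assumptions, not requirements):

import numpy as np

rng = np.random.default_rng(seed=3)
losses = rng.lognormal(np.log(20_000), 2.0, 100_000)  # assumed per-event losses

print(f"Median loss:        ${np.median(losses):,.0f}")            # most losses are small...
print(f"Mean loss:          ${losses.mean():,.0f}")                # ...but the mean is pulled up
print(f"99.9th percentile:  ${np.percentile(losses, 99.9):,.0f}")  # ...by rare large losses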
For example, if a Payment Processor (Primary Stakeholder) suffers a breach, it must respond and
recover from that breach. Costs incurred while recovering from the breach are Primary Losses.
However, consumers who have credit fraud committed against them due to the breach also suffer
indirect consequential harm. When those consumers (Secondary Stakeholders) demand relief
from the Payment Processor, those consumers become new Threat Agents trying to cause harm
against the Payment Processor, usually by “attacking” its financial Assets through a lawsuit. In
this case, the Payment Processor may expend resources to provide credit monitoring to those
affected consumers to mitigate additional Secondary Losses. This is an example of a Primary
Loss (response and recovery from the initial breach) followed in time by a Secondary Loss
(credit monitoring and exposure to fines or judgments from a lawsuit).
The first phase of the Loss Event, referred to as Primary Loss, occurs directly as a result of the
Threat Agent’s action upon the Asset. The owner of the affected Assets would be considered the
Primary Stakeholder in an analysis (e.g., The Open Group is the Primary Stakeholder in a
scenario where its website goes offline as a result of an infrastructure failure). Of the six forms
of loss described in the previous section, productivity, response, and replacement are generally
the forms of loss experienced as Primary Loss. The other three forms of loss only occur as
Primary Loss when the Threat Agent is directly responsible for those losses (e.g., fines and
judgments loss when the Threat Agent is filing charges/claims).
The second phase of the Loss Event, referred to as Secondary Loss, occurs as a result of
Secondary Stakeholders (e.g., customers, stockholders, regulators) reacting negatively to the
Primary Loss. This can be thought of as “fallout” from the Primary Loss. An example would be
customers taking their business elsewhere after their personal information had been
compromised or due to frustration experienced as a result of frequent service outages.
Secondary Loss has two primary components: Secondary Loss Event Frequency (SLEF) and
Secondary Loss Magnitude (SLM).
Secondary Loss Event Frequency allows the analyst to estimate the chance (percentage of time)
a scenario is expected to have secondary effects. Even though this variable is called a
“frequency”, it is estimated as a percentage because it represents the conditional probability that
a Primary Loss results in a Secondary Loss.
Secondary Loss Magnitude represents the losses that are expected to materialize from dealing
with Secondary Stakeholder reactions (e.g., fines and judgments, loss of market share).
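A minimal illustrative sketch of how Secondary Loss folds into a per-event loss estimate, using assumed point values (real analyses would use distributions):

# All values are illustrative assumptions.
primary_loss = 250_000      # Primary Loss Magnitude per Loss Event ($)
slef = 0.30                 # Secondary Loss Event Frequency: P(Secondary Loss | Primary Loss)
secondary_loss = 1_000_000  # Secondary Loss Magnitude when secondary effects occur ($)

expected_loss_per_event = primary_loss + slef * secondary_loss
print(f"Expected loss per Loss Event: ${expected_loss_per_event:,.0f}")  # -> $550,000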
Of the six forms of loss, response, fines and judgments, competitive advantage, and reputation
are most commonly associated with Secondary Loss. It is unusual to experience productivity or
replacement loss within Secondary Loss. In the case of the loss of competitive advantage
resulting from theft of trade secret information, the “secret” is lost immediately; the impact of
the loss is realized over a long time period, and it may or may not occur.
The effect of Secondary Loss on an organization can cascade. As losses pile up from initial
Secondary Losses, additional Secondary Stakeholders may react negatively, compounding the
overall loss.

Loss factors may contribute to Primary Loss, Secondary Loss, or both, so the risk analyst
must evaluate the factors within all four of these categories. However, asset and threat loss
factors are referred to as Primary Loss Factors, while organizational and external loss factors are
referred to as Secondary Loss Factors.
The value/liability characteristics of an Asset play a key role in both the nature and magnitude of
loss. Value/liability can be further defined as:
Criticality – characteristics of an Asset that have to do with the impact on an
organization’s productivity; for example, the impact a corrupted database would have on
the organization’s ability to generate revenue
Cost – the intrinsic value of the Asset, such as the cost associated with replacing it if it has
been made unavailable (e.g., stolen, destroyed); for example, the cost of replacing a stolen
laptop or rebuilding a bombed-out building
Sensitivity – the harm that can occur from unintended disclosure
Sensitivity is further broken down into four sub-categories:
— Embarrassment/Reputation – the information provides evidence of incompetent,
criminal, or unethical management; this refers to reputation damage resulting from the
nature of the information itself, as opposed to reputation damage resulting from the fact
that a breach occurred
Asset volume simply recognizes that more Assets at risk means greater Loss Magnitude if an
event occurs (e.g., two children on a rope swing versus one child, or one sensitive customer
record versus a thousand).
Threat loss factors include the action taken, the Threat Agent's competence, and whether the
Threat Agent is internal or external to the organization; how the Threat Agent then uses the
compromised information Asset can affect the loss suffered by the Primary Stakeholder.
Threat Agents can take one or more of the following actions against an Asset:
Access – unauthorized access of Asset(s)
Misuse – unauthorized use of Asset(s)
Disclose – illicit disclosure of sensitive information
Modify – unauthorized changes to Asset(s)
Deny Access – prevention or denial of authorized access
Note: Any of these actions may be the assigned mission of a Threat Agent.
Threat Agents take these actions after they have succeeded in committing a confidentiality,
integrity, or availability breach against the stakeholder’s information Asset, as shown in Table 2.
Table 2: Threat Agent Actions Following a Successful Breach
Confidentiality breach
  Access – the Threat Agent gains unauthorized access but takes no further action beyond
  "having" the data.
  Misuse – the Threat Agent makes unauthorized use of the Asset in ways that inflict
  consequential losses on the Primary or Secondary Stakeholders, such as committing
  identity theft or setting up a pornographic distribution service on a compromised server.
  Disclose – the Threat Agent illicitly distributes the sensitive information to other
  unauthorized parties.

Integrity breach
  Modify – the Threat Agent creates or modifies information in a way that makes that
  information, or information processing, inaccurate or otherwise unreliable or
  untrustworthy. The stakeholder bears consequential losses from using inaccurate
  (unauthorized) information in its business processes.

Availability breach
  Deny Access – the Threat Agent prevents or denies authorized access to the Asset. This
  includes deleting information, taking systems offline, and ransomware-style events.
By gaining unauthorized access to information Assets, Threat Agents may establish a foothold in
that Asset for later malicious use. If undetected, this foothold is not yet a loss to the Primary
Stakeholder. When detected, the loss will be a confidentiality, integrity, or availability loss with
the consequences that come from that loss. Threat Agents may gain a foothold as part of a long-
term strategy to accomplish their assigned mission, such as to defeat or compromise an enemy
military’s future operation.
Each of these actions affects Assets differently, which drives the degree and nature of loss. The
combination of the Asset, kind of violation, and kind of exploitation of this violation determines
the fundamental nature and degree of loss. For example, the potential for productivity loss
resulting from a destroyed or stolen Asset depends upon how critical that Asset is to the
organization’s productivity. If a critical Asset is simply illicitly accessed, there is no direct
productivity loss. Similarly, the destruction of a highly sensitive Asset that does not play a
critical role in productivity will not directly result in a significant productivity loss. However,
that same Asset, if disclosed, can result in significant loss of competitive advantage or
reputation, and generate legal costs.
Which action(s) a Threat Agent takes will be driven primarily by that attacker’s motive (e.g.,
financial gain, revenge, recreation) and the nature of the Asset. For example, a Threat Agent
bent on financial gain is less likely to destroy a critical server than to steal an easily pawned
Asset like a laptop. For this reason, the risk analyst must have a clear definition of the Threat
Community and its intended goal to evaluate Loss Magnitude effectively.
Threat competence is a measure of how able the Threat Agent is to exploit the compromised
Asset to accomplish some Threat Agent goal; it is the amount of damage a Threat Agent is
capable of inflicting on the Primary or Secondary Stakeholders once the information Asset
compromise occurs. For instance, a Threat Agent with low threat competence may not cause a
large loss despite having sufficient Threat Capability to overcome the Asset's Controls.
Note: Threat competence differs from Threat Capability: threat competence affects Loss
Magnitude while Threat Capability affects Loss Event Frequency.
Whether a Threat Agent is external or internal to the organization can play a pivotal role in how
much loss occurs. Specifically, Loss Events generated by malicious internal Threat Agents
(including employees, contractors, etc.) typically have not resulted in significant regulatory or
reputation losses because it is recognized that trusted insiders are exceedingly difficult to protect
against.
(Figure: Threat loss factors – action, competence, and internal versus external.)
Organizational loss factors are those specific to the organization or business that suffers the loss
and include when the Loss Event occurred, whether the organization took reasonable care in
protecting the information Asset breached, and how it was able to detect the Loss Event in the
first place.
When a Loss Event occurs, its timing can significantly affect Loss Magnitude; consider, for
example, the disclosure of earnings figures before their public release, or the disclosure of a
potential acquisition before the deal is publicly announced.
An organization’s duty of reasonable care to protect the information Asset can affect the legal
liability an organization faces from an event. Whether reasonable and appropriate preventative
measures are in place (given the threat environment, value of the Asset, and legal and regulatory
compliance requirements) can determine the severity of consequential legal and reputational
damage.
How effectively an organization responds to an event can spell the difference between an event
nobody remembers a year later and an event that stands out as an example (good or bad) in the
annals of history. There are three components to a response:
Containment – an organization’s ability to limit the breadth and depth of an event (e.g.,
cordoning-off the network to contain the spread of a worm)
Remediation – an organization’s ability to remove the Threat Agent (e.g., eradicating the
worm)
Recovery – the ability to bring things back to normal
All three of these response components must exist, and the degree to which any of them is
deficient can have a significant impact on Loss Magnitude.
Response capabilities are usually considered solely within the context of criticality; e.g., the
ability to return productivity to normal. However, response capabilities can also significantly
affect losses resulting from sensitive information disclosure. For example, an organization that
experiences a publicly disclosed breach of confidential customer information generally can
significantly reduce its losses by being forthright in its admissions and by compensating harmed
parties fully. Conversely, an organization that denies and deflects responsibility is much more
likely to become a pariah and a media whipping post.
Note: “Undetected Loss Events”, such as advanced persistent threats or situations described
as a Threat Agent gaining a “foothold” (as described in Section 3.5.3.2) in a system to
exploit at a later time are defined as Threat Events before they are detected and, once
detected, become Loss Events.
(Figure: Organizational loss factors – timing, reasonable care, response, and detection.)
External loss factors include external party detection, legal and regulatory, competitors, media,
and Secondary Stakeholders (e.g., customers, partners, stockholders, shareholder activists).
These five categories represent entities that can inflict Secondary Loss upon the organization as
a consequence of an event. In other words, events will often result in direct forms of loss (e.g.,
productivity, response, replacement) due to the criticality and inherent value characteristics of
Assets. Secondary Losses may also occur based upon the external reaction to a Loss Event (e.g.,
sensitive information disclosure).
Moreover, all of the factors within these external categories can be described as “reactive to an
event”. In other words, for an external factor to affect Loss Magnitude, the external party first
must detect the event. For example, if an employee misuses his legitimate access to customer
information to commit identity theft, the customer(s), regulators, and lawyers cannot inflict harm
upon the organization unless that theft is tied back to the organization. Likewise, if a
productivity outage is not detected by customers, partners, etc., then the organization will not be
subject to a negative response on the part of those stakeholders.
External party detection can be thought of as a binary factor on which all other external factors
are predicated. Detection of an event by an external party can happen as a consequence of the
severity of the event, through intentional actions by the Threat Agent, through unauthorized
disclosure by someone on the inside who is familiar with the event, through intentional
disclosure by the organization (either out of a sense of duty, or because it is required by law), or
by accident.
Losses associated with the competitive landscape typically have to do with the competition’s
ability and willingness to take advantage of the Primary Stakeholder’s loss of control of sensitive
information.
Media reaction can have a significant effect on how stakeholders, lawyers, and even regulators
and competitors view the event. If the media chooses to vilify the organization and keep it in the
headlines for an extended period, the result can be much more significant. Conversely, if the
media paints the organization as a well-intentioned victim that exercised reasonable care but still
suffered the event at the hands of a criminal, then legal and reputation damage can be
minimized. This is why organizations must have effective crisis communication processes in
place.
(Figure: External loss factors – external party detection, legal and regulatory, competitors, media, and Secondary Stakeholders.)
Losses can be amplified or diminished by loss factors: qualities of the asset, threat, organization,
or external environment that affect losses once they begin to occur.
Table 3: Loss Magnitude Factors

Total Loss Magnitude – Sum of Primary and Secondary Loss Magnitude; an economic loss. Unit of measure: money (currency).

Primary Loss Magnitude – Direct economic losses associated with a confidentiality, integrity, or availability loss of information Assets. Unit of measure: money (currency).

Secondary Loss Event Frequency – Conditional probability that a Primary Loss will result in a Secondary Loss. Unit of measure: probability (between 0 and 1, or expressed as a percentage between 0% and 100%).

Forms of Loss – Six forms of loss that completely describe possible losses and can occur as a Primary or Secondary Loss: productivity, response, replacement, fines and judgments, competitive advantage, and reputation. Unit of measure: money (currency).

Loss Factors – Four loss factors that affect the magnitude of loss: asset, threat, organizational, and external. Unit of measure: dimensionless scalars (multipliers).
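Tying the factors in Table 1 and Table 3 together, the following illustrative (non-normative) Python sketch simulates annualized loss exposure end to end; every distribution and parameter value is an assumption chosen for the example.

import numpy as np

rng = np.random.default_rng(seed=11)
YEARS = 10_000  # simulated years

tef = 2.0    # Threat Events per year
vuln = 0.20  # P(Threat Event becomes Loss Event)
slef = 0.25  # P(Primary Loss results in Secondary Loss)

threat_events = rng.poisson(tef, YEARS)
loss_events = rng.binomial(threat_events, vuln)

annual_loss = np.zeros(YEARS)
for year, n_events in enumerate(loss_events):
    if n_events == 0:
        continue
    primary = rng.lognormal(np.log(50_000), 1.0, n_events)     # Primary Loss Magnitude per event
    secondary = rng.lognormal(np.log(400_000), 1.2, n_events)  # Secondary Loss Magnitude per event
    has_secondary = rng.random(n_events) < slef
    annual_loss[year] = primary.sum() + secondary[has_secondary].sum()

print(f"Mean annualized loss exposure: ${annual_loss.mean():,.0f}")
print(f"90th percentile annual loss:   ${np.percentile(annual_loss, 90):,.0f}")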
Extensive discussion in development of this risk taxonomy included considerations that can be
grouped into four categories:
Concerns regarding complexity of the model
The availability of data to support statistical analyses
The iterative nature of risk analyses
Perspective
Many of these considerations are not so much criticisms of the Open FAIR framework as
observations and concerns that apply no matter what method is used to analyze risk.
Of course, the fact that the framework includes greater detail provides several key advantages:
The aforementioned flexibility to go deep when necessary
A greater understanding of contributing factors to risk
The ability to better troubleshoot/critique analysis performed at higher layers of
abstraction
Another consideration to keep in mind is that risk is inherently complicated. If it were not, then
there would be no need for well-defined frameworks and no challenges over analyzing risk and
communicating about it. Using over-simplified and informal models almost invariably results in
unclear and inconsistent assumptions, leading to flawed conclusions, and therefore false
recommendations. With that in mind, even the detailed Open FAIR taxonomy is not a perfect or
comprehensive treatment of the problem. There are no perfect taxonomies/models of real-world
complexity.
With regard to communicating complex risk information to business decision-makers (who often
want information like this delivered in simple form), the problem is not inherently with the
model, but rather with the user. As is the case with any complex problem, results must be
articulated in a way that is useful and digestible to decision-makers. It is also not unusual for
decision-makers to ask for greater detail once their familiarity with risk analysis results grows.
Good data has been and will continue to be a challenge within the risk problem space for some
time to come. In part, this stems from the absence of a detailed framework that:
Defines which metrics are needed
Provides a model for applying the data so that meaningful results can be obtained
The Open FAIR framework has been proven in practice to help solve those two issues. It does
not, of course, help with those instances where data is unavailable because events are rare. In
those cases, regardless of the analysis method chosen, the estimates cannot be as well
substantiated by data. On the other hand, the absence of data due to the infrequency of events is
data – of sorts – and can be used to help guide our estimates. As additional information is
acquired over time, it is possible to adjust the initial estimates.
A.4 Perspective
An alternative view held by some is that "exposure" should be the focus rather than "risk". The
argument put forward here is that "risk" can be thought of as the inherent worst-case condition,
while "exposure" represents the residual risk after Controls have been applied.
Setting aside the possibility that those who hold this view misinterpret the definition of risk
within the Open FAIR model, both issues are related (sort of a “before” and “after” perspective)
and relevant. The Open FAIR framework provides the means to analyze both conditions by
allowing the analyst to derive unmitigated risk as well as mitigated risk levels.
A language gap is particularly evident between business managers and their IT risk/security
specialists/analysts. For example, business managers talk about the "impact" of loss not in terms
of how many servers or operational IT systems will cease to provide normal service, but rather
in terms of the effect that losing these normal services will have on the business's capacity to
continue to trade normally, measured in $-value; or in terms of whether the impact will be a
failure to satisfy applicable regulatory requirements, which could force them to limit or even
cease trading and perhaps make them liable to heavy legal penalties.
Therefore, a business manager tends to think of a “threat” as something which could result in a
loss that the business could not absorb without seriously damaging its trading position. Compare
this with our risk taxonomy definitions for “Threat” and “Vulnerability”:
Threat: anything that is capable of acting in a manner resulting in harm to an Asset and/or
organization; for example, acts of God (weather, geological events, etc.), malicious actors,
errors, failures
Vulnerability: the probability that a Threat Event will become a Loss Event
Similar language gaps exist between other stakeholders in management of risk. Politicians and
lawyers are particularly influential stakeholders: they are in the powerful position of shaping
national and international policy (e.g., OECD, European Commission), which in turn influences
national governments to pass laws and regulatory regimes on business practices that become
effective one to three years down the line.
Many information security standards and frameworks specify that information risk assessments
should be done, but provide few or no specifics on how to do them. The O-RT and O-RA
Standards, along with guidance documentation from The Open Group, provide a way to quantify
risk within those information security standards and frameworks.
Senior management and boards of directors are guided by best practices within their professional
domains to treat information risk as an enterprise risk, and the Open FAIR model is one standard
that allows that risk to be expressed in economic, business terms, and in the same units of
measure as other enterprise risks. This compatibility of the unit of measure of risk between
information risk and other business operational risks allows information risk to be compared,
contrasted, and aggregated to develop overall enterprise-wide risk assessments. In essence, “risk
is risk”, whether it is related to an enterprise’s operation, a bank’s loan portfolio, a trading desk’s
value at risk, or information technology: it is the probable frequency of an uncertain loss and the
magnitude of that loss expressed as a distribution of economic outcomes.
Practitioners who must perform information technology risk assessments to comply with other
industry and regulatory standards, frameworks, and methodologies can use the Open FAIR
taxonomy and framework to build consistent and defensible risk statements that are measured in
the same economic terms as other risks they have to manage.