SINTEF REPORT

TITLE: Reliability Data for Safety Instrumented Systems – PDS Data Handbook, 2010 Edition

SINTEF Technology and Society, Safety Research
Address: NO-7465 Trondheim, NORWAY
Location: S P Andersens veg 5, NO-7031 Trondheim
Telephone: +47 73 59 27 56
Fax: +47 73 59 28 96

AUTHOR(S): Stein Hauge
This report provides reliability data estimates for components of control and safety systems. Data
dossiers for input devices (sensors, detectors, etc.), control logic (electronics) and final elements
(valves, etc.) are presented, including some data for subsea equipment. Efforts have been made to
document the presented data thoroughly, both in terms of applied data sources and underlying
assumptions. The data are given in a format suitable for performing reliability analyses in line with
the requirements of the IEC 61508 and IEC 61511 standards.
As compared to the former 2006 edition, the following main changes are included:
• A general review and update of the failure rates, coverage values, β-values and other relevant
parameters;
• Some new equipment groups have been added;
• Data for control logic units have been updated and refined.
PREFACE
The present report is an update of the 2006 edition of the Reliability Data for Control and Safety
Systems, PDS Data Handbook [12]. The handbook presents data in line with the latest available
data sources as well as data for some new equipment.
The work has been carried out as part of the research project “Managing the integrity of safety
instrumented systems”. 1
Stein Hauge
Oil Companies/Operators
• A/S Norske Shell
• BP Norge AS
• ConocoPhillips Norge
• Eni Norge AS
• Norsk Hydro ASA
• StatoilHydro ASA (Statoil ASA from Nov. 1st 2009)
• Talisman Energy Norge
• Teekay Petrojarl ASA
• TOTAL E&P NORGE AS
Governmental Bodies
• The Directorate for Civil Protection and Emergency Planning (Observer)
• The Norwegian Maritime Directorate (Observer)
• The Petroleum Safety Authority Norway (Observer)
1) This user-initiated research project has been sponsored by the Norwegian Research Council and the PDS
forum participants. The project work has been carried out by SINTEF.
Table of Contents

PREFACE
ABSTRACT
1 INTRODUCTION
   1.1 Objective and Scope
   1.2 Benefits of Reliability Analysis – the PDS Method
   1.3 The IEC 61508 and 61511 Standards
   1.4 Organisation of Data Handbook
   1.5 Abbreviations
2 RELIABILITY CONCEPTS
   2.1 The Concept of Failure
   2.2 Failure Rate and Failure Probability
      2.2.1 Failure Rate Notation
      2.2.2 Decomposition of Failure Rate
   2.3 Reliability Measures and Notation
   2.4 Reliability Parameters
      2.4.1 Rate of Dangerous Undetected Failures
      2.4.2 The Coverage Factor, c
      2.4.3 Beta-factors and CMooN
      2.4.4 Safe Failure Fraction, SFF
   2.5 Main Data Sources
   2.6 Using the Data in This Handbook
3 RELIABILITY DATA SUMMARY
   3.1 Topside Equipment
   3.2 Subsea Equipment
   3.3 Comments to the PDS Data
      3.3.1 Probability of Test Independent Failures (PTIF)
      3.3.2 Coverage
      3.3.3 Fraction of Random Hardware Failures (r)
   3.4 Reliability Data Uncertainties – Upper 70% Values
      3.4.1 Data Uncertainties
      3.4.2 Upper 70% Values
   3.5 What is "Sufficient Operational Experience"? – Proven in Use
4 MAIN FEATURES OF THE PDS METHOD
   4.1 Main Characteristics of PDS
   4.2 Failure Causes and Failure Modes
   4.3 Reliability Performance Measures
      4.3.1 Contributions to Loss of Safety
      4.3.2 Loss of Safety due to DU Failures – Probability of Failure on Demand (PFD)
      4.3.3 Loss of Safety due to Test Independent Failures (PTIF)
      4.3.4 Loss of Safety due to Downtime Unavailability – DTU
      4.3.5 Overall Measure for Loss of Safety – Critical Safety Unavailability
5 DATA DOSSIERS
   5.1 Input Devices
      5.1.1 Pressure Switch
      5.1.2 Proximity Switch (Inductive)
      5.1.3 Pressure Transmitter
      5.1.4 Level (Displacement) Transmitter
      5.1.5 Temperature Transmitter
      5.1.6 Flow Transmitter
      5.1.7 Catalytic Gas Detector
      5.1.8 IR Point Gas Detector
      5.1.9 IR Line Gas Detector
      5.1.10 Smoke Detector
      5.1.11 Heat Detector
      5.1.12 Flame Detector
      5.1.13 H2S Detector
      5.1.14 ESD Push Button
   5.2 Control Logic Units
      5.2.1 Standard Industrial PLC
      5.2.2 Programmable Safety System
      5.2.3 Hardwired Safety System
   5.3 Final Elements
      5.3.1 ESV/XV
      5.3.2 ESV, X-mas Tree
      5.3.3 Blowdown Valve
      5.3.4 Pilot/Solenoid Valve
      5.3.5 Process Control Valve
      5.3.6 Pressure Relief Valve
      5.3.7 Deluge Valve
      5.3.8 Fire Damper
      5.3.9 Circuit Breaker
      5.3.10 Relay
      5.3.11 Downhole Safety Valve – DHSV
   5.4 Subsea Equipment
6 REFERENCES
1 INTRODUCTION
Safety standards such as IEC 61508 [1] and IEC 61511 [2] require quantification of the failure
probability of safety systems. Such quantification may be part of design optimisation or of
verifying that the design meets the stated performance requirements.
The use of relevant failure data is an essential part of any quantitative reliability analysis. It is also
one of the most challenging parts and raises a number of questions concerning the availability and
relevance of the data, the assumptions underlying the data and what uncertainties are related to the
data.
1.1 Objective and Scope

In this handbook recommended data for reliability quantification of Safety Instrumented Systems
(SIS) are presented. Efforts have been made to document the presented data thoroughly, both in
terms of applied data sources and underlying assumptions.
Various data sources have been applied when preparing this handbook, the most important source
being the OREDA database and handbooks (ref. section 2.5).
As compared to the former 2006 edition, [12], the following main changes are included:
• A general update / review of the failure rates, coverage values, β-values and other relevant
parameters;
• Some new equipment groups have been added;
• Data for control logic units have been updated and refined.
1.2 Benefits of Reliability Analysis – the PDS Method

Reliability analysis represents a systematic tool for evaluating the performance of safety
instrumented systems (SIS) from a safety and production availability point of view. Some main
applications of reliability analysis are:
• Reliability assessment and follow-up; verifying that the system fulfils its safety and
reliability requirements;
• Design optimisation; balancing the design to get an optimal solution with respect to safety,
production availability and lifecycle cost;
• Operation planning; establishing the optimal testing and maintenance strategy;
• Modification support; verifying that planned modifications are in line with the safety and
reliability requirements.
The PDS method has been developed in order to enable the reliability engineer and non-experts to
perform such reliability considerations in various phases of a project. The main features of the
PDS method are discussed in chapter 4.
1.3 The IEC 61508 and 61511 Standards

The PDS method is in line with the main principles advocated in the IEC standards, and is a
useful tool when implementing and verifying quantitative (SIL) requirements as described in the
IEC standards.
1.4 Organisation of Data Handbook

The recommended reliability data estimates are summarised in chapter 3 of this report. A split has
been made between input devices, logic solvers and final elements.
Chapter 4 gives a brief summary of the main characteristics of the PDS method. The failure
classification for safety instrumented systems is presented together with the main reliability
performance measures used in PDS.
In chapter 5 the detailed data dossiers providing the basis for the recommended reliability data are
given. As for previous editions of the handbook, some data are scarce in the available data
sources, and it has been necessary to rely partly or fully on expert judgement.
1.5 Abbreviations
CCF - Common cause failure
CSU - Critical safety unavailability
DTU - Downtime unavailability
FMECA - Failure modes, effects, and criticality analysis
FMEDA - Failure modes, effects, and diagnostic analysis
IEC - International Electrotechnical Commission
JIP - Joint industry project
MTTR - Mean time to restoration
NDE - Normally de-energised
NE - Normally energised
OLF - The Norwegian oil industry association
OREDA - Offshore reliability data
AI - Analogue input
BDV - Blowdown valve
CPU - Central Processing Unit
DO - Digital output
ESV - Emergency shutdown valve
DHSV - Downhole safety valve
XV - Production shutdown valve
2 RELIABILITY CONCEPTS
In this chapter some selected concepts related to reliability analysis and reliability data are
discussed. For a more detailed discussion reference is made to the updated PDS method
handbook, ref. [10].
2.1 The Concept of Failure

A SIS component has two main functions: the ability to shut down or go to a safe state when
required, and the ability to maintain production when it is safe to do so. From a safety point of
view, the first category of failures will be the more critical, and such failures are defined as
dangerous failures (D), i.e. they have the potential to result in loss of the ability to shut
down or go to a safe state when required.
Loss of the ability to maintain production is normally less critical to safety; such failures have
therefore traditionally been denoted spurious trip (ST) failures in PDS, whereas IEC 61508
categorises them as 'safe' (S). In the forthcoming update of the IEC 61508 standard the definition
of safe failures is more in line with the PDS interpretation, and PDS has therefore in this
updated version also adopted the notation 'S' (instead of 'ST' failures).
It should be noted that a given failure may be classified as either dangerous or safe depending on
the intended application. For example, loss of hydraulic supply to a valve actuator operating on
demand will be dangerous in an energise-to-trip application and safe in a de-energise-to-trip
application.
Hence, when applying the failure data, the assumptions underlying the data as well as the context
in which the data shall be used must be carefully considered.
2.2 Failure Rate and Failure Probability

2.2.1 Failure Rate Notation

λcrit = Rate of critical failures; i.e., failures that may cause loss of one of the two main
functions of the component/system (see above).
Critical failures include dangerous (D) failures which may cause loss of the ability to
shut down production when required and safe (S) failures which may cause loss of
the ability to maintain production when safe (i.e. spurious trip failures). Hence: λcrit = λD + λS.
λDU = Rate of dangerous undetected failures, i.e. failures undetected both by automatic
self-test and by personnel
λDD = Rate of dangerous detected failures, i.e. failures detected by automatic self-test or
personnel
λS = Rate of safe (spurious trip) failures, including both undetected as well as detected
failures. λS = λSU + λSD (see below)
λSU = Rate of safe (spurious trip) undetected failures, i.e. undetected both by automatic
self-test and personnel
λSD = Rate of safe (spurious trip) detected failures, i.e. detected by automatic self-test or
personnel
λundet = Rate of (critical) failures that are undetected both by automatic self-test and by
personnel (i.e., detected in functional testing only). λundet = λDU + λSU
λdet = Rate of (critical) failures that are detected by automatic self-test or personnel
(independent of functional testing). λdet = λDD + λSD
CMooN = Modification factor for voting configurations other than 1oo2 in the beta-factor
model (e.g. 1oo3, 2oo3 and 2oo4 voting logics)
Some important relationships between different fractions of the critical failure rate are illustrated
in Table 1 and Figure 1.
[Figure 1: Decomposition of the critical failure rate λcrit. λcrit splits into λundet (the
dangerous failures λDU and safe (spurious trip) failures λSU that are undetected by automatic
self-test or personnel) and λdet (the dangerous failures λDD and safe failures λSD that are
detected by automatic self-test or personnel). The λDD, λSD and λSU contributions count towards
the SFF (Safe Failure Fraction).]
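The failure-rate notation above can be sanity-checked with a short script. The identities are those defined in this section; the numerical rates are hypothetical examples, not handbook values.

```python
# Sketch of the PDS failure-rate decomposition. The identities follow the
# definitions in this section; all numerical rates are hypothetical
# (per 10^6 hours), chosen only for illustration.

lam_DU, lam_DD = 0.5, 0.5   # dangerous undetected / detected
lam_SU, lam_SD = 0.2, 0.1   # safe (spurious trip) undetected / detected

lam_D = lam_DU + lam_DD      # rate of dangerous failures
lam_S = lam_SU + lam_SD      # rate of safe failures
lam_crit = lam_D + lam_S     # rate of critical failures

lam_undet = lam_DU + lam_SU  # undetected by self-test and by personnel
lam_det = lam_DD + lam_SD    # detected by self-test or personnel

# Both decompositions of lam_crit must agree:
assert abs(lam_crit - (lam_undet + lam_det)) < 1e-12
print(round(lam_crit, 3))  # 1.3
```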
Term - Description

PFD - Probability of failure on demand. This is the measure for loss of safety caused by
dangerous undetected failures, see section 4.3.

PTIF - Probability of a test independent failure. This is the measure for loss of safety caused
by a failure not detectable by functional testing, but occurring upon a true demand (see
section 4.3).

CSU - Critical safety unavailability, CSU = PFD + PTIF.

MTTR - Mean time to restoration, i.e. the time from a failure is detected/revealed until the
function is restored ("restoration period"). Note that this restoration period may depend on a
number of factors and can differ for detected and undetected failures: undetected failures are
revealed and handled by functional testing and could have a shorter MTTR than detected
failures. The MTTR could also depend on configuration, operational philosophy and failure
multiplicity.

STR - Spurious trip rate, i.e. the rate of spurious trips of the safety system (or set of
redundant components), taking the voting configuration into consideration.

τ - Interval of functional test (time between functional tests of a component).
As discussed in section 2.2.2, the critical failure rate λcrit is split into dangerous and safe
failures (i.e. λcrit = λD + λS), which are further split into detected and undetected failures.
When performing safety unavailability calculations, the rate of dangerous undetected failures,
λDU, is of special importance, since this parameter, together with the test interval, to a large
degree governs the prediction of how often a safety function is likely to fail on demand.
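As a sketch of this point: for a single component operating in low-demand mode, the PFD is commonly approximated by λDU · τ/2, and the PDS measure CSU adds the test independent contribution (CSU = PFD + PTIF, see section 2.3). The full PDS formulas are given in [10]; the input values below are hypothetical.

```python
# Simplified low-demand approximation for a single component:
# PFD ~ lam_DU * tau / 2, and CSU = PFD + P_TIF (cf. section 2.3).
# All input values are hypothetical, not taken from the handbook tables.

lam_DU = 0.5e-6   # dangerous undetected failure rate [per hour] (assumed)
tau = 8760.0      # functional test interval [hours], i.e. annual testing
P_TIF = 1e-4      # probability of test independent failure (assumed)

PFD = lam_DU * tau / 2.0   # average probability of failure on demand
CSU = PFD + P_TIF          # critical safety unavailability

print(f"PFD = {PFD:.2e}")  # PFD = 2.19e-03
print(f"CSU = {CSU:.2e}")  # CSU = 2.29e-03
```

Note how halving either λDU or the test interval τ halves the PFD, which is why these two parameters dominate loss-of-safety predictions.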
Equipment specific failure data reports prepared by manufacturers (or others) often provide λDU
estimates that are an order of magnitude (or even more) lower than those reported in generic data
handbooks. There may be several causes for such exaggerated claims of performance, including
imprecise definitions of equipment and analysis boundaries, incorrect failure classification, or
too optimistic predictions of the diagnostic coverage factor (see e.g. [20]).
When studying the background data for generic failure rates (λDU) presented in data sources such
as OREDA and RNNP, it is found that these data will include both random hardware failures as
well as systematic failures. Examples of the latter include incorrect parameter settings for a
pressure transmitter, an erroneous output from the control logic due to a failure during software
modification, or a PSV which fails due to excessive internal erosion or corrosion. These are all
failures that are detectable during functional testing and therefore illustrate the fact that systematic
failures may well be part of the λDU for generic data.
Since failure rates provided by manufacturers frequently exclude failures related to installation,
commissioning or operation of the equipment (i.e. systematic failures), a mismatch between
manufacturer data and generic data arises. Since systematic failures inevitably will occur, the
question then becomes: why not include these failures in predictive reliability analyses?
To reflect the fact that the failure rate comprises both random hardware failures and systematic
failures, the parameter r has been defined as the fraction of dangerous undetected failures
originating from random hardware failures. Rough estimates of the r factor are given in the
detailed data sheets in chapter 5. For a more thorough discussion of, and arguments concerning,
the r factor, reference is made to [10].
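As a minimal illustration of how the r factor splits λDU (both numbers below are placeholders; chapter 5 gives rough per-equipment estimates):

```python
# Splitting lam_DU into random hardware and systematic contributions via
# the r factor (fraction of DU failures that are random hardware failures).
# Both numbers are illustrative placeholders, not handbook values.

lam_DU = 0.5   # dangerous undetected failure rate [per 10^6 hours]
r = 0.5        # assumed fraction originating from random hardware failures

lam_DU_random = r * lam_DU              # random hardware part
lam_DU_systematic = (1.0 - r) * lam_DU  # systematic part

print(lam_DU_random, lam_DU_systematic)  # 0.25 0.25
```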
Modules often have built-in automatic self-test, i.e. on-line diagnostic testing to detect failures
prior to an actual demand 2. The fraction of failures being detected by the automatic self-test is
called the fault coverage and quantifies the effect of the self-test. Note that the actual effect on
system performance from a failure that is detected by the automatic self-test will depend on
system configuration and operating philosophy. In particular it should be considered whether the
detected failure is configured to only raise an alarm or to bring the system to a safe state.
Failures classified as dangerous detected often only raise an alarm, and in such cases it must be
ensured that the failure initiates an immediate response in the form of a repair and/or the
introduction of risk reducing measures.
In addition to the diagnostic self-test, an operator or maintenance crew may detect dangerous
failures incidentally in between tests. For instance, the panel operator may detect a transmitter that
is “stuck” or a sensor that has been left in by-pass. Similarly, when a process segment is isolated
for maintenance, the operator may detect that one of the valves will not close. The PDS method
also aims at incorporating this effect, and defines the total coverage factor, c, reflecting
detection both by automatic self-test and by operator. Further, the coverage factor for dangerous
failures is denoted cD, whereas the coverage factor for safe failures is denoted cS.
Critical failures that are not detected by automatic self-testing or by observation are assumed
either to be detectable by functional (proof) testing 3, or to be so-called test independent
failures (TIF) that are not detected during a functional test but appear upon a true demand (see
section 2.3 and chapter 4 for further description).
It should be noted that the term "detected safe failure" (of rate λSD) is interpreted as a failure
which is detected such that a spurious trip is actually avoided. Hence, a spurious closure of a
valve which is detected by, e.g., flow metering downstream of the valve cannot be categorised as a
detected safe failure. On the other hand, drifting of a pressure transmitter which is detected by
the operator, such that a shutdown is avoided, will typically be a detected safe failure.
When quantifying the reliability of systems employing redundancy, e.g., duplicated or triplicated
systems, it is essential to distinguish between independent and dependent failures. Random
hardware failures due to natural stressors are assumed to be independent failures. However, all
systematic failures, e.g. failures due to excessive stresses, design related failures and maintenance
errors are by nature dependent (common cause) failures. Dependent failures can lead to
simultaneous failure of more than one (redundant) component in the safety system, and thus
reduce the advantage of redundancy.
Traditionally, the dependent or common cause failures have been accounted for by the β-factor
approach. The problem with this approach has been that for any M-out-of-N (MooN) voting
(M<N) the rate of dependent failures is the same, and thus the approach does not distinguish
between e.g. a 1oo2 and a 2oo3 voting. The PDS method extends the β-factor model, and
distinguishes between the voting logics by introducing β-factors which depend on the voting
configuration; i.e. β(MooN) = β · CMooN. Here, CMooN is a modification factor depending on the
voting configuration, MooN.
Footnotes:
2) Also refer to IEC 61508-4, sections 3.8.6 and 3.8.7.
3) See also IEC 61508-4, section 3.8.5.
Standard (average) values for the β-factor are given in Table 7. Note that when performing
reliability calculations, application specific β-factors should preferably be obtained, e.g. by using
the checklists provided in IEC 61508-6, or by using the simplified method as described in
Appendix D of the PDS method handbook, [10].
Values for CMooN are given in Table 8. For a more complete description of the extended β-factor
approach of PDS, see [10].
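The extended β-factor model can be sketched as follows, using the CMooN values for 1ooN votings from Table 8; the λDU and β inputs are hypothetical example values, since application specific figures should be used in practice.

```python
# PDS extended beta-factor model: beta(MooN) = beta * C_MooN, so the rate
# of dependent (common cause) DU failures of a voted group becomes
# lam_DU * beta * C_MooN. The C_MooN values for M = 1 are taken from
# Table 8; lam_DU and beta are hypothetical example inputs.

C_MOON = {"1oo2": 1.0, "1oo3": 0.5, "1oo4": 0.3, "1oo5": 0.21, "1oo6": 0.17}

lam_DU = 0.5   # per 10^6 hours (assumed)
beta = 0.05    # beta-factor (assumed; application specific in practice)

for voting, c_moon in C_MOON.items():
    ccf_rate = lam_DU * beta * c_moon
    print(f"{voting}: dependent DU failure rate = {ccf_rate:.4f} per 10^6 h")
```

The loop shows the point of the CMooN extension: unlike the plain β-factor model, the predicted common cause contribution decreases as more redundant channels are added.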
The Safe Failure Fraction as described in IEC 61508 is given by the ratio between dangerous
detected failures plus safe failures and the total rate of failure; i.e. SFF = (λDD + λS) /(λD + λS).
The objective of including this measure (and the associated hardware fault tolerance; HFT) was to
prevent manufacturers from claiming excessive SILs based solely on PFD calculations. However,
experience has shown that failure modes that actually do not influence the main functions of the
SIS (ref. section 2.1) are frequently included in the safe failure rate so as to artificially increase
the SFF, [20].
It is therefore important to point out that when estimating the SFF, only failures with a potential to
actually cause a spurious trip of the component should be included among the safe failures. Non-
critical failures, such as a minor external leakage of hydraulic oil from a valve actuator, should not
be included.
The SFF figures presented in this handbook are based on reported failure mode distributions in
OREDA as well as some additional expert judgements. Higher (or lower) SFFs than given in the
tables may apply for specific equipment types and this should in such case be well documented,
e.g. by FMEDA type of analyses.
The recommended data is based on a number of assumptions concerning safe state, fail safe
design, self-test ability, loop monitoring, NE/NDE design, etc. These assumptions are, for each
piece of equipment, described in the detailed data sheets in chapter 5. Hence, when using the data
for reliability calculations, it is important to consider the relevance of these assumptions for each
specific application.
Observe that λS follows from λD (third column of tables 3 to 5) and the relation λcrit = λD + λS.
The rates of undetected failures, λDU and λSU, follow from the given coverage values cD and cS,
i.e. λDU = λD · (1 - cD / 100%) and λSU = λS · (1 - cS / 100%). The safe failure fraction can be
calculated as SFF = ((λcrit - λDU) / λcrit) · 100%.
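These relations can be coded directly and checked against, e.g., the H2S detector row of Table 3 (λcrit = 1.3, λD = 1.0, cD = 50%, cS = 30%):

```python
# Derived quantities from the summary-table columns, exactly as stated in
# the text: lam_S = lam_crit - lam_D, lam_DU = lam_D * (1 - cD),
# lam_SU = lam_S * (1 - cS), SFF = (lam_crit - lam_DU) / lam_crit * 100.

def derived(lam_crit, lam_D, c_D, c_S):
    """Coverages c_D, c_S given as fractions (0.50 means 50%)."""
    lam_S = lam_crit - lam_D
    lam_DU = lam_D * (1.0 - c_D)
    lam_SU = lam_S * (1.0 - c_S)
    sff = (lam_crit - lam_DU) / lam_crit * 100.0
    return lam_DU, lam_SU, sff

# H2S detector row from Table 3 (rates per 10^6 hours):
lam_DU, lam_SU, sff = derived(lam_crit=1.3, lam_D=1.0, c_D=0.50, c_S=0.30)
print(lam_DU, round(lam_SU, 2), round(sff))  # 0.5 0.21 62
```

The computed values reproduce the tabulated λDU = 0.5, λSU = 0.2 and SFF = 62% for this component.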
Data dossiers with comprehensive information for each component are given in chapter 5 as
referred to in tables 3 to 5.
Input Devices

Table 3 (excerpt) Failure rates, coverages and SFF for input devices (failure rates per 10^6
hours; column labels inferred from the relations above):
H2S detector: λcrit = 1.3, λD = 1.0, cD = 50%, cS = 30%, λDU = 0.5, λSU = 0.2, SFF = 62%
(Sect. 5.1.13)
Table 4 Failure rates, coverages and SFF for control logic units
The following additional assumptions and notes apply for the above data on control logic units:
• A single system with analogue input, CPU/logic and digital output configuration is
generally assumed;
• For the input and output part, figures are given for one channel plus the common part of
the input/output card (except for hardwired safety system where figures for one channel
only are given);
• Single processing unit / logic part is assumed throughout;
• If the figures for input and output are to be used for redundant configurations, separate
input cards and output cards must be used since the given figures assume a common part
on each card;
• If separate Ex barriers or other interface devices are used, figures for these must be added
separately;
• The systems are generally assumed used in de-energised to trip functions, i.e. loss of
power or signal will result in a safe state.
Final elements
Table 6 below gives suggested values for the PTIF, i.e. the probability of a test independent failure
occurring upon a demand.
Table 6 PTIF for various components
Component group | Component | PTIF | Comments (see section 3.3.1)
Table 7 and 8 give suggested values for the β-factor and the configuration factor CMooN
respectively. Note that the CMooN factors have been updated as compared to previous values, ref.
[12].
Regarding the suggested β-factors it should be pointed out that these are typical values. Any
application specific factors may be implemented in the estimates by e.g. applying the checklists in
IEC 61508 or the simplified method described in appendix D in [10]. Some beta values have been
slightly increased as compared to the figures in the 2006 edition, [12]. This is based on results
from operational reviews where it was observed that a fairly large proportion of the SIS failures
actually involved more than one component.
Component group | Component | β | Comment/source
Relay 0.03
Table 8 Numerical values for configuration factors, CMooN
M=1 C1oo2 = 1.0 C1oo3 = 0.5 C1oo4 = 0.3 C1oo5 = 0.21 C1oo6 = 0.17
Note that the CMooN factors have been updated as compared to the previous 2006 handbook, [12].
It should be pointed out that the CMooN factors are suggested values and not exact figures. C1oo5
and C1oo6 have been given to two decimal places in order to be able to distinguish the two
configurations. The reasoning behind the CMooN factors is further discussed in [10].
Similarly, the values given for the safe failure fraction (SFF) should be considered as indicative
only. Higher (or lower) SFFs may apply for specific equipment types and this should in such case
be documented separately.
Subsea equipment
Component | λcrit | λD | cD | λDU | SFF
Solenoid control valve (in subsea control module) | 0.40 | 0.16 | 0% | 0.16 | 60%
Production master valve (PMV), Production wing valve (PWV) | 0.26 | 0.18 | 0% | 0.18 | 30%
Chemical injection valve (CIV) | 0.37 | 0.22 | 0% | 0.22 | 40%
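The subsea figures are internally consistent with the SFF relation given earlier; a quick check (the table percentages are rounded):

```python
# With cD = 0 % we have lambda_DU = lambda_D, and
# SFF = (lambda_crit - lambda_DU) / lambda_crit.
rows = {
    "Solenoid control valve": (0.40, 0.16),
    "PMV / PWV": (0.26, 0.18),
    "Chemical injection valve (CIV)": (0.37, 0.22),
}
for name, (lam_crit, lam_du) in rows.items():
    sff = (lam_crit - lam_du) / lam_crit * 100.0
    print(f"{name}: SFF = {sff:.1f} %")   # 60.0, 30.8, 40.5
```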
3.3 Comments to the PDS Data
The data presented in Table 3 – Table 9 are mainly based on operational experience (OREDA,
RNNP, etc.) and as such reflect some kind of average expected field performance. It is stressed
that these generic data should not be used uncritically – if valid application-specific data are
available, these should be preferred. When comparing the data in this handbook with figures
found in manufacturer certificates and reports, major gaps will often be found. As discussed in
section 2.4.1 such data sources often exclude failures caused by inappropriate maintenance, usage
mistakes and design related systematic failures. Care should therefore be taken when data from
certificates and similar reports are used for predicting reliability performance in the field.
For some equipment types and some of the parameters, the listed data sources provide limited
information and additional expert judgement must therefore be applied. In particular for the PTIF,
the coverage c and the r factor, the data sources are scarce, and some arguments are therefore
required concerning the recommended values.
3.3.1 PTIF
General
No testing is 100% perfect, and some dangerous undetected failures may therefore be present also
after a functional test. The suggested PTIF values attempt to quantify the likelihood that such
failures are present after a test. Obviously, such values will depend heavily on the given application, and
specific measures may have been introduced to minimise the likelihood of test independent
failures. Hence, it may be argued to reduce (or increase) the given values. This is further
discussed in appendix D of the method handbook, [10].
Process Switch
The proposed PTIF of 10⁻³ applies to a pressure switch operating in clean medium, and the main
contribution is assumed to be failures during human intervention (e.g. by-pass, wrong set point,
etc.). If the switch is operating in unclean medium and clogging of the sensing line is a possibility,
the PTIF may be increased to 5·10⁻³.
Proximity Switch
For proximity switches a relatively high PTIF of 10⁻³ has been suggested. The main contributors to
this TIF are assumed to be failures during installation and maintenance, in particular mounting
and misalignment problems related to the interacting parts.
Process Transmitters
Transmitters have a “live signal”. Thus, blocking of the sensing line may be detected by the
operator and is included in the λdet. Also a significant part of the failures of the transmitter itself
(all "stuck" failures) may be detected by the operator and therefore contribute to λdet. Thus, the
PTIF for transmitters is expected to be lower than that of the switch, and a value of 5·10⁻⁴ has been
suggested. In previous editions of the handbook, smart and field bus transmitters have, due to
more complete self-testing, been given an even smaller PTIF. However, since smart transmitters
also have some additional software, other test independent failures may be introduced.
Consequently, one common PTIF is given.
Gas Detectors
In previous versions of the PDS data handbook the given PTIF values for gas detectors
differentiated with respect to detector type, the size of the leakage, ventilation, and other
conditions expected to influence the PTIF probability for detectors. The PTIF values were then
given as intervals depending on the state of the conditions listed. It is now assumed that the
detector is already exposed and the present values generally represent lower end values of the
previously given intervals, as this represented the “best conditions”. Note that catalytic gas
detectors and H2S detectors have been given a somewhat lower PTIF than the IR gas detectors. The
catalytic gas detectors have a simpler design which is assumed to result in a lower probability of
test independent systematic failures.
Fire Detectors
PTIF values are given based on the assumptions that (1) a detector with the "appropriate" detection
principle is applied (e.g. that smoke detectors are applied where smoke fires are expected and
flame detectors where flame fires are expected), and (2) the detector is already exposed to the
flame/heat or smoke (depending on detector type). A PTIF value of 10⁻³ has been suggested for all
fire detectors.
Valves
The PTIF for ESV/XVs will depend on the quality of the functional testing performed. Here, a
standard functional test has been assumed where the valve is fully closed but not tested for
internal leakage. In such case a PTIF value of 10⁻⁴ is suggested. For control valves used for
shutdown purposes and blowdown valves a PTIF of 10⁻⁴ is also suggested. All these values include
PTIF for the pilot valve. For PSVs a relatively high PTIF value of 10⁻³ has been suggested due to the
possibility of human failures related to incorrect setting/adjustment of the PSV.
3.3.2 Coverage
General
As compared to the ’03, ‘04 and ’06 editions of the PDS Reliability data handbook, some of the
coverage factors have been updated. The reasoning behind this is partly discussed below. The
discussion is mainly limited to dangerous failures.
For process transmitters a 60% coverage of dangerous failures has been assumed. This is based on
implemented self-test in the transmitter as well as casual observation by the control room operator.
The latter assumes that the transmitter signal can be observed on the VDU and compared with
other signals so that e.g. stuck or drifting signal can be revealed. If a higher coverage is claimed,
e.g. due to automatic comparison between transmitters, this should be especially documented.
coverage. Catalytic gas detectors normally have limited built-in self-test. The same applies for
smoke and heat detectors. Hence, these detector types will have a lower coverage.
For a standard industrial PLC (single system) the coverage factor for dangerous failures, cD, has
been set lower than for a SIL certified programmable safety system. For safe failures, the
coverage factor is low, since it is assumed that upon detection of such a failure (e.g. loss of signal)
the single safety system should normally go to a safe state (i.e. a shutdown). It should be noted
that if the safety system is redundant, the rate of undetected safe (i.e. spurious trip) failures may
be reduced significantly by the use of voting.
The hardwired safety system is assumed to be a fail safe design without diagnostic coverage, i.e.
failures will either be dangerous undetected or a detected failure will result in a trip action (SU).
Hence, this implies that the coverage for both dangerous and safe failures has been assumed to be
zero. Note that this applies for single systems and as a consequence hardwired safety systems are
often voted 2oo2 in order to avoid spurious trip failures.
Valves
No automatic self-testing of valves is assumed. For ESV/XV valves the coverage for dangerous
failures has been slightly increased to 30% due to information from OREDA phases V–VII, where
it appears that a high fraction of dangerous failures (more than 50%) are detected in between tests
by operator observation or other methods. It should be noted that this is not automatic diagnostic
coverage (as defined in e.g. IEC 61508), but it does imply that dangerous faults are detected
in between tests. For valves that are operated infrequently, the coverage will be lower, and the
cD for blowdown valves has therefore been set to 20%. Based on information from OREDA the
coverage for safe failures has been set to 10% for ESV/XV and 0% for blowdown valves.
For control valves used also for shutdown purposes, a relatively high coverage of 50% has been
estimated based on the registered observation methods for the relevant failure modes in OREDA.
It is then implicitly assumed that the control valve is frequently operated resulting in a relatively
high coverage.
Occasionally, e.g. on some onshore plants, selected control valves may be used solely for
shutdown purposes (i.e. not normally operated). In this case the valves will be operated
infrequently, resulting in a significantly lower coverage factor. For control valves used only as
shutdown valves, the suggested coverage is therefore reduced to 20%.
For PSV valves and deluge valves no coverage has generally been assumed.
3.3.3 The r Factor
General
Based on input from discussions with experts as well as a study of available OREDA data,
estimates of r have been established. As discussed previously, r is the fraction of dangerous
undetected (DU) failures that can be “explained” by random hardware failures (hence 1-r is the
fraction of DU failures that can be explained by systematic failures). Below, a brief discussion of
the r values suggested in the detailed data sheets is given.
Process Switch
For process switches the reported failure causes from OREDA are scarce and the r has been
estimated by expert judgement to be approximately 50%.
Process Transmitters
Data from OREDA on critical transmitter failures, results from operational reviews as well as
discussions with experts, all indicate that a significant proportion of the critical dangerous failures
for transmitters are caused by factors such as "excessive vibration", "erroneous maintenance"
(e.g. 'wrong calibration', 'erroneous specification of measurement area' and 'left in inhibit') and
"incorrect installation". As seen, all these are examples of systematic failures which according to
OREDA are detectable (either by casual observation or during functional testing/maintenance).
Based on the observed failure cause distribution, an r = 30% has therefore been proposed.
Detectors
When going through data from OREDA phases V and VI for fire and gas detectors, it is found that
for some 40% of the critical failures the failure cause is reported as being due to 'expected wear
and tear', whereas some 60% of the critical failures are due to 'maintenance errors'. When going in
more detail into the failure mechanisms, it is seen that the failures are described by e.g. ‘out of
adjustment’ (30%), ‘general instrument failure’ (28%), ‘contamination’ (21%), ‘vibration’ (10%)
and ‘maintenance/external/others’ (11%). Even though ‘contamination’ (i.e. typical dirty lens) and
instrument failure partly can be explained by expected wear and tear, it is seen that many of the
critical failures are systematic ones. Based on this an r = 40% has been proposed.
Control Logic
For control logic no updated OREDA data is available on failure causes and the proposed r values
are therefore entirely based on expert judgements. It has been assumed that for a standard
industrial PLC the major part of the failures can be explained by (systematic) software related
errors. Hence, a small r of 10% has been proposed. On the other hand, for a hardwired safety
system, it is assumed that a large part of the failure rate is due to random hardware failures, and a
large r of 80% has been suggested.
Valves
The reported failure causes in OREDA for critical failures are somewhat scarce and therefore
additional expert judgement has to be applied. When considering what types of valve failures are
typically revealed upon functional testing, these include a stuck valve, insufficient actuator
force, a valve not shutting off tightly due to excessive erosion or corrosion (unclean medium),
incorrect installation, etc. Several of these failures represent (detectable) systematic failures;
hence it is evident that r is significantly lower than 1 (which would imply only random hardware failures).
For ESV/XV and X-mas tree valves an r = 50% has been proposed, mainly based on expert
judgement and reported failure causes for other types of valves. For pilot valves, there are more
reported failure causes, and these indicate a relatively high proportion of systematic failures. Here
an r equal to 40% has been suggested based on the reported OREDA data.
For control valves used for shutdown purposes, a somewhat higher proportion of 'wear and tear'
failures is expected, and therefore an r equal to 60% has been proposed. Reported failure causes
for deluge valves also indicate a relatively high proportion of 'wear and tear' related failures, and
an r equal to 60% has been proposed also for deluge valves.
For PSV valves, limited data on failure causes is available from OREDA, and an r = 50% has
been suggested.
3.4 Reliability Data Uncertainties – Upper 70% Values
The failure rates given in this handbook are best (mean) estimates based on the available data
sources listed in section 2.5. The data in these sources have mainly been collected on oil and gas
installations where environment, operating conditions and equipment types are comparable, but
not at all identical. The presented data are therefore associated with uncertainties due to factors
such as:
• The data collection itself; inadequate failure reporting, classification or data interpretation.
• Variations between installations; the failure rates are highly dependent upon the operating
conditions, and the equipment make will also vary between installations.
• Relevance of data / equipment boundaries; what components are included / not included in
the reported data? Have equipment parts been repaired or simply replaced, etc.?
• Assumed statistical model; is the standard assumption of a constant failure rate always
relevant for the equipment type under consideration?
• Aggregated operational experience; what is the total amount of operational experience
underlying the given estimates?
The last bullet concerning amount of operational experience, is related to the possibility of
establishing a confidence interval for the failure rate. Instead of only specifying a single mean
value, an interval likely to include the parameter is given. How likely the interval is to contain the
parameter is determined by the confidence level. E.g. a 90% confidence interval for λDU may be
given by [0.1·10⁻⁶ per hour, 5·10⁻⁶ per hour]. This means that we are 90% confident that the
failure rate lies within this interval. It is also possible to specify one-sided confidence intervals
where the lower bound of the interval is zero. E.g. a one-sided 70% interval for λDU may be given
by [0, 4·10⁻⁶ per hour], implying that we can be 70% certain that the failure rate is lower than
4·10⁻⁶ per hour.
In particular, in IEC 61508-2, section 7.4.7.4, it is stated that any failure rate data based on
operational experience should have a confidence level of at least 70% (a similar requirement is
found in IEC 61511-1, section 11.9.2). Hence, IEC 61508 and IEC 61511 indicate that when using
historic data one should be conservative and the recommended approach is to choose the upper
70% confidence value for the failure rate as illustrated on Figure 2 below.
Figure 2 Mean failure rate λ versus the conservative (upper 70% confidence) failure rate λ
Some data sources, such as OREDA, provide confidence intervals for the failure rate estimates,
whereas most sources, including this handbook, provide mean values only. However, in the next
section an attempt has been made to indicate failure rate values with a confidence level of at least
70% as required in the IEC 61508/61511 standards.
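Such a one-sided upper bound can be computed directly from the Poisson distribution of the observed failure count. A minimal sketch (the function name and example figures are ours, not handbook values):

```python
import math

def upper_bound_rate(n_failures: int, hours: float, conf: float = 0.70) -> float:
    """One-sided upper confidence bound (per hour) for a constant failure rate,
    given n_failures observed over the aggregated time in service.
    Solves P(X <= n; lam * T) = 1 - conf for a Poisson-distributed count X."""
    def poisson_cdf(mu: float) -> float:
        term = total = math.exp(-mu)
        for k in range(1, n_failures + 1):
            term *= mu / k
            total += term
        return total
    lo, hi = 0.0, n_failures + 50.0      # bracket for mu = lam * T
    for _ in range(100):                 # bisection; the CDF decreases in mu
        mid = (lo + hi) / 2.0
        if poisson_cdf(mid) > 1.0 - conf:
            lo = mid
        else:
            hi = mid
    return lo / hours

# Example: 0 DU failures over 1 million hours -> upper 70 % bound ~ 1.2e-6 per hour
lam_70 = upper_bound_rate(0, 1.0e6)
```

For zero observed failures this reduces to the closed form λ₇₀% = −ln(0.30)/T.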
When looking in more detail at the data dossiers in chapter 5, it is seen that there is a varying
amount of operational experience underlying the failure rate estimates. Hence, there will also be a
varying degree of confidence associated with the given data. Based on the aggregated operational
time, number of dangerous failures and some additional expert judgement, an attempt has been
made, whenever possible, to establish a one-sided 70% confidence interval and thereby provide
some upper 70% values for the dangerous undetected failure rate. The result of this exercise is
summarised in the below table (done only for the topside data where the most detailed
information has been available).
Component group | Component | λDU (mean) 1) | λDU (70%) 1) | Comments
Final Elements (cont.) | Control valve (ex. pilot), frequently operated | 2.2·10⁻⁶ | 3.5·10⁻⁶ |
Final Elements (cont.) | Control valve (ex. pilot), shutdown service only | 3.5·10⁻⁶ | 5.5·10⁻⁶ |
Final Elements (cont.) | Pressure relief valve, PSV | 2.2·10⁻⁶ | 3.2·10⁻⁶ |
• Establishing confidence intervals based on data from different sources and different
installations is not a straightforward task. The suggested upper 70% values should
therefore be taken as rough estimates only.
• As discussed in section 2.4.1, the generic data presented in this handbook include failure
mechanisms that are frequently excluded from e.g. manufacturer failure reports and
certificates. As such, the mean failure rates given in Table 3-5 are considered
representative when predicting the expected risk reduction from the equipment. Using the
upper 70% confidence values presented above should therefore be considered as a way of
increasing the robustness of the results e.g. when performing sensitivity analyses.
• In the SINTEF report “Guidelines for follow-up of Safety Instrumented Systems (SIS) in
the operating phase”, [23], a procedure for updating failure rates in operation is described.
For this purpose a conservative estimate for the λDU is required. Unless other equipment
specific values are available, the above upper 70% values can then be applied.
In the SINTEF report “Guidelines for follow-up of Safety Instrumented Systems (SIS) in the
operating phase”, [23], it has been discussed how much operational experience is required before
a reasonable confidence in a new failure rate estimate can be established. For SIS component data
(for detectors, sensors and valves) from OREDA, it can be found that the upper 95% confidence
limit for the rate of DU-failures is typically some 2–3 times the mean value of the failure rate,
[23], [24]. A suggested “cut-off” criterion for claiming proven in use can then be that the gathered
operational experience shall be sufficient to establish a failure rate estimate with comparable
confidence, i.e. the upper 95% confidence for λDU shall be within 2–3 times the mean value.
Based on this criterion and further work from [23] and [24], some suggested rules for claiming
“proven in use” for a given piece of field equipment are:
• Minimum aggregated time in service should be 2.5 million (2.5·10⁶) operational hours, or
at least 2 dangerous undetected failures 4) should have been registered for the considered
observation period;
• Operational data should be available from at least 2 installations with comparable
operational environments;
• The data should be collected from the useful period of life of the equipment (typically this
implies that the first 6 months of operation should be excluded);
• A systematic data collection and reporting system should be implemented to ensure that all
failures have been formally recorded;
• It should be ensured that all equipment units included in the sample have been activated
(i.e. tested or demanded) at least once during the observation period (in order to ensure
that components that have never been activated are not counted in).
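The "2–3 times the mean" observation behind this cut-off criterion can be reproduced with the same Poisson argument used for the confidence bounds; a sketch (the failure counts below are illustrative, the criterion itself is from [23], [24]):

```python
import math

def upper_mu(n: int, conf: float) -> float:
    """Upper one-sided confidence bound on the expected Poisson count mu,
    given n observed failures; solved by bisection on the Poisson CDF."""
    def cdf(mu: float) -> float:
        term = total = math.exp(-mu)
        for k in range(1, n + 1):
            term *= mu / k
            total += term
        return total
    lo, hi = 0.0, n + 50.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if cdf(mid) > 1.0 - conf else (lo, mid)
    return lo

for n in (2, 3, 5, 10):
    ratio = upper_mu(n, 0.95) / n   # upper 95 % bound relative to the mean
    print(f"{n} failures: upper 95 % bound is {ratio:.1f} x mean")
# prints ratios of about 3.1, 2.6, 2.1 and 1.7
```

As the sketch shows, with only a handful of observed DU failures the upper 95% bound is indeed some 2–3 times the mean estimate, and the ratio narrows as more failures (and hence more operational experience) accumulate.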
Additional requirements are given in the IEC standards. It should be noted that whereas IEC
61508 uses the term ‘proven in use’, IEC 61511 applies the term ‘prior use’. However, neither
IEC 61508 nor IEC 61511 quantify the required amount of operating experience, but states that
for field equipment there may be extensive operating experience that can be used as a basis for the
evidence [for prior use, ref. IEC 61511-1, section 11.5.3].
It may be argued that the above requirement concerning aggregated time in service is difficult to
fulfil for equipment other than e.g. fire and gas detectors. However, an important part of claiming
proven in use is to have a clear understanding of failure mechanisms, how the failure is detected
and repaired and what maintenance activities are required in order to keep the equipment in an “as
good as new condition”, [21]. For this purpose considerable operational experience is necessary
and focus should therefore be on improved data collection and failure registration. Furthermore, it
will require that the manufacturers obtain feedback on operational performance from the
operators, also beyond the warranty period of the equipment.
4) In general, an increasing number of failures will result in a narrower confidence interval, i.e. a higher
confidence in the estimated mean value. Hence, experienced DU failures may "compensate" for limited
operational experience (but significant operational time will anyhow be required if a low failure rate is to be
claimed).
The method gives an integrated approach to random hardware and systematic failures. Thus,
the model accounts for relevant failure causes such as:
- normal ageing
- software failures
- stress induced failures
- design failures
- installation failures
- operational related failures
The model includes all relevant failure types that may occur, and explicitly accounts for
dependent (common cause) failures and the effect from different types of testing
(automatic/self-test as well as manual observation).
The model distinguishes between the ways a system can fail (failure mode), such as fail-to-
operate, spurious operation and non-critical failures.
A main benefit of the PDS taxonomy is the direct relationship between failure causes and the
measures used to improve safety system performance.
The method is simple and structured:
- highlighting the important factors contributing to loss of safety and spurious operation
- promoting transparency and communication
As stressed in IEC 61508, it is important to incorporate the complete safety function when
performing reliability analyses. This is a core issue in PDS; it is function-oriented, and the
whole path from the sensors, via the control logic to the actuators is taken into consideration
when modelling the system.
The PDS method has a somewhat different approach to systematic failures compared to IEC
61508. Whereas IEC 61508 only quantifies part of the total failure rate, represented by the
random hardware failures, PDS also attempts to quantify the contribution from systematic
failures (see Figure 3 below) and therefore gives a more complete picture of how the
equipment is likely to operate in the field.
Figure 3 Failure split into random hardware failures and systematic failures
Random hardware failures are failures resulting from the natural degradation mechanisms of the
component. For these failures it is assumed that the operating conditions are within the design
envelope of the system.
Systematic failures are failures that can be related to a particular cause other than natural
degradation and foreseen stressors. Systematic failures are due to errors made during
specification, design, operation and maintenance phases of the lifecycle. Such failures can
therefore normally be eliminated by a modification, either of the design or manufacturing process,
the testing and operating procedures, the training of personnel or changes to documentation.
There are several possible schemes for classifying systematic failures. Here, a further split into
five categories has been suggested:
• Software faults may be due to programming errors, compilation errors, inadequate testing,
unforeseen application conditions, change of system parameters, etc. Such faults are present
from the point where the incorrect code is developed until the fault is detected either through
testing or through improper operation of the safety function. Software faults can also be
introduced during modification to existing process facilities, e.g. inadequate update of the
application software to reflect the revised shutdown sequences or erroneous setting of a high
alarm outside its operational limits.
• Design related failures are failures (other than software faults) introduced during the design
phase of the equipment. It may be a failure arising from incorrect, incomplete or ambiguous
system or software specification, a failure in the manufacturing process and/or in the quality
assurance of the component. Examples are a valve failing to close due to insufficient actuator
force or a sensor failing to discriminate between true and false demands.
• Installation failures are failures introduced during the last phases prior to operation, i.e. during
installation or commissioning. If detected, such failures are typically removed during the first
months of operation and such failures are therefore often excluded from data bases. These
failures may however remain inherent in the system for a long period and can materialise
during an actual demand. Examples are erroneous location of e.g. fire/gas detectors, a valve
installed in the wrong direction or a sensor that has been erroneously calibrated during
commissioning.
• Excessive stress failures occur when stresses beyond the design specification are placed upon
the component. The excessive stresses may be caused either by external causes or by internal
influences from the medium. Examples may be damage to process sensors as a result of
excessive vibration or valve failure caused by unforeseen sand production.
• Operational failures are initiated by human errors during operation or maintenance/testing.
Examples are loops left in the override position after completion of maintenance or a process
sensor isolation valve left in closed position so that the instrument does not sense the medium.
• Dangerous (D). Safety system/module does not operate on demand (e.g. sensor stuck upon
demand)
• Safe (S). Safety system/module may operate without demand (e.g. sensor provides signal
without demand – potential spurious trip)
• Non-Critical (NONC). Main functions not affected (e.g. sensor imperfection, which has no
direct effect on control path)
The first two of these failure modes, dangerous (D) and safe (S) are considered "critical" in the
sense that they have a potential to affect the operation of the safety function. The safe failures
have a potential to cause a trip of the safety function, while the dangerous failures may cause the
safety function not to operate upon a demand. The failure modes above are further split according
to detection: dangerous detected (DD), dangerous undetected (DU), safe detected (SD) and safe
undetected (SU) failures.
Note that for high demand mode systems IEC 61508 uses PFH (Probability of Failure per Hour)
as the measure for loss of safety. PFH is not discussed here but is treated separately in the updated
method handbook, [10].
4.3.1 Contributions to Loss of Safety
The potential contributors to loss of safety (safety unavailability) can be split into the following
categories:
1) Unavailability due to dangerous undetected (DU) failures. For a single component, these
failures occur with rate λDU. The average period of unavailability due to such a failure is τ/2
(where τ = period of functional testing), since the failure can have occurred anywhere inside
the test interval.
2) Unavailability due to failures not revealed during functional testing. This unavailability is
caused by "unknown" ("dormant") dangerous undetected failures which can only be
detected during a true demand. These failures are denoted test independent failures (TIF), as
they are not detected during functional testing.
3) Unavailability due to downtime, i.e. known periods when the function is unavailable, either
due to repair of dangerous failures or due to planned activities such as testing, maintenance
and inspection.
Below, we discuss separately the loss of safety measures for the three failure categories, and
finally an overall measure for loss of safety is given.
The PFD quantifies the loss of safety due to dangerous undetected failures (with rate λDU), during
the period when it is unknown that the function is unavailable. The average duration of this period
is τ/2, where τ = test period. For a single (1oo1) component the PFD can be approximated by:

PFD ≈ λDU · τ/2

For a MooN voting logic (M<N), the main contribution to PFD (accounting for common cause
failures) is given by:

PFD ≈ CMooN · β · λDU · τ/2

Here, CMooN is a modification factor depending on the voting configuration, ref. Table 8. Further,
for a NooN voting, we approximately have:

PFD ≈ N · λDU · τ/2
In reliability analysis it is often assumed that functional testing is “perfect” and as such detects
100% of the failures. In true life this is not necessarily the case; the test conditions may differ
from the real demand conditions, and some dangerous failures can therefore remain in the SIS
after the functional test. In PDS this is catered for by adding the probability of so called test
independent failures (TIF) to the PFD.
PTIF = The Probability that the component/system will fail to carry out its intended
function due to a (latent) failure not detectable by functional testing (therefore the
name “test independent failure”)
It should be noted that if an imperfect testing principle is adopted for the functional testing, this
will lead to an increase of the TIF probability. For instance, if a gas detector is tested by
introducing a dedicated test gas to the housing via a special port, the test will not reveal a
blockage of the main ports. Another example is the use of partial stroke testing for valves. This
type of testing is likely to increase the PTIF for the valve, since the valve is not fully proof tested
during such a test.
Hence, for a single component, PTIF expresses the likelihood that a component which has just
been functionally tested will fail on demand (irrespective of the interval of manual testing). For
redundant components, the TIF contribution to loss of safety will for a MooN voting be given by
the general formula: CMooN · β · PTIF, where the numerical values of CMooN are assumed identical
to those used for calculating PFD, ref. Table 8.
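As a numeric illustration, the PFD and TIF contributions for a duplicated (1oo2) component can be sketched as follows. C1oo2 and the relay β are taken from Tables 8 and 7 above; λDU, τ and PTIF are illustrative assumptions, not handbook values:

```python
# PDS approximations for a MooN voted component:
#   PFD ~ C_MooN * beta * lambda_DU * tau / 2
#   TIF contribution ~ C_MooN * beta * P_TIF
C_1oo2 = 1.0          # configuration factor, Table 8
beta = 0.03           # beta-factor (relay, Table 7)
lam_du = 0.5e-6       # dangerous undetected failure rate per hour (illustrative)
tau = 8760.0          # functional test interval: one year, in hours (illustrative)
p_tif = 1e-4          # test independent failure probability (illustrative)

pfd = C_1oo2 * beta * lam_du * tau / 2.0
tif = C_1oo2 * beta * p_tif
csu = pfd + tif       # critical safety unavailability, excluding downtime
print(f"PFD = {pfd:.2e}, TIF = {tif:.2e}, CSU = {csu:.2e}")
```

With these figures the common cause part dominates; halving the test interval τ halves the PFD contribution but leaves the TIF contribution unchanged.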
This represents the downtime part of the safety unavailability as described in category 3 above.
The DTU (Downtime Unavailability) quantifies the loss of safety due to:
• repair of dangerous failures, resulting in a period when it is known that the function is
unavailable due to repair. We refer to this unavailability as DTUR;
• planned downtime (or inhibition time) resulting from activities such as testing, maintenance
and inspection. We refer to this unavailability as DTUT.
Depending on the specific application, operational philosophy and the configuration of the
process plant and the SIS, it must be considered whether it is relevant to include (part of) the DTU
in the overall measure for loss of safety. For further discussions on how to quantify the DTUR and
DTUT contributions, reference is made to [10].
The total loss of safety is quantified by the critical safety unavailability (CSU). The CSU is the
probability that the module/safety system (either due to a random hardware or a systematic
failure) will fail to automatically carry out a successful safety action on the occurrence of a
hazardous/accidental event. Thus, we have the relation:

CSU = PFD + PTIF

If we also want to include the “known” downtime unavailability, the formula becomes:

CSU = PFD + PTIF + DTU
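Taking the CSU as the sum of the average PFD and the TIF probability, with the DTU terms added when the known downtime is to be included, a minimal sketch with illustrative numbers:

```python
def csu(pfd, p_tif):
    # Critical safety unavailability: CSU = PFD + P_TIF
    return pfd + p_tif

def csu_with_downtime(pfd, p_tif, dtu_r, dtu_t):
    # Including the "known" downtime unavailability: PFD + P_TIF + DTU_R + DTU_T
    return pfd + p_tif + dtu_r + dtu_t

# Illustrative values only
print(csu(4.4e-3, 5.0e-4))                                # ~ 4.9e-3
print(csu_with_downtime(4.4e-3, 5.0e-4, 2.0e-4, 1.0e-4))  # ~ 5.2e-3
```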
The contributions from PTIF and λDU to the Critical Safety Unavailability (CSU) are illustrated in
Figure 4. Failures contributing to the PTIF are systematic test independent failures. These failures
will repeat themselves unless modification/redesign is initiated. The contribution to the CSU from
such systematic failures has been assumed constant, independent of the frequency of functional
testing. Dangerous undetected (DU) failures are assumed eliminated at the time of functional
testing and will thereafter increase throughout the test period.
[Figure 4: Critical safety unavailability (CSU) as a function of time. The CSU increases
throughout each functional test interval τ and is reset at the test times τ, 2τ, 3τ, 4τ, 5τ.]
As seen from the figure, the CSU varies over time: it is at its maximum right before a functional
test and at its minimum right after a test. However, when we calculate the CSU and the PFD we
actually calculate the average value, as illustrated in the figure.
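The time-average behaviour described above can be checked numerically: for a contribution that grows linearly as λDU·t within each test interval while PTIF stays constant, the average over the interval is λDU·τ/2 + PTIF. A small sketch with illustrative values:

```python
# Illustrative values: the lambda_DU part grows linearly within each test
# interval and is reset at the functional test; P_TIF is constant.
lam_du, tau, p_tif = 2.0e-6, 4380.0, 5.0e-4

# Midpoint sampling of q(t) = lam_du * t + p_tif over one interval [0, tau)
n = 100_000
avg = sum(lam_du * ((k + 0.5) * tau / n) + p_tif for k in range(n)) / n

print(avg)                       # numerical average, ~ 4.88e-3
print(lam_du * tau / 2 + p_tif)  # closed-form average, ~ 4.88e-3
```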
5 DATA DOSSIERS
The following pages present the data dossiers of the control and safety system components. The
dossiers are input to the tables in chapter 3 that summarise the generic input data to PDS analyses.
Note that the generic data by nature represent a wide variation of equipment populations, and
should therefore be considered on individual grounds when used for a specific application.
The data dossiers are based on the data dossiers in previous editions of the handbook, [12], [13],
[14], and have been updated according to the work done in the PDS-BIP and the new data
available.
Adopting the definitions used in OREDA, several severity classes are referred to in the data
dossiers. The definitions of the various classes are [3]:
• Critical failure: A failure which causes immediate and complete loss of a system's capability
of providing its output.
• Degraded failure: A failure which is not critical, but which prevents the system from providing
its output within specifications. Such a failure would usually, but not necessarily, be gradual or
partial, and may develop into a critical failure in time.
• Incipient failure: A failure which does not immediately cause loss of the system's capability of
providing its output, but if not attended to, could result in a critical or degraded failure in the
near future.
• Unknown: Failure severity was not recorded or could not be deduced.
Note that only the critical failures are included as a basis for the failure rate estimates (i.e. the
λcrit). From the description of the failure mode, the critical failures are further split into dangerous
and safe failures (i.e. λcrit = λD + λS). For example, for shutdown valves a “fail to close on
demand” failure will be classified as dangerous, whereas a “spurious operation” failure will be
classified as a safe (spurious trip) failure.
The following failure modes are referred to in the data dossier tables:
5.1 Input Devices

5.1.1 Pressure Switch

λD = 2.3 per 10^6 hrs    cD = 0.15    λDU = 2.0 per 10^6 hrs
λS = 1.1 per 10^6 hrs    cS = 0.10    λSU = 1.0 per 10^6 hrs
r = 0.5
Assessment
The given failure rate applies to pressure switches. The failure rate estimate is mainly based on
OREDA phase III data, older OREDA data and comparison with other generic data sources
(OREDA phase IV contains no data on process switches, whereas phase V contains only 6
switches). The estimated coverage is based on expert judgement: we assume 5% coverage due
to line monitoring of the connections and an additional 10% detection of dangerous failures due
to operator observation during operation. The coverage for safe failures has been set to 10%,
since there is a small probability that such failures are detected before the shutdown actually
occurs.
The PTIF and r estimates are mainly based on expert judgements. A summary of some of the
main arguments is provided in section 3.3.
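Assuming the dossier derives the undetected rates as the total rate times one minus the coverage factor (λDU = λD·(1 − cD), and similarly for safe failures), the tabulated values can be reproduced with a one-liner; the rounding to 2.0 and 1.0 matches the dossier entries above:

```python
def undetected_rate(lam, coverage):
    # Undetected part of a failure rate given a coverage factor c:
    # lambda_DU = lambda_D * (1 - c_D); same relation for safe failures
    return lam * (1.0 - coverage)

# Pressure switch dossier values: lambda_D = 2.3, c_D = 0.15; lambda_S = 1.1, c_S = 0.10
print(undetected_rate(2.3, 0.15))  # 1.955, rounded to 2.0 per 10^6 hrs
print(undetected_rate(1.1, 0.10))  # 0.99, rounded to 1.0 per 10^6 hrs
```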
Observed: cD = N/A, cST = N/A
Filter:
Inv. Equipment Class = Process Sensors AND
Inv. Design Class = Pressure AND
Inv. Att. Type – process sensor = Switch AND
(Inv. System = Gas processing OR Oil processing OR Condensate processing) AND
Inv. Phase = 5
No. of inventories = 6
No. of critical (D or ST) failures = 0
Surveillance Time (hours) = 295 632
OREDA phase III database, [8]. Data relevant for conventional process switches.
λcrit = 1.4 (D: 1.39, ST: 0.0)
Observed: cD = 100% (based on only one failure)
Filter:
Inv. Equipment Class = Process Sensors AND
Inv. Design Class = Pressure AND
Inv. Att. Type – process sensor = Switch AND
(Inv. System = Gas processing OR Oil processing OR Condensate processing) AND
Inv. Phase = 3
No. of inventories = 12
No. of critical D failures = 1
No. of critical ST failures = 0
Surveillance Time (hours) = 719 424
Exida [15]: Generic DP / pressure switch
λDU = 3.6 per 10^6 hrs; λSU = 2.4 per 10^6 hrs; SFF = 40%

T-Book [16]: Pressure sensor
Funct.: 0.44; ST: 1.02; Other crit.: 0.37
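The quoted SFF figures can be reproduced under the usual safe failure fraction definition, SFF = 1 − λDU/λtot, assuming the listed λDU and λSU make up the total failure rate (i.e. no separate dangerous detected rate):

```python
def safe_failure_fraction(lam_du, lam_dd, lam_s):
    # SFF = fraction of the total failure rate that is not dangerous undetected
    total = lam_du + lam_dd + lam_s
    return 1.0 - lam_du / total

# Exida generic DP / pressure switch: lambda_DU = 3.6, lambda_SU = 2.4
# (per 10^6 hrs), assuming no separate dangerous detected rate
print(safe_failure_fraction(3.6, 0.0, 2.4))  # ~ 0.4, i.e. SFF = 40%
```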
5.1.2 Proximity Switch (Inductive)
λD = 3.5 per 10^6 hrs    cD = 0.15    λDU = 3.0 per 10^6 hrs
λS = 2.2 per 10^6 hrs    cS = 0.10    λSU = 2.0 per 10^6 hrs
r = 0.3
Assessment
The estimated coverage is based on expert judgement: we assume 5% coverage due to line
monitoring of the connections and an additional 10% detection of dangerous failures due to
operator observation during operation. The coverage for safe failures has been set to 10%, since
there is a small probability that such failures are detected before a trip actually occurs. It should
be noted that (SIL rated) limit switches with significantly higher coverage factors are available.
In such cases, the mechanical installation of the parts must ensure that alignment problems are
minimised.
The PTIF and r estimates are mainly based on expert judgements. The PTIF is assumed to be
relatively high since mechanical alignment of the interacting parts is often a problem, and such
failures may not be revealed by inadequate testing. Similarly, a relatively high proportion of
systematic failures is assumed, resulting in a low r factor.
Exida [15]: Generic position limit switch
λDU = 3.6 per 10^6 hrs; λSU = 2.4 per 10^6 hrs; SFF = 40%

T-Book [16]: Electronic limit switch
Failure to change state: 1.9 per 10^7 hrs; Spurious change of state: 5.2 per 10^7 hrs
5.1.3 Pressure Transmitter

λD = 0.8 per 10^6 hrs    cD = 0.60    λDU = 0.3 per 10^6 hrs
λS = 0.5 per 10^6 hrs    cS = 0.30    λSU = 0.4 per 10^6 hrs
r = 0.3
Assessment
The failure rate estimate is mainly based on data from OREDA phase III. Insufficient data was
found in OREDA phase IV to update this estimate, and no data are available for pressure
transmitters from OREDA phases V, VI and VII. The rate of DU failures is estimated assuming a
coverage of 60% for dangerous failures. This is based on the implemented self test in the
transmitter as well as casual observation by the control room operator (the latter assumes that the
signal can be observed on the VDU and compared with other signals). If a higher coverage is
claimed, e.g. due to automatic comparison between transmitters, this should be specifically
documented/verified. The coverage of safe failures has been estimated by expert judgement to
30% (as compared to 50% in the previous 2006-edition), since safe failures will be difficult to
detect before a trip has actually occurred.
The PTIF is entirely based on expert judgements. The estimated r is based on reported failure
causes in OREDA as well as expert judgements. A summary of some of the main arguments is
provided in section 3.3.
Failure Rate References
(overall failure rate per 10^6 hrs / failure mode distribution / data source and comment)

Recommended values for calculation in 2006-edition, [12]:
λcrit = 1.3; λD = 0.8 per 10^6 hrs; λDU = 0.3 per 10^6 hrs; λSTU = 0.3 per 10^6 hrs;
assumed cD = 60%; PTIF = 5·10^-4

Recommended values for calculation in 2004-edition, [13]:
λcrit = 1.3; λD = 0.8 per 10^6 hrs; λDU = 0.3 per 10^6 hrs; λSTU = 0.4 per 10^6 hrs;
assumed cD = 60%; PTIF = 3·10^-4 to 5·10^-4 (for smart/conventional respectively)
Module: Input Devices
PDS Reliability Data Dossier
Component: Pressure Transmitter
Recommended values for calculation in 2003-edition, [13]:
λcrit = 1.3; λDU = 0.1 per 10^6 hrs; λSTU = 0.4 per 10^6 hrs;
assumed cD = 90%; PTIF = 3·10^-4 to 5·10^-4 (for smart/conventional respectively)
OREDA phase IV database, [6]. Data relevant for conventional pressure transmitters.
λcrit = N/A (D: N/A, ST: N/A)
Observed: cD = N/A, cST = N/A
Filter:
Inv. Equipment Class = Process Sensors AND
Inv. Design Class = Pressure AND
Inv. Att. Type – process sensor = Transmitter AND
(Inv. System = Gas processing OR Oil processing OR Condensate processing) AND
Inv. Phase = 4
No. of inventories = 21
No. of critical (D or ST) failures = 0
Surveillance Time (hours) = 332 784
OREDA phase III database, [8]. Data relevant for conventional pressure transmitters.
λcrit = 1.3 (D: 0.64, ST: 0.64)
Observed: cD = 100% (calculated for transmitters having some kind of self-test arrangement only)
Filter criteria: TAXCOD='PSPR' .AND. FUNCTN='OP' .OR. 'GP'
No. of inventories = 186
Total no. of critical failures = 6
Cal. time = 4 680 182 hrs
Exida [15]: Generic smart DP / pressure transmitter
λDU = 0.6 per 10^6 hrs; SFF = 60%

T-Book [16]: Pressure transmitter
Fail. to obtain signal: 0.83

T-Book [16]: Pressure difference transmitter / pressure difference cell
Fail. to obtain signal: 0.91
λD = 1.4 per 10^6 hrs    cD = 0.60    λDU = 0.6 per 10^6 hrs
λS = 1.6 per 10^6 hrs    cS = 0.30    λSU = 1.1 per 10^6 hrs
r = 0.3
Assessment
The failure rate estimate is mainly based on data from the OREDA phase III database, with
additional data from OREDA phases IV and V. The rate of DU failures is estimated by assuming
a coverage of 60% for dangerous failures. This is based on the implemented self test in the
transmitter as well as casual observation by the control room operator (the latter assumes that the
signal can be observed on the VDU and compared with other signals). If a higher coverage is
claimed, special documentation/verification should be required. The coverage of safe failures has
been estimated by expert judgement to 30% (as compared to 50% in the previous 2006-edition),
since safe failures will be difficult to detect before a trip has actually occurred.
The PTIF is entirely based on expert judgements. The estimated r is based on reported failure
causes in OREDA as well as expert judgements. A summary of some of the main arguments is
provided in section 3.3.
Recommended values for calculation in 2003-edition, [13]:
λcrit = 3.0; λDU = 0.1 per 10^6 hrs; λSTU = 0.8 per 10^6 hrs;
assumed cD = 90%; PTIF = 3·10^-4 to 5·10^-4 (for smart/conventional respectively)