
SINTEF REPORT

TITLE: Reliability Data for Safety Instrumented Systems
       PDS Data Handbook, 2010 Edition

SINTEF Technology and Society, Safety Research
Address: NO-7465 Trondheim, NORWAY
Location: S P Andersens veg 5, NO-7031 Trondheim
Telephone: +47 73 59 27 56
Fax: +47 73 59 28 96
Enterprise No.: NO 948 007 029 MVA

AUTHOR(S): Stein Hauge and Tor Onshus
CLIENT(S): Multiclient - PDS Forum

REPORT NO.: SINTEF A13502
CLASSIFICATION: Unrestricted
ISBN: 978-82-14-04849-0
PROJECT NO.: 504091.17
NO. OF PAGES/APPENDICES: 116

PROJECT MANAGER: Stein Hauge
CHECKED BY: Per Hokstad
APPROVED BY: Lars Bodsberg, Research Director
DATE: 2009-12-18

ABSTRACT

This report provides reliability data estimates for components of control and safety systems. Data
dossiers for input devices (sensors, detectors, etc.), control logic (electronics) and final elements
(valves, etc.) are presented, including some data for subsea equipment. Efforts have been made to
document the presented data thoroughly, both in terms of applied data sources and underlying
assumptions. The data are given in a format suitable for performing reliability analyses in line with
the requirements in the IEC 61508 and IEC 61511 standards.

As compared to the former 2006 edition, the following main changes are included:

• A general review and update of the failure rates, coverage values, β-values and other relevant
parameters;
• Some new equipment groups have been added;
• Data for control logic units have been updated and refined.

KEYWORDS             ENGLISH                              NORWEGIAN

GROUP 1              Safety                               Sikkerhet
GROUP 2              Reliability                          Pålitelighet
SELECTED BY AUTHOR   Data                                 Data
                     Safety Instrumented Systems (SIS)    Instrumenterte sikkerhetssystemer
                     SIL calculations                     SIL beregninger
Reliability Data for Safety Instrumented Systems
PDS Data Handbook, 2010 Edition

PREFACE
The present report is an update of the 2006 edition of the Reliability Data for Control and Safety
Systems, PDS Data Handbook [12]. The handbook presents data in line with the latest available
data sources as well as data for some new equipment.

The work has been carried out as part of the research project “Managing the integrity of safety
instrumented systems”.¹

Trondheim, December 2009

Stein Hauge

PDS Forum Participants in the Project Period 2007 - 2009

Oil Companies/Operators
• A/S Norske Shell
• BP Norge AS
• ConocoPhillips Norge
• Eni Norge AS
• Norsk Hydro ASA
• StatoilHydro ASA (Statoil ASA from Nov. 1st 2009)
• Talisman Energy Norge
• Teekay Petrojarl ASA
• TOTAL E&P NORGE AS

Control and Safety System Vendors


• ABB AS
• FMC Kongsberg Subsea AS
• Honeywell AS
• Kongsberg Maritime AS
• Bjørge Safety Systems AS
• Siemens AS
• Simtronics ASA

Engineering Companies and Consultants


• Aker Kværner Engineering & Technology
• Det Norske Veritas AS
• Lilleaker Consulting AS
• NEMKO AS
• Safetec Nordic AS
• Scandpower AS

Governmental Bodies
• The Directorate for Civil Protection and Emergency Planning (Observer)
• The Norwegian Maritime Directorate (Observer)
• The Petroleum Safety Authority Norway (Observer)

¹ This user-initiated research project has been sponsored by the Norwegian Research Council and the PDS
Forum participants. The project work has been carried out by SINTEF.

Table of Contents

PREFACE ........................................................................................................................... 3
ABSTRACT .................................................................................................................................... 4
1 INTRODUCTION ................................................................................................................... 9
1.1 Objective and Scope ......................................................................................................... 9
1.2 Benefits of Reliability Analysis – the PDS Method ......................................................... 9
1.3 The IEC 61508 and 61511 Standards ............................................................................. 10
1.4 Organisation of Data Handbook ..................................................................................... 10
1.5 Abbreviations ................................................................................................................. 10
2 RELIABILITY CONCEPTS ................................................................................................. 13
2.1 The Concept of Failure ................................................................................................... 13
2.2 Failure Rate and Failure Probability............................................................................... 13
2.2.1 Failure Rate Notation ...................................................................................... 13
2.2.2 Decomposition of Failure Rate........................................................................ 14
2.3 Reliability Measures and Notation ................................................................................. 15
2.4 Reliability Parameters .................................................................................................... 16
2.4.1 Rate of Dangerous Undetected Failures .......................................................... 16
2.4.2 The Coverage Factor, c ................................................................................... 17
2.4.3 Beta-factors and CMooN .................................................................................... 17
2.4.4 Safe Failure Fraction, SFF............................................................................... 18
2.5 Main Data Sources ......................................................................................................... 18
2.6 Using the Data in This Handbook .................................................................................. 19
3 RELIABILITY DATA SUMMARY ..................................................................................... 21
3.1 Topside Equipment ......................................................................................................... 21
3.2 Subsea Equipment .......................................................................................................... 27
3.3 Comments to the PDS Data ............................................................................................ 28
3.3.1 Probability of Test Independent Failures (PTIF) .............................................. 28
3.3.2 Coverage .......................................................................................................... 29
3.3.3 Fraction of Random Hardware Failures (r) ..................................................... 30
3.4 Reliability Data Uncertainties – Upper 70% Values ...................................................... 32
3.4.1 Data Uncertainties ........................................................................................... 32
3.4.2 Upper 70% Values........................................................................................... 33
3.5 What is “Sufficient Operational Experience“? – Proven in Use .................................... 34
4 MAIN FEATURES OF THE PDS METHOD ...................................................................... 37
4.1 Main Characteristics of PDS .......................................................................................... 37
4.2 Failure Causes and Failure Modes ................................................................................. 37
4.3 Reliability Performance Measures ................................................................................. 39
4.3.1 Contributions to Loss of Safety ....................................................................... 40
4.3.2 Loss of Safety due to DU Failures - Probability of Failure on Demand (PFD) ...... 40
4.3.3 Loss of Safety due to Test Independent Failures (PTIF)................................... 40
4.3.4 Loss of Safety due to Downtime Unavailability – DTU ................................. 41
4.3.5 Overall Measure for Loss of Safety - Critical Safety Unavailability .............. 41
5 DATA DOSSIERS ................................................................................................................. 43
5.1 Input Devices .................................................................................................................. 44
5.1.1 Pressure Switch ............................................................................................... 44
5.1.2 Proximity Switch (Inductive) .......................................................................... 46
5.1.3 Pressure Transmitter ........................................................................................ 47
5.1.4 Level (Displacement) Transmitter................................................................... 49
5.1.5 Temperature Transmitter ................................................................................. 51
5.1.6 Flow Transmitter ............................................................................................. 53
5.1.7 Catalytic Gas Detector..................................................................................... 55
5.1.8 IR Point Gas Detector...................................................................................... 57
5.1.9 IR Line Gas Detector ....................................................................................... 59
5.1.10 Smoke Detector ............................................................................................... 61
5.1.11 Heat Detector ................................................................................................... 63
5.1.12 Flame Detector ................................................................................................ 65
5.1.13 H2S Detector .................................................................................................... 68
5.1.14 ESD Push Button ............................................................................................. 70
5.2 Control Logic Units ........................................................................................................ 72
5.2.1 Standard Industrial PLC .................................................................................. 73
5.2.2 Programmable Safety System ......................................................................... 79
5.2.3 Hardwired Safety System ................................................................................ 85
5.3 Final Elements ................................................................................................................ 88
5.3.1 ESV/XV........................................................................................................... 88
5.3.2 ESV, X-mas Tree ............................................................................................ 92
5.3.3 Blowdown Valve ............................................................................................. 95
5.3.4 Pilot/Solenoid Valve........................................................................................ 97
5.3.5 Process Control Valve ................................................................................... 100
5.3.6 Pressure Relief Valve .................................................................................... 103
5.3.7 Deluge Valve ................................................................................................. 105
5.3.8 Fire Damper ................................................................................................... 106
5.3.9 Circuit Breaker .............................................................................................. 108
5.3.10 Relay.............................................................................................................. 109
5.3.11 Downhole Safety Valve – DHSV.................................................................. 110
5.4 Subsea Equipment ........................................................................................................ 111
6 REFERENCES..................................................................................................................... 116


List of Tables

Table 1 Decomposition of critical failure rate, λcrit ........................................................................15


Table 2 Performance measures and reliability parameters .............................................................15
Table 3 Failure rates, coverages and SFF for input devices ...........................................................21
Table 4 Failure rates, coverages and SFF for control logic units ...................................................22
Table 5 Failure rates, coverages and SFF for final elements..........................................................23
Table 6 PTIF for various components ..............................................................................................24
Table 7 β-factors for various components ......................................................................................25
Table 8 Numerical values for configuration factors, CMooN ...........................................................26
Table 9 Failure rates for subsea equipment - input devices, control system units and
output devices ........................................................................................................................27
Table 10 Estimated upper 70% confidence values for topside equipment .....................................33
Table 11 Discussion of proposed subsea data ..............................................................................111

List of Figures

Figure 1 Decomposition of critical failure rate, λcrit .......................................................................15


Figure 2 Illustration of failure rate with confidence level of 70% .................................................32
Figure 3 Failure classification by cause of failure ..........................................................................38
Figure 4 Contributions to critical safety unavailability (CSU).......................................................42


1 INTRODUCTION
Safety standards such as IEC 61508 [1] and IEC 61511 [2] require quantification of the failure
probability of safety systems in operation. Such quantification may be part of design
optimisation, or of verifying that the design meets the stated performance requirements.

The use of relevant failure data is an essential part of any quantitative reliability analysis. It is also
one of the most challenging parts and raises a number of questions concerning the availability and
relevance of the data, the assumptions underlying the data and what uncertainties are related to the
data.

In this handbook recommended data for reliability quantification of Safety Instrumented Systems
(SIS) are presented. Efforts have been made to document the presented data thoroughly, both in
terms of applied data sources and underlying assumptions.

Various data sources have been applied when preparing this handbook, the most important source
being the OREDA database and handbooks (ref. section 2.5).

1.1 Objective and Scope


When performing reliability quantification, the analyst will need information on a number of
parameters related to the equipment under consideration. This includes basic failure rates,
distribution of critical failure modes, diagnostic coverage factors and common cause factors. In
this handbook best estimates for these reliability parameters are presented for selected equipment.
The data are given in a format suitable for performing analyses in line with the requirements in
the IEC 61508/61511 standards and the PDS method, [10].

As compared to the former 2006 edition, [12], the following main changes are included:

• A general update / review of the failure rates, coverage values, β-values and other relevant
parameters;
• Some new equipment groups have been added;
• Data for control logic units have been updated and refined.

1.2 Benefits of Reliability Analysis – the PDS Method


Instrumented safety systems, such as emergency shutdown systems, fire and gas systems and
process shutdown systems, are installed to prevent abnormal operating conditions from
developing into an accident. High reliability of such systems is therefore paramount for safe as
well as commercial operation.

Reliability analysis represents a systematic tool for evaluating the performance of safety
instrumented systems (SIS) from a safety and production availability point of view. Some main
applications of reliability analysis are:

• Reliability assessment and follow-up; verifying that the system fulfils its safety and
reliability requirements;
• Design optimisation; balancing the design to get an optimal solution with respect to safety,
production availability and lifecycle cost;
• Operation planning; establishing the optimal testing and maintenance strategy;
• Modification support; verifying that planned modifications are in line with the safety and
reliability requirements.

The PDS method has been developed in order to enable the reliability engineer and non-experts to
perform such reliability considerations in various phases of a project. The main features of the
PDS method are discussed in chapter 4.

1.3 The IEC 61508 and 61511 Standards


The IEC 61508 and IEC 61511 standards, [1] and [2], set out requirements for safety instrumented
systems (SIS) for all the relevant lifecycle phases, and have become leading standards for SIS
specification, design, implementation and operation. IEC 61508 is a generic standard common to
several industries, whereas IEC 61511 has been developed especially for the process industry.
These standards present a unified approach to achieve a rational and consistent technical policy
for all SIS systems. The Norwegian Oil Industry Association (OLF) has developed a guideline to
support the use of IEC 61508/61511, [19].

The PDS method is in line with the main principles advocated in the IEC standards, and is a
useful tool when implementing and verifying quantitative (SIL) requirements as described in the
IEC standards.

1.4 Organisation of Data Handbook


In chapter 2 important reliability aspects are discussed and definitions of the applied notations are
given.

The recommended reliability data estimates are summarised in chapter 3 of this report. A split has
been made between input devices, logic solvers and final elements.

Chapter 4 gives a brief summary of the main characteristics of the PDS method. The failure
classification for safety instrumented systems is presented together with the main reliability
performance measures used in PDS.

In chapter 5 the detailed data dossiers providing the basis for the recommended reliability data are
given. As for previous editions of the handbook, some data are scarce in the data sources, and it
has been necessary to rely, partly or fully, on expert judgement.

1.5 Abbreviations
CCF - Common cause failure
CSU - Critical safety unavailability
DTU - Downtime unavailability
FMECA - Failure modes, effects, and criticality analysis
FMEDA - Failure modes, effects, and diagnostic analysis
IEC - International Electrotechnical Commission
JIP - Joint industry project
MTTR - Mean time to restoration
NDE - Normally de-energised
NE - Normally energised
OLF - The Norwegian oil industry association
OREDA - Offshore reliability data


PDS - Norwegian acronym for “reliability of computer based safety systems”


PFD - Probability of failure on demand
RNNP - Project: Risk level in Norwegian petroleum production
www.ptil.no
SIL - Safety integrity level
SIS - Safety instrumented system
SFF - Safe failure fraction
STR - Spurious trip rate
TIF - Test independent failure

Additional abbreviations (equipment related)

AI - Analogue input
BDV - Blowdown valve
CPU - Central Processing Unit
DO - Digital output
ESV - Emergency shutdown valve
DHSV - Downhole safety valve
XV - Production shutdown valve


2 RELIABILITY CONCEPTS
In this chapter some selected concepts related to reliability analysis and reliability data are
discussed. For a more detailed discussion reference is made to the updated PDS method
handbook, ref. [10].

2.1 The Concept of Failure


In IEC 61508-4, a failure is defined as the termination of the ability of a functional unit to perform
a required function. The two main functions of a safety system are [10]: the ability to shut down
or go to a predefined safe state when production is not safe, and the ability to maintain production
when it is safe. Hence, a failure may have two facets: (1) loss of the ability to shut down or go to a
safe state when required, or (2) loss of the ability to maintain production.

From a safety point of view, the first category will be the more critical and such failures are
defined as dangerous failures (D), i.e. they have the potential to result in loss of the ability to shut
down or go to a safe state when required.

Loss of the ability to maintain production is normally less critical to safety; such failures have
therefore traditionally been denoted spurious trip (ST) failures in PDS, whereas IEC 61508
categorises them as ‘safe’ (S). In the forthcoming update of the IEC 61508 standard, the
definition of safe failures is more in line with the PDS interpretation. This updated version of
PDS therefore also applies the notation ‘S’ (instead of ‘ST’).

It should be noted that a given failure may be classified as either dangerous or safe depending on
the intended application. For example, loss of hydraulic supply to a valve actuator will be
dangerous in an energise-to-trip application and safe in a de-energise-to-trip application.
Hence, when applying the failure data, the assumptions underlying the data as well as the context
in which the data shall be used must be carefully considered.

2.2 Failure Rate and Failure Probability


The failure rate (number of failures per unit time) for a component is essential for the reliability
calculations. In section 2.2.1, definitions and notation related to the failure rate are given, whereas
in section 2.2.2 the decomposition of this failure rate into its various elements is further discussed.

2.2.1 Failure Rate Notation

λcrit = Rate of critical failures; i.e., failures that may cause loss of one of the two main
functions of the component/system (see above).

Critical failures include dangerous (D) failures which may cause loss of the ability to
shut down production when required and safe (S) failures which may cause loss of
the ability to maintain production when safe (i.e. spurious trip failures). Hence:

λcrit = λD + λS (see below)

λD = Rate of dangerous (D) failures, including both undetected as well as detected


failures. λD = λDU + λDD (see below)

λDU = Rate of dangerous undetected failures, i.e. failures undetected by both automatic
self-test and personnel

λDD = Rate of dangerous detected failures, i.e. failures detected by automatic self-test or
personnel

λS = Rate of safe (spurious trip) failures, including both undetected as well as detected
failures. λS = λSU + λSD (see below)

λSU = Rate of safe (spurious trip) undetected failures, i.e. undetected both by automatic
self-test and personnel

λSD = Rate of safe (spurious trip) detected failures, i.e. detected by automatic self-test or
personnel

λundet = Rate of (critical) failures that are undetected both by automatic self-test and by
personnel (i.e., detected in functional testing only). λundet = λDU + λSU

λdet = Rate of (critical) failures that are detected by automatic self-test or personnel
(independent of functional testing). λdet = λDD + λSD

c = Coverage: percentage of critical failures detected either by the automatic self-test or


(incidentally) by personnel observation

cD = Coverage of dangerous failures. cD = (λDD / λD ) · 100%


Note that λDU then can be calculated as: λDU = λD · (1- cD / 100%)

cS = Coverage of safe (spurious trip) failures. cS = (λSD / λS) ·100%


Note that λSU then can be calculated as: λSU = λS · (1- cS / 100%)

r = Fraction of dangerous undetected (DU) failures originating from random hardware


failures (1-r will then be the fraction originating from systematic failures)

SFF = Safe failure fraction = (1 - λDU / λcrit) · 100 %

β = The fraction of failures of a single component that causes both components of a


redundant pair to fail “simultaneously”

CMooN = Modification factor for voting configurations other than 1oo2 in the beta-factor
model (e.g. 1oo3, 2oo3 and 2oo4 voting logics)
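The relationships above can be collected in a short numerical sketch. This is illustrative only: the function below and all numerical values are invented for the example and are not handbook data.

```python
# Hypothetical illustration of the failure rate notation in section 2.2.1.
# All numerical values are made up for the example; they are not handbook data.

def decompose(lam_d, lam_s, c_d, c_s):
    """Split dangerous/safe failure rates into detected/undetected parts.

    lam_d, lam_s : dangerous and safe failure rates (per 10^6 hours)
    c_d, c_s     : diagnostic coverages cD and cS, in percent
    """
    lam_dd = lam_d * c_d / 100.0          # dangerous detected
    lam_du = lam_d * (1 - c_d / 100.0)    # dangerous undetected
    lam_sd = lam_s * c_s / 100.0          # safe detected
    lam_su = lam_s * (1 - c_s / 100.0)    # safe undetected
    lam_crit = lam_d + lam_s              # λcrit = λD + λS
    # SFF: all critical failures except the dangerous undetected ones
    sff = (1 - lam_du / lam_crit) * 100.0
    return lam_du, lam_dd, lam_su, lam_sd, lam_crit, sff

# Example: λD = 1.0, λS = 1.5 (per 10^6 h), cD = 60 %, cS = 40 %
lam_du, lam_dd, lam_su, lam_sd, lam_crit, sff = decompose(1.0, 1.5, 60, 40)
print(lam_du)    # 0.4
print(lam_crit)  # 2.5
print(sff)       # 84.0
```

Note how the decomposition is consistent by construction: the four parts always sum back to λcrit, mirroring Table 1 below.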

2.2.2 Decomposition of Failure Rate

Some important relationships between different fractions of the critical failure rate are illustrated
in Table 1 and Figure 1.


Table 1 Decomposition of critical failure rate, λcrit

             Spurious trip failures   Dangerous failures   Sum

Undetected   λSU                      λDU                  λundet
Detected     λSD                      λDD                  λdet
Sum          λS                       λD                   λcrit

[Figure 1: diagram decomposing the critical failure rate λcrit. The undetected part λundet consists
of λDU (dangerous failures undetected by automatic self-test or personnel) and λSU (safe/spurious
trip failures undetected by automatic self-test or personnel); the detected part λdet consists of λDD
and λSD. All fractions except λDU contribute to the SFF (Safe Failure Fraction).]

Figure 1 Decomposition of critical failure rate, λcrit
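The β and CMooN parameters defined in section 2.2.1 enter calculations for redundant configurations roughly as in the sketch below (the standard beta-factor model with the PDS modification factor). The numerical values, including the CMooN value used, are invented for illustration and are not handbook data.

```python
# Illustrative sketch of the beta-factor model with the C_MooN
# modification factor. All numbers below are invented examples,
# not handbook data.

def common_cause_rate(lam_du, beta, c_moon=1.0):
    """Approximate rate of dangerous undetected common cause failures
    for a MooN voted group (C_MooN = 1 corresponds to 1oo2 voting)."""
    return c_moon * beta * lam_du

lam_du = 0.5e-6   # per hour (i.e. 0.5 per 10^6 hours) - invented value
beta = 0.05       # 5 % of single-component failures assumed common cause

rate_1oo2 = common_cause_rate(lam_du, beta)              # 1oo2: C = 1
rate_2oo3 = common_cause_rate(lam_du, beta, c_moon=2.0)  # assumed C_2oo3

print(rate_1oo2)  # 2.5e-08
print(rate_2oo3)  # 5e-08
```

Table 8 in chapter 3 gives the actual CMooN values recommended by the handbook; the value 2.0 above is only a placeholder for the example.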

2.3 Reliability Measures and Notation


Table 2 lists some performance measures for safety and reliability, and some other main
parameters in the PDS method. A more complete description is found in the updated PDS Method
Handbook, 2010 Edition, [10].

Table 2 Performance measures and reliability parameters

Term Description

PFD Probability of failure on demand. This is the measure for loss of safety caused by
dangerous undetected failures, see section 4.3.

PTIF Probability of a test independent failure. This is the measure for loss of safety
caused by a failure not detectable by functional testing, but occurring upon a true
demand (see section 4.3).
CSU Critical safety unavailability, CSU = PFD + PTIF


MTTR Mean time to restoration. The time from when a failure is detected/revealed until the
function is restored (the "restoration period"). Note that this restoration period may
depend on a number of factors and can differ for detected and undetected failures: the
undetected failures are revealed and handled by functional testing and could have a
shorter MTTR than the detected failures. The MTTR could also depend on
configuration, operational philosophy and failure multiplicity.

STR Spurious trip rate. Rate of spurious trips of the safety system (or set of redundant
components), taking into consideration the voting configuration.
τ Interval of functional test (time between functional tests of a component)

2.4 Reliability Parameters


In this section, some of the reliability parameters defined above are discussed further.

2.4.1 Rate of Dangerous Undetected Failures

As discussed in section 2.2.2, the critical failure rate λcrit is split into dangerous and safe
failures (i.e. λcrit = λD + λS), which are further split into detected and undetected failures. When
performing safety unavailability calculations, the rate of dangerous undetected failures, λDU, is of
special importance, since this parameter - together with the test interval - to a large degree
governs the prediction of how often a safety function is likely to fail on demand.
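As a rough illustration of this dependence, the familiar first-order approximation for a single component, PFD ≈ λDU · τ/2, can be sketched as follows (the complete PDS formulas are given in section 4.3; the numbers below use the pressure transmitter λDU from Table 3 and an assumed one-year test interval):

```python
# First-order single-component approximation: PFD ≈ lam_DU * tau / 2.
# Only the leading term; see section 4.3 for the full PDS formulas.
lam_DU = 0.3e-6   # dangerous undetected failure rate per hour (pressure transmitter, Table 3)
tau = 8760.0      # functional test interval: one year, in hours (assumed)

pfd = lam_DU * tau / 2
print(f"PFD ≈ {pfd:.2e}")   # ≈ 1.31e-03

# Halving the test interval roughly halves the predicted PFD:
pfd_6m = lam_DU * (tau / 2) / 2
```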

Equipment specific failure data reports prepared by manufacturers (or others) often provide λDU
estimates being an order of magnitude (or even more) lower than those reported in generic data
handbooks. There may be several causes for such exaggerated claims of performance, including
imprecise definition of equipment- and analysis boundaries, incorrect failure classification or too
optimistic predictions of the diagnostic coverage factor (see e.g. [20]).

When studying the background data for generic failure rates (λDU) presented in data sources such
as OREDA and RNNP, it is found that these data will include both random hardware failures as
well as systematic failures. Examples of the latter include incorrect parameter settings for a
pressure transmitter, an erroneous output from the control logic due to a failure during software
modification, or a PSV which fails due to excessive internal erosion or corrosion. These are all
failures that are detectable during functional testing and therefore illustrate the fact that systematic
failures may well be part of the λDU for generic data.

Since failure rates provided by manufacturers frequently tend to exclude all types of failures
related to installation, commissioning or operation of the equipment (i.e. systematic type of
failures), a mismatch between manufacturer data and generic data appears. Our question then
becomes - since systematic failures inevitably will occur - why not include these failures in
predictive reliability analyses?

In order to elucidate the fact that the failure rate will comprise random hardware failures as well
as systematic failures, the parameter r has therefore been defined as the fraction of dangerous
undetected failures originating from random hardware failures. Rough estimates of the r factor are
given in the detailed data sheets in chapter 5. For a more thorough discussion and arguments
concerning the r factor, reference is made to [10].
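The role of r can be shown in a few lines; the numbers below are illustrative only, and actual r estimates are found in the chapter 5 data sheets:

```python
# Splitting lam_DU into random hardware and systematic contributions via r.
# r = fraction of DU failures attributed to random hardware failures.
# Values are illustrative, not taken from the data sheets.
lam_DU = 2.0e-6   # dangerous undetected failure rate per hour
r = 0.5           # fraction explained by random hardware failures

lam_DU_random = r * lam_DU             # ageing / natural stressors
lam_DU_systematic = (1 - r) * lam_DU   # e.g. wrong set point, erroneous calibration

# The two contributions always recombine to the full DU rate:
assert abs(lam_DU_random + lam_DU_systematic - lam_DU) < 1e-18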


2.4.2 The Coverage Factor, c

Modules often have built-in automatic self-test, i.e. on-line diagnostic testing to detect failures
prior to an actual demand 2). The fraction of failures being detected by the automatic self-test is
called the fault coverage and quantifies the effect of the self-test. Note that the actual effect on
system performance from a failure that is detected by the automatic self-test will depend on
system configuration and operating philosophy. In particular it should be considered whether the
detected failure is configured to only raise an alarm or alternatively bring the system to a safe
state. It is often seen that failures classified as dangerous detected only raise an alarm and in such
case it must be ensured that the failure initiates an immediate response in the form of a repair
and/or introduction of risk reducing measures.

In addition to the diagnostic self-test, an operator or maintenance crew may detect dangerous
failures incidentally in between tests. For instance, the panel operator may detect a transmitter that
is “stuck” or a sensor that has been left in by-pass. Similarly, when a process segment is isolated
for maintenance, the operator may detect that one of the valves will not close. The PDS method
also aims at incorporating this effect, and defines the total coverage factor c, reflecting detection
both by automatic self-test and by operator. Further, the coverage factor for dangerous failures is
denoted cD, whereas the coverage factor for safe failures is denoted cS.

Critical failures that are not detected by automatic self-testing or by observation are assumed
either to be detectable by functional (proof) testing 3) or they are so-called test independent failures
(TIF) that are not detected during a functional test but appear upon a true demand (see section 2.3
and chapter 4 for further description).

It should be noted that the term “detected safe failure” (of rate λSD) is interpreted as a failure
which is detected in such a way that a spurious trip is actually avoided. Hence, a spurious closure
of a valve which is detected by, e.g., flow metering downstream of the valve cannot be categorised as a
detected safe failure. On the other hand, drifting of a pressure transmitter which is detected by the
operator, such that a shutdown is avoided, will typically be a detected safe failure.

2.4.3 Beta-factors and CMooN

When quantifying the reliability of systems employing redundancy, e.g., duplicated or triplicated
systems, it is essential to distinguish between independent and dependent failures. Random
hardware failures due to natural stressors are assumed to be independent failures. However, all
systematic failures, e.g. failures due to excessive stresses, design related failures and maintenance
errors are by nature dependent (common cause) failures. Dependent failures can lead to
simultaneous failure of more than one (redundant) component in the safety system, and thus
reduce the advantage of redundancy.

Traditionally, the dependent or common cause failures have been accounted for by the β-factor
approach. The problem with this approach has been that for any M-out-of-N (MooN) voting
(M<N) the rate of dependent failures is the same, and thus the approach does not distinguish
between e.g. a 1oo2 and a 2oo3 voting. The PDS method extends the β-factor model, and
distinguishes between the voting logics by introducing β-factors which depend on the voting
configuration; i.e. β(MooN) = β · CMooN. Here, CMooN is a modification factor depending on the
voting configuration, MooN.
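Using the CMooN values from Table 8, the extended model can be sketched as follows; the helper function is illustrative, not part of the PDS method itself:

```python
# Voting-dependent beta factor: beta(MooN) = beta * C_MooN.
# C values taken from Table 8 of this handbook.
C_MOON = {
    (1, 2): 1.0, (1, 3): 0.5, (1, 4): 0.3, (1, 5): 0.21, (1, 6): 0.17,
    (2, 3): 2.0, (2, 4): 1.1, (2, 5): 0.7, (2, 6): 0.4,
    (3, 4): 2.9, (3, 5): 1.8, (3, 6): 1.1,
    (4, 5): 3.7, (4, 6): 2.4,
    (5, 6): 4.3,
}

def beta_moon(beta: float, m: int, n: int) -> float:
    """Common-cause fraction for an MooN voted group (M < N)."""
    return beta * C_MOON[(m, n)]

# A 2oo3 group is more exposed to common-cause failure than a 1oo2 group:
beta = 0.04  # process transmitters, Table 7
print(beta_moon(beta, 1, 2), beta_moon(beta, 2, 3))  # 0.04 vs 0.08
```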

2) Also refer to IEC 61508-4, sections 3.8.6 and 3.8.7.
3) See also IEC 61508-4, section 3.8.5.
Standard (average) values for the β-factor are given in Table 7. Note that when performing
reliability calculations, application specific β-factors should preferably be obtained, e.g. by using
the checklists provided in IEC 61508-6, or by using the simplified method as described in
Appendix D of the PDS method handbook, [10].

Values for CMooN are given in Table 8. For a more complete description of the extended β-factor
approach of PDS, see [10].

2.4.4 Safe Failure Fraction, SFF

The Safe Failure Fraction as described in IEC 61508 is given by the ratio between dangerous
detected failures plus safe failures and the total rate of failure; i.e. SFF = (λDD + λS) /(λD + λS).
The objective of including this measure (and the associated hardware fault tolerance; HFT) was to
prevent manufacturers from claiming excessive SILs based solely on PFD calculations. However,
experience has shown that failure modes that actually do not influence the main functions of the
SIS (ref. section 2.1) are frequently included in the safe failure rate so as to artificially increase
the SFF, [20].
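The inflation mechanism is easy to demonstrate numerically. In the sketch below (illustrative rates only), padding the safe failure rate raises the SFF even though the dangerous undetected rate is unchanged:

```python
def sff(lam_DD: float, lam_DU: float, lam_S: float) -> float:
    """Safe failure fraction per IEC 61508: (lam_DD + lam_S) / (lam_D + lam_S), in %."""
    lam_D = lam_DD + lam_DU
    return (lam_DD + lam_S) / (lam_D + lam_S) * 100

# Illustrative rates (per 10^6 hours):
print(sff(lam_DD=1.0, lam_DU=2.0, lam_S=3.0))   # about 67 %

# Counting non-critical failures (e.g. minor external leakage) as "safe"
# inflates the SFF although lam_DU is exactly the same:
print(sff(lam_DD=1.0, lam_DU=2.0, lam_S=17.0))  # 90 %
```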

It is therefore important to point out that when estimating the SFF, only failures with a potential to
actually cause a spurious trip of the component should be included among the safe failures. Non-
critical failures, such as a minor external leakage of hydraulic oil from a valve actuator, should not
be included.

The SFF figures presented in this handbook are based on reported failure mode distributions in
OREDA as well as some additional expert judgements. Higher (or lower) SFFs than given in the
tables may apply for specific equipment types and this should in such case be well documented,
e.g. by FMEDA type of analyses.

2.5 Main Data Sources


The most important data source when preparing this handbook has been the OREDA database and
handbooks. OREDA is a project organisation whose main purpose is to collect and exchange
reliability data among the participating companies (i.e. BP, ENI, ExxonMobil, ConocoPhillips,
Shell, Statoil, TOTAL and Gassco). A special thanks to the OREDA Joint Industry Project (JIP)
for providing access to an agreed set of the OREDA JIP data. For more information about the
OREDA project, any feedback to OREDA JIP concerning the data or name of contact persons,
reference is made to http://www.oreda.com. Equipment for which reliability data are missing or
additional data desirable should be reported to the OREDA project manager or one of the
participating OREDA companies, as this will provide valuable input to future OREDA data
collection plans.

Other important data sources have been:

• Recent data from the RNNP (Norwegian: “Risikonivået i Norsk Petroleumsindustri”)
  project on safety critical equipment;
• Failure data and failure mode distributions from safety system manufacturers;
• Experience data from operational reviews on Norwegian offshore and onshore
  installations;
• Other commercially published data handbooks such as Exida, [15] and the T-book, [16];
• Discussions and interviews with experts.

A complete list of data sources and references is given in chapter 6.


2.6 Using the Data in This Handbook


The data in this handbook provide best (average) estimates of equipment failure rates based on
experience gathered mainly throughout the petroleum industry.

The recommended data are based on a number of assumptions concerning safe state, fail-safe
design, self-test ability, loop monitoring, NE/NDE design, etc. These assumptions are, for each
piece of equipment, described in the detailed data sheets in chapter 5. Hence, when using the data
for reliability calculations, it is important to consider the relevance of these assumptions for each
specific application.


3 RELIABILITY DATA SUMMARY

3.1 Topside Equipment


Tables 3 to 8 summarise the input data to be used in reliability analyses. The definitions of the
column headings relate to the parameter definitions given in sections 2.2 and 2.3. Some additional
comments on the values for PTIF, coverage and r are given in section 3.3.

Observe that λD (third column of Tables 3 to 5), together with λcrit = λD + λS, provides λS.
The rates of undetected failures, λDU and λSU, follow from the given coverage values cD and cS, i.e.
λDU = λD · (1 - cD/100 %) and λSU = λS · (1 - cS/100 %). The safe failure fraction, SFF, can be
calculated as SFF = ((λcrit - λDU) / λcrit) · 100 %.
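These relations can be checked against a table row, here the pressure transmitter from Table 3. Note that the tabulated λDU, λSU and SFF are rounded (the table's SFF of 77 % follows from the rounded λDU = 0.3), so recomputed figures differ slightly:

```python
# Deriving lam_S, lam_DU, lam_SU and SFF from lam_crit, lam_D, cD and cS,
# as described above. Row: pressure transmitter (Table 3).
lam_crit, lam_D = 1.3, 0.8   # per 10^6 hours
cD, cS = 0.60, 0.30          # coverage factors for dangerous / safe failures

lam_S = lam_crit - lam_D
lam_DU = lam_D * (1 - cD)    # 0.32 -> tabulated as 0.3
lam_SU = lam_S * (1 - cS)    # 0.35 -> tabulated as 0.4
sff = (lam_crit - lam_DU) / lam_crit * 100  # ~75 %; 77 % with rounded lam_DU

print(f"lam_DU={lam_DU:.2f}, lam_SU={lam_SU:.2f}, SFF={sff:.0f} %")
```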

Data dossiers with comprehensive information for each component are given in chapter 5 as
referred to in tables 3 to 5.

Table 3 Failure rates, coverages and SFF for input devices

Input Devices

Component                          λcrit 1)  λD 1)  cD    cS    λDU 1)  λSU 1)  SFF   Ref.
Pressure switch                    3.4       2.3    15 %  10 %  2.0     1.0     41 %  Sect. 5.1.1
Proximity switch, inductive        5.7       3.5    15 %  10 %  3.0     2.0     47 %  Sect. 5.1.2
Pressure transmitter               1.3       0.8    60 %  30 %  0.3     0.4     77 %  Sect. 5.1.3
Level (displacement) transmitter   3.0       1.4    60 %  30 %  0.6     1.1     80 %  Sect. 5.1.4
Temperature transmitter            2.0       0.7    60 %  30 %  0.3     0.9     85 %  Sect. 5.1.5
Flow transmitter                   3.7       1.5    60 %  30 %  0.6     1.5     84 %  Sect. 5.1.6
Gas detector, catalytic            5.0       3.5    50 %  30 %  1.8     1.1     64 %  Sect. 5.1.7
Gas detector, IR point             4.7       2.5    75 %  50 %  0.6     1.1     88 %  Sect. 5.1.8
Gas detector, IR line              5.0       2.8    75 %  50 %  0.7     1.1     86 %  Sect. 5.1.9
Smoke detector                     3.2       1.2    40 %  30 %  0.7     1.4     78 %  Sect. 5.1.10
Heat detector                      2.5       1.0    40 %  40 %  0.6     0.9     76 %  Sect. 5.1.11
Flame detector                     6.5       2.7    70 %  50 %  0.8     1.9     88 %  Sect. 5.1.12
H2S detector                       1.3       1.0    50 %  30 %  0.5     0.2     62 %  Sect. 5.1.13
ESD push button                    0.8       0.5    20 %  10 %  0.4     0.3     50 %  Sect. 5.1.14

1) All failure rates are given per 10⁶ hours

Table 4 Failure rates, coverages and SFF for control logic units

Control Logic Units – industrial PLC

Component                  λcrit 1)  λD 1)  cD 2)  cS 2)  λDU 1)  λSU 1)  SFF   Ref.
Analogue input (single)    3.6       1.8    60 %   20 %   0.7     1.4     80 %  Sect. 5.2.1.1
CPU (1oo1)                 17.6      8.8    60 %   20 %   3.5     7.0     80 %  Sect. 5.2.1.2
Digital output (single)    3.6       1.8    60 %   20 %   0.7     1.4     80 %  Sect. 5.2.1.3

Control Logic Units – programmable safety system

Component                  λcrit 1)  λD 1)  cD 2)  cS 2)  λDU 1)  λSU 1)  SFF   Ref.
Analogue input (single)    3.2       1.6    90 %   20 %   0.16    1.3     95 %  Sect. 5.2.2.1
CPU (1oo1)                 9.6       4.8    90 %   20 %   0.48    3.8     95 %  Sect. 5.2.2.2
Digital output (single)    3.2       1.6    90 %   20 %   0.16    1.3     95 %  Sect. 5.2.2.3

Control Logic Units – hardwired safety system

Component                                 λcrit 1)  λD 1)  cD 2)  cS 2)  λDU 1)  λSU 1)  SFF   Ref.
Trip amplifier / analogue input (single)  0.44      0.04   0 %    0 %    0.04    0.4     91 %  Sect. 5.2.3.1
Logic (1oo1)                              0.33      0.03   0 %    0 %    0.03    0.3     91 %  Sect. 5.2.3.2
Digital output (single)                   0.33      0.03   0 %    0 %    0.03    0.3     91 %  Sect. 5.2.3.3

1) All failure rates are given per 10⁶ hours
2) For control logic units, the coverage c will mainly include failures detected by automatic
self-testing. Casual observation of control logic failures is unlikely.

The following additional assumptions and notes apply for the above data on control logic units:

• A single system with analogue input, CPU/logic and digital output configuration is
generally assumed;
• For the input and output part, figures are given for one channel plus the common part of
the input/output card (except for hardwired safety system where figures for one channel
only are given);
• Single processing unit / logic part is assumed throughout;
• If the figures for input and output are to be used for redundant configurations, separate
input cards and output cards must be used since the given figures assume a common part
on each card;
• If separate Ex barriers or other interface devices are used, figures for these must be added
separately;
• The systems are generally assumed used in de-energised to trip functions, i.e. loss of
power or signal will result in a safe state.


Table 5 Failure rates, coverages and SFF for final elements

Final elements

Component                                          λcrit 1)  λD 1)  cD    cS    λDU 1)  λSU 1)  SFF   Ref.
ESV/XV incl. actuator (ex. pilot)                  5.3       3.0    30 %  10 %  2.1     2.1     60 %  Sect. 5.3.1
Topside X-mas tree ESV incl. actuator (ex. pilot)  2.0       1.1    30 %  10 %  0.8     0.8     60 %  Sect. 5.3.2
Blowdown valve incl. actuator (ex. pilot)          3.9       2.6    20 %  0 %   2.1     1.3     46 %  Sect. 5.3.3
Pilot/solenoid valve                               3.0       1.1    30 %  10 %  0.8     1.7     73 %  Sect. 5.3.4
Control valve (frequently operated) 2)             6.9       4.4    50 %  50 %  2.2     1.3     68 %  Sect. 5.3.5
Control valve (shutdown service only) 3)           6.9       4.4    20 %  20 %  3.5     2.0     49 %  Sect. 5.3.5
Pressure relief valve, PSV                         3.3       2.2    0 %   10 %  2.2 4)  1.0     33 %  Sect. 5.3.6
Deluge valve (complete)                            4.5       3.0    0 %   0 %   3.0     1.5     33 %  Sect. 5.3.7
Fire damper (incl. solenoid valve)                 5.5       3.2    0 %   0 %   3.2     2.3     43 %  Sect. 5.3.8
Circuit breaker (large)                            0.8       0.3    0 %   0 %   0.3     0.5     63 %  Sect. 5.3.9
Relay                                              0.5       0.2    0 %   0 %   0.2     0.3     60 %  Sect. 5.3.10

1) All failure rates are given per 10⁶ hours
2) Fail-to-close data for control valves applied in combined control and shutdown service. The
failure rate for the pilot/solenoid valve should be added.
3) Fail-to-close data for control valves applied only for shutdown (i.e. normally not operated).
The failure rate for the pilot/solenoid valve should be added.
4) The dangerous undetected failure rate applies for a fail-to-open failure within 20 % of the set
point pressure. If a critical failure is defined as fail to open at a higher pressure, a reduced
failure rate is expected. For a ‘fail to open before test pressure’ failure, λDU = 1.1·10⁻⁶ is
suggested.

Table 6 below gives suggested values for the PTIF, i.e. the probability of a test independent failure
occurring upon a demand.

Table 6 PTIF for various components

Input Devices
Component                   PTIF     Comments (see section 3.3.1)
Pressure switch             1·10⁻³   When operating in clean medium
                            5·10⁻³   Unclean medium – clogging of sensing line possible
Proximity switch            1·10⁻³   Based on expert judgement
Process transmitters        5·10⁻⁴   Applies for pressure, level, temperature and flow transmitters
Gas detector, catalytic     5·10⁻⁴   The PTIF values for the fire and gas detectors below are
IR gas detector             1·10⁻³   given assuming that the detector is already exposed
Smoke detector              1·10⁻³
Heat detector               1·10⁻³
Flame detector              1·10⁻³
H2S detector                5·10⁻⁴   A catalytic H2S detector is assumed
ESD push button             1·10⁻⁵   Previous SINTEF estimate

Control Logic Units
Standard industrial PLC – single system      5·10⁻⁴
Programmable safety system – single system   5·10⁻⁵   Mainly due to software errors
Hardwired safety system – single system      5·10⁻⁶

Final Elements 1)
ESV/XV and X-mas tree valves   1·10⁻⁴   Applies for a standard/average functional test
Blowdown valve                 1·10⁻⁴   Assuming a standard functional test
Control valve                  1·10⁻⁴   Assuming a standard functional test
Pressure relief valve, PSV     1·10⁻³   Previous SINTEF estimate
Deluge valve                   1·10⁻³   SINTEF estimate
Fire damper                    1·10⁻³   SINTEF estimate
Circuit breaker (large)        5·10⁻⁵   SINTEF estimate
Relay                          5·10⁻⁵   SINTEF estimate

1) For all valves the PTIF applies for the complete valve including pilot/solenoid

Tables 7 and 8 give suggested values for the β-factor and the configuration factor CMooN,
respectively. Note that the CMooN factors have been updated as compared to previous values,
ref. [12].

Regarding the suggested β-factors, it should be pointed out that these are typical values.
Application specific factors may be incorporated in the estimates, e.g. by applying the checklists in


IEC 61508 or the simplified method described in appendix D in [10]. Some beta values have been
slightly increased as compared to the figures in the 2006 edition, [12]. This is based on results
from operational reviews where it was observed that a fairly large proportion of the SIS failures
actually involved more than one component.

Table 7 β-factors for various components

Component group   Component                                                β      Comment/source

Input devices     Pressure switch                                          0.05   Updated SINTEF estimates based on
                  Proximity switch                                         0.05   former values and additional
                  Process transmitters                                     0.04   knowledge from operational reviews
                  Fire/gas detectors                                       0.06
                  ESD push button                                          0.03

Control logic     Standard industrial PLC                                  0.07   Updated SINTEF estimates based on
units             Programmable safety system                               0.05   additional judgements
                  Hardwired safety system                                  0.03

Final             ESV/XV incl. X-mas tree valves (main valve + actuator)   0.03   Updated SINTEF estimates based on
elements 1)       Blowdown valves (main valve + actuator)                  0.03   former values and additional
                  Pilot valves on same valve                               0.10   knowledge from operational reviews
                  Pilot valves on different valves                         0.03
                  Control valves                                           0.03
                  Pressure relief valve, PSV                               0.05
                  Deluge valve                                             0.03
                  Fire damper                                              0.03
                  Relay                                                    0.03
                  Circuit breaker                                          0.03

1) β value for (redundant) PSVs on the same equipment/vessel. For PSVs on different equipment, a
value of 0.03 is suggested.

Table 8 Numerical values for configuration factors, CMooN

M\ N N=2 N=3 N=4 N=5 N=6

M=1 C1oo2 = 1.0 C1oo3 = 0.5 C1oo4 = 0.3 C1oo5 = 0.21 C1oo6 = 0.17

M=2 - C2oo3 = 2.0 C2oo4 = 1.1 C2oo5 = 0.7 C2oo6 = 0.4

M=3 - - C3oo4 = 2.9 C3oo5 = 1.8 C3oo6 = 1.1

M=4 - - - C4oo5 = 3.7 C4oo6 = 2.4

M=5 - - - - C5oo6 = 4.3

Note that the CMooN factors have been updated as compared to the previous 2006 handbook, [12].
It should be pointed out that the CMooN factors are suggested values and not exact figures. C1oo5
and C1oo6 have been given to two decimal places in order to be able to distinguish the two
configurations. The reasoning behind the CMooN factors is further discussed in [10].


3.2 Subsea Equipment


Table 9 summarises the reliability input data for subsea equipment. For a more thorough
discussion of the data, reference is made to section 5.4. It should be noted that for the subsea
equipment, focus has been on dangerous failures, and only values for the coverage factor for
dangerous failures, cD, are specified. Hence, the rate of undetected safe failures, λSU, is not given.
Furthermore, specific PTIF and β-factors for subsea components are not given. As a starting point,
estimates for topside equipment can be used for these unspecified parameters. They should,
however, be assessed on a case-by-case basis depending on the specific (subsea) application.

Similarly, the values given for the safe failure fraction (SFF) should be considered as indicative
only. Higher (or lower) SFFs may apply for specific equipment types and this should in such case
be documented separately.

Table 9 Failure rates for subsea equipment – input devices, control system units and output devices

Subsea equipment

Component 1)                                                λcrit 2)  λD 2)  cD    λDU 2)  SFF
Pressure sensor                                             0.62      0.37   60 %  0.15    76 %
Temperature sensor                                          0.30      0.18   60 %  0.07    76 %
Combined pressure and temperature sensor                    2.5       1.3    60 %  0.50    80 %
Flow sensor                                                 2.0       1.4    60 %  0.56    72 %
ESD/PSD logic including input/output (located topside)      16.0      8.0    90 %  0.80    95 %
MCS – Master control station (located topside)              9.4       2.8    60 %  1.1     88 %
Umbilical hydraulic/chemical line (per line)                0.31      0.22   80 %  0.04    87 %
Umbilical power/signal line (per line)                      0.51      0.36   80 %  0.07    86 %
SEM – subsea electronic module                              9.9       4.0    70 %  1.2     84 %
Manifold isolation valve                                    1.32      0.40   0 %   0.40    70 %
Solenoid control valve (in subsea control module)           0.40      0.16   0 %   0.16    60 %
Production master valve (PMV), Production wing valve (PWV)  0.26      0.18   0 %   0.18    30 %
Chemical injection valve (CIV)                              0.37      0.22   0 %   0.22    40 %
Downhole safety valve (DHSV)                                5.6       3.2    0 %   3.2     42 %
Subsea isolation valve (SSIV)                               0.52      0.21   0 %   0.21    60 %

1) Further reference is made to Table 11 in section 5.4 for additional details on the recommended data
2) All failure rates are given per 10⁶ hours

3.3 Comments to the PDS Data
The data presented in Table 3 – Table 9 are mainly based on operational experience (OREDA,
RNNP, etc.) and as such reflect average expected field performance. It is stressed
that these generic data should not be used uncritically – if valid application specific data are
available, they should be preferred. When comparing the data in this handbook with figures
found in manufacturer certificates and reports, major gaps will often be found. As discussed in
section 2.4.1, such data sources often exclude failures caused by inappropriate maintenance, usage
mistakes and design related systematic failures. Care should therefore be taken when data from
certificates and similar reports are used for predicting reliability performance in the field.

For some equipment types and some of the parameters, the listed data sources provide limited
information and additional expert judgement must therefore be applied. In particular for the PTIF,
the coverage c and the r factor, the data sources are scarce, and some arguments are therefore
required concerning the recommended values.

3.3.1 Probability of Test Independent Failures (PTIF)

General
No testing is 100 % perfect, and some dangerous undetected failures may therefore remain even
after a functional test. The suggested PTIF values attempt to quantify the likelihood of such failures
being present after a test. Obviously, such values will depend heavily on the given application, and
specific measures may have been introduced to minimise the likelihood of test independent
failures. Hence, it may be argued to reduce (or increase) the given values. This is further
discussed in appendix D of the method handbook, [10].

Process Switch
The proposed PTIF of 10⁻³ applies to a pressure switch operating in clean medium, and the main
contribution is assumed to be failures during human intervention (e.g. by-pass, wrong set point,
etc.). If the switch is operating in unclean medium and clogging of the sensing line is a possibility,
the PTIF may be increased to 5·10⁻³.

Proximity Switch
For proximity switches a relatively high PTIF of 10⁻³ has been suggested. The main contributors to
this TIF are assumed to be failures during installation and maintenance, in particular mounting
and misalignment problems related to the interacting parts.

Process Transmitters
Transmitters have a “live signal”. Thus, blocking of the sensing line may be detected by the
operator and is included in λdet. A significant part of the failures of the transmitter itself
(all "stuck" failures) may also be detected by the operator and therefore contribute to λdet. Thus,
the PTIF for transmitters is expected to be lower than that of the switch, and a value of 5·10⁻⁴ has
been suggested. In previous editions of the handbook, smart and fieldbus transmitters were, due to
their more complete self-test, given an even smaller PTIF. However, since smart transmitters also
contain additional software, other test independent failures may be introduced. Consequently,
one common PTIF is given.

Gas Detectors
In previous versions of the PDS data handbook, the given PTIF values for gas detectors
differentiated with respect to detector type, the size of the leakage, ventilation, and other
conditions expected to influence the PTIF for detectors. The PTIF values were then given as
intervals depending on the state of the conditions listed. It is now assumed that the detector is
already exposed, and the present values generally represent the lower end of the previously given
intervals, as this represented the “best conditions”. Note that catalytic gas


detectors and H2S detectors have been given a somewhat lower PTIF than the IR gas detectors. The
catalytic gas detectors have a simpler design which is assumed to result in a lower probability of
test independent systematic failures.

Fire Detectors
PTIF values are given based on the assumptions that (1) a detector with the "appropriate" detection
principle is applied (e.g. smoke detectors where smoke fires are expected and flame detectors
where flame fires are expected), and (2) the detector is already exposed to the flame/heat or
smoke (depending on detector type). A PTIF value of 10⁻³ has been suggested for all fire detectors.

Control Logic Units


The PTIF for the control logic is mainly due to software errors. For dedicated high quality safety
systems, the overall estimate equals 5·10⁻⁵, i.e. the required action will fail to be carried out
successfully in 1 out of 20 000 demands due to an (undetectable) software error. For hardwired
safety systems without software, the corresponding estimate is a factor of 10 lower, whereas for
standard industrial PLC systems, the estimate PTIF = 5·10⁻⁴ applies. Furthermore, it must be
assumed that the quality assurance programme during design and modifications is more extensive
for programmable safety systems (and hardwired safety systems) than for a standard industrial
PLC. This also indicates a lower PTIF for a programmable (and hardwired) safety system than for
a standard industrial PLC.

Valves
The PTIF for ESV/XVs will depend on the quality of the functional testing performed. Here, a
standard functional test has been assumed, where the valve is fully closed but not tested for
internal leakage. In that case a PTIF value of 10⁻⁴ is suggested. For control valves used for
shutdown purposes and for blowdown valves, a PTIF of 10⁻⁴ is also suggested. All these values
include the PTIF for the pilot valve. For PSVs a relatively high PTIF value of 10⁻³ has been
suggested, due to the possibility of human failures related to incorrect setting/adjustment of the
PSV.

3.3.2 Coverage

General
As compared to the ’03, ‘04 and ’06 editions of the PDS Reliability data handbook, some of the
coverage factors have been updated. The reasoning behind this is partly discussed below. The
discussion is mainly limited to dangerous failures.

Switches and Transmitters


For process switches and proximity switches a total coverage of 15 % has been assumed: 5 %
coverage due to line monitoring of the connections, plus an additional 10 % detection of
dangerous failures through operator observation during operation.

For process transmitters, a coverage of 60 % for dangerous failures has been assumed. This is
based on the self-test implemented in the transmitter as well as casual observation by the control
room operator. The latter assumes that the transmitter signal can be observed on the VDU and
compared with other signals, so that e.g. a stuck or drifting signal can be revealed. If a higher
coverage is claimed, e.g. due to automatic comparison between transmitters, this should be
specifically documented.

Fire and Gas Detectors


For all detectors, the given coverage values apply for analogue sensors (where the value/reading
can be monitored by the operator in the CCR). IR gas, IR flame and UV flame detectors have
built-in monitoring and self-test of electronics and optics, and therefore have a relatively high
coverage. Catalytic gas detectors normally have limited built-in self-test. The same applies for
smoke and heat detectors. Hence, these detector types will have a lower coverage.

Control Logic Units


For control logic, no updated OREDA data have been available, and the quantitative input from
the safety system vendors has focused on the rate of dangerous undetected failures and the failure
rate distribution between the functional parts. The coverage values are therefore mainly based on
judgements and discussions with experts.

For a standard industrial PLC (single system) the coverage factor for dangerous failures, cD, has
been set lower than for a SIL certified programmable safety system. For safe failures, the
coverage factor is low, since it is assumed that upon detection of such a failure (e.g. loss of signal)
the single safety system should normally go to a safe state (i.e. a shutdown). It should be noted
that if the safety system is redundant, the rate of undetected safe (i.e. spurious trip) failures may
be reduced significantly by the use of voting.

The hardwired safety system is assumed to have a fail-safe design without diagnostic coverage,
i.e. a failure will either be dangerous undetected or, if detected, will result in a trip action (SU).
Hence, the coverage for both dangerous and safe failures has been assumed to be zero. Note that
this applies for single systems; as a consequence, hardwired safety systems are often voted 2oo2
in order to avoid spurious trip failures.

Valves
No automatic self-test for valves is assumed. For ESV/XV valves, the coverage for dangerous
failures has been slightly increased to 30 %, based on information from OREDA phases V–VII,
where it appears that a high fraction of dangerous failures (more than 50 %) is detected in
between tests by operator observation or other methods. It should be noted that this is not
automatic diagnostic coverage (as defined in e.g. IEC 61508), but it does imply that dangerous
faults are detected in between tests. For valves that are operated infrequently, the coverage will
be lower, and the cD for blowdown valves has therefore been set to 20 %. Based on information
from OREDA, the coverage for safe failures has been set to 10 % for ESV/XV and 0 % for
blowdown valves.

For control valves used also for shutdown purposes, a relatively high coverage of 50% has been
estimated based on the registered observation methods for the relevant failure modes in OREDA.
It is then implicitly assumed that the control valve is frequently operated resulting in a relatively
high coverage.

Occasionally, e.g. on some onshore plants, selected control valves may be used solely for
shutdown purposes (i.e. not normally operated). In this case the valves will be operated
infrequently, resulting in a significantly lower coverage factor. For control valves used only as
shutdown valves, it is therefore suggested that the coverage be reduced to 20%.

For PSV valves and deluge valves no coverage has generally been assumed.
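The coverage factor cD splits the dangerous failure rate λD into a detected part λDD = cD·λD and an undetected part λDU = (1 − cD)·λD. The sketch below illustrates this split; the λD inputs are illustrative values back-calculated from the λDU figures in Table 10, and the coverage factors are those suggested above:

```python
def split_rate(lam: float, coverage: float) -> tuple[float, float]:
    """Split a failure rate into (detected, undetected) parts using a
    coverage factor c: detected = c * lam, undetected = (1 - c) * lam."""
    return coverage * lam, (1.0 - coverage) * lam

# Dangerous failure rates per 10^6 hours (illustrative, back-calculated
# from the lambda_DU values in Table 10) with the coverage factors
# suggested above for each valve type.
for name, lam_d, c_d in [("ESV/XV", 3.0, 0.30),
                         ("Blowdown valve", 2.6, 0.20),
                         ("Control valve (frequently operated)", 4.4, 0.50)]:
    lam_dd, lam_du = split_rate(lam_d, c_d)
    print(f"{name}: lambda_DD = {lam_dd:.2f}, lambda_DU = {lam_du:.2f} per 10^6 hrs")
```

For ESV/XV, a total dangerous rate of 3.0 per 10^6 hours with 30% coverage reproduces the λDU of 2.1 per 10^6 hours given in the data tables.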

3.3.3 Fraction of Random Hardware Failures (r)

General
Based on input from discussions with experts as well as a study of available OREDA data,
estimates of r have been established. As discussed previously, r is the fraction of dangerous
undetected (DU) failures that can be “explained” by random hardware failures (hence 1-r is the
fraction of DU failures that can be explained by systematic failures). Below, a brief discussion of
the r values suggested in the detailed data sheets is given.

Reliability Data for Safety Instrumented Systems
PDS Data Handbook, 2010 Edition

Process Switch
For process switches the reported failure causes from OREDA are scarce and the r has been
estimated by expert judgement to be approximately 50%.

Process Transmitters
Data from OREDA on critical transmitter failures, results from operational reviews as well as
discussions with experts, all indicate that a significant proportion of the critical dangerous failures
for transmitters are caused by factors such as “excessive vibration”, “erroneous maintenance”
(e.g. ‘wrong calibration’, ‘erroneous specification of measurement area’ and ‘left in inhibit’) and
“incorrect installation”. As seen all these are examples of systematic failures which according to
OREDA are detectable (either by casual observation or during functional testing/maintenance).
Based on the observed failure cause distribution, an r = 30% has therefore been proposed.

Detectors
When going through data from OREDA phase V and VI for fire and gas detectors, it is found that
for some 40% of the critical failures the failure cause is reported as being due to ‘expected wear
and tear’, whereas some 60% of the critical failures are due to ‘maintenance errors’. When going in
more detail into the failure mechanisms, it is seen that the failures are described by e.g. ‘out of
adjustment’ (30%), ‘general instrument failure’ (28%), ‘contamination’ (21%), ‘vibration’ (10%)
and ‘maintenance/external/others’ (11%). Even though ‘contamination’ (i.e. typically a dirty lens)
and instrument failure can partly be explained by expected wear and tear, it is seen that many of the
critical failures are systematic ones. Based on this, an r = 40% has been proposed.

Control Logic
For control logic no updated OREDA data is available on failure causes and the proposed r values
are therefore entirely based on expert judgements. It has been assumed that for a standard
industrial PLC the major part of the failures can be explained by (systematic) software related
errors. Hence, a small r of 10% has been proposed. On the other hand, for a hardwired safety
system, it is assumed that a large part of the failure rate is due to random hardware failures, and a
large r of 80% has been suggested.

Valves
The reported failure causes in OREDA for critical failures are somewhat scarce and therefore
additional expert judgement has to be applied. Valve failures typically revealed upon functional
testing include a stuck valve, insufficient actuator force, a valve not shutting off tightly due to
excessive erosion or corrosion (unclean medium), incorrect installation, etc. Several of these
represent (detectable) systematic failures, hence it is evident that r is significantly lower than 1
(which would imply only random hardware failures).

For ESV/XV and X-mas tree valves an r = 50% has been proposed, mainly based on expert
judgement and reported failure causes for other types of valves. For pilot valves, there are more
reported failure causes and these indicate a relatively high proportion of systematic failures. Here
an r equal to 40% has been suggested based on the reported OREDA data.

For control valves used for shutdown purposes, a somewhat higher proportion of ‘wear and tear’
failures are expected, and therefore an r equal to 60% has been proposed. Reported failure causes
for deluge valves also indicate a relatively high proportion of ‘wear and tear’ related failures, and
an r equal to 60% has been proposed for deluge valves as well.

For PSV valves, limited data on failure causes is available from OREDA, and an r = 50% has
been suggested.

3.4 Reliability Data Uncertainties – Upper 70% Values

3.4.1 Data Uncertainties

The failure rates given in this handbook are best (mean) estimates based on the available data
sources listed in section 2.5. The data in these sources have mainly been collected on oil and gas
installations where environment, operating conditions and equipment types are comparable, but
not at all identical. The presented data are therefore associated with uncertainties due to factors
such as:

• The data collection itself; inadequate failure reporting, classification or data interpretation.
• Variations between installations; the failure rates are highly dependent upon the operating
conditions, and the equipment make will also vary between installations.
• Relevance of data / equipment boundaries; what components are included / not included in
the reported data? Have equipment parts been repaired or simply replaced, etc.?
• Assumed statistical model; is the standard assumption of a constant failure rate always
relevant for the equipment type under consideration?
• Aggregated operational experience; what is the total amount of operational experience
underlying the given estimates?

The last bullet, concerning the amount of operational experience, is related to the possibility of
establishing a confidence interval for the failure rate. Instead of only specifying a single mean
value, an interval likely to include the parameter is given. How likely the interval is to contain the
parameter is determined by the confidence level. E.g. a 90% confidence interval for λDU may be
given by: [0.1·10-6 per hour, 5·10-6 per hour]. This means that we are 90% confident that the
failure rate will lie within this interval. It is also possible to specify one-sided confidence intervals
where the lower bound of the interval is zero. E.g. a one-sided 70% interval for λDU may be given
by: [0, 4·10-6 per hour], implying that we can be 70% certain that the failure rate is lower than
4·10-6 per hour.
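For a constant failure rate, such a one-sided upper bound can be computed from the Poisson distribution: with n failures observed over T aggregated hours, the upper bound at confidence level γ is the λ for which P(X ≤ n | λT) = 1 − γ. A stdlib-only sketch (the function names and the bisection bracket are our own choices):

```python
import math

def poisson_cdf(n: int, mu: float) -> float:
    """P(X <= n) for X ~ Poisson(mu)."""
    return math.exp(-mu) * sum(mu ** k / math.factorial(k) for k in range(n + 1))

def upper_confidence_rate(n_failures: int, hours: float, conf: float = 0.70) -> float:
    """One-sided upper confidence bound for a constant failure rate:
    the lambda at which P(X <= n_failures | lambda * hours) = 1 - conf.
    Found by bisection; the Poisson CDF is decreasing in lambda."""
    lo, hi = 0.0, (n_failures + 1) * 100.0 / hours  # generous upper bracket
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if poisson_cdf(n_failures, mid * hours) > 1.0 - conf:
            lo = mid  # bound still too low
        else:
            hi = mid
    return (lo + hi) / 2.0

# Example: 2 DU failures observed over 2.5 million aggregated hours
mean_rate = 2 / 2.5e6
upper_70 = upper_confidence_rate(2, 2.5e6, conf=0.70)
print(f"mean = {mean_rate:.2e} per hour, upper 70% bound = {upper_70:.2e} per hour")
```

With 2 failures over 2.5·10^6 hours the upper 70% bound comes out around 1.4·10^-6 per hour, roughly 1.8 times the mean estimate of 0.8·10^-6 per hour, consistent with the mean-to-70% ratios seen in Table 10.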

In particular, in IEC 61508-2, section 7.4.7.4, it is stated that any failure rate data based on
operational experience should have a confidence level of at least 70% (a similar requirement is
found in IEC 61511-1, section 11.9.2). Hence, IEC 61508 and IEC 61511 indicate that when using
historic data one should be conservative and the recommended approach is to choose the upper
70% confidence value for the failure rate, as illustrated in Figure 2 below.

[Figure: a λ-axis running from 0 through the mean λ to a conservative λ, with the one-sided 70% confidence range for λ indicated.]

Figure 2 Illustration of failure rate with confidence level of 70%

Some data sources, such as OREDA, provide confidence intervals for the failure rate estimates,
whereas most sources, including this handbook, provide mean values only. However, in the next
section an attempt has been made to indicate failure rate values with a confidence level of at least
70% as required in the IEC 61508/61511 standards.


3.4.2 Upper 70% Values

When looking in more detail at the data dossiers in chapter 5, it is seen that there is a varying
amount of operational experience underlying the failure rate estimates. Hence, there will also be a
varying degree of confidence associated with the given data. Based on the aggregated operational
time, number of dangerous failures and some additional expert judgement, an attempt has been
made, whenever possible, to establish a one-sided 70% confidence interval and thereby provide
some upper 70% values for the dangerous undetected failure rate. The result of this exercise is
summarised in the table below (done only for the topside data, where the most detailed
information has been available).

Table 10 Estimated upper 70% confidence values for topside equipment

Component group       Component                                          λDU (mean) 1)   λDU (70%) 1)   Comments
Input Devices         Pressure switch                                    2.0·10-6        4.8·10-6
                      Proximity switch                                   3.0·10-6        -              Insufficient data available
                      Pressure transmitter                               0.3·10-6        0.5·10-6
                      Level transmitter                                  0.6·10-6        1.2·10-6
                      Temperature transmitter                            0.3·10-6        0.6·10-6
                      Flow transmitter                                   0.6·10-6        1.0·10-6
                      Gas detector, catalytic                            1.8·10-6        2.4·10-6
                      IR gas detector, point                             0.6·10-6        0.9·10-6
                      IR gas detector, line                              0.7·10-6        1.3·10-6
                      Smoke detector                                     0.7·10-6        0.9·10-6
                      Heat detector                                      0.6·10-6        0.9·10-6
                      Flame detector                                     0.8·10-6        1.2·10-6
                      H2S detector                                       0.5·10-6        0.8·10-6
                      ESD push button                                    0.5·10-6        1.1·10-6
Control Logic Units   AI / CPU / DO                                      -               -              Insufficient data available
Final Elements        ESV/XV (ex. pilot)                                 2.1·10-6        2.8·10-6
                      X-mas tree valve (ex. pilot)                       0.8·10-6        1.3·10-6
                      Blowdown valve (ex. pilot)                         2.1·10-6        2.8·10-6
                      Pilot/solenoid valve                               0.8·10-6        1.1·10-6
                      Control valve (ex. pilot, frequently operated)     2.2·10-6        3.5·10-6
                      Control valve (ex. pilot, shutdown service only)   3.5·10-6        5.5·10-6
                      Pressure relief valve, PSV                         2.2·10-6        3.2·10-6
                      Deluge valve                                       3.0·10-6        5.7·10-6
                      Fire damper                                        3.2·10-6        5.3·10-6
                      Circuit breaker / relay                            -               -              Insufficient data available
                      Downhole safety valve                              3.2·10-6        5.0·10-6       Based mainly on RNNS data

1) All failure rates given per hour

Some comments to the above table should be made:

• Establishing confidence intervals based on data from different sources and different
installations is not a straightforward task. The suggested upper 70% values should
therefore be taken as rough estimates only.
• As discussed in section 2.4.1, the generic data presented in this handbook include failure
mechanisms that are frequently excluded from e.g. manufacturer failure reports and
certificates. As such, the mean failure rates given in Table 3-5 are considered
representative when predicting the expected risk reduction from the equipment. Using the
upper 70% confidence values presented above should therefore be considered as a way of
increasing the robustness of the results e.g. when performing sensitivity analyses.
• In the SINTEF report “Guidelines for follow-up of Safety Instrumented Systems (SIS) in
the operating phase”, [23], a procedure for updating failure rates in operation is described.
For this purpose a conservative estimate for the λDU is required. Unless other equipment
specific values are available, the above upper 70% values can then be applied.

3.5 What is “Sufficient Operational Experience“? – Proven in Use


As an alternative to developing a product fully in line with the systematic capability requirements
given in IEC 61508, manufacturers may claim “proven in use” based on operational experience
and return data for a specific piece of equipment. A question which frequently comes up is “when
has sufficient operational experience been gained in order to claim proven in use?”

In the SINTEF report “Guidelines for follow-up of Safety Instrumented Systems (SIS) in the
operating phase”, [23], it has been discussed how much operational experience is required before
a reasonable confidence in a new failure rate estimate can be established. For SIS component data
(for detectors, sensors and valves) from OREDA, it can be found that the upper 95% confidence
limit for the rate of DU-failures is typically some 2–3 times the mean value of the failure rate,
[23], [24]. A suggested “cut-off” criterion for claiming proven in use can then be that the gathered
operational experience shall be sufficient to establish a failure rate estimate with comparable
confidence, i.e. the upper 95% confidence for λDU shall be within 2–3 times the mean value.

Based on this criterion and further work from [23] and [24], some suggested rules for claiming
“proven in use” for a given piece of field equipment are:


• Minimum aggregated time in service should be 2.5 million (2.5·106) operational hours, or at
least 2 dangerous undetected failures 4) should have been registered for the considered
observation period;
• Operational data should be available from at least 2 installations with comparable
operational environments;
• The data should be collected from the useful period of life of the equipment (typically this
implies that the first 6 months of operation should be excluded);
• A systematic data collection and reporting system should be implemented to ensure that all
failures have been formally recorded;
• It should be ensured that all equipment units included in the sample have been activated
(i.e. tested or demanded) at least once during the observation period (so that components
that have never been activated are not counted in).
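The quantitative parts of these rules can be captured in a simple screening check. The following is a rough sketch only; the function name and argument structure are our own, and the thresholds are those listed above:

```python
def proven_in_use_candidate(hours: float, du_failures: int,
                            n_installations: int,
                            all_units_activated: bool,
                            systematic_reporting: bool) -> bool:
    """Rough screening against the suggested 'proven in use' rules:
    >= 2.5 million aggregated hours OR >= 2 registered DU failures,
    data from >= 2 comparable installations, all units activated at
    least once, and a formal failure-reporting system in place."""
    enough_experience = hours >= 2.5e6 or du_failures >= 2
    return (enough_experience and n_installations >= 2
            and all_units_activated and systematic_reporting)

print(proven_in_use_candidate(3.0e6, 1, 2, True, True))   # True: enough hours
print(proven_in_use_candidate(1.0e6, 0, 3, True, True))   # False: too little experience
```

Note that this only screens the quantitative rules; the qualitative requirements (useful-life data only, comparable operational environments) still require engineering judgement.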

Additional requirements are given in the IEC standards. It should be noted that whereas IEC
61508 uses the term ‘proven in use’, IEC 61511 applies the term ‘prior use’. However, neither
IEC 61508 nor IEC 61511 quantify the required amount of operating experience, but states that
for field equipment there may be extensive operating experience that can be used as a basis for the
evidence [for prior use, ref. IEC 61511-1, section 11.5.3].

It may be argued that the above requirement concerning aggregated time in service is difficult to
fulfil for equipment other than e.g. fire and gas detectors. However, an important part of claiming
proven in use is to have a clear understanding of failure mechanisms, how the failure is detected
and repaired and what maintenance activities are required in order to keep the equipment in an “as
good as new condition”, [21]. For this purpose considerable operational experience is necessary
and focus should therefore be on improved data collection and failure registration. Furthermore, it
will require that the manufacturers obtain feedback on operational performance from the
operators, also beyond the warranty period of the equipment.

4) In general, an increasing number of failures will result in a narrower confidence interval, i.e. a higher
confidence in the estimated mean value. Hence, experienced DU failures may “compensate” for limited
operational experience (but will anyhow require significant operational time if a low failure rate is to be
claimed).

4 MAIN FEATURES OF THE PDS METHOD


This section briefly presents the main characteristics of the PDS method, the failure classification
scheme, and the reliability performance measures. Please note that the objective is not to give
a full and detailed presentation of the method, but to give an introduction to the model taxonomy
and the basic ideas. For a more comprehensive description of the PDS method and the detailed
formulas, see the updated PDS method handbook, [10].

4.1 Main Characteristics of PDS


Some main characteristics of the PDS method are:

 The method gives an integrated approach to random hardware and systematic failures. Thus,
the model accounts for relevant failure causes such as:
- normal ageing
- software failures
- stress induced failures
- design failures
- installation failures
- operational related failures
 The model includes all relevant failure types that may occur, and explicitly accounts for
dependent (common cause) failures and the effect of different types of testing (automatic/self-test
as well as manual observation).
 The model distinguishes between the ways a system can fail (failure mode), such as fail-to-
operate, spurious operation and non-critical failures.
 A main benefit of the PDS taxonomy is the direct relationship between failure causes and the
measures used to improve safety system performance.
 The method is simple and structured:
- highlighting the important factors contributing to loss of safety and spurious operation
- promoting transparency and communication
 As stressed in IEC 61508, it is important to incorporate the complete safety function when
performing reliability analyses. This is a core issue in PDS; it is function-oriented, and the
whole path from the sensors, via the control logic to the actuators is taken into consideration
when modelling the system.
 The PDS method has a somewhat different approach to systematic failures compared to IEC
61508. Whereas IEC 61508 only quantifies part of the total failure rate, represented by the
random hardware failures, PDS also attempts to quantify the contribution from systematic
failures (see Figure 3 below) and therefore gives a more complete picture of how the
equipment is likely to operate in the field.

4.2 Failure Causes and Failure Modes


Failures can be categorised according to failure cause and the IEC standards differentiate between
random hardware failure and systematic failures. In PDS the same split is made, but a somewhat
more detailed breakdown of the systematic failures has been performed, as indicated in Figure 3.

[Figure: failure classification tree. A failure is either a random hardware failure or a systematic failure. Random hardware failures comprise aging failures (random failures due to natural and foreseen stressors). Systematic failures are further split into software faults (programming error, compilation error, error during software update), design related failures (inadequate or erroneous specification, inadequate or erroneous implementation), installation failures (gas detector cover left on after commissioning, valve installed in wrong direction, incorrect sensor location), excessive stress failures (excessive vibration, unforeseen sand production, too high temperature) and operational failures (valve left in wrong position, sensor calibration failure, detector in override mode).]

Figure 3 Failure classification by cause of failure

The following failure categories (causes) are defined:

Random hardware failures are failures resulting from the natural degradation mechanisms of the
component. For these failures it is assumed that the operating conditions are within the design
envelope of the system.

Systematic failures are failures that can be related to a particular cause other than natural
degradation and foreseen stressors. Systematic failures are due to errors made during
specification, design, operation and maintenance phases of the lifecycle. Such failures can
therefore normally be eliminated by a modification, either of the design or manufacturing process,
the testing and operating procedures, the training of personnel or changes to documentation.

There are several possible schemes for classifying systematic failures. Here, a further split into
five categories has been suggested:

• Software faults may be due to programming errors, compilation errors, inadequate testing,
unforeseen application conditions, change of system parameters, etc. Such faults are present
from the point where the incorrect code is developed until the fault is detected either through
testing or through improper operation of the safety function. Software faults can also be
introduced during modification to existing process facilities, e.g. inadequate update of the
application software to reflect the revised shutdown sequences or erroneous setting of a high
alarm outside its operational limits.
• Design related failures, are failures (other than software faults) introduced during the design
phase of the equipment. It may be a failure arising from incorrect, incomplete or ambiguous
system or software specification, a failure in the manufacturing process and/or in the quality
assurance of the component. Examples are a valve failing to close due to insufficient actuator
force or a sensor failing to discriminate between true and false demands.
• Installation failures are failures introduced during the last phases prior to operation, i.e. during
installation or commissioning. If detected, such failures are typically removed during the first
months of operation and are therefore often excluded from databases. These failures may
however remain inherent in the system for a long period and can materialise


during an actual demand. Examples are erroneous location of e.g. fire/gas detectors, a valve
installed in the wrong direction or a sensor that has been erroneously calibrated during
commissioning.
• Excessive stress failures occur when stresses beyond the design specification are placed upon
the component. The excessive stresses may be caused either by external causes or by internal
influences from the medium. Examples may be damage to process sensors as a result of
excessive vibration or valve failure caused by unforeseen sand production.
• Operational failures are initiated by human errors during operation or maintenance/testing.
Examples are loops left in the override position after completion of maintenance or a process
sensor isolation valve left in closed position so that the instrument does not sense the medium.

The PDS method considers three failure modes:

• Dangerous (D). Safety system/module does not operate on demand (e.g. sensor stuck upon
demand)
• Safe (S). Safety system/module may operate without demand (e.g. sensor provides signal
without demand – potential spurious trip)
• Non-Critical (NONC). Main functions not affected (e.g. sensor imperfection, which has no
direct effect on control path)

The first two of these failure modes, dangerous (D) and safe (S) are considered "critical" in the
sense that they have a potential to affect the operation of the safety function. The safe failures
have a potential to cause a trip of the safety function, while the dangerous failures may cause the
safety function not to operate upon a demand. The failure modes above are further split into the
following categories:

• Dangerous undetected (DU)
Dangerous failures not detected by automatic self-test or personnel; i.e. only detected by a
functional test (or a true demand)
• Dangerous detected (DD)
Dangerous failures detected by automatic self-test or personnel
• Safe undetected (SU)
Safe (spurious trip) failures not detected by automatic self-test or personnel.
• Safe detected (SD)
Safe (spurious trip) failures detected by automatic self-test or personnel, which can prevent an
actual trip from occurring.

4.3 Reliability Performance Measures


This section presents the main measures for loss of safety used in PDS. All these reflect safety
unavailability of the function, i.e. the probability of a failure on demand. The measure for loss of
safety used in the IEC standards for systems operating in low demand mode is denoted PFD
(Probability of Failure on Demand), and this is also one of the measures adopted in the PDS method.

Note that for high demand mode systems IEC 61508 uses PFH (Probability of Failure per Hour)
as the measure for loss of safety. PFH is not discussed here but is treated separately in the updated
method handbook, [10].

4.3.1 Contributions to Loss of Safety

The potential contributors to loss of safety (safety unavailability) can be split into the following
categories:

1) Unavailability due to dangerous undetected (DU) failures. For a single component, these
failures occur with rate λDU. The average period of unavailability due to such a failure is τ/2
(where τ = period of functional testing), since the failure can have occurred anywhere inside
the test interval.

2) Unavailability due to failures not revealed during functional testing. This unavailability is
caused by “unknown” ("dormant"), dangerous and undetected failures which can only be
detected during a true demand. These failures are denoted Test Independent Failures (TIF), as
they are not detected during functional testing.

3) Unavailability due to known or planned downtime. This is the unavailability or downtime
caused by components which are either known to have failed or are taken out for
testing/maintenance.

Below, we discuss separately the loss of safety measures for the three failure categories, and
finally an overall measure for loss of safety is given.

4.3.2 Loss of Safety due to DU Failures - Probability of Failure on Demand (PFD)

The PFD quantifies the loss of safety due to dangerous undetected failures (with rate λDU), during
the period when it is unknown that the function is unavailable. The average duration of this period
is τ/2, where τ = test period. For a single (1oo1) component the PFD can be approximated by:

PFD ≈ λDU · τ/2

For a MooN voting logic (M<N), the main contribution to PFD (accounting for common cause
failures) is given by:

PFD ≈ CMooN · β · (λDU ⋅ τ/2); (M<N)

Here, CMooN is a modification factor depending on the voting configuration, ref. Table 8. Further,
for a NooN voting, we approximately have:

PFD ≈ N ⋅ λDU ⋅ τ/2
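The three approximations above can be evaluated directly, as sketched below. The test interval, β and C_MooN values in the example are illustrative placeholders; the actual C_MooN values must be taken from Table 8:

```python
def pfd_1oo1(lam_du: float, tau: float) -> float:
    """PFD for a single component: lambda_DU * tau / 2."""
    return lam_du * tau / 2.0

def pfd_moon(lam_du: float, tau: float, c_moon: float, beta: float) -> float:
    """Common-cause dominated PFD for MooN voting (M < N):
    C_MooN * beta * lambda_DU * tau / 2."""
    return c_moon * beta * pfd_1oo1(lam_du, tau)

def pfd_noon(lam_du: float, tau: float, n: int) -> float:
    """Approximate PFD for NooN voting: N * lambda_DU * tau / 2."""
    return n * pfd_1oo1(lam_du, tau)

# Illustration: lambda_DU = 2.1e-6 per hour (ESV/XV from Table 10),
# annual functional testing (tau = 8760 hours).
lam_du, tau = 2.1e-6, 8760.0
print(f"1oo1: {pfd_1oo1(lam_du, tau):.2e}")
# beta = 0.05 and C_1oo2 = 1.0 are illustrative placeholders only.
print(f"1oo2: {pfd_moon(lam_du, tau, c_moon=1.0, beta=0.05):.2e}")
print(f"2oo2: {pfd_noon(lam_du, tau, n=2):.2e}")
```

The example illustrates how redundancy with MooN voting (M < N) reduces the PFD to the common-cause fraction β of the single-component value, whereas NooN voting increases it roughly by the factor N.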

4.3.3 Loss of Safety due to Test Independent Failures (PTIF)

In reliability analysis it is often assumed that functional testing is “perfect” and as such detects
100% of the failures. In real life this is not necessarily the case; the test conditions may differ
from the real demand conditions, and some dangerous failures can therefore remain in the SIS
after the functional test. In PDS this is catered for by adding the probability of so-called test
independent failures (TIF) to the PFD.

PTIF = The Probability that the component/system will fail to carry out its intended
function due to a (latent) failure not detectable by functional testing (therefore the
name “test independent failure”)


It should be noted that if an imperfect testing principle is adopted for the functional testing, this
will lead to an increase of the TIF probability. For instance, if a gas detector is tested by
introducing a dedicated test gas to the housing via a special port, the test will not reveal a
blockage of the main ports. Another example is the use of partial stroke testing for valves. This
type of testing is likely to increase the PTIF for the valve, since the valve is not fully proof tested
during such a test.

Hence, for a single component, PTIF expresses the likelihood that a component which has just
been functionally tested will fail on demand (irrespective of the interval of manual testing). For
redundant components, the TIF contribution to loss of safety will for a MooN voting be given by
the general formula: CMooN · β · PTIF, where the numerical values of CMooN are assumed identical
to those used for calculating PFD, ref. Table 8.

4.3.4 Loss of Safety due to Downtime Unavailability – DTU

This represents the downtime part of the safety unavailability as described in category 3 above.
The DTU (Downtime Unavailability) quantifies the loss of safety due to:

• repair of dangerous failures, resulting in a period when it is known that the function is
unavailable due to repair. We refer to this unavailability as DTUR;
• planned downtime (or inhibition time) resulting from activities such as testing, maintenance
and inspection. We refer to this unavailability as DTUT.

Depending on the specific application, operational philosophy and the configuration of the
process plant and the SIS, it must be considered whether it is relevant to include (part of) the DTU
in the overall measure for loss of safety. For further discussions on how to quantify the DTUR and
DTUT contributions, reference is made to [10].

4.3.5 Overall Measure for Loss of Safety – Critical Safety Unavailability

The total loss of safety is quantified by the critical safety unavailability (CSU). The CSU is the
probability that the module/safety system (either due to a random hardware or a systematic
failure) will fail to automatically carry out a successful safety action on the occurrence of a
hazardous/accidental event. Thus, we have the relation:

CSU = PFD + PTIF

If we want to include also the “known” downtime unavailability, the formula becomes:

CSUTOT = PFD + PTIF + DTU
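As an illustration, using the single-component pressure switch values from the chapter 5 dossier (λDU = 2.0·10-6 per hour, PTIF = 10-3 for a clean medium) and an assumed annual functional test interval, the relations above give:

```python
lam_du = 2.0e-6   # per hour, pressure switch (chapter 5 dossier)
p_tif = 1.0e-3    # clean medium, from the same dossier
tau = 8760.0      # assumed annual functional test interval, in hours

pfd = lam_du * tau / 2.0   # loss of safety due to DU failures
csu = pfd + p_tif          # CSU = PFD + PTIF (DTU omitted in this sketch)
print(f"PFD = {pfd:.2e}, CSU = {csu:.2e}")
```

Here the PFD contribution (about 8.8·10-3) dominates over PTIF; shortening the test interval would reduce the PFD, while the PTIF contribution would remain constant.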

The contributions from PTIF and λDU to the Critical Safety Unavailability (CSU) are illustrated in
Figure 4. Failures contributing to the PTIF are systematic test independent failures. These failures
will repeat themselves unless modification/redesign is initiated. The contribution to the CSU from
such systematic failures has been assumed constant, independent of the frequency of functional
testing. Dangerous undetected (DU) failures are assumed eliminated at the time of functional
testing and will thereafter increase throughout the test period.

[Figure: critical safety unavailability (CSU) as a function of time over successive functional test intervals τ. The contribution from test independent failures (PTIF) is constant, while the contribution from dangerous undetected failures (PFD = λDU·τ/2 on average) grows from zero after each functional test. The time dependent CSU thus follows a sawtooth pattern around the average CSU, with the maximum CSU reached just before each test.]

Figure 4 Contributions to critical safety unavailability (CSU).

As seen from the figure, the CSU will vary over time. The CSU is at its maximum right
before a functional test and at its minimum right after a test. However, when we calculate the
CSU and the PFD we actually calculate the average value, as illustrated in the figure.


5 DATA DOSSIERS
The following pages present the data dossiers of the control and safety system components. The
dossiers are input to the tables in chapter 3 that summarise the generic input data to PDS analyses.
Note that the generic data by nature represent a wide variation of equipment populations and
should as such be considered on individual grounds when using the data for a specific application.

The data dossiers are based on the data dossiers in previous editions of the handbook, [12], [13],
[14], and have been updated according to the work done in the PDS-BIP and the new data
available.

Adopting the definitions used in OREDA, several severity class types are referred to in the data
dossiers. The definitions of the various types are, [3]:

• Critical failure: A failure which causes immediate and complete loss of a system's capability
of providing its output.
• Degraded failure: A failure which is not critical, but which prevents the system from providing its
output within specifications. Such a failure would usually, but not necessarily, be gradual or
partial, and may develop into a critical failure in time.
• Incipient failure: A failure which does not immediately cause loss of the system's capability of
providing its output, but if not attended to, could result in a critical or degraded failure in the
near future.
• Unknown: Failure severity was not recorded or could not be deduced.

Note that only the critical failures are included as a basis for the failure rate estimates (i.e. the
λcrit). From the description of the failure mode, the critical failures are further split into dangerous
and safe failures (i.e. λcrit = λD + λS). E.g. for shutdown valves a “fail to close on demand” failure
will be classified as dangerous whereas a “spurious operation” failure will be classified as a safe
(spurious trip) failure.

The following failure modes are referred to in the data dossier tables:

DOP - Delayed operation
EXL - External leakage
FTC - Fail to close on demand
FTO - Fail to open on demand
FTR - Fail to regulate
INL - Internal leakage
LCP - Leakage in closed position
LOO - Low output
NOO - No output
PLU - Plugged/choked
SHH - Spurious high level alarm
SLL - Spurious low level alarm
SPO - Spurious operation
STD - Structural deficiency
VLO - Very low output
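For a fail-safe-close shutdown valve, the dangerous/safe classification of the failure modes listed above might be sketched as follows. The assignment is application dependent (e.g. FTO would be the dangerous mode for a valve required to open on demand), so the sets below are illustrative only, not an authoritative mapping:

```python
# Illustrative mapping for a fail-safe-close shutdown valve; the
# assignment is application dependent, and these sets are examples,
# not a complete classification.
DANGEROUS = {"FTC", "DOP", "LCP"}   # prevents or degrades the safety action
SAFE = {"SPO"}                      # spurious operation -> potential trip

def classify(failure_mode: str) -> str:
    """Return the PDS failure mode category: D, S or NONC."""
    if failure_mode in DANGEROUS:
        return "D"
    if failure_mode in SAFE:
        return "S"
    return "NONC"

print(classify("FTC"))   # D - fail to close on demand
print(classify("SPO"))   # S - spurious operation
```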

5.1 Input Devices

5.1.1 Pressure Switch

PDS Reliability Data Dossier
Module: Input Devices
Component: Pressure Switch
Date of revision: 2009-12-18

Description / equipment boundaries: Includes sensing element / pneumatic switch
and process connections.

Recommended Values for Calculation

Total rate                  Coverage      Undetected rate
λD = 2.3 per 10⁶ hrs        cD = 0.15     λDU = 2.0 per 10⁶ hrs
λS = 1.1 per 10⁶ hrs        cS = 0.10     λSU = 1.0 per 10⁶ hrs
λcrit = 3.4 per 10⁶ hrs

PTIF = 1·10⁻³ (clean medium); 5·10⁻³ (unclean medium)
r = 0.5
Assessment
The given failure rate applies to pressure switches. The failure rate estimate is mainly based on
OREDA phase III data, older OREDA data and comparison with other generic data sources
(OREDA phase IV contains no data on process switches, whereas phase V contains only 6
switches). The estimated coverage is based on expert judgement: we assume 5% coverage due to
line monitoring of the connections and an additional 10% detection of dangerous failures due to
operator observation during operation. The coverage for safe failures has been set to 10%, since
there is a small probability that such failures are detected before the shutdown actually occurs.

The PTIF and the r estimates are mainly based on expert judgements. A summary of some of the
main arguments is provided in section 3.3.
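As a cross-check, the recommended undetected rates above follow from the total rates and coverage factors via λDU = λD·(1 − cD) and λSU = λS·(1 − cS), which reproduces the tabulated values after rounding. A minimal sketch in Python:

```python
# Sketch: derive the undetected rates from the recommended total rates and
# coverage factors for the pressure switch (all rates per 10^6 hours).

def undetected(total_rate, coverage):
    """The fraction (1 - c) of failures is assumed to remain undetected."""
    return total_rate * (1.0 - coverage)

lam_d, c_d = 2.3, 0.15    # dangerous failure rate and coverage
lam_s, c_s = 1.1, 0.10    # safe (spurious trip) failure rate and coverage

lam_du = undetected(lam_d, c_d)   # 1.955, rounded to 2.0 in the table
lam_su = undetected(lam_s, c_s)   # 0.99,  rounded to 1.0 in the table
lam_crit = lam_d + lam_s          # 3.4, as tabulated
```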

Failure Rate References

Recommended values for calculation in 2006-edition, [12]:
λcrit = 3.4 per 10⁶ hrs
λD = 2.3 per 10⁶ hrs
λDU = 1.6 per 10⁶ hrs
λSTU = 1.0 per 10⁶ hrs
Assumed cD = 30%
PTIF = 10⁻³ - 5·10⁻³ (clean / unclean medium)

Recommended values for calculation in 2003-edition, [14]:
λcrit = 3.4 per 10⁶ hrs
λDU = 0.2 per 10⁶ hrs
λSTU = 0.9 per 10⁶ hrs
Assumed cD = 90%
PTIF = 10⁻³ - 5·10⁻³ (without / with the sensing line)

Reliability Data for Safety Instrumented Systems
PDS Data Handbook, 2010 Edition

PDS Reliability Data Dossier (continued)
Module: Input Devices
Component: Pressure Switch

OREDA phase V database, [6]
Data relevant for conventional process switches.
λcrit = N/A (D: N/A, ST: N/A); observed coverage: cD = N/A, cST = N/A
Filter:
  Inv. Equipment Class = Process Sensors AND
  Inv. Design Class = Pressure AND
  Inv. Att. Type – process sensor = Switch AND
  (Inv. System = Gas processing OR Oil processing OR Condensate processing) AND
  Inv. Phase = 5
No. of inventories = 6
No. of critical (D or ST) failures = 0
Surveillance Time (hours) = 295 632

OREDA phase III database, [8]
Data relevant for conventional process switches.
λcrit = 1.4 (D: 1.39, ST: 0.0); observed coverage: cD = 100% (based on only one
failure)
Filter:
  Inv. Equipment Class = Process Sensors AND
  Inv. Design Class = Pressure AND
  Inv. Att. Type – process sensor = Switch AND
  (Inv. System = Gas processing OR Oil processing OR Condensate processing) AND
  Inv. Phase = 3
No. of inventories = 12
No. of critical D failures = 1
No. of critical ST failures = 0
Surveillance Time (hours) = 719 424

Exida [15]: Generic DP / pressure switch
λDU = 3.6 per 10⁶ hrs
λSU = 2.4 per 10⁶ hrs
SFF = 40%

T-Book [16]: Pressure sensor
Funct. 0.44
ST 1.02
Other crit 0.37
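The generic rates in entries like the OREDA phase III one above are point estimates of the form (number of observed failures) / (accumulated surveillance time); the single dangerous failure over 719 424 hours gives the listed D rate of 1.39 per 10⁶ hrs. A minimal sketch:

```python
# Sketch: OREDA-style point estimate of a failure rate, expressed per 10^6
# hours, from an observed failure count and accumulated surveillance time.

def rate_per_1e6_hrs(n_failures, surveillance_hours):
    """Observed failures divided by accumulated in-service time."""
    return n_failures / surveillance_hours * 1e6

# Phase III entry above: 1 critical dangerous failure over 719 424 hours.
lam_d = rate_per_1e6_hrs(1, 719_424)
print(f"{lam_d:.2f} per 10^6 hrs")  # prints "1.39 per 10^6 hrs"
```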

5.1.2 Proximity Switch (Inductive)

PDS Reliability Data Dossier
Module: Input Devices
Component: Inductive Proximity Switch
Date of revision: 2009-12-18

Description / equipment boundaries: Includes sensing element, electronics and
moving metal target.

Recommended Values for Calculation

Total rate                  Coverage      Undetected rate
λD = 3.5 per 10⁶ hrs        cD = 0.15     λDU = 3.0 per 10⁶ hrs
λS = 2.2 per 10⁶ hrs        cS = 0.10     λSU = 2.0 per 10⁶ hrs
λcrit = 5.7 per 10⁶ hrs

PTIF = 1·10⁻³
r = 0.3
Assessment
The estimated coverage is based on expert judgement: we assume 5% coverage due to line
monitoring of the connections and an additional 10% detection of dangerous failures due to
operator observation during operation. The coverage for safe failures has been set to 10%, since
there is a small probability that such failures are detected before a trip actually occurs. It should
be noted that (SIL rated) limit switches with significantly higher coverage factors are available.
In such cases, the mechanical installation of the parts must ensure that alignment problems are
minimised.

The PTIF and the r estimates are mainly based on expert judgements. The PTIF is assumed to be
relatively high since mechanical alignment of the interacting parts is often a problem, and such
failures may not be revealed by inadequate testing. Similarly, a relatively high proportion of
systematic failures is assumed, resulting in a low r factor.

Failure Rate References

Internal SINTEF project data, applied for SIL classified proximity switches:
λDU = 2.0 per 10⁶ hrs

Exida [15]: Generic position limit switch
λDU = 3.6 per 10⁶ hrs
λSU = 2.4 per 10⁶ hrs
SFF = 40%

T-Book [16]: Electronic limit switch
Failure to change state: 1.9 per 10⁷ hrs
Spurious change of state: 5.2 per 10⁷ hrs


5.1.3 Pressure Transmitter

PDS Reliability Data Dossier
Module: Input Devices
Component: Pressure Transmitter
Date of revision: 2009-12-18

Description / equipment boundaries: The pressure transmitter includes the
sensing element, local electronics and the process isolation valves / process
connections.

Recommended Values for Calculation

Total rate                  Coverage      Undetected rate
λD = 0.8 per 10⁶ hrs        cD = 0.60     λDU = 0.3 per 10⁶ hrs
λS = 0.5 per 10⁶ hrs        cS = 0.30     λSU = 0.4 per 10⁶ hrs
λcrit = 1.3 per 10⁶ hrs

PTIF = 5·10⁻⁴
r = 0.3
Assessment
The failure rate estimate is mainly based on data from OREDA phase III. Insufficient data were
found in OREDA phase IV to update this estimate, and no data on pressure transmitters are
available from phases V, VI or VII. The rate of DU failures is estimated assuming a coverage of
60% for dangerous failures. This is based on the self test implemented in the transmitter as well
as casual observation by the control room operator (the latter assumes that the signal can be
observed on the VDU and compared with other signals). If a higher coverage is claimed, e.g. due
to automatic comparison between transmitters, this should be specifically documented / verified.
The coverage of safe failures has been estimated by expert judgement to 30% (as compared to
50% in the previous 2006-edition), since safe failures will be difficult to detect before a trip has
actually occurred.

The PTIF is entirely based on expert judgements. The estimated r is based on reported failure
causes in OREDA as well as expert judgements. A summary of some of the main arguments is
provided in section 3.3.
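For a rough feel of how these numbers are used, a common single-component approximation (an illustration under stated assumptions, not a formula quoted from this dossier) takes PFDavg ≈ λDU·τ/2 for a proof test interval τ, with the test-independent contribution PTIF added on top. The annual test interval below is an assumed example value:

```python
# Illustration (assumptions, not handbook text): approximate loss of safety
# for a single pressure transmitter, using the recommended λDU and PTIF above.

lam_du = 0.3e-6     # dangerous undetected rate, per hour (0.3 per 10^6 hrs)
tau = 8760.0        # assumed proof test interval: 1 year, in hours
p_tif = 5e-4        # test-independent failure probability, from the table

pfd_avg = lam_du * tau / 2.0             # contribution from random hardware failures
total_unavailability = pfd_avg + p_tif   # with the test-independent contribution added
```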
Failure Rate References

Recommended values for calculation in 2006-edition, [12]:
λcrit = 1.3 per 10⁶ hrs
λD = 0.8 per 10⁶ hrs
λDU = 0.3 per 10⁶ hrs
λSTU = 0.3 per 10⁶ hrs
Assumed cD = 60%
PTIF = 5·10⁻⁴

Recommended values for calculation in 2004-edition, [13]:
λcrit = 1.3 per 10⁶ hrs
λD = 0.8 per 10⁶ hrs
λDU = 0.3 per 10⁶ hrs
λSTU = 0.4 per 10⁶ hrs
Assumed cD = 60%
PTIF = 3·10⁻⁴ - 5·10⁻⁴ (for smart / conventional respectively)

PDS Reliability Data Dossier (continued)
Module: Input Devices
Component: Pressure Transmitter

Recommended values for calculation in 2003-edition, [14]:
λcrit = 1.3 per 10⁶ hrs
λDU = 0.1 per 10⁶ hrs
λSTU = 0.4 per 10⁶ hrs
Assumed cD = 90%
PTIF = 3·10⁻⁴ - 5·10⁻⁴ (for smart / conventional respectively)

OREDA phase IV database, [6]
Data relevant for conventional pressure transmitters.
λcrit = N/A (D: N/A, ST: N/A); observed coverage: cD = N/A, cST = N/A
Filter:
  Inv. Equipment Class = Process Sensors AND
  Inv. Design Class = Pressure AND
  Inv. Att. Type – process sensor = Transmitter AND
  (Inv. System = Gas processing OR Oil processing OR Condensate processing) AND
  Inv. Phase = 4
No. of inventories = 21
No. of critical (D or ST) failures = 0
Surveillance Time (hours) = 332 784

OREDA phase III database, [8]
Data relevant for conventional pressure transmitters.
λcrit = 1.3 (D: 0.64, ST: 0.64); observed coverage: cD = 100% (calculated for
transmitters having some kind of self-test arrangement only)
Filter criteria: TAXCOD='PSPR' .AND. FUNCTN='OP' .OR. 'GP'
No. of inventories = 186
Total no. of critical failures = 6
Cal. time = 4 680 182 hrs

Exida [15]: Generic smart DP / pressure transmitter
λDU = 0.6 per 10⁶ hrs
SFF = 60%

T-Book [16]: Pressure transmitter
Fail. to obtain signal: 0.83

T-Book [16]: Pressure difference transmitter / pressure difference cell
Fail. to obtain signal: 0.91


5.1.4 Level (Displacement) Transmitter

PDS Reliability Data Dossier
Module: Input Devices
Component: Level (Displacement) Transmitter
Date of revision: 2009-12-18

Description / equipment boundaries: The level transmitter includes the sensing
element, local electronics and the process isolation valves / process
connections.
Remarks: Only displacement level transmitters are included in the OREDA phase
III, IV and V data. No data from later phases available.

Recommended Values for Calculation

Total rate                  Coverage      Undetected rate
λD = 1.4 per 10⁶ hrs        cD = 0.60     λDU = 0.6 per 10⁶ hrs
λS = 1.6 per 10⁶ hrs        cS = 0.30     λSU = 1.1 per 10⁶ hrs
λcrit = 3.0 per 10⁶ hrs

PTIF = 5·10⁻⁴
r = 0.3
Assessment
The failure rate estimate is mainly based on data from the OREDA phase III database, with
additional data from OREDA phases IV and V. The rate of DU failures is estimated by assuming
a coverage of 60% for dangerous failures. This is based on the self test implemented in the
transmitter as well as casual observation by the control room operator (the latter assumes that
the signal can be observed on the VDU and compared with other signals). If a higher coverage is
claimed, special documentation / verification should be required. The coverage of safe failures
has been estimated by expert judgement to 30% (as compared to 50% in the previous
2006-edition), since safe failures will be difficult to detect before a trip has actually occurred.

The PTIF is entirely based on expert judgements. The estimated r is based on reported failure
causes in OREDA as well as expert judgements. A summary of some of the main arguments is
provided in section 3.3.

Failure Rate References

Recommended values for calculation in 2006-edition, [12] and 2004-edition, [13]:
λcrit = 3.0 per 10⁶ hrs
λD = 1.4 per 10⁶ hrs
λDU = 0.6 per 10⁶ hrs
λSTU = 0.8 per 10⁶ hrs
Assumed cD = 60%
PTIF = 5·10⁻⁴

Recommended values for calculation in 2003-edition, [14]:
λcrit = 3.0 per 10⁶ hrs
λDU = 0.1 per 10⁶ hrs
λSTU = 0.8 per 10⁶ hrs
Assumed cD = 90%
PTIF = 3·10⁻⁴ - 5·10⁻⁴ (for smart / conventional respectively)

49
Discovering Diverse Content Through
Random Scribd Documents
The Project Gutenberg eBook of
The Boy Fortune Hunters in
Yucatan
This ebook is for the use of anyone anywhere in the United States
and most other parts of the world at no cost and with almost no
restrictions whatsoever. You may copy it, give it away or re-use it
under the terms of the Project Gutenberg License included with this
ebook or online at www.gutenberg.org. If you are not located in the
United States, you will have to check the laws of the country where
you are located before using this eBook.

Title: The Boy Fortune Hunters in Yucatan

Author: L. Frank Baum

Illustrator: George A. Rieman

Release date: October 20, 2020 [eBook #63508]


Most recently updated: October 18, 2024

Language: English

Credits: Produced by Mary Glenn Krause, Aunt Julie Turner, Uncle


Tim Turner, Stephen Hutcheson, and the Online
Distributed
Proofreading Team at https://www.pgdp.net

*** START OF THE PROJECT GUTENBERG EBOOK THE BOY


FORTUNE HUNTERS IN YUCATAN ***
The Boy
Fortune Hunters
in Yucatan
By
FLOYD AKERS
Author of
“The Boy Fortune Hunters in Alaska”
“The Boy Fortune Hunters in Panama”
“The Boy Fortune Hunters in Egypt”
“The Boy Fortune Hunters in China”

CHICAGO
THE REILLY & BRITTON CO.
PUBLISHERS

i
COPYRIGHT 1910 BY
THE REILLY & BRITTON CO.

ii
CONTENTS

CHAPTER PAGE
I We Meet Lieutenant Allerton 9
II We Listen to a Strange Proposition 26
III We Undertake the Yucatan Adventure 39
IV We Scent Danger Ahead 55
V We Inspect a Novel Aerial Invention 65
VI We See an Astonishing Thing 72
VII We Outwit the Enemy 80
VIII We Fight a Good Fight 95
IX We Find Ourselves Outnumbered 105
X We Escape Annihilation 113
XI We Enter the City of Itza 125
XII We Sight the Quarry 137
XIII We Seek Safety in Flight 150
XIV We Interview the Red-Beard 164
XV We Become Prisoners of the Tcha 179
XVI We View the Hidden City 191
XVII We are Condemned by the Tribunal 204
XVIII We Argue with the High Priestess 214
XIX We Save a Valuable Life 231
XX We Find the Tcha Grateful 239
XXI We Lose Poor Pedro 254
XXII We Face a Deadly Peril 265
XXIII We Become Aggressive 277
XXIV We Witness a Daring Deed 287
XXV We Repel the Invaders 298
XXVI We Hear Strange News 314
XXVII We Settle an Old Score 332
XXVIII We Win and Lose 340

The Boy
Fortune Hunters
in Yucatan
CHAPTER I
WE MEET LIEUTENANT ALLERTON

“What do you say, Sam, to making a stop at Magdalena


Bay?” asked Uncle Naboth, as we stood on the deck of
the Seagull, anchored in Golden Gate Harbor.

“Magdalena!” I exclaimed; “why, it’s a wilderness.”

“I know,” he replied; “but the torpedo fleet is there,


doin’ target practice, an’ Admiral Seebre has asked us to
drop some mail an’ dispatches there, as well as a few
supplies missed by the transport that left last Tuesday.”

“Oh, Admiral Seebre,” I rejoined. “That puts a different 10


face on the matter. We’ll stop anywhere the admiral
wants us to.” Merchantmen though we are, none of us
can fail in genuine admiration for Uriel Seebre, the most
typical sea dog on earth—or on water, rather.

So we waited to ship the supplies and mail, and by


sunset were shrouded in golden glory as we slowly
steamed out of the harbor and headed south.

It’s a pretty trip. Past old Santa Barbara, the man-made


harbor of San Pedro—the port of Los Angeles—and
along the coast of beautiful Coronado, we hugged the
shore line to enjoy the splendid panorama of scenery;
but once opposite the Mexican coast we stood out to
sea until, three days afterward, we made Magdalena
Bay and dropped anchor amid the rakish, narrow-nosed
fleet of the torpedo flotilla.

There isn’t much to see at Magdalena. The bay itself is


fairly attractive, but the shore is uninteresting and
merely discloses a motley group of frame and adobe
huts. Yet here the Pacific Squadron comes semiannually
to practise target shooting.

As it was four o’clock when our anchor reeled out we 11


decided to lie in the bay until sunrise next morning. We
signaled “mail and supplies” and two boats put out from
the Paul Jones, the flagship of the miniature but
formidable fleet, and soon boarded us. They were in
charge of Lieutenant Paul Allerton, whom we found a
very decent fellow, without a hint of that contempt for
merchantmen affected by so many Annapolis fledglings.

We soon had the stores lowered—they were not many—


and delivered the mail pouch and dispatch box, getting
a formal receipt for them. As supercargo and purser, I
attended to this business personally.

“I’m glad to have met you, Mr. Steele,” said Lieutenant


Allerton, “and to have seen your famous boat, the
Seagull. We’ve heard a good deal of your curious
adventures, you know.”

I laughed, and Uncle Naboth Perkins, who stood beside


me, remarked:

“Our days of adventure are about over, I guess, Mr.


Allerton.”
“Have you bagged so much treasure you are ready to
retire?” asked the officer.

“It isn’t that,” replied my uncle. “We’ve been tramps a 12


long time, an’ sailed in many seas; but the life’s a bit
too strenuous for us, so’s to speak. These boys o’ ours
are reckless enough to git us inter a heap o’ trouble, an’
keep us there, too, if we didn’t call a halt. So, seein’ as
life counts for more’n anything else, Cap’n Steele an’ I
hev made the youngsters turn over a new leaf. We’re
now on our way to the Atlantic, ’round the Horn, an’
perpose to do peaceful tradin’ from now on.”

Allerton listened with thoughtful interest. He seemed on


the point of saying something in return, but hesitated
and then touched his cap.

“I must be going, gentlemen. You know how grateful we


exiles are for the mail and tinned stuff, and I tender the
thanks of the fleet for your courtesy.”

Then he went away and we considered the incident


closed.

We were a strangely assorted group as we congregated


on the deck of our beautiful craft the Seagull, after
dinner that evening, and perhaps here is an excellent
opportunity to introduce ourselves to the reader.

Our ship, which we believe has been termed “the pride 13


of the merchant marine,” was constructed under our
personal supervision, and sails or steams as we desire.
It is about a thousand-tons burden, yacht built, and as
trim as a man-o’-war. It is commanded by my father,
Captain Richard Steele, one of the most experienced
and capable sailors of his time. He is one-third owner,
and I have the same interest, being proud to state that
I furnished my share of the money from funds I had
personally earned. Uncle Naboth Perkins, my dead
mother’s only brother, owns the remaining third.

Uncle Naboth is a “natural born trader” and a wonder in


his way. He isn’t a bit of a practical sailor, but has
followed the seas from his youth and has won the
confidence and esteem of every shipper who ever
entrusted a cargo to his care. He has no scholastic
learning but is very wise in mercantile ways and is noted
for his sterling honesty.

My father has a wooden leg; he is old and his face 14


resembles ancient parchment. He uses words only for
necessary expression, yet his reserve is neither morose
nor disagreeable. He knows how to handle the Seagull
in any emergency and his men render him alert
obedience because they know that he knows.

I admit that I am rather young to have followed the


seas for so long. I can’t well object to being called a
boy, because I am a boy in years, and experience hasn’t
made my beard grow or added an inch to my height. My
position on the Seagull is that of purser and assistant
supercargo. In other words, I keep the books, check up
the various cargoes, render bills and pay our expenses.
I know almost as little of navigation as Uncle Naboth,
who is the most important member of our firm because
he makes all our contracts with shippers and attends to
the delivery of all cargoes.

Over against the rail stands Ned Britton, our first mate.
Ned is father’s right bower. They have sailed together
many years and have acquired a mutual understanding
and respect. Ned has been thoroughly tested in the
past: a blunt, bluff sailor-man, as brave as a lion and as
guileless as a babe. His strong point is obeying orders
and doing his duty on all occasions.

Here is our second mate, too, squatted on a coil of rope 15


just beside me—a boy a year or two younger than I am
myself. I may as well state right here that Joe Herring is
a mystery to me, and I’m the best and closest friend he
has in all the world. He is long and lanky, a bit tall for
his age and has muscles like steel. He moves slowly; he
speaks slowly; he spends hours in silent meditation. Yet
I have seen this boy in action when he moved swift as a
lightning bolt—not striking at random, either, but with
absolute intelligence.

Once Joe was our cabin boy, promoted to that station


from a mere waif. Now he is second mate, with the full
respect of Captain Steele, Ned Britton and the entire
crew. He wears a common sailor suit, you’ll notice, with
nothing to indicate his authority. When he is on duty
things go like clockwork.

And now I shall probably startle you by the statement 16


that Joe is the rich man, the financial autocrat, of all our
little group. His bank account is something to
contemplate with awe and reverence. He might own a
dozen more expensive ships than the Seagull, yet I
question if you could drive him away from her deck
without making the lad absolutely miserable. Money
counts for little with Joe; his associates and his simple if
somewhat adventurous life completely satisfy him.

Reclining at my feet is a burly youth rejoicing in the


name of Archibald Sumner Ackley. He isn’t a sailor; he
isn’t a passenger even; Archie is just a friend and a
chum of Joe’s and mine, and he happens to be aboard
just because he won’t quit and go home to his anxious
parents in Boston.

I fear that at the moment of this introduction Archie


doesn’t show up to the best advantage. The boy is
chubby and stout and not exactly handsome of feature.
He wears a gaudy checked flannel shirt, no cravat,
yellowish green knickerbockers, and a brown jacket so
marvelously striped with green that it reminds one of a
prison garb. I never can make out where Archie
manages to find all his “striking” effects in raiment; I’m
sure no other living being would wear such clothes. If
any one ever asks: “Where’s Archie?” Uncle Naboth has
a whimsical way of putting his hand to his ear and
saying: “Hush; listen!”

With all this I’m mighty fond of Archie, and so are we 17


all. Once on a time we had to get used to his
peculiarities, for he is stubborn as a mule, denies any
one’s right to dictate to him and is bent on having his
own way, right or wrong. But the boy is true blue in any
emergency; faithful to his friends, even to death; faces
danger with manly courage and is a tower of strength in
any encounter. He sails with the Seagull because he
likes the life and can’t be happy, he claims, away from
Joe and me.

And now you know all of us on the quarter deck, and I’ll
just say a word about our two blacks, Nux and Bryonia.
They are South Sea Islanders, picked up by Uncle
Naboth years ago and devoted now to us all—especially
to my humble self. We’ve been together in many
adventures, these ebony skinned men and I, and more
than once I have owed my life to their fidelity. Nux is
cabin master and steward; he’s the stockiest of the big
fellows. Bryonia is ship’s cook, and worthy the post of
chef at Sherry’s. He can furnish the best meal from the
least material of any one I’ve ever known, and with our
ample supplies you may imagine we live like pigs in
clover aboard the Seagull.

Our crew consists of a dozen picked and tested men, all 18


but one having sailed with us ever since the ship was
launched. We lost a man on the way back from China a
while ago, and replaced him in San Francisco with a
stalwart, brown-skinned Mexican, Pedro by name. He
wasn’t one of the lazy, “greaser” sort, but an active
fellow with an intelligent face and keen eyes. Captain
Hildreth of the Anemone gave us the man, and said he
had given good service on two long voyages. But Pedro
had had enough of the frozen north by that time and
when he heard we were short a man begged to join us,
knowing we were headed south. Captain Hildreth, who
is our good friend, let us have him, and my father is
pleased with the way the Mexican does his work.

The Seagull was built for commerce and has been 19


devoted mainly to commerce; yet we do not like the
tedium of regular voyages between given ports and
have been quite successful in undertaking “tramp”
consignments of freight to be delivered in various far-off
foreign lands. During these voyages we have been led
more than once into dangerous “side” adventures, and
on our last voyage Joe, Archie and I had barely escaped
with our lives—and that by the merest chance—while
engaged in one of these reckless undertakings. It was
this incident that caused Uncle Naboth and my father to
look grave and solemn whenever their eyes fell upon us
three, and while we lay anchored in San Francisco
harbor they announced to me their decision to avoid
any such scrapes in the future by undertaking to cover a
regular route between Cuba and Key West, engaging in
the tobacco and cigar trade.

I did not fancy this arrangement very much, but was


obliged to submit to my partners and superiors. Archie
growled that he would “quit us cold” at the first Atlantic
port, but intended to accompany us around the Horn,
where there might be a “little excitement” if bad
weather caught us. Joe merely shrugged his shoulders
and refrained from comment. And so we started from
the Golden Gate en route for Cuba, laden only with our
necessary stores for ballast, although our bunkers were
full of excellent Alaska coal.

The stop in Magdalena Bay would be our last one for 20


some time; so, being at anchor, with no duties of
routine confronting us, we sat on deck enjoying the
beautiful tropical evening and chatting comfortably
while the sailors grouped around the forecastle and
smoked their pipes with unalloyed and unaccustomed
indolence.

The lights of the near-by torpedo fleet were beginning


to glimmer in the gathering dusk when a small boat
boarded us and we were surprised to see Lieutenant
Allerton come on board again and approach our
company. This time, however, he wore civilian’s clothes
instead of his uniform.

Greeting us with quiet respect he asked:

“May I sit down, gentlemen? I’d like a little talk with


you.”

Captain Steele pointed to a chair at his side.

“You are very welcome, sir,” he answered.


Allerton sat down.

“The despatches you brought,” said he, “conveyed to


me some joyful news. I have been granted a three
months’ leave of absence.”

As he paused I remarked, speaking for us all:

“You are to be congratulated, Lieutenant. Isn’t that a


rather unusual leave?”

“Indeed it is,” he returned, laughingly. “I’ve been trying 21


for it for nearly two years, and it might not have been
allowed now had I not possessed an influential friend at
Washington—my uncle, Simeon Wells.”

“Simeon Wells!” ejaculated Uncle Naboth. “What, the


great electrician who is called ‘the master Wizard’?”

“I believe my uncle has gained some distinction in


electrical inventions,” was the modest reply.

“Distinction! Why, I’m told he can skin old Edison to a


frazzle,” remarked Archie, who was not very choice in
his selection of words, as he rolled over upon his back
and looked up at the officer wonderingly. “Didn’t Wells
invent the great storage battery ‘multum in parvo’, and
the new aeroplane motors?”

“Uncle Simeon is not very ambitious for honor,” said Mr.


Allerton quietly. “He has given the government the
control of but few—a very few—of his really clever
electrical devices. His greatest delight he finds in
inventing. When he has worked out a problem and
brought it to success he cares little what becomes of it.
That, I suppose, is the mark of an unpractical genius—
unpractical from a worldly sense. Still, his relations with
the government, limited as they are, proved greatly to
my advantage; for when he found my heart was set on
this leave of absence, he readily obtained it.”

“The request of Simeon Wells ought to accomplish much 22


more than that, considering his invaluable services,” I
suggested.

“For that reason he has warned me he will not interfere


again in my behalf,” answered Allerton; “so I must make
the most of this leave. It is this consideration that
induced me to come to see you to-night.”

We remained silent, waiting for him to proceed.

“I understood from what you said this afternoon that


you are bound for Cuba, by way of the Horn,” he
resumed, after a moment of thought.

Captain Steele nodded.

“You have a fast ship, you leave immediately, and you


are going very near to my own destination,” continued
Allerton. “Therefore, I have come to ask if you will
accept me as a passenger.”

I cast an inquiring glance at Uncle Naboth, and after


meeting his eye replied:

“We do not carry passengers, Lieutenant Allerton; but it 23


will please us to have you accept such hospitality as we
can offer on the voyage. You will be a welcome guest.”

He flushed, as I could see under the light of the


swinging lantern: for evening had fallen with its usual
swift tropical custom. And I noticed, as well, that Mr.
Allerton seemed undecided how to handle what was
evidently an unexpected situation.

“I—I wanted to take my man with me,” he stammered.

“Your servant, sir?”

“One of our seamen, whose leave I obtained with my


own. He is a Maya from Yucatan.”

“Bring him along, sir,” said my father, heartily; “it’s all


the same to us.”

“Thank you,” he returned, and then sat silent, swinging


his cap between his hands. Allerton had a thin, rather
careworn face, for so young a man, for he could not be
more than thirty at most. He was of medium height, of
athletic build, and carried himself erect—a tribute to his
training at the Naval Academy and his service aboard
ship. There was something in the kindly expression of
his deep-set, dark eyes and the pleasantly modulated
tones of his voice that won our liking, and I am sure we
were sincere in declaring he would be a welcome guest
on the ensuing voyage.

“I—I have several boxes—chests,” he said, presently. 24

“We’ve room for a cargo, sir,” responded Uncle Naboth.

“At what hour do you sail?” inquired Allerton, seeming


well pleased by our consideration for him.

“Daybreak, sir.”

“Then may we come aboard to-night?”


“Any hour you like,” said I. It was Joe’s watch, so I
introduced him more particularly to our second mate, as
well as to the other members of our party.

“Shall we send for you, Mr. Allerton?” asked Joe.

“Oh, no,” he replied. “The ship’s boat will bring me


aboard, thank you. The boys are sorry to see me go so
suddenly, but I feel I must take advantage of this
fortunate occasion to secure passage. I might wait here
a week or two before any sort of tub came this way, and
I need every minute of my leave. Cuba lies only a
hundred and twenty miles across the channel from
Yucatan.”

With this he returned to the torpedo boat he served, 25


and so accustomed were we to little surprises of this
nature that we paid small heed to the fact that we had
accepted an unlooked for addition to our party for the
long voyage that loomed ahead of us. We were quite a
happy family aboard the Seagull, and Lieutenant
Allerton appeared to be a genial fellow who would add
rather than detract from the association we enjoyed.

26
CHAPTER II
WE LISTEN TO A STRANGE PROPOSITION

I did not hear our passengers come aboard that night,


being sound asleep. By the time I left my room next
morning we were under way and steaming briskly over
one of those quiet seas for which the Pacific is
remarkable.

In the main cabin I found Lieutenant Allerton sitting at


breakfast with Captain Steele, Uncle Naboth and Archie.
Joe was snoozing after his late watch and Ned Britton
was on deck. Behind our guest’s chair stood the
handsomest Indian I have ever seen, the Maya he had
mentioned to us and whose name was Chaka. Our
South Sea Islanders were genuine black men, but
Chaka’s skin was the color of golden copper. He had
straight black hair, but not the high cheek bones of the
typical American Indian, and the regularity of his
features was certainly remarkable. His eyes were large,
frank in expression and dark brown in color; he seemed
intelligent and observant but never spoke unless first
addressed, and then in modest but dignified tones. His
English was expressive but not especially fluent, and it
was easy to understand that he had picked up the
language mainly by hearing it spoken.
My first glance at Chaka interested me in the fellow, yet 27
of course during that first meeting I discovered few of
the characteristics I have described. At this time he was
silent and motionless as a statue save when opportunity
offered to serve his master.

The lieutenant wore this morning a white duck mess


costume for which he apologized by saying that his
civilian wardrobe was rather limited, and if we would
pardon the formality he would like to get all the wear
from his old uniforms that was possible, during the
voyage.

We told him to please himself. I thought he looked more
manly and imposing in naval uniform than in “cits.”

“But I don’t see how he can be shy of clothes,”
remarked Archie, after breakfast, as we paced the deck
together. “Allerton lugged seven big boxes aboard last
night; I saw them come up the side; and if they don’t
contain his togs I’d like to know what on earth they do
hold.”

“That’s none of our affair, Archie,” I remarked.

“Do you think this thing is all straight and above board,
Sam?” asked my friend.

“Of course, Archie. He’s a lieutenant in the United States
navy, and has a regular leave of absence. He joined us
with the approval and good will of his commander.”

“I know; but it seems queer, somehow. Take that
copper-faced fellow, for instance, who looks more like a
king than a servant; what has Allerton got such a body
guard as that for? I never knew any other naval officer
to have the like. And three months’ leave—on private
business. Suff’rin’ Pete! what’s that for?”

“You might ask the lieutenant,” I replied, indifferently.

“Then there’s the boxes; solid redwood and clamped
with brass; seven of ’em! What’s in ’em, Sam, do you
suppose?”

“Archie boy, you’re getting unduly suspicious. And you’re
minding some one else’s business. Get the quoits and
I’ll toss a game with you.”

Our passenger was very quiet during the following day
or two. He neither intruded nor secluded himself, but
met us frankly when we were thrown together, listened
carefully to our general conversation, and refrained from
taking part in it more than politeness required.

Joe thought the young fellow seemed thoughtful and ill
at ease, and confided to me that he had noticed Allerton
now made more of a companion of the Maya than a
dependant, although the man, for his part, never abated
his deferent respect. Chaka seemed to regard Allerton
with the love and fidelity of a dog for its master; yet if
any of the sailors, or even Nux or Bryonia, spoke to the
Indian with undue familiarity Chaka would draw himself
up proudly and assume the pose of a superior.

We were much interested in the personality of these
two unusual personages—that is, Joe, Archie and I were
—and we often discussed them among ourselves. We
three boys, being chums of long standing, were much
together and had come to understand one another
pretty well. We all liked Paul Allerton, for there was
something winning in his personality. As for the Indian,
Chaka, he did not repel us as much as he interested and
fascinated us. One night big Bry, whom we admitted to
perfect familiarity because of his long service, said to
us:

“That Maya no common Injun, Mars’ Sam, yo’ take my
word. He say his country Yucatan, an’ that place
Yucatan don’ mean nuthin’ to me, nohow, ’cause I never
been there. But, wherever it is, Chaka’s people mus’ be
good people, an’ Chaka hisself never had any other
marsa than Mars’ Allerton.”

“He has been serving in the navy, Bry.”

“That don’ count, sah. Yo’ know what I mean.”

On the evening of the third day out from Magdalena we
were clustered as usual upon the deck, amusing
ourselves by casual conversation, when Lieutenant
Allerton approached us and said:

“I’d like to have a few moments’ confidential talk with
whoever is in authority here. It’s rather hard for a
stranger to determine who that may be, as you all seem
alike interested in the career of the Seagull. But of
course some one directs your policies and decides upon
your business ventures, and that is the person I ought
properly to address.”

We were a little puzzled and astonished by this speech.
Uncle Naboth removed his pipe from his lips to say:

“This group is pretty near a partnership, sir, seein’ as
we’ve been through good and bad luck together many
times an’ learned how to trust each other as brave an’
faithful comrades. We haven’t any secrets, as I knows
on; an’ if so be you talked in private to any one of us,
he’d be sure to call a meetin’ an’ tell the others. So, if
you’ve anything to say about the ship, or business
matters, or anything that ain’t your own personal
concern, set right down here an’ tell it now, an’ we’ll all
listen the best we know how.”

Allerton followed this speech gravely and at first
appeared embarrassed and undecided. I saw him cast a
quick glance into Chaka’s eyes, and the Maya responded
with a stately nod. Then the lieutenant sat down in the
center of our group and said:

“I thank you, gentlemen, for your kindness. It would
seem that I have imposed upon your good nature
sufficiently already; yet here I am, about to ask another
favor.”

“Go on, sir,” said my father, with an encouraging nod.

“From what I have been able to learn,” continued the
lieutenant, in his quiet voice, “your ship is at this
moment unchartered. You are bound for Havana, where
you expect to make a lucrative contract to carry
merchandise between that port and Key West; mostly,
of course, leaf tobacco. Is that true?”

“It’s quite correct, sir,” said Uncle Naboth.

“In that case, there is no harm in my making you a
business proposition. I want to land on the east coast of
Yucatan, at a place little known and seldom visited by
ships. It will take you a couple of hundred miles out of
your course.”

For a few moments no one spoke. Then Captain Steele
said:

“A trip like that, Mr. Allerton, involves a certain amount
of expense to us. But we’re free, as far as our time is
concerned, and we’ve plenty of coal and supplies. The
question is, how much are you willing to pay for the
accommodation?”

A slight flush crept over Allerton’s cheek.

“Unfortunately, sir,” he replied, “I have very little ready
money.”

His tone was so crestfallen that I felt sorry for him, and
Joe turned quickly and said:

“That’s unlucky, sir; but I’ve some funds that are not in
use just now, and if you’ll permit me to loan you
whatever you require I shall be very happy to be of
service.”

“Or,” added Uncle Naboth, carelessly, “you can pay us
some other time; whenever you’re able.”

Allerton looked around him, meeting only sympathetic
faces, and smiled.

“But this is not business—not at all!” said he. “I did not
intend to ask for financial assistance, gentlemen.”

“What did you intend, Mr. Allerton?” I inquired.

He refrained from answering the direct question at
once, evidently revolving in his mind what he should
say. Then he began as follows:

“Ever since I came aboard I have had a feeling that I
am among friends; or, at least, congenial spirits. I am
embarked on a most Quixotic adventure, gentlemen,
and more and more I realize that in order to accomplish
what I have set out to do I need assistance—assistance
of a rare and practical sort that you are well qualified to
furnish. But it is necessary, in order that you understand
me and my proposition fully, that I should tell you my
story in detail. If I have your kind permission I will at
once do so.”

It began to sound interesting, especially in the ears of
us three boys who loved adventure. I think he could
read the eagerness in our eyes; but he looked earnestly
at Captain Steele, who said:

“Fire ahead, Mr. Allerton.”

He obeyed, seeming to choose his words carefully, so as
to make the relation as concise as possible.

“My home is in a small New Hampshire town where the
Allertons have been the most important family for many
generations. I was born in the same room—I think in
the same bed—that my father and grandfather were
born in. We had a large farm, or estate, and a fine old
homestead that was, and is, the pride of the country.
We have until recently been considered wealthy; but my
poor father in some way acquired a speculative passion
which speedily ruined him. On his death, while I was yet
a cadet at Annapolis, it was found that all the land and
investments he had inherited were gone; indeed, all
that was left was the homestead with a few acres
surrounding it.

“My mother and my two maiden sisters, one a
confirmed invalid and both much older than I, found
themselves wholly without resources to support
themselves. In this emergency an old lawyer—a friend
of the family, who I imagine has little keen ability in
business matters—advised my mother to mortgage the
place to secure funds for living expenses. It seemed
really necessary, for the three forlorn women were
unequal to the task of earning their living in any way.

“When this first fund was exhausted they mortgaged
the homestead again, and still again; and although they
had lived simply and economically, in twelve years the
old place has become so plastered with mortgages that
it is scarcely worth their face value. Little can be saved
from a second lieutenant’s pay, yet I have been able to
send something to the dear ones at home, which only
had the effect of staving off the inevitable crisis for a
time. Uncle Simeon, too, has helped them when he
remembered it and had money; but he is a man quite
impractical in money matters and the funds required for
his electrical experiments are so great that he is nearly
as poor as I am. Very foolishly he refuses to
commercialize his inventions.

“Conditions at home have naturally grown worse instead
of better, and now the man who holds the mortgages on
the homestead has notified my mother that he will
foreclose when next they fall due, in about four months’
time from now. Such is the condition of my family at
home, and you may well imagine, my friends, how
unhappy their misfortunes and necessities have made
me. As the climax of their sad fortunes drew near I have
tried to find some means to assist them. It has occupied
my thoughts by day and night. But one possible way of
relief has occurred to me, suggested by Chaka. It is a
desperate chance, perhaps; still, it is a chance, and I
have resolved to undertake it.”