SAFETY, RELIABILITY AND RISK ANALYSIS: THEORY, METHODS AND APPLICATIONS
Editors
Sebastián Martorell
Department of Chemical and Nuclear Engineering,
Universidad Politécnica de Valencia, Spain
C. Guedes Soares
Instituto Superior Técnico, Technical University of Lisbon, Lisbon, Portugal
Julie Barnett
Department of Psychology, University of Surrey, UK
VOLUME 1
CRC Press/Balkema is an imprint of the Taylor & Francis Group, an informa business
© 2009 Taylor & Francis Group, London, UK
Typeset by Vikatan Publishing Solutions (P) Ltd., Chennai, India
Printed and bound in Great Britain by Antony Rowe (A CPI-group Company), Chippenham, Wiltshire.
All rights reserved. No part of this publication or the information contained herein may be reproduced, stored
in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, by photocopying,
recording or otherwise, without prior written permission from the publisher.
Although all care is taken to ensure the integrity and quality of this publication and the information herein, no
responsibility is assumed by the publishers or the author for any damage to property or persons as a result
of the operation or use of this publication and/or the information contained herein.
Published by: CRC Press/Balkema
P.O. Box 447, 2300 AK Leiden, The Netherlands
e-mail: [email protected]
www.crcpress.com www.taylorandfrancis.co.uk www.balkema.nl
ISBN: 978-0-415-48513-5 (set of 4 volumes + CD-ROM)
ISBN: 978-0-415-48514-2 (vol 1)
ISBN: 978-0-415-48515-9 (vol 2)
ISBN: 978-0-415-48516-6 (vol 3)
ISBN: 978-0-415-48792-4 (vol 4)
ISBN: 978-0-203-88297-9 (e-book)
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
© 2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
Table of contents
Preface
XXIV
Organization
XXXI
Acknowledgment
XXXV
Introduction
XXXVII
VOLUME 1
Thematic areas
Accident and incident investigation
A code for the simulation of human failure events in nuclear power plants: SIMPROC
J. Gil, J. Esperón, L. Gamo, I. Fernández, P. González, J. Moreno, A. Expósito,
C. Queral, G. Rodríguez & J. Hortal
11
Comparing a multi-linear (STEP) and systemic (FRAM) method for accident analysis
I.A. Herrera & R. Woltjer
19
Development of a database for reporting and analysis of near misses in the Italian
chemical industry
R.V. Gagliardi & G. Astarita
27
Formal modelling of incidents and accidents as a means for enriching training material
for satellite control operations
S. Basnyat, P. Palanque, R. Bernhaupt & E. Poupart
45
Organizational analysis of availability: What are the lessons for a high risk industrial company?
M. Voirin, S. Pierlot & Y. Dien
63
Decision support systems and software tools for safety and reliability
Complex, expert based multi-role assessment system for small and medium enterprises
S.G. Kovacs & M. Costescu
99
Dynamic reliability
A dynamic fault classification scheme
B. Fechner
147
Fault detection and diagnosis in monitoring a hot dip galvanizing line using
multivariate statistical process control
J.C. García-Díaz
201
Human factors
A study on the validity of R-TACOM measure by comparing operator response
time data
J. Park & W. Jung
An evaluation of the Enhanced Bayesian THERP method using simulator data
K. Bladh, J.-E. Holmberg & P. Pyy
221
227
Functional safety and layer of protection analysis with regard to human factors
K.T. Kosmowski
249
Incorporating simulator evidence into HRA: Insights from the data analysis of the
international HRA empirical study
S. Massaiu, P.Ø. Braarud & M. Hildebrandt
267
Insights from the HRA international empirical study: How to link data
and HRA with MERMOS
H. Pesme, P. Le Bot & P. Meyer
275
Operators' response time estimation for a critical task using the fuzzy logic theory
M. Konstandinidou, Z. Nivolianitou, G. Simos, C. Kiranoudis & N. Markatos
281
On some aspects related to the use of integrated risk analyses for the decision
making process, including its use in the non-nuclear applications
D. Serbanescu, A.L. Vetere Arellano & A. Colli
341
A stochastic process model for computing the cost of a condition-based maintenance plan
J.A.M. van der Weide, M.D. Pandey & J.M. van Noortwijk
431
Aging processes as a primary aspect of predicting reliability and life of aeronautical hardware
J. Zurek, M. Zieja, G. Kowalczyk & T. Niezgoda
449
Modelling different types of failure and residual life estimation for condition-based maintenance
M.J. Carr & W. Wang
523
Non-homogeneous Markov reward model for aging multi-state system under corrective
maintenance
A. Lisnianski & I. Frenkel
551
Optimal periodic inspection of series systems with revealed and unrevealed failures
M. Carvalho, E. Nunes & J. Telhada
587
Optimal replacement policy for components with general failure rates submitted to obsolescence
S. Mercier
603
Preventive maintenance planning using prior expert knowledge and multicriteria method
PROMETHEE III
F.A. Figueiredo, C.A.V. Cavalcante & A.T. de Almeida
Profitability assessment of outsourcing maintenance from the producer (big rotary machine study)
P. Fuchs & J. Zajicek
627
Study on the availability of a k-out-of-N system given limited spares under (m, NG)
maintenance policy
T. Zhang, H.T. Lei & B. Guo
649
Occupational safety
Application of virtual reality technologies to improve occupational & industrial safety
in industrial processes
J. Rubio, B. Rubio, C. Vaquero, N. Galarza, A. Pelaz, J.L. Ipiña, D. Sagasti & L. Jordá
Applying the resilience concept in practice: A case study from the oil and gas industry
L. Hansson, I. Andrade Herrera, T. Kongsvik & G. Solberg
727
733
Development of an assessment tool to facilitate OHS management based upon the safe
place, safe person, safe systems framework
A.-M. Makin & C. Winder
739
Exploring knowledge translation in occupational health using the mental models approach:
A case study of machine shops
A.-M. Nicol & A.C. Hurrell
749
New performance indicators for the health and safety domain: A benchmarking use perspective
H.V. Neto, P.M. Arezes & S.D. Sousa
761
Organization learning
Can organisational learning improve safety and resilience during changes?
S.O. Johnsen & S. Håbrekke
805
Author index
853
VOLUME 2
Reliability and safety data collection and analysis
A new step-stress Accelerated Life Testing approach: Step-Down-Stress
C. Zhang, Y. Wang, X. Chen & Y. Jiang
863
Collection and analysis of reliability data over the whole product lifetime of vehicles
T. Leopold & B. Bertsche
875
Life test applied to Brazilian friction-resistant low alloy-high strength steel rails
D.I. De Souza, A. Naked Haddad & D. Rocha Fonseca
919
Non-homogeneous Poisson Process (NHPP), stochastic model applied to evaluate the economic
impact of the failure in the Life Cycle Cost Analysis (LCCA)
C. Parra Márquez, A. Crespo Márquez, P. Moreu de León, J. Gómez Fernández & V. González Díaz
929
Risk trends, indicators and learning rates: A new case study of North sea oil and gas
R.B. Duffey & A.B. Skjerve
941
Robust estimation for an imperfect test and repair model using Gaussian mixtures
S.P. Wilson & S. Goyal
949
Chemical risk assessment for inspection teams during CTBT on-site inspections of sites
potentially contaminated with industrial chemicals
G. Malich & C. Winder
1081
Conceptualizing and managing risk networks. New insights for risk management
R.W. Schröder
1097
Geographic information system for evaluation of technical condition and residual life of pipelines
P. Yukhymets, R. Spitsa & S. Kobelsky
1141
On causes and dependencies of errors in human and organizational barriers against major
accidents
J.E. Vinnem
1181
Quantitative risk analysis method for warehouses with packaged hazardous materials
D. Riedstra, G.M.H. Laheij & A.A.C. van Vliet
1191
Risk analysis in the frame of the ATEX Directive and the preparation of an Explosion Protection
Document
A. Pey, G. Suter, M. Glor, P. Lerena & J. Campos
1217
Why ISO 13702 and NFPA 15 standards may lead to unsafe design
S. Medonos & R. Raman
1239
Do the people exposed to a technological risk always want more information about it?
Some observations on cases of rejection
J. Espluga, J. Farré, J. Gonzalo, T. Horlick-Jones, A. Prades, C. Oltra & J. Navajas
1301
Risk management measurement methodology: Practical procedures and approaches for risk
assessment and prediction
R.B. Duffey & J.W. Saull
1351
The social perception of nuclear fusion: Investigating lay understanding and reasoning about
the technology
A. Prades, C. Oltra, J. Navajas, T. Horlick-Jones & J. Espluga
1371
Safety culture
Us and Them: The impact of group identity on safety critical behaviour
R.J. Bye, S. Antonsen & K.M. Vikland
1377
Does change challenge safety? Complexity in the civil aviation transport system
S. Høyland & K. Aase
1385
Empowering operations and maintenance: Safe operations with the one directed team
organizational model at the Kristin asset
P. Næsje, K. Skarholt, V. Hepsø & A.S. Bye
1407
Local management and its impact on safety culture and safety within Norwegian shipping
H.A. Oltedal & O.A. Engen
1423
Drawing up and running a Security Plan in an SME type company – An easy task?
M. Gerbec
1473
Some safety aspects on multi-agent and CBTC implementation for subway control systems
F.M. Rachel & P.S. Cugnasca
1503
Software reliability
Assessment of software reliability and the efficiency of corrective actions during the software
development process
R. Savic
1513
Building resilience to natural hazards. Practices and policies on governance and mitigation
in the central region of Portugal
J.M. Mendes & A.T. Tavares
1577
Governance of flood risks in The Netherlands: Interdisciplinary research into the role and
meaning of risk perception
M.S. de Wit, H. van der Most, J.M. Gutteling & M. Bockarjova
1585
Public intervention for better governance – Does it matter? A study of the Leros Strength case
P.H. Linde & J.E. Karlsen
1595
Using stakeholders' expertise in EMF and soil contamination to improve the management
of public policies dealing with modern risk: When uncertainty is on the agenda
C. Fallon, G. Joris & C. Zwetkoff
1609
Author index
1695
VOLUME 3
System reliability analysis
A copula-based approach for dependability analyses of fault-tolerant systems with
interdependent basic events
M. Walter, S. Esch & P. Limbourg
1705
A new approach to assess the reliability of a multi-state system with dependent components
M. Samrout & E. Chatelet
1731
Application of the fault tree analysis for assessment of the power system reliability
A. Volkanovski, M. Cepin & B. Mavko
1771
Calculating steady state reliability indices of multi-state systems using dual number algebra
E. Korczak
1795
Efficient generation and representation of failure lists out of an information flux model
for modeling safety critical systems
M. Pock, H. Belhadaoui, O. Malassé & M. Walter
1829
Evaluating algorithms for the system state distribution of multi-state k-out-of-n:F system
T. Akiba, H. Yamamoto, T. Yamaguchi, K. Shingyochi & Y. Tsujimura
1839
Reliability prediction using petri nets for on-demand safety systems with fault detection
A.V. Kleyner & V. Volovoi
1961
Reliability, availability and cost analysis of large multi-state systems with ageing components
K. Kolowrocki
1969
Reliability, availability and risk evaluation of technical systems in variable operation conditions
K. Kolowrocki & J. Soszynska
1985
The operation quality assessment as an initial part of reliability improvement and low cost
automation of the system
L. Muslewski, M. Woropay & G. Hoppe
2037
Variable ordering techniques for the application of Binary Decision Diagrams on PSA
linked Fault Tree models
C. Ibáñez-Llano, A. Rauzy, E. Meléndez & F. Nieto
2051
Reliability assessment under uncertainty using Dempster-Shafer and vague set theories
S. Pashazadeh & N. Grachorloo
2151
Types and sources of uncertainties in environmental accidental risk assessment: A case study
for a chemical factory in the Alpine region of Slovenia
M. Gerbec & B. Kontic
Uncertainty estimation for monotone and binary systems
A.P. Ulmeanu & N. Limnios
2157
2167
The Preliminary Risk Analysis approach: Merging space and aeronautics methods
J. Faure, R. Laulheret & A. Cabarbaye
2217
Using a Causal model for Air Transport Safety (CATS) for the evaluation of alternatives
B.J.M. Ale, L.J. Bellamy, R.P. van der Boom, J. Cooper, R.M. Cooke, D. Kurowicka, P.H. Lin,
O. Morales, A.L.C. Roelen & J. Spouge
2223
Automotive engineering
An approach to describe interactions in and between mechatronic systems
J. Gäng & B. Bertsche
2233
Towards a better interaction between design and dependability analysis: FMEA derived from
UML/SysML models
P. David, V. Idasiak & F. Kratz
2259
Exposure assessment model to combine thermal inactivation (log reduction) and thermal injury
(heat-treated spore lag time) effects on non-proteolytic Clostridium botulinum
J.-M. Membré, E. Wemmenhove & P. McClure
2295
Public information requirements on health risks of mercury in fish (2): A comparison of mental
models of experts and public in Japan
H. Kubota & M. Kosugi
2311
Review of diffusion models for the social amplification of risk of food-borne zoonoses
J.P. Mehers, H.E. Clough & R.M. Christley
2317
Risk perception and communication of food safety and food technologies in Flanders,
The Netherlands, and the United Kingdom
U. Maris
Synthesis of reliable digital microfluidic biochips using Monte Carlo simulation
E. Maftei, P. Pop & F. Popentiu Vladicescu
2325
2333
Influence of safety systems on land use planning around Seveso sites; example of measures
chosen for a fertiliser company located close to a village
C. Fiévez, C. Delvosalle, N. Cornil, L. Servranckx, F. Tambour, B. Yannart & F. Benjelloun
2369
Reliability study of shutdown process through the analysis of decision making in chemical plants.
Case study: South America, Spain and Portugal
L. Amendola, M.A. Artacho & T. Depool
2409
Civil engineering
Decision tools for risk management support in construction industry
S. Mehicic Eberhardt, S. Moeller, M. Missler-Behr & W. Kalusche
2431
Critical infrastructures
A model for vulnerability analysis of interdependent infrastructure networks
J. Johansson & H. Jönsson
Exploiting stochastic indicators of interdependent infrastructures: The service availability of
interconnected networks
G. Bonanni, E. Ciancamerla, M. Minichino, R. Clemente, A. Iacomini, A. Scarlatti,
E. Zendri & R. Terruggia
Proactive risk assessment of critical infrastructures
T. Uusitalo, R. Koivisto & W. Schmitz
2491
2501
2511
Seismic assessment of utility systems: Application to water, electric power and transportation
networks
C. Nuti, A. Rasulo & I. Vanzi
2519
Author index
2531
VOLUME 4
Electrical and electronic engineering
Balancing safety and availability for an electronic protection system
S. Wagner, I. Eusgeld, W. Kröger & G. Guaglio
Evaluation of important reliability parameters using VHDL-RTL modelling and information
flow approach
M. Jallouli, C. Diou, F. Monteiro, A. Dandache, H. Belhadaoui, O. Malassé, G. Buchheit,
J.F. Aubry & H. Medromi
2541
2549
Incorporation of ageing effects into reliability model for power transmission network
V. Matuzas & J. Augutis
2569
Security of gas supply to a gas plant from cave storage using discrete-event simulation
J.D. Amaral Netto, L.F.S. Oliveira & D. Faertes
2587
The estimation of health effect risks based on different sampling intervals of meteorological data
J. Jeong & S. Hoon Han
2619
Manufacturing
A decision model for preventing knock-on risk inside industrial plant
M. Grazia Gnoni, G. Lettera & P. Angelo Bragatto
2701
Condition based maintenance optimization under cost and profit criteria for manufacturing
equipment
A. Sánchez, A. Goti & V. Rodríguez
2707
Mechanical engineering
Developing a new methodology for OHS assessment in small and medium enterprises
C. Pantanali, A. Meneghetti, C. Bianco & M. Lirussi
2727
Natural hazards
A framework for the assessment of the industrial risk caused by floods
M. Campedel, G. Antonioni, V. Cozzani & G. Di Baldassarre
2749
Decision making tools for natural hazard risk management – Examples from Switzerland
M. Bründl, B. Krummenacher & H.M. Merz
2773
How to motivate people to assume responsibility and act upon their own protection from flood
risk in The Netherlands if they think they are perfectly safe?
M. Bockarjova, A. van der Veen & P.A.T.M. Geurts
2781
Risk based approach for a long-term solution of coastal flood defences – A Vietnam case
C. Mai Van, P.H.A.J.M. van Gelder & J.K. Vrijling
2797
Nuclear engineering
An approach to integrate thermal-hydraulic and probabilistic analyses in addressing
safety margins estimation accounting for uncertainties
S. Martorell, Y. Nebot, J.F. Villanueva, S. Carlos, V. Serradell, F. Pelayo & R. Mendizábal
2827
Availability of alternative sources for heat removal in case of failure of the RHRS during
midloop conditions addressed in LPSA
J.F. Villanueva, S. Carlos, S. Martorell, V. Serradell, F. Pelayo & R. Mendizábal
2837
Distinction impossible!: Comparing risks between Radioactive Wastes Facilities and Nuclear
Power Stations
S. Kim & S. Cho
2851
Heat-up calculation to screen out the room cooling failure function from a PSA model
M. Hwang, C. Yoon & J.-E. Yang
Investigating the material limits on social construction: Practical reasoning about nuclear
fusion and other technologies
T. Horlick-Jones, A. Prades, C. Oltra, J. Navajas & J. Espluga
2861
2867
Neural networks and order statistics for quantifying nuclear power plants safety margins
E. Zio, F. Di Maio, S. Martorell & Y. Nebot
2873
M. Cepin & R. Prosen
2883
FAMUS: Applying a new tool for integrating flow assurance and RAM analysis
Ø. Grande, S. Eisinger & S.L. Isaksen
2937
Life cycle cost analysis in design of oil and gas production facilities to be used in harsh,
remote and sensitive environments
D. Kayrbekova & T. Markeset
Line pack management for improved regularity in pipeline gas transportation networks
L. Frimannslund & D. Haugland
2955
2963
Optimization of proof test policies for safety instrumented systems using multi-objective
genetic algorithms
A.C. Torres-Echeverria, S. Martorell & H.A. Thompson
2971
Preliminary probabilistic study for risk management associated to casing long-term integrity
in the context of CO2 geological sequestration – Recommendations for cement plug geometry
Y. Le Guen, O. Poupard, J.-B. Giraud & M. Loizzo
2987
Policy decisions
Dealing with nanotechnology: Do the boundaries matter?
S. Brunet, P. Delvenne, C. Fallon & P. Gillon
3007
Risk futures in Europe: Perspectives for future research and governance. Insights from a EU
funded project
S. Menoni
Risk management strategies under climatic uncertainties
U.S. Brandt
3023
3031
Stop in the name of safety – The right of the safety representative to halt dangerous work
U. Forseth, H. Torvatn & T. Kvernberg Andersen
3047
Public planning
Analysing analyses – An approach to combining several risk and vulnerability analyses
J. Borell & K. Eriksson
Land use planning methodology used in Walloon region (Belgium) for tank farms of gasoline
and diesel oil
F. Tambour, N. Cornil, C. Delvosalle, C. Fiévez, L. Servranckx, B. Yannart & F. Benjelloun
3061
3067
On the methods to model and analyze attack scenarios with Fault Trees
G. Renda, S. Contini & G.G.M. Cojazzi
3135
Dynamic maintenance policies for civil infrastructure to minimize cost and manage safety risk
T.G. Yeung & B. Castanier
3171
Impact of preventive grinding on maintenance costs and determination of an optimal grinding cycle
C. Meier-Hirmer & Ph. Pouligny
3183
Optimal design of control systems using a dependability criteria and temporal sequences
evaluation – Application to a railroad transportation system
J. Clarhaut, S. Hayat, B. Conrard & V. Cocquempot
3199
RAM assurance programme carried out by the Swiss Federal Railways SA-NBS project
B.B. Stamenkovic
3209
Safety analysis methodology application into two industrial cases: A new mechatronical system
and during the life cycle of a CAF's high speed train
O. Revilla, A. Arnaiz, L. Susperregui & U. Zubeldia
3223
Waterborne transportation
A simulation based risk analysis study of maritime traffic in the Strait of Istanbul
B. Özbaş, I. Or, T. Altiok & O.S. Ulusu
3257
Design of the ship power plant with regard to the operator safety
A. Podsiadlo & W. Tarelko
3289
Modeling of hazards, consequences and risk for safety assessment of ships in damaged
conditions in operation
M. Gerigk
3303
Numerical and experimental study of a reliability measure for dynamic control of floating vessels
B.J. Leira, P.I.B. Berntsen & O.M. Aamo
3311
The analysis of SAR action effectiveness parameters with respect to drifting search area model
Z. Smalko & Z. Burciu
3337
Author index
3351
Preface
This Conference stems from a European initiative merging the ESRA (European Safety and Reliability
Association) and SRA-Europe (Society for Risk Analysis – Europe) annual conferences into the major safety,
reliability and risk analysis conference in Europe during 2008. This is the second joint ESREL (European Safety
and Reliability) and SRA-Europe Conference, after the 2000 event held in Edinburgh, Scotland.
ESREL is an annual conference series promoted by the European Safety and Reliability Association. The
conference dates back to 1989, but was not referred to as an ESREL conference before 1992. The Conference
has become well established in the international community, attracting a good mix of academics and industry
participants that present and discuss subjects of interest and application across various industries in the fields of
Safety and Reliability.
The Society for Risk Analysis – Europe (SRA-E) was founded in 1987, as a section of SRA International,
founded in 1981, to develop a special focus on risk-related issues in Europe. SRA-E aims to bring together
individuals and organisations with an academic interest in risk assessment, risk management and risk
communication in Europe, and emphasises the European dimension in the promotion of interdisciplinary
approaches to risk analysis in science. The annual conferences take place in various countries in Europe in
order to enhance access to SRA-E for both members and other interested parties. Recent conferences have been
held in Stockholm, Paris, Rotterdam, Lisbon, Berlin, Como, Ljubljana and The Hague.
These conferences come to Spain for the first time, and the venue is Valencia, situated on the east coast by the
Mediterranean Sea, a meeting point of many cultures. The host of the conference is the
Universidad Politécnica de Valencia.
This year the theme of the Conference is "Safety, Reliability and Risk Analysis. Theory, Methods and
Applications". The Conference covers a number of topics within safety, reliability and risk, and provides a
forum for presentation and discussion of scientific papers covering theory, methods and applications to a wide
range of sectors and problem areas. Special focus has been placed on strengthening the bonds between the safety,
reliability and risk analysis communities, with the aim of learning from the past to build the future.
The Conferences have been growing with time and this year the program of the Joint Conference includes 416
papers from prestigious authors coming from all over the world. Originally, about 890 abstracts were submitted.
After review of the full papers by the Technical Programme Committee, 416 have been selected and included
in these Proceedings. The effort of the authors and the reviewers guarantees the quality of the work. The initiative
and planning carried out by the Technical Area Coordinators have resulted in a number of interesting sessions
covering a broad spectrum of topics.
Sebastián Martorell
C. Guedes Soares
Julie Barnett
Editors
Organization
Conference Chairman
Dr. Sebastián Martorell Alsina
Conference Co-Chairman
Dr. Blas Galván González
Leira, Bernt – Norway
Levitin, Gregory – Israel
Merad, Myriam – France
Palanque, Philippe – France
Papazoglou, Ioannis – Greece
Preyssl, Christian – The Netherlands
Rackwitz, Ruediger – Germany
Rosqvist, Tony – Finland
Salvi, Olivier – Germany
Skjong, Rolf – Norway
Spadoni, Gigliola – Italy
Tarantola, Stefano – Italy
Thalmann, Andrea – Germany
Thunem, Atoosa P-J – Norway
Van Gelder, Pieter – The Netherlands
Vrouwenvelder, Ton – The Netherlands
Kröger, Wolfgang – Switzerland
Badia G, Spain
Barros A, France
Bartlett L, United Kingdom
Basnyat S, France
Birkeland G, Norway
Bladh K, Sweden
Boehm G, Norway
Webpage Administration
Alexandre Janeiro
Le Bot P, France
Limbourg P, Germany
Lisnianski A, Israel
Lucas D, United Kingdom
Luxhoj J, United States
Ma T, United Kingdom
Makin A, Australia
Massaiu S, Norway
Mercier S, France
Navarre D, France
Navarro J, Spain
Nelson W, United States
Newby M, United Kingdom
Nikulin M, France
Nivolianitou Z, Greece
Pérez-Ocón R, Spain
Pesme H, France
Piero B, Italy
Pierson J, France
Podofillini L, Italy
Proske D, Austria
Re A, Italy
Revie M, United Kingdom
Rocco C, Venezuela
Rouhiainen V, Finland
Roussignol M, France
Sadovsky Z, Slovakia
Salzano E, Italy
Sanchez A, Spain
Sanchez-Arcilla A, Spain
Scarf P, United Kingdom
Siegrist M, Switzerland
Sørensen J, Denmark
Storer T, United Kingdom
Sudret B, France
Teixeira A, Portugal
Tian Z, Canada
Tint P, Estonia
Trbojevic V, United Kingdom
Valis D, Czech Republic
Vaurio J, Finland
Yeh W, Taiwan
Zaitseva E, Slovakia
Zio E, Italy
Universidad de Granada
Universidad Politécnica de Valencia
Universidad Politécnica de Valencia
Universidad de Las Palmas de Gran Canaria
Acknowledgements
The conference is organized jointly by Universidad Politécnica de Valencia, ESRA (European Safety and
Reliability Association) and SRA-Europe (Society for Risk Analysis – Europe), under the high patronage of
the Ministerio de Educación y Ciencia, Generalitat Valenciana and Ajuntament de Valencia.
Thanks are also due to our sponsors Iberdrola, PMM Institute for Learning, Tekniker, Asociación
Española para la Calidad (Comité de Fiabilidad), CEANI and Universidad de Las Palmas de Gran Canaria. The
support of all is greatly appreciated.
The work and effort of the peers involved in the Technical Program Committee in helping the authors to
improve their papers are greatly appreciated. Special thanks go to the Technical Area Coordinators and organisers
of the Special Sessions of the Conference, for their initiative and planning which have resulted in a number of
interesting sessions. Thanks to authors as well as reviewers for their contributions in the review process. The
review process has been conducted electronically through the Conference web page. The support to the web
page was provided by the Instituto Superior Técnico.
We would like to especially acknowledge the local organising committee and the conference secretariat and technical support at the Universidad Politécnica de Valencia for their careful planning of the practical arrangements.
Their many hours of work are greatly appreciated.
These conference proceedings have been partially financed by the Ministerio de Educación y Ciencia
de España (DPI2007-29009-E), the Generalitat Valenciana (AORG/2007/091 and AORG/2008/135) and the
Universidad Politécnica de Valencia (PAID-03-07-2499).
Introduction
The Conference covers a number of topics within safety, reliability and risk, and provides a forum for presentation
and discussion of scientific papers covering theory, methods and applications to a wide range of sectors and
problem areas.
Thematic Areas
Nuclear Engineering
Offshore Oil and Gas
Policy Decisions
Public Planning
Security and Protection
Surface Transportation (road and train)
Waterborne Transportation
Thematic areas
Accident and incident investigation
A code for the simulation of human failure events in nuclear power plants: SIMPROC
J. Hortal
Spanish Nuclear Safety Council (CSN), Madrid, Spain
ABSTRACT: Over the past years, many Nuclear Power Plant (NPP) organizations have performed Probabilistic
Safety Assessments (PSAs) to identify and understand key plant vulnerabilities. As part of enhancing the PSA
quality, the Human Reliability Analysis (HRA) is key to a realistic evaluation of safety and of the potential
weaknesses of a facility. Moreover, it has to be noted that HRA continues to be a large source of uncertainty in
the PSAs. We developed the SIMulator of PROCedures (SIMPROC) as a tool to simulate events related to human
actions and to help the analyst quantify the importance of human actions in the final plant state. Among others,
the main goal of SIMPROC is to check whether Emergency Operating Procedures (EOPs) lead to a safe shutdown
plant state. The first pilot cases have been MBLOCA scenarios simulated with the MAAP4 severe accident code
coupled with SIMPROC.
INTRODUCTION
In addition to the traditional methods for verifying procedures, integrated simulations of operator and plant
response may be useful to:
1. Verify that the plant operating procedures can be
understood and performed by the operators.
2. Verify that the response based on these procedures
leads to the intended results.
3. Identify potential situations where judgment of
operators concerning the appropriate response is
inconsistent with the procedures.
4. Study the consequences of errors of commission
and the possibilities for recovering from such
errors (CSNI 1998) and (CSNI-PWG1 and CSNI-PWG5 1997).
5. Study time availability factors related to procedure execution.
WHY SIMPROC?
BABIECA-SIMPROC ARCHITECTURE
The final objective of the BABIECA-SIMPROC system is to simulate accidental transients in NPPs taking
human actions into account. For this purpose it is necessary to develop an integrated tool that simulates the
dynamics of the system. To achieve this we use the BABIECA simulation engine to calculate the time
evolution of the plant state, and we model the influence of human operator actions by means of
SIMPROC. We have modeled the operator's influence on the plant state as a separate module to emphasize
the significance of operator actions in the final state of the plant. SIMPROC can be plugged in or
unplugged to include or exclude the operator's influence on the simulated plant state, so that the end
states in both cases can be compared. The final goal of the BABIECA-SIMPROC overall system, integrated
in SCAIS, is to simulate Dynamic Event Trees (DETs) describing the time evolution of accidental sequences
generated from a triggering event. During this calculation, potential degradations of the systems must be
taken into account, associating them with probabilistic calculations for each sequence. Additionally, the
influence of EOP execution is defined for each plant and each sequence.
In order to achieve this objective the integrated scheme must provide the following features:
1. The calculation framework must be able to integrate other simulation codes (MAAP, TRACE, . . .). In this case, BABIECA-SIMPROC acts as a wrapper for the external codes, which allows different codes to be used in the same time-line sequence. If the simulation reaches core damage conditions, it is possible to unplug the best-estimate code and plug in a severe accident code to accurately describe the dynamic state of the plant.
2. Be able to automatically generate the DET associated with an initiating event, simulating the dynamic plant evolution.
3. Obtain the probability associated with every possible evolution sequence of the plant.
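As an illustration of the last two features, the generation of a DET with per-sequence probabilities can be sketched as follows. This is a minimal sketch in Python; the branch points, system names and failure probabilities are invented placeholders, not SCAIS data:

```python
from dataclasses import dataclass, field

@dataclass
class Sequence:
    """One branch of the dynamic event tree: the ordered list of
    system outcomes after the initiating event, and its probability."""
    outcomes: list = field(default_factory=list)
    probability: float = 1.0

def generate_det(branch_points):
    """Enumerate all sequences of a DET.

    branch_points: list of (system_name, failure_probability) pairs in
    the order the systems are demanded during the transient.  Each
    demand opens two branches (success/failure), and the sequence
    probability is the product of the branch probabilities.
    """
    sequences = [Sequence()]
    for system, p_fail in branch_points:
        next_sequences = []
        for seq in sequences:
            for outcome, p in ((f"{system}: ok", 1.0 - p_fail),
                               (f"{system}: fail", p_fail)):
                next_sequences.append(
                    Sequence(seq.outcomes + [outcome], seq.probability * p))
        sequences = next_sequences
    return sequences

# Hypothetical branch points for an MBLOCA-like transient.
det = generate_det([("HPIS", 1e-3), ("LPIS", 1e-3)])
for s in det:
    print(s.outcomes, s.probability)
```

In the real system the scheduler (DENDROS) opens branches on the fly from the simulated plant state rather than from a fixed list, but the bookkeeping of sequences and probabilities is of this kind.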
The whole system is being developed in C++ in order to meet the speed and performance requirements of this kind of simulation. Parallelization is implemented by means of a PVM architecture. Communication with the PostgreSQL database is carried out through the libpq++ library. The input deck needed to initialize the system is written in standard XML.
The main components of the Global System Architecture can be summarized as follows:
1. DENDROS event scheduler. It is in charge of opening branches of the simulation tree depending on the plant simulation state. DENDROS allows the modularization and parallelization of the tree generation.
Figure 1. BABIECA-SIMPROC architecture.
If we focus our attention on the SIMPROC integration, the BABIECA-SIMPROC architecture can be illustrated as in Fig. 1. BABIECA acts as a master code that encapsulates different simulation codes in order to build a robust system with a broad range of application and great flexibility. The BABIECA driver has its own topology, named BABIECA Internal Modules in Fig. 1. These features allow for a better modelling of the distribution of the operation duties among the members of the operation team.

Figure 2. SIMPROC-BABIECA-MAAP4 connection.
SIMPROC-BABIECA-MAAP4 CONNECTION SCHEME
APPLICATION EXAMPLE
The example used to validate the new simulation package simulates the operator actions related to the level control of the steam generators during a MBLOCA transient in a Westinghouse-design PWR. These operator actions are included in several EOPs, such as procedure ES-1.2, Post-LOCA cooldown and depressurization, which is associated with primary-side depressurization. The first step in this validation is to run
Figure 4. Representation of the key parameters to implement the narrow-band control over SGWL.
W =
  0,     if z > z0 + zDEAD/2
  Wmin,  if z0 − zDEAD/2 < z < z0 + zDEAD/2, on the decreasing part of the cycle
  Wmax,  if z < z0 − zDEAD/2, or inside the band on the increasing part of the cycle    (3)
where Wmin is the flow rate used on the decreasing part
of the cycle.
Operation in manual mode results in a sawtooth-like level trajectory which oscillates about the desired level z0.
The parameters used to implement the narrowband control over the steam generator water level are
illustrated in Fig. 4.
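The narrow-band law can be sketched in code. This is a minimal illustration assuming a direction-dependent (hysteresis) reading of Eq. (3); the set-points, dead band and level dynamics below are invented, not taken from the MAAP4 model:

```python
def narrow_band_control(z, z0, z_dead, decreasing, w_max, w_min):
    """Narrow-band manual control of the steam generator water level.

    Above the band the feed flow is cut to zero; below the band it is
    set to w_max; inside the dead band, w_min is used on the decreasing
    part of the cycle and w_max on the increasing part.
    """
    if z > z0 + z_dead / 2.0:
        return 0.0
    if z < z0 - z_dead / 2.0:
        return w_max
    return w_min if decreasing else w_max

# Toy closed loop (level dynamics, set-points and steam removal rate
# are invented): the level rises when feed flow exceeds steam removal.
z, z_prev = 10.0, 10.0
history = []
for _ in range(400):
    w = narrow_band_control(z, z0=10.0, z_dead=0.4,
                            decreasing=(z < z_prev),
                            w_max=1.0, w_min=0.2)
    z_prev, z = z, z + 0.01 * (w - 0.5)
    history.append(z)
```

Because Wmin is smaller than the steam removal rate, the level drifts down through the dead band after overshooting the upper limit and is driven back up once it crosses the lower limit, which reproduces the sawtooth-like trajectory about z0 described in the text.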
To simulate the BABIECA-SIMPROC version of the same transient we must create the XML files needed to define the input deck of the overall system.
Finally, it is necessary to define the XML simulation files for BABIECA and SIMPROC. The main
difference with the previous XML files is that they do
not need to be parsed and introduced in the database
prior to the simulation execution. They are parsed
and stored in memory during runtime execution of the
simulation.
The BABIECA simulation file parameters are:
Simulation code. Must be unique in the database.
Start input. Informs about the XML BABIECA
Topology file linked with the simulation.
Simulation type. It is the type of simulation: restart,
transient or steady.
Total time. Final time of the simulation.
Delta. Time step of the master simulation.
Save output frequency. Frequency to save the outputs in the database.
Initial time. Initial time for the simulation.
Initial topology mode. Topology block can be in
multiple operation modes. During a simulation execution some triggers can lead to mode changes that
modify the calculation loop of a block.
Save restart frequency. Frequency at which restart points are saved to back up the simulation evolution.
SIMPROC active. Flag that allows us to switch on SIMPROC influence over the simulation.
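For illustration only, a BABIECA simulation file with these parameters might look as follows; the element names and values are hypothetical, inferred from the parameter list above rather than taken from the actual BABIECA/SIMPROC schema:

```xml
<simulation code="MBLOCA-01" type="transient">
  <startInput>topology_mbloca.xml</startInput>
  <initialTime>0.0</initialTime>
  <totalTime>10000.0</totalTime>
  <delta>0.1</delta>
  <saveOutputFrequency>10</saveOutputFrequency>
  <saveRestartFrequency>100</saveRestartFrequency>
  <initialTopologyMode>1</initialTopologyMode>
  <simprocActive>true</simprocActive>
</simulation>
```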
The main parameters described in the XML SIMPROC simulation file are:
Initial and end time. These can be different from the ones used in the simulation file and define the time interval in which SIMPROC works.
Operator parameters. These are id, skill and slowness. The first two identify the operator and his type, and the latter accounts for his speed in executing the required actions; it is known that this parameter depends on multiple factors such as operator experience and training.
Initial variables. These are the variables that are
monitored continuously to identify the main parameters to evaluate the plant state. Each variable has a
procedure code to be used in the EOP description,
a BABIECA code to identify the variable inside the
topology and a set of logical states.
Variables. These variables are not monitored continuously but have the same structure as Initial Variables. They are only updated on SIMPROC request.
Figure 6. Mass Flow Rate to the Cold Leg (red line: MAAP4
output; dashed black line: BABIECA-SIMPROC output).
CONCLUSIONS
REFERENCES
CSNI (Ed.) (1998). Proceedings from Specialists Meeting
Organized: Human performance in operational events,
CSNI.
CSNI-PWG1, and CSNI-PWG5 (1997). Research strategies
for human performance. Technical Report 24, CSNI.
Expósito, A. and C. Queral (2003a). Generic questions about the computerization of the Almaraz NPP EOPs. Technical report, DSE-13/2003, UPM.
Expósito, A. and C. Queral (2003b). PWR EOPs computerization. Technical report, DSE-14/2003, UPM.
Izquierdo, J.M. (2003). An integrated PSA approach to independent regulatory evaluations of nuclear safety assessment of Spanish nuclear power stations. In EUROSAFE
Forum 2003.
Izquierdo, J.M., J. Hortal, M. Sánchez-Perea, E. Meléndez, R. Herrero, J. Gil, L. Gamo, I. Fernández, J. Esperón, P. González, C. Queral, A. Expósito, and G. Rodríguez (2008). SCAIS (Simulation Code System for Integrated Safety Assessment): Current status and applications. Proceedings of ESREL 08.
Izquierdo, J.M., C. Queral, R. Herrero, J. Hortal, M. Sánchez-Perea, E. Meléndez, and R. Muñoz (2000). Role of fast running TH codes and their coupling with PSA tools, in Advanced Thermal-hydraulic and Neutronic Codes: Current and Future Applications. In NEA/CSNI/R(2001)2, Volume 2.
NEA (2004). Nuclear regulatory challenges related to human performance. ISBN 92-64-02089-6, NEA.
Rasmussen, N.C. (1975). Reactor safety study: an assessment of accident risks in U.S. nuclear power plants. NUREG-75/014, WASH-1400.
Reason, J. (1990). Human Error. Cambridge University
Press.
Trager, E.A. (1985). Case study report on loss of safety system function events. Technical Report AEOD/C504, Office for Analysis and Evaluation of Operational Data, Nuclear Regulatory Commission (NRC).
ACKNOWLEDGMENTS
SIMPROC project is partially funded by the Spanish
Ministry of Industry (PROFIT Program) and SCAIS
project by the Spanish Ministry of Education and Science (CYCIT Program). Their support is gratefully
acknowledged.
We also want to show our appreciation to the people who, in one way or another, have contributed to the accomplishment of this project.
NOMENCLATURE
ISA Integrated Safety Analysis
SCAIS Simulation Codes System for Integrated Safety
Assessment
PSA Probabilistic Safety Analysis
HRA Human Reliability Analysis
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: Crime may be regarded as a major source of social concern in the modern world. Very often increases in crime rates are treated as headline news, and many people see law and order as one of the most pressing issues in modern society. An example of such issues was highlighted by the Tláhuac incident, which occurred in Mexico City on 23 November 2004. The fatal incident occurred when an angry crowd burnt alive two police officers and seriously injured a third after mistaking them for child kidnappers. The third policeman, who was finally rescued by colleagues three and a half hours after the attack began, suffered serious injuries. The paper presents some preliminary results of the analysis of the above incident obtained by applying the MORT (Management Oversight and Risk Tree) technique. The MORT technique may be regarded as a structured checklist in the form of a complex fault tree model that is intended to ensure that all aspects of an organization's management are looked into when assessing the possible causes of an incident. Some other accident analysis approaches may be adopted in the future for further analysis. It is hoped that by conducting such analysis lessons can be learnt so that incidents such as the Tláhuac case can be prevented in the future.
INTRODUCTION
1.1
1.1.2 Murder
Very often murder is a product of conflict between acquaintances or family members, or a by-product of other types of crime such as burglary or robbery (McCabe & Wauchope 2005, Elklit 2002). It is generally recognised that cultural attitudes, reflected in and perpetuated by mass-media accounts, have a very significant influence on the public perception of crime.
1.2
1.1.5 Bullying
Burgess et al. (2006) argue that bullying has become a
major public health issue, because of its connection to
violent and aggressive behaviours that result in serious injury to self and to others. The authors define
bullying as a relationship problem in which power
and aggression are inflicted on a vulnerable person
to cause distress. They further emphasise that teasing
and bullying can turn deadly.
1.1.6 Law enforcement
Finckenauer (2005) emphasizes that a major push for the expansion of higher education in crime and justice studies came particularly from the desire to professionalise the police, with the aim of improving police performance. He suggests new and expanded subject-matter coverage. First, criminal-justice educators must recognise that the face of crime has changed: it has become increasingly international in nature. Examples include cyber-crime, drug trafficking, human trafficking, other forms of trafficking and smuggling, and money laundering. Although global in nature, these sorts of crimes have significant state and local impact. The author argues that those impacts need to be recognised and understood by twenty-first-century criminal-justice professors and students. Increasingly, crime and criminals do not respect national borders. As a result, law enforcement and criminal justice cannot be bound and limited by national borders. Second, he emphasises the need to recognise that the role of science in law enforcement and the administration of justice has become increasingly pervasive and
Table 1. Types of crime.

Types of crime                                 Percentage (%)
Robbery                                        56.3
Other types of robbery                         25.8
Assault                                         7.2
Theft of items from cars (e.g., accessories)    2.9
Burglary                                        2.4
Theft of cars                                   1.5
Other types of crime                            0.4
Kidnappings                                     2.1
Sexual offences                                 0.8
Other assaults/threats to citizens              0.6
Table 2.
Figure 2. The MORT tree (schematic): losses arise from oversights and omissions (LTA, less than adequate) and from assumed risks (R1, R2, . . ., Rn). Branches include specific control factors LTA (SA1, SA2; potentially harmful condition; SB1 information systems, SB2 vulnerable people/objects, SB3 controls & barriers, SB4 risk assessment system; SD1–SD6, covering inspection, operational readiness, maintenance and supervision LTA; energy flows leading to the accident/incident; mitigation LTA) and management system factors LTA (policy LTA, implementation of policy LTA; MA1–MA3).
Table 3. Barrier analysis (extract).

Tláhuac suburb / 3 PFP (police officers): PFP police officers in plain clothes taking pictures of pupils leaving school.
Operations: PFP police officers did not have an official ID or equivalent to demonstrate that they were effectively members of the PFP; there was no communication between the PFP police officers and the local & regional authorities about the purpose of their operations in the neighborhood.
Angry crowd / 3 PFP; crowd attack / 3 PFP.
3.3
MORT structure
Figure 3. SB2 branch, Vulnerable people (red: problems that contributed to the outcome; blue: more information needed; green: judged to have been satisfactory).

Figure 4. SB3 branch, Barriers & Controls (red: problems that contributed to the outcome).
Figure 8. SD1 branch, Data collection LTA (red: problems that contributed to the outcome; blue: more information needed; green: judged to have been satisfactory).
Table 4.
DISCUSSION
ACKNOWLEDGEMENTS
This project was funded by CONACYT & SIP-IPN
under the following grants: CONACYT: No-52914 &
SIP-IPN: No-20082804.
REFERENCES
Alaggia, R., & Regehr, C. 2006. Perspectives of justice for victims of sexual violence. Victims and Offenders 1: 33–46.
Alalehto, T. 2002. Eastern prostitution from Russia to Sweden and Finland. Journal of Scandinavian Studies in Criminology and Crime Prevention 3: 96–111.
Burgess, A.W., Garbarino, C., & Carlson, M.I. 2006. Pathological teasing and bullying turned deadly: shooters and suicide. Victims and Offenders 1: 1–14.
Chiffriller, S.H., Hennessy, J.J., & Zappone, M. 2006. Understanding a new typology of batterers: implications for treatment. Victims and Offenders 1: 79–97.
Davis, P.K., & Jenkins, B.M. 2004. A systems approach to deterring and influencing terrorists. Conflict Management and Peace Science 21: 3–15.
Ekblom, P. 2005. How to police the future: scanning for scientific and technological innovations which generate potential threats & opportunities in crime, policing & crime reduction (Chapter 2). In M.J. Smith & N. Tilley (eds), Crime Science: new approaches to preventing and detecting crime: 27–55. Willan Publishing.
Elklit, A. 2002. Attitudes toward rape victims: an empirical study of the attitudes of Danish website visitors. Journal of Scandinavian Studies in Criminology and Crime Prevention 3: 73–83.
FIA, 2004. Linchan a agentes de la PFP en Tláhuac. Fuerza Informativa Azteca (FIA), http://www.tvazteca.com/noticias (24/11/2004).
FIA, 2005. Linchamiento en Tláhuac parecía celebración. Fuerza Informativa Azteca (FIA), http://www.tvazteca.com/noticias (10/01/2005).
Finckenauer, J.O. 2005. The quest for quality in criminal justice education. Justice Quarterly 22 (4): 413–426.
Griffiths, H. 2004. Smoking guns: European cigarette smuggling in the 1990s. Global Crime 6 (2): 185–200.
Instituto Ciudadano de Estudios Sobre la Inseguridad
(ICESI), online www.icesi.org.mx.
Klein, J. 2005. Teaching her a lesson: media misses boys' rage relating to girls in school shootings. Crime Media Culture 1 (1): 90–97.
Lampe, K.V., & Johansen, P.O. 2004. Organized crime and trust: on the conceptualization and empirical relevance of trust in the context of criminal networks. Global Crime 6 (2): 159–184.
R. Woltjer
Department of Computer and Information Science, Cognitive Systems Engineering Lab, Linköping University, Linköping, Sweden
ABSTRACT: Accident models and analysis methods affect what accident investigators look for, which contributing factors are found, and which recommendations are issued. This paper contrasts the Sequentially Timed
Events Plotting (STEP) method and the Functional Resonance Analysis Method (FRAM) for accident analysis and modelling. The main issues addressed in this paper are comparing the established multi-linear method
(STEP) with the systemic method (FRAM) and evaluating which new insights the latter systemic method provides for accident analysis in comparison to the former established multi-linear method. Since STEP and FRAM are based on different understandings of the nature of accidents, the comparison of the methods focuses on what we can learn from both methods, and how, when, and why to apply them. The main finding is that STEP helps
to illustrate what happened, whereas FRAM illustrates the dynamic interactions within socio-technical systems
and lets the analyst understand the how and why by describing non-linear dependencies, performance conditions,
variability, and their resonance across functions.
INTRODUCTION
A Norwegian Air Shuttle Boeing 737-36N with callsign NAX541 was en-route from Stavanger Sola airport to Oslo Gardermoen airport (OSL). The aircraft
was close to Gardermoen and was controlled by Oslo
Approach (APP). The runway in use at Gardermoen
was 19R. The aircraft was cleared to reduce altitude to
4000 ft. The approach and the landing were carried out
Figure 1. STEP worksheet for the NAX541 incident. Actor rows (air traffic control: Oslo APP and Gardermoen TWR; runway equipment RWY-E; aircraft AC-1; captain and copilot as PF/PNF) are plotted against a time line from 14:42:36 to 14:44:02: RWY-E activates the G/S fail alarm (14:42:55), TWR informs of the G/S failure (14:42:57), the PNF accepts the transfer (14:42:38) and changes to the TWR frequency, the AC-1 nose moves down and the A/P is disconnected (14:43:27), and a manual go-around is flown at 460 ft.
STEP provides a comprehensive framework for accident investigation from the description of the accident
process, through the identification of safety problems,
to the development of safety recommendations. The
first key concept in STEP is the multi-linear event
sequence, aimed at overcoming the limitations of the
single linear description of events. This is implemented in a worksheet with a procedure to construct a
flowchart to store and illustrate the accident process.
The STEP worksheet is a simple matrix. The rows are
labelled with the names of the actors on the left side.
The columns are labelled with marks across a time line.
Secondly, the description of the accident is performed with universal event building blocks. An event is defined as one actor performing one action. To ensure a clear description, the events are broken down until it is possible to visualize the process and understand its proper control. In addition, it is necessary to compare the actual accident events with what was expected to happen.
A third concept is that the events flow logically in a process. This is achieved by linking arrows to show precede/follow and logical relations between events. The result is a cascading flow of events representing the accident process, from the first unplanned change event to the last connected harmful event on the STEP worksheet.
The organization of the events is developed and visualized as a mental motion picture. The completeness of the sequence is validated with three tests. The row test verifies that there is a complete picture of each actor's actions through the accident. The column test verifies that the events in the individual actor rows are placed correctly in relation to other actors' actions. The necessary-and-sufficient test verifies that the earlier action was indeed sufficient to produce the later event; otherwise more actions are necessary.
The STEP worksheet is used to provide a link between the recommended actions and the accident.
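The worksheet concepts above can be sketched as a small data structure; the event data are abbreviated from the incident description, and the representation is ours, not the STEP tool's:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """A STEP building block: one actor performing one action at a time."""
    actor: str
    time: str    # hh:mm:ss
    action: str

# A few events from the NAX541 incident, one worksheet row per actor.
events = [
    Event("RWY-E", "14:42:55", "activates G/S fail alarm"),
    Event("TWR",   "14:42:57", "informs of G/S failure"),
    Event("PNF",   "14:42:38", "accepts transfer to TWR frequency"),
    Event("AC-1",  "14:43:27", "nose moves down, A/P disconnected"),
]

def row_test(events, actor):
    """Row test: the actor's actions form a chronological picture."""
    times = [e.time for e in events if e.actor == actor]
    return times == sorted(times)

def column_test(events):
    """Column test: place all events on a common time line."""
    return sorted(events, key=lambda e: e.time)

timeline = column_test(events)
for e in timeline:
    print(f"{e.time} {e.actor}: {e.action}")
```

The necessary-and-sufficient test has no purely mechanical form: it requires the analyst to judge whether each earlier event suffices to produce the later one, so only the row and column checks are automated here.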
FRAM promotes a systemic view for accident analysis. The purpose of the analysis is to understand
Figure 2. A FRAM module: a function characterized by its six aspects, Input (I), Output (O), Precondition (P), Resource (R), Time (T) and Control (C).

Step 1 is related to the identification and characterization of functions: a total of 19 essential functions were identified and grouped according to the area of operation. There are no specified rules for the level of granularity; instead, functions are included or split up when the explanation of variability requires it. This particular analysis therefore contains some higher-level functions, e.g. Oslo APP control, and some lower-level functions, e.g. Change frequency to TWR control.
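A FRAM function and its six aspects can be sketched as a small data structure. This is a minimal illustration: the class layout and the coupling helper are our own, not part of any FRAM tooling, and the field contents are taken loosely from the analysis of the Manual approach function:

```python
from dataclasses import dataclass, field

@dataclass
class FramFunction:
    """A FRAM module: a function described by its six aspects."""
    name: str
    input: list = field(default_factory=list)          # what starts the function
    output: list = field(default_factory=list)         # what it produces
    preconditions: list = field(default_factory=list)  # what must hold first
    resources: list = field(default_factory=list)      # who/what executes it
    time: list = field(default_factory=list)           # temporal constraints
    control: list = field(default_factory=list)        # what supervises it

manual_approach = FramFunction(
    name="Manual approach",
    input=["GPWS alarms", "pilot informed of G/S failure"],
    output=["altitude relative to approach path"],
    preconditions=["A/P disconnected"],
    resources=["Pilot Flying", "Pilot Non-Flying"],
    time=["time available varies (ETTO)"],
    control=["SOPs"],
)

def couplings(upstream, downstream):
    """Potential coupling: outputs of one function that feed another
    function's aspects (only inputs/preconditions, for illustration)."""
    targets = downstream.input + downstream.preconditions
    return [o for o in upstream.output if o in targets]
```

An instantiation of the model, such as Fig. 3 below, then corresponds to evaluating which of these potential couplings were actually realized during a given time interval.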
Table 2. FRAM analysis of the function Manual approach.

Common performance conditions and ratings: availability of resources (personnel, equipment) adequate; training, preparation, competence temporarily inadequate (PF with little experience on type); communication quality inefficient (delay in contacting the tower, unclear alerts); HMI operational support temporarily inadequate (interruptions?); availability of procedures adequate; number of goals and conflicts overloaded, more than capacity; available time temporarily inadequate; circadian rhythm adjusted; team collaboration and task synchronisation inefficient (switched roles); organisational quality inadequate.

Aspect descriptions: Input, GPWS alarms, pilot informed of G/S failure; Output, altitude in accordance with the approach path, or lower/higher than the flight path; Preconditions, A/P disconnected; Resources, Pilot Flying and Pilot Non-Flying; Time, Efficiency-Thoroughness Trade-Off, time available varies; Control, SOPs.
Figure 3. A FRAM instantiation during the time interval 14:42:37–14:43:27 with incident data. Functions include Change APP frq to TWR frq, Auto-pilot approach, Manual approach, Transmitting radio comm, Receiving radio comm, Oslo APP control, Gardermoen TWR control, Auto-pilot, and Glide slope transmission. Couplings: 1) APP to pilot: contact TWR on TWR frq; 2) transfer requested to TWR frq; 3) pilot to APP: to TWR frq; 4) frequency still set to APP; 5,d) pilot informed of G/S failure; 6) pilot to TWR: flight on TWR frq; a) G/S lost / no G/S signal at 14:42:55; b) TWR to pilot: inform a/c of G/S failure; X) proactive TWR-APP communication: check frequency change.
COMPARISON
ACKNOWLEDGEMENTS
This work has benefited greatly from the help and support of several aviation experts and the participants in
the 2nd FRAM workshop. We are particularly grateful
to the investigators and managers of the Norwegian
Accident Investigation Board who commented on a
draft of the model. Thanks to Ranveig K. Tinmannsvik,
Erik Jersin, Erik Hollnagel, Jørn Vatn, Karl Rollenhagen, Kip Smith, Jan Hovden and the conference reviewers for their comments on our work.
REFERENCES
AIBN. 2004. Rapport etter alvorlig luftfartshendelse ved
Oslo Lufthavn Gardermoen 9. Februar 2003 med Boeing 737-36N, NAX541, operert av Norwegian Air Shuttle.
Aircraft Investigation Board Norway, SL RAP.:20/2004.
Amalberti, R. 2001. The paradoxes of almost totally safe transportation systems. Safety Science, 37, 109–126.
Dekker, S.W.A. 2004. Ten questions about human error: A
new view of human factors and system safety. Mahwah,
NJ: Lawrence Erlbaum.
Hendrick, K., Benner, L. 1987. Investigating accidents with
STEP. Marcel Dekker Inc. New York.
Hollnagel, E. 2004. Barriers and accident prevention. Aldershot, UK: Ashgate.
Hollnagel, E. 2008a. From FRAM to FRAM. 2nd FRAM
Workshop, Sophia-Antipolis, France.
Hollnagel, E. 2008b. The changing nature of risks. Ecole des
Mines de Paris, Sophia Antipolis, France.
Hollnagel, E., Pruchnicki, S., Woltjer, R., & Etcher, S. 2008.
Analysis of Comair flight 5191 with the Functional Resonance Accident Model. Proc. of the 8th Int. Symp. of
the Australian Aviation Psychology Association, Sydney,
Australia.
Leveson, N. 2001. Evaluating accident models using recent
aerospace accidents. Technical Report, MIT Dept. of
Aeronautics and Astronautics.
Perrow, C. 1999. Normal accidents. Living with high risk
technologies. Princeton: Princeton University Press. (First
issued in 1984).
Rochlin, G.I. 1999. Safe operation as a social construct. Ergonomics, 42, 1549–1560.
Woods, D.D., & Cook, R.I. 2002. Nine steps to move forward from error. Cognition, Technology & Work, 4, 137–144.
G. Astarita
Federchimica, Italian Federation of Chemical Industries, Milan, Italy
ABSTRACT: Near misses are considered an important warning that an accident may occur, and their reporting and analysis may therefore have a significant impact on industrial safety performance, above all in those industrial sectors involving major accident hazards. From this perspective, a specific information system, including a database designed ad hoc for near misses, constitutes an appropriate software platform that can support company management in collecting, storing and analyzing near-miss data, and also in implementing solutions to prevent future accidents. This paper describes the design and implementation of such a system, developed in the context of a cooperation agreement with the Italian Chemical Industry Federation. The paper also illustrates the main characteristics and utilities of the system, together with future improvements that will be made.
1 INTRODUCTION
2
2.1
As a preliminary remark, from a legislative perspective, the recommendation that Member States report near misses to the Commission's Major Accident Reporting System (MARS) on a voluntary basis was introduced by the European Directive 96/82/EC, Seveso II, in addition to the mandatory requirements for major accident reporting. More specifically, Annex VI of the Directive, which specifies the criteria for the notification of an accident to the Commission, includes a recommendation that near misses of particular technical interest for preventing major accidents and limiting their consequences should be notified to the Commission. This recommendation is included in Legislative Decree n. 334/99 (Legislative Decree n. 334, 1999), the national law implementing the Seveso II Directive in Italy.
In addition, further clauses regarding near misses are included in the above-mentioned decree, referring to the provisions concerning the contents of the Safety Management System in Seveso sites. In fact, the Decree states that one of the issues to be addressed by operators in the Safety Management System is the monitoring of safety performance; this must be achieved by taking into consideration, among other things, the analysis of near misses, functional anomalies, and the corrective actions taken as a consequence of near misses.
2.2
is accessibility: To allow a wider diffusion, the system has been developed in such a way that it is easy
to access when needed. A web-version has therefore
been built, which assures security, confidentiality and
integrity of the data handled. Secondly, in order to guarantee that the system can be used by both expert and non-expert users, user-friendliness has been an important consideration in its design. The user must have easy access
to the data in the system and be able to draw data from
it by means of pull-down menus, which allow several
options to be chosen; moreover, help icons clarify
the meaning of each field of the database to facilitate data entry. Third, the system must be designed in
such a way as to allow the user to extract information
from the database; to this end it has been provided
with a search engine. By database queries and the
subsequent elaboration of results, the main causes of
the near misses and the most important safety measures adopted to avoid a repetition of the anomaly can
be singled out. In this way the system receives data
on near misses from the user and, in return, provides
information to the user regarding the adoption of corrective actions aimed at preventing similar situations
and/or mitigating their consequences. Lastly, the system must guarantee confidentiality: this is absolutely
necessary, otherwise companies would not be willing to provide sensitive information regarding their
activities. In order to ensure confidentiality, the data
are archived in the database in an anonymous form.
More precisely, when the user consults the database,
the only accessible information on the geographical
location of the event regards three macro-areas: Northern, Central and Southern Italy. This is in order to prevent the geographical data inserted in the database (municipality and province) during the reporting phase from leading to the identification of the establishment in which the near misses occurred. Another
factor which assures confidentiality is the use of usernames and passwords to enter the system. These are
provided by Federchimica, after credentials have been
vetted, to any company of the Federation, which has
scientific knowledge and operating experience in prevention and safety control. In this way, any company,
fulfilling the above conditions, is permitted to enter the
system, in order to load near misses data and consult
the database, and have the guarantee of data security
and confidentiality.
Figure 1. From the near miss through the reporting phase and the consultation phase to the lesson learned.

4 DATABASE STRUCTURE
Figure 2.
Once this descriptive part is completed, a classification code is attributed to each near miss, so that its identification is univocal.
Figure 3.
4.2
As specified at the beginning of the previous paragraph, the second part of the software platform
concerns the extraction of lessons learned by the consultation of the database contents. In fact, one of the
main objectives of the database is the transfer of technical and managerial corrective actions throughout the
chemical process industry. The database has therefore
been supplied with a search engine which aids navigation through the near-miss reports, allowing several options. First, the user is able to visualize all near misses collected in the database in a summary table showing the most important fields: event code, submitter data, event description, damages, immediate or delayed corrective actions, lessons learned, and annex. Second, the details of a specific
event can be selected by clicking on the corresponding code. It is also possible to visualize any additional
documents, within the record of the specific event, by
clicking on Annex. Third, to extract a specific near
miss from the database, on the basis of one or more
search criteria, a query system has been included in
the software utilities. This is done by typing a keyword relevant, for example, to the location, unit, year of occurrence, potential danger, etc., as shown in Fig. 3.
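The keyword search can be sketched as a simple filter over the near-miss records. The field names and the two example records below are invented for illustration; the real system runs on a PostgreSQL database behind a web interface:

```python
# Minimal sketch of the near-miss database keyword search.
records = [
    {"code": "NM-001", "area": "Northern Italy", "unit": "storage",
     "year": 2005, "description": "flange leak during transfer",
     "lesson": "add periodic torque checks"},
    {"code": "NM-002", "area": "Central Italy", "unit": "reactor",
     "year": 2006, "description": "high temperature alarm ignored",
     "lesson": "revise alarm prioritisation"},
]

def search(records, keyword):
    """Return the records in which the keyword occurs in any field
    (case-insensitive), mimicking a free-keyword database query."""
    kw = keyword.lower()
    return [r for r in records
            if any(kw in str(v).lower() for v in r.values())]

hits = search(records, "alarm")
print([r["code"] for r in hits])   # -> ['NM-002']
```

In the production system the same query would be an SQL statement against the anonymised records, with the geographical fields reduced to the three macro-areas described above.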
CONCLUDING REMARKS
A software system for the collection, storage and analysis of near misses in the Italian chemical industry has
been developed for multiple purposes. It allows information regarding the different factors involved in a
REFERENCES
Center for Chemical Process Safety, 2003. Investigating
Chemical Process Incidents. New York, American Institute of Chemical Engineers.
European Council 1997. Council Directive 96/82/EC on the control of major-accident hazards involving dangerous substances (Seveso II). Official Journal of the European Communities. Luxembourg.
Jones, S., Kirchsteiger, C. & Bjierke, W. 1999. The importance of near miss reporting to further improve safety performance. Journal of Loss Prevention in the Process Industries, 12, 59–67.
Legislative Decree 17 August 1999, n. 334, on the control of major accident hazards involving dangerous substances. Gazzetta Ufficiale n. 228, 28 September 1999, Italy.
Philley, J., Pearson, K. & Sepeda, A. 2003. Updated CCPS investigation guidelines book. Journal of Hazardous Materials, 104, 137–147.
ABSTRACT: As systems grow more complex, simple mistakes or failures may cause serious accidents. One measure against this situation is to understand the mechanism of accidents and to use that knowledge for accident prevention. At present, however, the analysis of incident reports does not keep up with the pace of their accumulation, and databases of incident reports are used insufficiently. In this research, an analysis system for incident reports is developed based on the m-SHEL ontology. The system is able to process incident reports and to obtain knowledge relevant for accident prevention efficiently.
INTRODUCTION

Figure 1. Heinrich's law.
A number of incident reporting systems are now operated in various areas, but they suffer from two problems.
The first problem is how reports are made. There are two types of reporting scheme: multiple choice and free description. Multiple choice is easy for the reporter, who only has to choose one or a few items from alternatives already given, and it saves reporting time. For this reason, many existing reporting systems use this type. However, some important information may be lost with this scheme. The purpose of incident reporting is to analyze the cause of an incident and to share the obtained knowledge; if the reports are ambiguous, it is hard to identify the real causes of an incident.
The second problem concerns errors in the contents. The estimation of the cause and the investigation of preventive measures are the most important parts of a report, yet these analyses sometimes contain mistakes. These parts are admittedly complicated, but an exact description is needed.
It is not easy to get rid of these problems, and in this study it is assumed that no incident reports are free from them. Their influence will be discussed later.
3
3.1
3.2 COCOM
There are many incidents caused by human behavior, such as carelessness or rule violation. For this reason, the Liveware element of the m-SHEL model must be focused on. In this study, the COCOM model is applied to incident analysis.
The COCOM model was proposed by E. Hollnagel (Hollnagel, 1993), who categorized the state of human consciousness into the following four classes:
Various opinions, not only from electric power suppliers but also from industry-government-academia communities, are useful for solving problems in nuclear safety. Increasing transparency to society contributes to obtaining trust from the public.
An ontology-based system is different from knowledge-base systems, expert systems or artificial intelligence. These traditional approaches have some problems. They are:
Ontology

Figure 3. An example of ontology.
<concept>
  <label>
    (Information about the concept. This label is for readability.)
  </label>
  <ont-id>143</ont-id>
  <mSHEL>H</mSHEL>
  <description>
    (Information about what this concept is. This is used for searching.)
  </description>
  <cause-id>217 (id of the concept which causes No. 143)</cause-id>
  <effect-id>314 (id of the concept which is caused by No. 143)</effect-id>
</concept>
Figure 4.
<causality>
  <label>
    (Information about the causality. This label is for readability.)
  </label>
  <causality-id>C-13</causality-id>
  <cause-id>147 (id of the element which causes C-13)</cause-id>
  <effect-id>258 (id of the element which is caused by C-13)</effect-id>
  <weight>0.3 (weighting factor)</weight>
</causality>
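Records of the kind shown above can be read with a standard XML parser. The sketch below parses one concept record into a dictionary; the tag names follow the figures, while the sample values (an "alarm indicator" hardware concept) are invented for illustration and are not taken from the real m-SHEL ontology.

```python
import xml.etree.ElementTree as ET

# Illustrative <concept> instance; tag names follow the figure above,
# element values are made up for this example.
sample = """
<concept>
  <label>alarm indicator</label>
  <ont-id>143</ont-id>
  <mSHEL>H</mSHEL>
  <description>hardware element presenting plant state</description>
  <cause-id>217</cause-id>
  <effect-id>314</effect-id>
</concept>
"""

def parse_concept(xml_text):
    """Turn a <concept> record into a {tag: text} dictionary."""
    node = ET.fromstring(xml_text)
    return {child.tag: child.text.strip() for child in node}

concept = parse_concept(sample)
print(concept["ont-id"], concept["mSHEL"])  # -> 143 H
```

A <causality> record would be parsed the same way, since it shares the flat tag-per-field structure.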
ANALYSIS SYSTEM
After the m-SHEL ontology had been built, an analysis system was developed. The flow of this system is as follows:
1. Input incident data in text form. This step can be done both manually and by importing from XML-style files.
2. Since no spaces are placed between words in Japanese, morphological analysis is applied to the input text. This is explained in Section 4.1.
3. Keywords, i.e. the words included in the m-SHEL ontology, are selected from the text and mapped onto the ontology.
4. The required information about the incident is structured both in the concept hierarchy and in causality relations.
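Step 3 of the flow above can be sketched as a dictionary lookup: words surviving morphological analysis are kept only if they appear in the ontology, and each keyword is mapped to its m-SHEL element. The tiny ontology here is invented for illustration; the real ontology would hold the full concept records.

```python
# Toy m-SHEL keyword table (invented for illustration): each keyword maps
# to an element class: S(oftware), H(ardware), E(nvironment), L(iveware),
# with m(anagement) surrounding them.
ONTOLOGY = {
    "pump": "H",        # Hardware
    "procedure": "S",   # Software (rules, manuals)
    "operator": "L",    # Liveware
    "noise": "E",       # Environment
}

def map_keywords(words):
    """Return (keyword, m-SHEL class) pairs for words found in the ontology."""
    return [(w, ONTOLOGY[w]) for w in words if w in ONTOLOGY]

words = ["the", "operator", "stopped", "the", "pump"]
print(map_keywords(words))  # -> [('operator', 'L'), ('pump', 'H')]
```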
Figure 5.

4.1 Morphological analysis

Figure 6.
From these points, the following rule is added: continuous Kanji nouns are combined to form a single technical word. This is a very simple change, but around 60% of keywords can be found by this rule. Figure 4 shows the percentage of compound nouns detected for ten arbitrary sample cases of incident reports.
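The compound-noun rule above can be sketched as a pass over the token stream produced by morphological analysis (e.g. with MeCab): consecutive nouns written in Kanji are merged into one technical term. The Unicode-range Kanji test and the sample tokens are illustrative assumptions, not the paper's actual implementation.

```python
def is_kanji(word):
    """Rough check: every character lies in the CJK Unified Ideographs block."""
    return all("\u4e00" <= ch <= "\u9fff" for ch in word)

def merge_kanji_nouns(tokens):
    """tokens: list of (surface, pos) pairs. Merge runs of Kanji nouns."""
    merged, buffer = [], ""
    for surface, pos in tokens:
        if pos == "noun" and is_kanji(surface):
            buffer += surface          # extend the current compound
        else:
            if buffer:                 # flush a finished compound
                merged.append(buffer)
                buffer = ""
            merged.append(surface)
    if buffer:
        merged.append(buffer)
    return merged

# "原子炉" + "格納" + "容器" merge into the single term "原子炉格納容器"
# (reactor containment vessel); the particle "の" ends the run.
tokens = [("原子炉", "noun"), ("格納", "noun"), ("容器", "noun"), ("の", "particle")]
print(merge_kanji_nouns(tokens))  # -> ['原子炉格納容器', 'の']
```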
VERIFICATION EXPERIMENT
Table 1. Number of items detected by the system / identified by the expert, for each m-SHEL element and for causal association (c).

Case    m      S      H      E      L      c
1       0/0    2/6    7/18   0/1    2/4    1/4
2       1/2    3/7    5/12   0/0    1/5    0/3
3       0/1    5/6    6/13   0/0    2/4    2/4
4       4/7    2/4    5/12   2/2    2/5    3/6
5       2/2    1/3    9/10   0/1    3/4    4/5
6       0/1    1/2    0/8    3/3    3/5    0/2
7       2/4    4/7    5/7    0/1    4/3    1/2

Another reason for the high ratio is a tendency in the Japanese nuclear industry to report incidents from the viewpoint of the failure mechanism of hardware rather than of human factors.
This tendency also resulted in the low presence of Environment and management items. Not only the result of the system but also that of the expert marked low counts for these items. This means that the low scores are attributable not to the system but to some problems in the original reports.
On the other hand, the results for Liveware and causal association are different from those for Environment and management. The automatic analysis also marked low scores for Liveware and causal association; however, the expert's counts were not as low as the system's. This outcome seems to be caused by some defects in the m-SHEL ontology.
CONCLUSION

An analysis system for incident reports has been developed for nuclear power plants. The method of analysis adopted is based on the m-SHEL ontology. The results of automatic analysis failed to mark high scores in the assessment, but this is partly because of the contents of the original incident report data. Though there is room for improvement of the m-SHEL ontology, the system is useful for processing incident reports and for obtaining knowledge useful for accident prevention.

REFERENCES
H.W. Heinrich. 1980. Industrial Accident Prevention: A Safety Management Approach. McGraw-Hill.
Frank E. Bird Jr. & George L. Germain. 1969. Practical Loss Control Leadership. Intl Loss Control Inst.
NUCIA: Nuclear Information Archives, http://www.nucia.jp
R. Kawano. 2002. Medical Human Factor Topics. http://www.medicalsaga.ne.jp/tepsys/MHFT_topics0103.html
E. Hollnagel. 1993. Human Reliability Analysis: Context and Control. Academic Press.
Riichiro Mizoguchi, Kouji Kozaki, Toshinobu Sano & Yoshinobu Kitamura. 2000. Construction and Deployment of a Plant Ontology. Proc. of the 12th International Conference on Knowledge Engineering and Knowledge Management (EKAW2000), 113–128.
C.W. Johnson. 2003. Failure in Safety-Critical Systems: A Handbook of Accident and Incident Reporting. University of Glasgow Press, http://www.dcs.gla.ac.uk/johnson/book/
J. Cavalcanti & D. Robertson. 2003. Web Site Synthesis based on Computational Logic. Knowledge and Information Systems Journal, 5(3):263–287.
MeCab: Yet Another Part-of-Speech and Morphological Analyzer, http://mecab.sourceforge.net/
Tim Bray, Jean Paoli & C.M. Sperberg-McQueen (eds.), 1998. Extensible Markup Language (XML) 1.0: W3C Recommendation 10-Feb-1998. W3C.
Chi-Min Shu
Process Safety and Disaster Prevention Laboratory, Department of Safety, Health, and Environmental
Engineering, National Yunlin University of Science and Technology, Douliou, Yunlin, Taiwan, ROC
ABSTRACT: Forklifts are so maneuverable that they can move almost anywhere. With a stack board, forklifts also have the capability of loading, unloading, lifting and transporting materials. Forklifts are not only widely used in various fields and regions, but are common in industry for materials handling. Because they are used so frequently, incidents arising from incorrect forklift structures, inadequate maintenance, poor working conditions, wrong operation by forklift operators, and so on, may result in property damage and casualties. Forklifts may, for example, be operated (1) at excessive speed, (2) in reverse or while rotating, (3) overloaded, (4) lifting a worker, (5) on an inclined road, or (6) with obscured vision, which may result in overturning, crushing the operator, hitting pedestrian workers, causing loads to collapse, or lifting a worker high enough to fall. All of the above can result in labor accidents and the loss of property. According to the occupational disaster statistics of the Council of Labor Affairs, Executive Yuan, Taiwan, approximately 10 laborers perish in Taiwan due to forklift accidents annually, which shows that forklift risk is extremely high. If the operational site, the operator, the forklift or the work environment does not meet the safety criteria, a labor accident can occur. As far as loss prevention is concerned, care should be taken to guard materials handling, especially forklift operation. This study provides some methods for field application in order to prevent forklift overturn accidents and any related casualties.
1 INTRODUCTION

1.1
Laborers may be struck by forklifts when the driver's view is obscured by goods piled too high; when the forklift is driven, reversed or turned too fast; when the driver forgets to use the alarm device, turn signal, head lamp, rear lamp or other signals while driving or reversing; when the driver is not noticed by pedestrian workers or cyclists; or when the driver is disturbed by the work environment, such as corners, exits and entrances, a shortage of illumination, noise, rain, etc.
According to the occupational disaster statistics of the Council of Labor Affairs (CLA), Executive Yuan, Taiwan, forklifts caused approximately 10 fatalities annually from 1997 to 2007, as shown in Fig. 1 (http://www.iosh.gov.tw/, 2008). The manufacturing industry; the transportation, warehousing and communication industry; and the construction industry are the three sectors with the highest numbers of occupational forklift accidents in Taiwan, as shown in Fig. 2. In addition, the forklift accidents were ascribed, in order, to being struck by a forklift, falling off a forklift, being crushed by a forklift, and overturning. As in the USA, forklift overturns are the leading cause of fatalities involving forklift accidents, representing about 25% of all forklift-related deaths (NIOSH alert, 2001). Figure 3 shows that the forklift occupational accidents can be classified into five main types, explained as follows (Collins et al., 1999; Yang, 2006).
1.2

Figure 1. Number of forklift-related fatalities in Taiwan by year, 1997–2007.

Figure 2. Forklift-related fatalities by industry sector: manufacturing (60); transportation, warehousing and communication (27); construction (10); others.

Figure 3. Forklift-related deaths by accident type: struck (37), collapsing and falling, crushed, overturn, stuck and pinned (11), hitting (4), others (2).

2.1 Case one

2.2 Case two
and escaped quickly from the driver's seat to the warehouse entrance. However, the forklift could not be stopped or decelerated, and kept moving forward to the warehouse entrance. After the forklift mast hit the connecting rod, the center of gravity shifted to the driver's seat side and the forklift swung back toward the warehouse entrance. The laborer was hit and killed. A similar accident, in which a forklift overturned and tipped, is pictured in Fig. 6.
Figure 4.

Figure 6.

3 RELATED REGULATIONS
a. The employer should provide safe protective equipment for moving forklifts, set up according to the provisions of the machine apparatus protection standard.
b. The employer should ensure that forklifts do not carry a laborer on the pallet or skid of goods on the forks, or on any other part of the forklift outside the driver's seat, and the driver or relevant personnel should be responsible for this. Forklifts that have been stopped, or that have equipment or measures to keep laborers from falling, are not subject to this restriction.
c. The pallet or skid used on the forks of a forklift should be able to carry the weight of the goods.
d. The employer should ensure that the forks are placed on the ground and the power of the forklift is shut off when the operator alights.
e. The employer should not use a forklift without a load backrest in place, unless an inclining mast would not cause goods to fall off the forklift and endanger a laborer, in which case the restriction does not apply.
f. The employer should provide the necessary safety and health equipment and measures when the employee
ACKNOWLEDGMENTS
The authors are deeply grateful to the Institute of Occupational Safety and Health, Council of Labor Affairs,
Executive Yuan, Taiwan, for supplying related data.
REFERENCES
http://www.iosh.gov.tw/frame.htm, 2008; http://www.cla.gov.tw, 2008.
U.S. National Institute for Occupational Safety and Health, 2001. NIOSH alert: preventing injuries and deaths of workers who operate or work near forklifts. NIOSH Publication Number 2001-109.
Collins, J.W., Landen, D.D., Kisner, S.M., Johnston, J.J., Chin, S.F. & Kennedy, R.D., 1999. Fatal occupational injuries associated with forklifts, United States, 1980–1994. American Journal of Industrial Medicine, Vol. 36, 504–512.
Yang, Z.Z., 2006. Forklift accidents analysis and prevention. Industry Safety Technology, 26–30.
The Rule of Labor Safety and Health Facilities, 2007. Chapter 5, Council of Labor Affairs, Executive Yuan, Taipei, Taiwan, ROC.
The Machine Tool Protection Standard, 2001. Chapter 5, Council of Labor Affairs, Executive Yuan, Taipei, Taiwan, ROC.
The Rule of Labor Safety Health Organization Management and Automatic Inspection, 2002. Chapters 4 and 5, Council of Labor Affairs, Executive Yuan, Taipei, Taiwan, ROC.
a. Ensure that workplace safety inspections are routinely conducted by a person who can identify
hazards and conditions that are dangerous to workers. Hazards include obstructions in the aisle, blind
corners and intersections, and forklifts that come
too close to workers on foot. The person who conducts the inspections should have the authority to
implement prompt corrective measures.
b. Enforce safe driving practices, such as obeying
speed limits, stopping at stop signs, and slowing
down and blowing the horn at intersections.
c. Repair and maintain cracks, crumbling edges, and
other defects on loading docks, aisles, and other
operating surfaces.
4.5 Workers-labor event
E. Poupart
Products and Ground Systems Directorate/Generic Ground Systems Office, CNES, Toulouse, France
ABSTRACT: This paper presents a model-based approach for improving the training of satellite control room operators. By identifying hazardous system states and potential scenarios leading to those states, suggestions are made highlighting the required focus of training material. Our approach is grounded in current knowledge in the fields of interactive systems modeling and barrier analysis. Its application is shown on a satellite control room incident.
1 INTRODUCTION
Table 1.

Identification of users | Types of users | Examples of systems | Training availability | Response to failure | Barrier implementation
Anonymous | General public | Walk-up-and-use | None | Software/hardware patch | Technical barrier
Many identifiable | General public | Microsoft Office | Online documentation only; training offered by third parties | Software/hardware + online documentation patch | Technical barrier
Few identifiable | Specialists, pilots | Software development tools, aircraft cockpits | Optional; in-house and third parties | Software/hardware + documentation patch | Technical barrier, socio-technical barrier
Very few identifiable | Satellite operators | Satellite control room | | | Human barrier
Figure 1.
2.2 Frameworks
Within the Model Based Industrial Training (MOBIT)
project, Keith Brown (Brown 1999) developed the Model-based adaptive training framework (MOBAT). The framework consists of a set of specification and realisation methods for model-based intelligent training
agents in changing industrial training situations. After
a specification of training problem requirements, a
detailed analysis splits into three separate areas for:
(a) a specification of the tasks a trainee is expected
Figure 2.
Figure 4.
Figure 5.
The incident
The incident described in this section occurred during the test phase of a satellite development. The
Commanding operator sent a TC without receiving
confirmation from the Flight Director. There was no
impact on the satellite involved in the tests.
The circumstances surrounding the incident were
the following. Several minutes prior to the incident,
45 hazardous TCs were sent with a global go-ahead
This approach assumes that modifications to the system are not possible. Therefore, changes to operator
behavior are necessary. Using a model representing the
behavior of the system, it is possible to clearly identify the hazardous state within which operators must
be wary and intentionally avoid the known problem.
The system model will also indicate the pre-conditions
(system and/or external events) that must be met in
order to reach a particular hazardous state providing
an indicator of which actions the operator must not
perform.
The advantages of this human-barrier approach lie in the fact that no modifications to the system are necessary (no technical barriers required), and it is efficient if only a few users are involved (as within a control centre). We would argue that this is not an ideal solution to the problem, but that the company would have to make do with it as a last resort if the system is inaccessible.
4.2 INTERACTIVE COOPERATIVE OBJECTS & PETSHOP
The Interactive Cooperative Objects (ICOs) formalism is a formal description technique dedicated
to the specification of interactive systems (Bastide
et al. 2000). It uses concepts borrowed from the
object-oriented approach (dynamic instantiation, classification, encapsulation, inheritance, client/server
relationship) to describe the structural or static aspects of systems, and uses high-level Petri nets (Genrich 1991) to describe their dynamic or behavioural aspects.
An ICO specification fully describes the potential
interactions that users may have with the application. The specification encompasses both the input
aspects of the interaction (i.e. how user actions impact
Figure 7.
Using the ICO formalism and its CASE tool, Petshop (Navarre, Palanque, and Bastide 2003), a model
describing the behavior of a ground segment application for the monitoring and control of a non-realistic
procedure, including the sending of a hazardous
telecommand has been designed. Figure 7 illustrates this ICO model. It can be seen that, as the complexity of the system increases, so does the Petri net model. This is why the ICO notation that we use involves various communication mechanisms between
6.1 Sending a telecommand

Figure 9.
not involve the operator clicking a button on the interface. Rather, this action can be considered as a human cognitive task followed by physical interaction with an independent communication system (see Figure 5).
Tokens in places RequestPending and FlightDirector allow transition RequestGoAheadFD to be fireable. This transition contains the statement y=fd.requestGoAhead(x).
It is important to note that data from the token in place FlightDirector would come from a separate model representing the behaviour of the Flight Director (his interactions with control room members, etc.). Once place Result receives a token, four transitions become fireable: Go, WaitTimeAndGo, WaitDataAndGo and NoGo, representing the possible responses from the Flight Director. Again, these transitions (apart from the one involving time) are represented using autonomous transitions. While they are external events, they are not events existing in the modelled application (i.e. interface buttons); they are verbal communications. In the case of each result, the operator can still click on both SEND and FAIL on the interface, transitions SendTc3_x and Cancel3_x in Figure 9. We provide here the explanation of one scenario. If the token (x) from the FlightDirector place contains the data WaitTime, then transition WaitTimeAndGo (containing the statement y==WaitTime) is fireable, taking the value of y.
Within the scenario in which the operator must wait for data (from an external source) before clicking SEND, the model contains references to an external service (small arrows on places SIP_getData, SOP_getData and SEP_getData: Service Input, Output and Exception Port respectively). When in state WaitingForData, a second model, representing the service getData, would interact with the current procedure model.
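The token-game semantics described above can be sketched in a few lines: a transition is fireable when every input place holds a token, and firing moves tokens from inputs to outputs. Place and transition names follow the text (RequestPending, FlightDirector, Result, Go, NoGo, ...), but the net itself is a toy illustration, not the actual ICO/PetShop model.

```python
# Toy marking of the go-ahead step: one pending request, one available
# Flight Director, no result yet.
marking = {"RequestPending": 1, "FlightDirector": 1, "Result": 0}

INPUTS = {
    "RequestGoAheadFD": ["RequestPending", "FlightDirector"],
    "Go": ["Result"], "NoGo": ["Result"],
    "WaitTimeAndGo": ["Result"], "WaitDataAndGo": ["Result"],
}
OUTPUTS = {"RequestGoAheadFD": ["Result", "FlightDirector"]}

def fireable(transition, marking, inputs):
    """A transition is fireable when every input place holds a token."""
    return all(marking[p] >= 1 for p in inputs[transition])

def fire(transition, marking, inputs, outputs):
    """Consume one token per input place, produce one per output place."""
    assert fireable(transition, marking, inputs)
    for p in inputs[transition]:
        marking[p] -= 1
    for p in outputs.get(transition, []):
        marking[p] += 1

fire("RequestGoAheadFD", marking, INPUTS, OUTPUTS)
# Once Result holds a token, the four result transitions compete for it.
print(fireable("Go", marking, INPUTS))  # -> True
```

In the real ICO model the choice among Go, NoGo, WaitTimeAndGo and WaitDataAndGo is resolved by the data carried in the token, not shown in this sketch.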
ACKNOWLEDGEMENTS

This research was financed by the CNES R&T Tortuga project, R-S08/BS-0003-029.

REFERENCES

Barboni, E., Navarre, D., Palanque, P. & Basnyat, S. 2007. A Formal Description Technique for the Behavioural Description of Interactive Applications Compliant with ARINC Specification 661. Hotel Costa da Caparica, Lisbon, Portugal.
Barboni, E., Navarre, D., Palanque, P. & Basnyat, S. 2006. Addressing Issues Raised by the Exploitation of Formal Specification Techniques for Interactive Cockpit Applications.
Basnyat, S. & Palanque, P. 2006. A Barrier-based Approach for the Design of Safety Critical Interactive Application. In Guedes Soares & Zio (eds). Estoril, Portugal: Taylor & Francis Group.
Basnyat, S., Palanque, P., Schupp, B. & Wright, P. 2007. Formal socio-technical barrier modelling for safety-critical interactive systems design. Special issue of Elsevier's Safety Science, Safety in Design, 45:545–565.
Bastide, R., Sy, O., Palanque, P. & Navarre, D. 2000. Formal specification of CORBA services: experience and lessons learned. ACM Press.
Brown, Keith. 1999. MOBIT: a model-based framework for intelligent training. pp. 6/1–6/4 in Invited paper at the IEEE Colloquium on AI.
Cortiade, E. & Cros, P.A. 2008. OCTAVE: a data model-driven Monitoring and Control system in accordance with emerging CCSDS standards such as XTCE and SM&C architecture. SpaceOps 2008, 12–16 May 2008, Heidelberg, Germany.
Eiff, G.M. 1999. Organizational safety culture. In R.S. Jensen (ed.), Tenth International Symposium on Aviation Psychology (pp. 778–783). Columbus, OH: The Ohio State University.
Elizalde, Francisco, Enrique Sucar & Pablo deBuen. 2006. An Intelligent Assistant for Training of Power Plant Operators. pp. 205–207 in Proceedings of the Sixth IEEE International Conference on Advanced Learning Technologies. IEEE Computer Society.
Fitts, P.M. 1954. The Information Capacity of the Human Motor System in Controlling the Amplitude of Movement.
Genrich, H.J. 1991. Predicate/Transition Nets. In High-Level Petri Nets: Theory and Application, pp. 3–43. Springer Verlag.
Hollnagel, E. 1999. Accidents and barriers. pp. 175–180 in J.M. Hoc, P. Millot, E. Hollnagel & P.C. Cacciabue (eds), Proceedings CSAPC'99. Villeneuve d'Ascq, France: Presses Universitaires de Valenciennes.
Hollnagel, E. 2004. Barriers and Accident Prevention. Ashgate.
Johnson, C.W. 1997. Beyond Belief: Representing Knowledge Requirements For The Operation of Safety-Critical Interfaces. pp. 315–322 in Proceedings of the IFIP TC13 International Conference on Human-Computer Interaction. Chapman & Hall, Ltd. http://portal.acm.org/citation.cfm?id=647403.723503&coll=GUIDE&dl=GUIDE&CFID=9330778&CFTOKEN=19787785 (Accessed February 26, 2008).
Johnson, C.W. 2006. Understanding the Interaction Between Safety Management and the "Can Do" Attitude in Air Traffic Management: Modelling the Causes of the Ueberlingen Mid-Air Collision. In F. Reuzeau & K. Corker (eds), Proceedings of Human-Computer Interaction in Aerospace 2006, Seattle, USA, 20–22 September 2006. Cepadues Editions, Toulouse, France. pp. 105–113. ISBN 2-85428-748-7.
Khan, Paul, Brown & Leitch. 1998. Model-Based Explanations in Simulation-Based Training. Intelligent Tutoring Systems. http://dx.doi.org/10.1007/3-540-68716-5_7 (Accessed February 20, 2008).
Kontogiannis, Tom. 2005. Integration of task networks and cognitive user models using coloured Petri nets and its application to job design for safety and productivity. Cogn. Technol. Work 7:241–261.
Lee, Jang R., Fanjoy, Richard O. & Dillman, Brian G. The Effects of Safety Information on Aeronautical Decision Making. Journal of Air Transportation.
Lin, Fuhua. 2001. Modeling online instruction knowledge using Petri nets. pp. 212–215, vol. 1, in Communications, Computers and Signal Processing, PACRIM 2001. IEEE Pacific Rim Conference on, vol. 1.
NASA. 2005. Man-machine Integration Design and Analysis System (MIDAS). http://human-factors.arc.nasa.gov/dev/www-midas/index.html (Accessed February 19, 2008).
Navarre, D., Palanque, P. & Bastide, R. 2004. A Formal Description Technique for the Behavioural Description
ABSTRACT: Road safety depends mainly on traffic intensity and the condition of the road infrastructure. More precisely, many more factors influence safety in road transportation, but the driver's perception and processing of them, and the conclusions drawn while driving, are usually difficult and in many cases fall behind reality. The Diagnostic Aided Driving System (DADS) is a proposed support system that provides the driver with selected, most useful information, reducing risk due to the environment and weather conditions as well as statistics of past hazardous events for a given road segment. The paper presents the part of the project aiming to describe relations and dependencies among different traffic conditions influencing the number of road accidents.
FOREWORD
2
2.1
- exact date and time of the event (time of day, week day, month, season),
- place of the event (number of lanes and traffic directions, compact/dispersed development, straight/bend road segment, dangerous turn, dangerous slope, top of the road elevation, road crossing region of equal importance or subordinated, roundabout, pedestrian crossing, public transport stop, tram crossing, railway subgrade, railway crossing unguarded/guarded, bridge, viaduct, trestle bridge, pavement, pedestrian way, bike lane, shoulder, median strip, road switch-over, property exit),
- severity of consequences (collision, crash, catastrophe),
- collision mode (collision in motion [front, side, rear], run over [a pedestrian, stopped car, tree, lamp-post, gate arm, pot-hole/hump/hole, animal], vehicle turn-over, accident with passenger, other),
- cause/accident culprit (driver, pedestrian, passenger, other reasons, complicity of traffic actors),
- accidents caused by pedestrians (standing or lying in the road, walking in the wrong lane, crossing on a red light, careless road entrance [in front of a driving car; from behind a car or obstacle], wrong road crossing [stopping, going back, running], crossing at a forbidden place, passing along a railway, jumping into a vehicle in motion, children under 7 years old [playing in the street, running into the street], other),
- cause/vehicle maneuver (not adjusting speed to traffic conditions, not observing regulations, not giving free way, wrong [overtaking, passing by, passing,
2.2
λ = e^(β0 + β1x1 + β2x2 + … + βkxk)    (1)
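Model (1) is a Poisson regression with a log link, and its coefficients can be estimated by iteratively reweighted least squares. The sketch below fits such a model on synthetic data with numpy only; the data, the single covariate and the coefficient values (0.5, 1.2) are invented for illustration.

```python
import numpy as np

# Synthetic data for lambda = exp(b0 + b1*x1); coefficients are arbitrary.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.uniform(0, 1, 500)])  # intercept + x1
true_beta = np.array([0.5, 1.2])
y = rng.poisson(np.exp(X @ true_beta))

# IRLS for the Poisson GLM with log link.
beta = np.zeros(X.shape[1])
for _ in range(25):
    mu = np.exp(X @ beta)                # current mean lambda_i
    W = mu                               # Poisson variance equals the mean
    z = X @ beta + (y - mu) / mu         # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print(np.round(beta, 2))  # close to the true coefficients (0.5, 1.2)
```

A library such as statsmodels would fit the same model directly; the loop is written out here to show the mechanics behind equation (1).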
Figure 3. Accident frequency by time of day and by day of the week.

λ = e^(β0 + β1x1 + β2x2 + … + βkxk + ε)    (2)

3.2

Figure 5. Distribution of accidents and victims in months over the period 1999–2005.
Table 2. Parameter estimates of regression models 1–12. Variables: intercept β0; ln l, length of road segment; ln tv, daily traffic volume; Hv, heavy vehicle rate; J, number of all junctions; rj, number of road junctions; rn, number of road junctions (national roads); rc, number of road junctions (communal roads); rl, number of road junctions (local roads); rp, number of exits to parking; rm, number of modernizations. Estimates marked * are statistically insignificant.
Table 3. Model deviance and R2 measures for models 8–12.

Model   R2     R2p    R2w    R2wp   R2ft   R2ftp   MD       df    Deviance par.   Significance
8       0,72   0,88   0,69   0,87   0,69   0,87    261,26   105   0,034           0,1936
9       0,74   0,90   0,70   0,88   0,70   0,90    267,63   105   0,027           0,2617
10      0,76   0,92   0,72   0,91   0,72   0,92    269,76   105   0,020           0,3763
11      0,78   0,95   0,75   0,94   0,75   0,96    269,83   105   0,011           0,6067
12      0,70   0,85   0,67   0,84   0,67   0,85    248,61   105   0,053           0,1054
Comparison of regression models for number of road events, accidents, injured and fatalities.
Coefficient estimates are given for the intercept β0 and for ln l, ln tv, hv, rj, rn, rc, rl, rp and rm in each of the four models.
rmi = x10 — time fraction of the year with modernization road works on subsection i (value from 0 to 1).

λi = li^β1 · tvi^β2 · e^(β0 + Σ(j>2) βj·xji)    (3)
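Model (3) can be evaluated directly: segment length and traffic volume enter through power terms, and the remaining variables through the exponential. The coefficient and covariate values below are arbitrary placeholders for illustration, not the fitted estimates from Table 2.

```python
import math

def expected_events(l_i, tv_i, x_i, beta0, beta1, beta2, betas):
    """lambda_i = l_i**beta1 * tv_i**beta2 * exp(beta0 + sum_j beta_j * x_ji)."""
    return (l_i ** beta1) * (tv_i ** beta2) * math.exp(
        beta0 + sum(b * x for b, x in zip(betas, x_i)))

# Hypothetical subsection: 2.5 km long, 8000 vehicles/day, two extra
# covariates (e.g. heavy-vehicle rate 0.1 and 3 junctions).
lam = expected_events(l_i=2.5, tv_i=8000.0, x_i=[0.1, 3.0],
                      beta0=-9.0, beta1=1.6, beta2=0.9, betas=[0.08, 0.05])
print(round(lam, 2))  # expected event count for this subsection
```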
Figure 7. Regression functions of the number of road events, accidents, injured and fatalities as a function of traffic volume (all remaining independent variables constant). The shaded area marks the range of traffic volume covered by the database.
CONCLUSIONS
Two main subjects are discussed in the paper: theoretical modeling of traffic events and elaborating
of regression models on the basis of real accident
database. Proposed model consists in developing of
prediction function for various road events. There
were four types of road events concerning: all road
events, accidents, injuries and fatalities. Verification
of the assessment that number of, so called, rare
events undergoes Poisson distribution was done comparing elaborated real data with Poisson model with
parameter calculated as regression function of ten
independent variables. Conformity of five proposed
models was checked calculating statistical significance of parameter . Regression models applied to
online collected data are seen to be a part of active
REFERENCES

Baruya, A. 1998. Speed-accident Relationships on European Roads. Transport Research Laboratory, U.K.
Evans, A.W. 2003. Estimating Transport Fatality Risk From Past Accident Data. Accident Analysis & Prevention, no. 35.
Fricker, J.D. & Whitford, R.K. 2004. Fundamentals of Transportation Engineering: A Multimodal System Approach. Pearson, Prentice Hall.
Kutz, M. (ed.). 2004. Handbook of Transportation Engineering. McGraw-Hill Co.
Miaou, S.P. & Lum, H. 1993. Modeling Vehicle Accidents and Highway Geometric Design Relationships. Accident Analysis & Prevention, no. 25.
Michener, R. & Tighe, C. 1999. A Poisson Regression Model of Highway Fatalities. The American Economic Review, vol. 82, no. 2, pp. 452–456.
Wixson, J.R. 1992. The Development of an ES&H Compliance Action Plan Using Management Oversight, Risk Tree Analysis and Function Analysis System Technique. SAVE Annual Proceedings.
ABSTRACT: EDF R&D has developed an organisational analysis method. This method, which was designed from an in-depth examination of numerous industrial accidents, incidents and crises and from the main scholarly findings in the domain of safety, is intended for industrial safety purposes. After some thoughts regarding the (dis)connections between safety and availability, this paper analyses to what extent this method could be used for availability-oriented event analysis.
INTRODUCTION
2.1 Historical dimension

– Historical dimension;
– Organizational network;
– Vertical relationships in the organization (from field operators to plant top management).

We have to note that, while these dimensions are introduced in an independent way, they interact, and an analysis has to deal with them in parallel (and in interaction).
Analyses have also shown some repetition of organizational root causes of events. Following Reason's terminology (1997), we have defined these causes as Pathogenic Organizational Factors (POFs).
POFs cover a certain number of processes and phenomena in the organization with a negative impact on safety. They may be identified by their various effects within the organisation, which can be assimilated to symptoms. Their effects may be objective, or even quantifiable, or they may be subjective and thus only felt and suggested by employees. POFs represent the formalisation of repetitive, recurrent or exemplary phenomena detected in the occurrence of multiple events. These factors can be described as the hidden factors which lead to an accident.
A list of seven POFs was defined (Pierlot, 2007). Table 2 gives this list of seven POFs. The list should not be considered definitive, since new case-study findings could still add to it, even though it is relatively stable. Classification of the factors is somewhat arbitrary since, given the multiplicity and
also a safety mission, because they are part of the second barrier against radioactive material leakage, and also because their cooling capacities are necessary to deal with some accidental events.

But all of these systems could have an impact on both safety and availability. This is clear for systems dealing with both Safety and Availability but, in case of failures, some of the systems dealing only with availability could clearly have an impact on safety, just as some of the systems dealing only with safety could clearly have an impact on availability (Voirin et al., 2007). For example, if there is a turbine trip while the plant is operating at full power, the normal way to evacuate the energy produced by the nuclear fuel is no longer available without the help of other systems, and that may endanger safety even if the turbine is only dedicated to availability.

Safety and availability thus have complex relationships: the fact that short-term or medium-term availability is ensured does not mean that long-term availability or safety are ensured as well.

And we shall never forget that, before their accidents, the Chernobyl nuclear power plant, like the Bhopal chemical factory, had records of very good availability levels, even if, in the Bhopal case, these records were obtained with safety devices that had been out of order for many months (Voirin et al., 2007).
CONCLUSIONS
The Organisational Analysis performed on the occurrence of an Availability event confirms that the method designed for Safety purposes can be used for an event dealing with Availability.

The three-main-dimensions approach (historical, cross-functional and vertical), the data collection and checking (written data as well as information given by interviewees), and the use of thick description make it possible to perform the analysis of an Availability event and to give a general sense to all the collected data.

Knowledge of the Organisational Analysis of many different safety events, and of the different Safety-oriented Pathogenic Organisational Factors, is a required background for the analysts, so that they know what must be looked for, where to look for it, and which interpretations can be drawn from the collected data.

This case also confirms that the usual difficulties encountered during the Organisational Analysis of a Safety event are also present for the Organisational Analysis of an Availability event: the vertical dimension is more difficult to address, and the question of single data is a tough issue that could deserve deeper thought.

The analysis also shows that most of the Safety-oriented Pathogenic Organisational Factors could also be seen as Availability-oriented Pathogenic Organisational Factors. However, these factors are focused only on the event occurrence; they do not intend to deal with the organisation's capacity to recover from the availability event. We believe that this particular point must also be studied more carefully.

The last point is that an Availability event Organisational Analysis must be performed with a . . . The POFs
REFERENCES
Columbia Accident Investigation Board 2003. Columbia Accident Investigation Board. Report Volume 1.
Cullen, W.D. [Lord] 2000. The Ladbroke Grove Rail Inquiry, Part 1 Report. Norwich: HSE Books, Her Majesty's Stationery Office.
Cullen, W.D. [Lord] 2001. The Ladbroke Grove Rail Inquiry, Part 2 Report. Norwich: HSE Books, Her Majesty's Stationery Office.
Dien, Y. 2006. Les facteurs organisationnels des accidents industriels. In: Magne, L. & Vasseur, D. (eds), Risques industriels: Complexité, incertitude et décision: une approche interdisciplinaire, 133–174. Paris: Éditions TEC & DOC, Lavoisier.
Dien, Y., Llory, M., Pierlot, S. 2006. Sécurité et performance: antagonisme ou harmonie? Ce que nous apprennent les accidents industriels. Proc. Congrès λµ 15, Lille, October 2006.
Dien, Y., Llory, M. 2006. Méthode d'analyse et de diagnostic organisationnel de la sûreté. EDF R&D internal report.
Dien, Y., Llory, M. & Pierlot, S. 2007. L'accident à la raffinerie BP de Texas City (23 Mars 2005): Analyse et première synthèse. EDF R&D internal report.
Geertz, C. 1998. La description épaisse. In Revue Enquête, La Description, Vol. 1, 73–105. Marseille: Éditions Parenthèses.
Hollnagel, E., Woods, D.D. & Leveson, N.G. (eds) 2006. Resilience Engineering: Concepts and Precepts. Aldershot: Ashgate Publishing Limited.
Hopkins, A. 2003. Lessons from Longford. The Esso Gas Plant Explosion. Sydney: CCH Australia Limited, 7th edition (1st edition 2000).
Llory, M. 1998. Ce que nous apprennent les accidents industriels. Revue Générale Nucléaire, Vol. 1, janvier–février, 63–68.
Llory, M., Dien, Y. 2006–2007. Les systèmes sociotechniques à risques : une nécessaire distinction entre fiabilité et sécurité. Performances, No. 30 (Sept–Oct 2006); No. 31 (Nov–Dec 2006); No. 32 (Janv–Fév 2007).
Perrow, C. 1984. Normal Accidents. Living with High-Risk Technology. New York: Basic Books.
Pierlot, S. 2006. Risques industriels et sécurité: les organisations en question. Proc. Premier Séminaire de Saint-André, 26–27 Septembre 2006, 19–35.
Pierlot, S., Dien, Y., Llory, M. 2007. From organizational factors to an organizational diagnosis of the safety. Proc. European Safety and Reliability Conference, T. Aven & J.E. Vinnem (eds), Taylor & Francis Group, London, UK, Vol. 2, 1329–1335.
Reason, J. 1997. Managing the Risks of Organizational Accidents. Aldershot: Ashgate Publishing Limited.
Roberts, K. (ed.) 1993. New Challenges to Understanding Organizations. New York: Macmillan Publishing Company.
Sagan, S. 1993. The Limits of Safety: Organizations, Accidents and Nuclear Weapons. Princeton: Princeton University Press.
Sagan, S. 1994. Toward a Political Theory of Organizational Reliability. Journal of Contingencies and Crisis Management, Vol. 2, No. 4: 228–240.
Salmon, C. 2007. Storytelling, la machine à fabriquer des histoires et à formater les esprits. Paris: Éditions La Découverte.
Turner, B. 1978. Man-Made Disasters. London: Wykeham Publications.
U.S. Chemical Safety and Hazard Investigation Board 2007. Investigation Report, Refinery Explosion and Fire, BP Texas City, Texas, March 23, 2005, Report No. 2005-04-I-TX.
Vaughan, D. 1996. The Challenger Launch Decision. Risky Technology, Culture, and Deviance at NASA. Chicago: The University of Chicago Press.
Vaughan, D. 1997. The Trickle-Down Effect: Policy Decisions, Risky Work, and the Challenger Tragedy. California Management Review, Vol. 39, No. 2, 80–102.
Vaughan, D. 1999. The Dark Side of Organizations: Mistake, Misconduct, and Disaster. Annual Review of Sociology, Vol. 25, 271–305.
Vaughan, D. 2005. System Effects: On Slippery Slopes, Repeating Negative Patterns, and Learning from Mistake. In: Starbuck, W. & Farjoun, M. (eds), Organization at the Limit. Lessons from the Columbia Disaster. Oxford: Blackwell Publishing Ltd.
Voirin, M., Pierlot, S. & Llory, M. 2007. Availability organisational analysis: is it hazard for safety? Proc. 33rd ESReDA Seminar: Future challenges of accident investigation, Ispra, 13–14 November 2007.
J.M. Tseng
Graduate School of Engineering Science and Technology, National Yunlin University of Science
and Technology, Douliou, Yunlin, Taiwan, ROC
C.M. Shu
Department of Safety, Health, and Environmental Engineering, National Yunlin University of Science
and Technology, Douliou, Yunlin, Taiwan, ROC
ABSTRACT: In the past, process accidents incurred by organic peroxides (OPs), involving near misses, over-pressures, runaway reactions, thermal explosions, and so on, occurred because of poor training, human error, incorrect kinetic assumptions, insufficient change management, inadequate chemical knowledge, and so on, in the manufacturing process. Calorimetric methods are broadly employed in the laboratory to test small-scale samples of organic peroxide materials because of their thermal hazards, such as exothermic behavior and self-accelerating decomposition. In essence, methyl ethyl ketone peroxide (MEKPO) is highly reactive and has an unstable exothermic character; in recent years it has been involved in many thermal explosions and runaway reaction accidents in the manufacturing process. Differential scanning calorimetry (DSC), the vent sizing package 2 (VSP2) and a thermal activity monitor (TAM) were employed to analyze thermokinetic parameters and safety indices and to facilitate various auto-alarm equipment, such as over-pressure, over-temperature and hazardous material leak alarms, during a whole spectrum of operations. Results indicated that MEKPO decomposes at low temperature (30–40 °C) with exponential development. Time to maximum rate (TMR), self-accelerating decomposition temperature (SADT), maximum temperature (Tmax), exothermic onset temperature (T0), heat of decomposition (ΔHd), etc., are necessary and mandatory for industries to discover early-stage runaway reactions effectively.
INTRODUCTION
The behavior of thermal explosions or runaway reactions has been widely studied for many years. According to Semenov theory (Semenov, 1984), a reactor with an exothermic reaction is susceptible to accumulating energy and temperature when the heat generation rate exceeds the heat removal rate. Unsafe actions and behaviors by operators, such as poor training, human error,
incorrect kinetic assumptions, insufficient change
management, inadequate chemical knowledge etc.,
lead to runaway reactions, thermal explosions, and
release of toxic chemicals, as have sporadically
occurred in industrial processes (Smith, 1982). Methyl
ethyl ketone peroxide (MEKPO), cumene hydroperoxide (CHP), di-tert-butyl peroxide (DTBP), tert-butyl
Table 1. MEKPO thermal explosion accidents in various nations (data from MHIDAS, 2006).

Nation      Period      Frequency   I     F    Worst case
Japan       1953–1978   14          115   23   Tokyo
China       1980–2004   14          13    14   Honan
Taiwan      1984–2001   5           156   55   Taipei
Korea       2000        1           11    3    Yosu
Australia   1973–1986   2           0     0    NA
UK          1962        1           0     0    NA

I: injuries; F: fatalities. NA: not applicable.
Figure 2. Manufacturing process of MEKPO in Taiwan: methyl ethyl ketone, dimethyl phthalate (DMP), hydrogen peroxide (H2O2) and H3PO4 are fed into the reactor; the mixture passes through a crystallization tank (cooling to lower than 10 °C), a dehydration tank and a drying tank, and the MEKPO is then held in a storage tank.
EXPERIMENTAL SETUP
72
Table 2. Thermokinetics and safety parameters of 31 mass% MEKPO (1) and 20 mass% H2O2 (2) by DSC at 4 °C min−1.

1: MEKPO; 2: H2O2.
DSC was applied to detect the fundamental exothermic behavior of 31 mass% MEKPO in DMP, which was purchased directly from the Fluka Co. The density, measured and provided directly by the Fluka Co., was ca. 1.025 g cm−3. The sample was, in turn, stored in a refrigerator at 4 °C (Liao et al., 2006; Fessas et al., 2005; Miyakel et al., 2005; Marti et al., 2004; Sivapirakasam et al., 2004; Hou et al., 2001). DSC is regarded as a useful tool for the evaluation of thermal hazards and for the investigation of decomposition mechanisms of reactive chemicals if the experiments are carried out carefully.
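The heat of decomposition obtained from such a DSC run is, in essence, the time integral of the measured heat flow normalized by the sample mass. A minimal sketch of that integration follows; the trace is a made-up Gaussian-shaped exotherm, not measured data.

```python
import numpy as np

def heat_of_decomposition(time_s, heat_flow_mw, mass_mg):
    """Trapezoidal integration of a DSC heat-flow trace.
    mW integrated over s gives mJ; dividing by sample mass in mg gives J/g."""
    area_mj = 0.5 * np.sum((heat_flow_mw[1:] + heat_flow_mw[:-1]) * np.diff(time_s))
    return area_mj / mass_mg

# Synthetic exotherm: a Gaussian-shaped peak of about 4 mW centred at 300 s
t = np.linspace(0.0, 600.0, 601)
q = 4.0 * np.exp(-(((t - 300.0) / 60.0) ** 2))
dh = heat_of_decomposition(t, q, mass_mg=1.0)  # roughly 425 J/g for this toy peak
```

In practice a baseline would be subtracted from the trace before integrating; that step is omitted here for brevity.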
[Figure: DSC thermal curves of 31 mass% MEKPO and 20 mass% H2O2 (heat flow versus temperature, 0–300 °C).]

2.2
VSP2, a PC-controlled adiabatic calorimeter manufactured by Fauske & Associates, Inc. (Wang et al., 2001), was applied to obtain thermokinetic and thermal hazard data, such as temperature and pressure traces versus time. The low heat capacity of the cell ensured that almost all the reaction heat released remained within the tested sample. Thermokinetic and pressure behavior could usually be tested in the same test cell (112 mL), without any difficult extrapolation to the process scale, due to a low thermal inertia factor (φ) of about 1.05 and 1.32 (Chervin & Bodman, 2003). The low φ allows for bench-scale simulation of the worst credible case, such as incorrect dosing, cooling failure, or external fire conditions. In addition, to avoid bursting the test cell and losing all the exothermic data, the VSP2 tests were run with a low concentration or a smaller amount of reactants. VSP2 was used to evaluate the essential thermokinetics of 20 mass% MEKPO and 20 mass% H2O2. The standard operating procedure was run in automatic heat-wait-search (HWS) mode.
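Because the test cell still absorbs part of the reaction heat, measured adiabatic temperature rises are conventionally scaled by the thermal inertia factor φ. The sketch below shows that correction; the function name is illustrative, and the onset/maximum pairing follows the VSP2 values reported for MEKPO.

```python
def corrected_adiabatic_rise(t_onset, t_max, phi):
    """Correct a measured adiabatic temperature rise for thermal inertia:
    phi = (m_sample*c_sample + m_cell*c_cell) / (m_sample*c_sample)."""
    return phi * (t_max - t_onset)

# VSP2-like numbers: onset 90 C, maximum 263 C, phi = 1.05
delta_t_true = corrected_adiabatic_rise(90.0, 263.0, 1.05)  # 181.65 C
```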
Table 3. Thermokinetic and safety parameters of 20 mass% MEKPO (1) and 20 mass% H2O2 (2) by VSP2: T0 (°C) 90 and 80; Tmax (°C) 263 and 158; Pmax (psi) 530 and 841; ΔTad (°C) and Ea (kJ mol−1) were also reported (Ea = 97.54 kJ mol−1 for MEKPO; NA for H2O2).

1: MEKPO; 2: H2O2.
[Figures: temperature and pressure histories versus time from VSP2 runs of 20 mass% MEKPO and 20 mass% H2O2.]

Figure 6. Dependence of rate of temperature rise on temperature from VSP2 experimental data for 20 mass% MEKPO and H2O2 (Tmax = 263 °C for MEKPO; Tmax = 158 °C for H2O2).

Figure 7. Dependence of rate of pressure rise on temperature from VSP2 experimental data for 20 mass% MEKPO and H2O2 (Pmax = 530 psig for MEKPO).
Table 4. Scanning data of the thermal runaway decomposition of 31 mass% MEKPO by TAM.

Isothermal temperature (°C)   Mass (g)    Reaction time (hr)   TMR (hr)   ΔHd (J g−1)
60                            0.05008     800                  200        784
70                            0.05100     300                  80         915
80                            0.05224     250                  40         1,015
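A common use of such isothermal data is to extrapolate TMR to lower temperatures through an Arrhenius-type fit of ln(TMR) against 1/T. The sketch below does this with the three table points; the temperature/TMR pairing is taken from the reconstructed table, and the extrapolation target (30 °C) is an illustrative storage temperature, not one tested in the paper.

```python
import numpy as np

temps_c = np.array([60.0, 70.0, 80.0])   # isothermal TAM temperatures
tmr_hr = np.array([200.0, 80.0, 40.0])   # time to maximum rate at each temperature

# ln(TMR) is roughly linear in 1/T (K) when one reaction dominates
inv_t = 1.0 / (temps_c + 273.15)
slope, intercept = np.polyfit(inv_t, np.log(tmr_hr), 1)

def tmr_at(temp_c):
    """Extrapolated time to maximum rate (hr) at a given temperature."""
    return float(np.exp(intercept + slope / (temp_c + 273.15)))

tmr_30 = tmr_at(30.0)   # much longer at a typical storage temperature
```

The fitted slope is proportional to an apparent activation energy, which is why TMR lengthens so sharply as the temperature drops toward storage conditions.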
[Figure: TAM heat flow curves of 31 mass% MEKPO at isothermal temperatures of 60, 70 and 80 °C versus time (hr).]

CONCLUSIONS

ACKNOWLEDGMENT

REFERENCES
ABSTRACT: A disaster often causes a series of derivative disasters, and this spreading can form disaster chains. A mathematical model for the risk analysis of disaster chains is proposed on a causality network, in which each node represents one disaster and each arc represents the interdependence between two disasters. The nodes of the network are classified into two categories, active nodes and passive nodes. The term inoperability risk, expressing the possible consequences of each passive node due to the influences of all other nodes, is used to assess the risk of disaster chains. An example that may occur in real life is solved to show how to apply the mathematical model. The results show that the model can describe the interaction and interdependence of mutually affecting emergencies, and that it can also estimate the risk of disaster chains.
INTRODUCTION
MATHEMATICAL MODEL
Figure 1.
Active nodes will be used to refer to the nodes whose consequences (loss or damage) are embodied through passive nodes; for example, the loss from an earthquake comes from the damage it causes to anthropogenic structures. Likewise, passive nodes will be used to refer to the nodes whose consequences are possible loss events embodied by themselves. Taking a power system as an example, the disaster is the consequence of the inability of part of its functions (called inoperability here). Risk is the integration of possibility and consequence. Therefore, the inoperability risk proposed by Haimes et al., which expresses the possible consequences of each passive node due to the influences of the other nodes (both passive and active), is used to assess the risk of disaster chains.

The proposed model is based on the assumption that the state of an active node is binary: an active node has only two states (occurrence or non-occurrence). The magnitude of the active node (disaster) is not taken into consideration in the model. In contrast, the state of a passive node can vary continuously from the normal state to the completely destroyed state.
2.2 The model

Let us define a certain active node with a certain occurrence probability as the start point of disaster chains. Let us index all nodes i = 1, 2, . . . , m, m + 1, m + 2, . . . , n: the nodes are active nodes if i = 1, 2, . . . , m, and passive nodes if i = m + 1, m + 2, . . . , n. We shall denote by aij the probability for any node i to induce directly the active node j; aij > 0 in case of the existence of an arrow from node i to node j, otherwise aij = 0. By definition, we put aii = 0. So we get the matrix:

    A = (aij),   i = 1, 2, . . . , n,  j = 1, 2, . . . , m                      (1)

If xi denotes the binary state of active node i, yk the inoperability state of passive node k, and θj the probability that active node j occurs spontaneously as a start point, the occurrence probability of active node j is

    Pj = θj + Σ_{i=1..m} xi aij + Σ_{k=m+1..n} yk akj                           (2)

Replacing the node states by their expected values Pi and Rk gives

    Pj = θj + Σ_{i=1..m} Pi aij + Σ_{k=m+1..n} Rk akj,   j = 1, 2, . . . , m    (3)

which holds provided that

    θj + Σ_{i=1..m} Pi aij + Σ_{k=m+1..n} Rk akj ≤ 1                           (4)

Since Pj is a probability, the general form is capped at 1:

    Pj = min( θj + Σ_{i=1..m} Pi aij + Σ_{k=m+1..n} Rk akj , 1 )               (5)

Similarly, the inoperability risk of passive node k directly induced by the active nodes is

    Rk = Σ_{i=1..m} Pi bik                                                      (6)

where B = (bik) collects the direct influence of active node i on passive node k. To assess the risk of disaster chains, we propose a quantity Mij called the direct influence coefficient.
It expresses the influence of inoperability between passive nodes i and j, which are directly connected. We get the direct influence coefficient matrix:

    M = (Mij),   i, j = m + 1, m + 2, . . . , n                                 (7)

The total influence between passive nodes, through chains of any length, is then

    T = Σ_{k=1..∞} M^k                                                          (8)

As this converges only if the matrix (I − M)^(−1) exists, we obtain the formula:

    T = (I − M)^(−1) − I                                                        (9)

The term received by passive node k through all chains can then be written Σ_{i=m+1..n} Ci Tik (10), and the corresponding direct contribution from the other passive nodes is Σ_{i=m+1..n} Ri Mik (11). That is also:

    Rk = Σ_{i=1..m} Pi bik + Σ_{i=m+1..n} Ri Mik                                (12)

and, capped at complete inoperability,

    Rk = min( Σ_{i=1..m} Pi bik + Σ_{i=m+1..n} Ri Mik , 1 )                     (13)

The model is thus the system

    Pj = min( θj + Σ_{i=1..m} Pi aij + Σ_{k=m+1..n} Rk akj , 1 )
    Rk = min( Σ_{i=1..m} Pi bik + Σ_{i=m+1..n} Ri Mik , 1 )                     (14)

    for j = 1, 2, . . . , m,  k = m + 1, m + 2, . . . , n

Finally, in the mathematical model we need to determine three matrices, i.e. the A matrix, the B matrix and the M matrix. For the matrix A, several physical models and probability models have been developed to give the probability that one disaster induces another (Salzano et al.). Extensive data collection, data mining and expert knowledge or experience may be required to help determine the M matrix. We can use historical data, empirical data and experience to give the matrix B.

EXAMPLE

[Matrices A, B and M of the example, with induction probabilities and influence coefficients between 0 and 1.]
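To make the capped fixed-point system concrete, the following toy sketch iterates it numerically. The block partition of A (active-to-active and passive-to-active parts) and every number here are invented for illustration; they are not the paper's example values.

```python
import numpy as np

def solve_chain(theta, A_aa, A_pa, B, M, iters=100):
    """Iterate the capped system to a fixed point.
    theta: spontaneous occurrence probabilities of the m active nodes
    A_aa:  m x m, active -> active induction probabilities
    A_pa:  p x m, passive -> active induction probabilities
    B:     m x p, active -> passive direct influence
    M:     p x p, passive -> passive direct influence coefficients"""
    P = np.zeros(len(theta))
    R = np.zeros(M.shape[0])
    for _ in range(iters):
        P = np.minimum(theta + P @ A_aa + R @ A_pa, 1.0)
        R = np.minimum(P @ B + R @ M, 1.0)
    return P, R

# One active node (an earthquake occurring with probability 0.5) and two
# passive nodes (a power plant and a commercial area).
theta = np.array([0.5])
A_aa = np.zeros((1, 1))
A_pa = np.zeros((2, 1))
B = np.array([[0.9, 0.5]])    # earthquake -> power plant, commercial area
M = np.array([[0.0, 0.3],     # power plant outage -> commercial area
              [0.0, 0.0]])
P, R = solve_chain(theta, A_aa, A_pa, B, M)

# The total-influence matrix of Eq. (9) exists here, since the chain of
# passive-to-passive influences dies out (spectral radius of M < 1):
T = np.linalg.inv(np.eye(2) - M) - np.eye(2)
```

With these toy values the commercial area picks up risk both directly from the earthquake and indirectly through the power plant outage, which is exactly the chain effect the model is meant to capture.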
CONCLUSIONS
Figure 2. Inoperability risk of the passive nodes (power plant, commercial area, public transportation, economic situation) as a function of the probability of earthquake.
J. Borell
LUCRAM (Lund University Centre for Risk Analysis and Management),
Department of Design Sciences, Lund University, Lund, Sweden
ABSTRACT: During recent years a couple of emergencies have affected the city of Malmo and its inhabitants and forced the city management to initiate emergency responses. These emergencies, as well as other incidents, are situations with a great potential for learning about emergency response effectiveness. Several methods are available for evaluating responses to emergencies; however, such methods do not always use the full potential for drawing lessons from the emergency situations that have occurred. Constructive use of principles or rules gained during one experience (in this case an emergency response) in another situation is sometimes referred to as positive transfer. The objective of this paper is to develop and demonstrate an approach for improving learning from the evaluation of specific response experiences through strengthening transfer. The essential principle in the suggested approach is to facilitate transfer by designing evaluation processes so that dimensions of variation are revealed and given adequate attention.
INTRODUCTION
METHOD

3 THEORY

3.1 Organisational learning

For maintaining a response capability in an organisation over time, there is a need for not only separate individuals but the entire organisation to have the necessary knowledge. According to Senge (2006 p. 129), ". . . Organizations learn only through individuals who learn. Individual learning does not guarantee organizational learning. But without it no organizational learning occurs." Argyris & Schön (1996) likewise point out that organisational learning is when the individual members learn for the organisation. Argyris & Schön (1996) also discuss two types of organisational learning: single-loop learning and double-loop learning. Single-loop learning occurs when an organisation modifies its performance due to a difference between expected and obtained outcome, without questioning and changing the underlying program (e.g. changes in values, norms and objectives). If the underlying program that led to the behaviour in the first place is questioned and the organisation modifies it, double-loop learning has taken place.

3.2 Transfer

Constructive use of principles or rules that a person gained during one experience (in this case an emergency response operation) in another situation is sometimes referred to as positive transfer (Reber 1995). Transfer may be quite specific when two situations are similar (positive or negative transfer), but also more general, e.g. learning how to learn. The concept of transfer is also discussed within organisational theory. At an organisational level, the concept of transfer involves transfer at an individual level, but also transfer between different individuals or organisations. Transfer at an organisational level can be defined as ". . . the process through which one unit (e.g. group, department, or division) is affected by experience of another" (Argote & Ingram 2000 p. 151).

3.3 Variation

One essential principle for facilitating the transfer process, established in the literature on learning, is to design the learning process so that the dimensions of possible variation become visible to the learners (Pang 2003, Marton & Booth 1999). Successful transfer for strengthening future capability demands that the critical dimensions of possible variation specific to the domain of interest are considered (Runesson 2006).

When studying an emergency scenario, two different kinds of variation are possible: variation of the parameter values and variation of the set of parameters that build up the scenario. The first kind of variation is thus the variation of the values of the specific parameters that build up the scenario. In practice, it is not possible to vary all possible parameter values. A central challenge is how to know which parameters are critical in the particular scenario and thus worth closer examination by simulated variation of their values. The variation of parameter values can be likened to the concept of single-loop learning (Argyris & Schön 1996): when the value of a given parameter in a scenario is altered, that is analogous to when a difference between expected and obtained outcome is detected and a change of behaviour is made.

The second kind of variation is variation of the set of parameters. This kind of variation may be discerned through e.g. discussing similarities as well as dissimilarities of parameter sets between different scenarios. The variation of the set of parameters can be likened to the concept of double-loop learning (Argyris & Schön 1996), wherein the system itself is altered due to an observed difference between expected and obtained outcome. A central question is what the possible sets of parameters in future emergency scenarios are.
4.2 Scenario
The second step is to vary the value of the parameters that build up the scenario. This may be carried out
through imagining variation of the included parameters (that are seen as relevant) within the scenario
description. Typical examples of parameters can be
the length of forewarning or the number of people
involved in an emergency response.
Variation of parameter values makes the parameters
themselves as well as the possible variation of their
values visible. This can function as a foundation for
positive transfer to future emergency situations with
similar sets of relevant parameters. This in turn may
strengthen the capability to handle future emergencies
of the same kind as the one evaluated, but with for
example greater impact.
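The two kinds of variation described above can be pictured with scenarios represented as parameter sets. The sketch below is purely illustrative; the parameter names and values are invented, not drawn from the Malmo evaluations.

```python
# A scenario as a set of named parameters (invented example values)
baseline = {"forewarning_hours": 24, "people_involved": 50}

# First kind: vary the VALUES of the parameters (cf. single-loop learning)
value_variants = [dict(baseline, people_involved=n) for n in (50, 200, 1000)]

# Second kind: vary the SET of parameters itself (cf. double-loop learning),
# e.g. adding a parameter that the baseline scenario did not consider
set_variant = dict(baseline, staff_vacation_rate=0.6)
```

Enumerating variants this way makes both the parameters themselves and their possible ranges explicit, which is the visibility of variation that the approach aims at.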
5.3
During the evaluation of the Lebanon war two parameters were identified as especially critical. These were
the staffing of the central staff group and the spreading
of information within the operative organisation.
During the emergency situation the strained staffing situation was a problem for the people working in the central staff group. There was no plan for long-term staffing. A further problem was that the crisis happened during the summer period, when most of the people who usually work in the organisation were on vacation. In addition, there seems to have been a hesitation to bring in more than a minimum of staff. As a result, some individuals on duty were overloaded with tasks; after a week these individuals were exhausted. This was an obvious threat to the organisation's ability to continue operations. Critical questions to ask are: What if the situation had been even worse, would Malmo have managed it? What if it had been even more difficult to staff the response organisation?

The spreading of information within the central operative organisation was primarily done by direct contact between people, either by telephone or mail.
unproductive. But if we can do those things in a reasonably disciplined way, we can be smarter and more
imaginative (Clarke 2005 p. 84).
The application of the approach during the evaluation of the management of the Lebanon war consequences appears to have strengthened the learning in the organisation. This statement is partly based on opinions expressed by the individuals in the organisation involved in the discussions during and after the evaluation. During the reception of the written report, as well as during seminars and presentations on the subject, we found that the organisation understood and made use of the way of thinking generated by the proposed approach. Subsequently the organisation used the findings from this way of thinking in the revision of its emergency management plan.

The new way of thinking seems to have provided the organisation with a more effective way of identifying critical aspects. Consequently, it comes down to being sensitive to the critical dimensions of variation of these parameters. There is still a need to further study how an organisation knows which dimensions are critical. The approach also needs to be further evaluated and refined in other organisations and for other forms of emergencies.
To support transfer of the results throughout the organisation, the evaluation of the Lebanon war resulted in seminars for different groups of people within the organisation, e.g. the preparedness planners and the persons responsible for information during an emergency. These seminars resulted in thorough discussions in the organisation on emergency management capability. In addition, an evaluation report was produced and distributed throughout the organisation. The discussions during and after the evaluation of the Lebanon war also led to changes in Malmo's emergency management planning and plans. Some of these changes can be considered examples of double-loop learning, with altering of parameters, expected to improve future emergency responses.
CONCLUSION
Seeing scenarios as sets of parameters, and elaborating on the variation of parameter values as well as of the set of parameters, seems to offer possibilities for strengthening transfer. It may thus support emergency response organisations in developing rich and many-sided emergency management capabilities based on evaluations of emergency events that have occurred.
REFERENCES
Alexander, D. 2000. Scenario methodology for teaching principles of emergency management. Disaster Prevention and Management: An International Journal 9(2): 89–97.
Argote, L. & Ingram, P. 2000. Knowledge Transfer: A Basis for Competitive Advantage in Firms. Organizational Behavior and Human Decision Processes 82(1): 150–169.
Argyris, C. & Schön, D.A. 1996. Organizational Learning II: Theory, Method, and Practice. Reading, Massachusetts: Addison-Wesley Publishing Company.
Boin, A., 't Hart, P., Stern, E. & Sundelius, B. 2005. The Politics of Crisis Management: Public Leadership Under Pressure. Cambridge: Cambridge University Press.
Carley, K.M. & Harrald, J.R. 1997. Organizational Learning Under Fire. The American Behavioral Scientist 40(3): 310–332.
Clarke, L. 2005. Worst cases: terror and catastrophe in the popular imagination. Chicago: University of Chicago Press.
ABSTRACT: In this paper we discuss the use of multi-criteria analysis in complex societal problems and we illustrate our findings from a case study concerning the management of radioactively contaminated milk. We show that applying multi-criteria analysis as an iterative process can benefit not only the decision-making process in the crisis management phase, but also the activities associated with planning and preparedness. New areas of investigation (e.g. zoning of affected areas or public acceptance of countermeasures) are tackled in order to gain more insight into the factors contributing to a successful implementation of protective actions and the stakeholders' values coming into play. We follow the structured approach of multi-criteria analysis and we point out some practical implications for the decision-making process.
structuring the decision process and achieving a common understanding of the decision problem and the values at stake.

Using MCDA in an iterative fashion (as recommended e.g. by Dodgson et al. 2005) also brings about an important learning dimension, especially for an application field such as nuclear or radiological emergency management, where planning and preparedness are two essential factors contributing to an effective response. An elaborated exercise policy will serve as a cycle addressing all the steps of the emergency management process: accident scenario definition, emergency planning, training and communication, exercising and response evaluation, fine-tuning of plans, etc. (see Fig. 1).
INTRODUCTION

Figure 1. The risk management cycle (accident, mitigation, emergency planning and response, response/interventions and crisis management) and the exercise management cycle (scenarios, training & communication, exercises, simulated response).
(Figure: generation of potential actions from GIS data, Food Agency data, and model and measurement data.)
a S_i b ⟺ a_i ≤ b_i + q_i, i = 1, . . . , 5

4.4

Δ(F, ∅) = 1, ∀ ∅ ≠ F ⊆ G,
Δ(∅, F) = 0, ∀ F ⊆ G,
Table 2. Evaluation criteria.

C1 Residual collective effective dose (person·Sv)
C2 Maximal individual 5% (thyroid) dose (mSv)
C3 Implementation cost (k€)
C4 Waste (tonnes)
C5 Public acceptance
C6 (Geographical) feasibility
C7 Dairy industry's acceptance
C8 Uncertainty of outcome
C9 Farmers' acceptance
C10 Environmental impact
C11 Reversibility
Δ({gm} ∪ F, H) = 1, with {gm} ∪ F ⊆ G and H ⊆ G, iff Δ(F, H) = 1 or ∃ gp ∈ H: δ(gm, gp) = 1 and Δ(F, H\{gp}) = 1.

We further define a binary relation R representing comprehensive preferences on the set of potential actions A as follows:

∀ a, b ∈ A, R(a, b) = Δ(F, H), where F = {gi ∈ G | a Pi b}, H = {gi ∈ G | b (Pi ∪ Qi) a}, and (Pi, Qi, Ii) is the preference structure associated with criterion gi.
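The set-based definition of R(a, b) can be made concrete with a small sketch. This is a hypothetical illustration only (a single constant indifference threshold q per criterion, all criteria to be maximised, and the weak-preference relation Q folded into indifference for brevity); it is not the authors' implementation.

```python
# Sketch of the partitioning step behind R(a, b): F gathers the criteria
# where action a is strictly preferred to b, H those where b is preferred
# to a. Scores and thresholds below are illustrative.

def compare(ai, bi, q):
    """Per-criterion comparison with an indifference threshold q."""
    if ai > bi + q:
        return "a_preferred"
    if bi > ai + q:
        return "b_preferred"
    return "indifferent"

def partition(a, b, thresholds):
    """Return the criterion index sets F and H used by R(a, b)."""
    F = {i for i, q in enumerate(thresholds) if compare(a[i], b[i], q) == "a_preferred"}
    H = {i for i, q in enumerate(thresholds) if compare(a[i], b[i], q) == "b_preferred"}
    return F, H

a = [10, 8, 6]   # scores of action a on three criteria
b = [7, 8, 9]    # scores of action b
F, H = partition(a, b, [1, 1, 1])
print(F, H)      # {0} {2}
```

The aggregation of F and H into a single preference degree then proceeds via the recursive relation Δ given above.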
4.6 An illustrative example

Action / Description:
A1 Do nothing
A2 Clean feed in area defined by sector (100°, 119°, 25 km)
A3 Clean feed in area where deposited activity >4000 Bq/m²
A4 Clean feed in area where deposited activity >4000 Bq/m², extended to full administrative zones
A5 Storage for 32 days in area where deposited activity >4000 Bq/m²
A6 Storage for 32 days in area where deposited activity >4000 Bq/m², extended to full administrative zones
Criterion: variable indifference threshold (q0i) / minimal indifference threshold (qi_ref) / optimisation direction:
C1: 10% / 10 person·Sv / min
C2: 10% / 0.5 mSv / min
C3: 10% / 20 k€ / min
C4: 0 / 1 t / min
C5: 0 / – / max
C6: – / – / max
C7: – / – / max
C8: – / – / min
C9: – / – / max
C10: – / – / min
C11: – / – / max
Table 3. Scores of the potential actions on criteria C1–C8.

Action / C1 (person·Sv) / C2 (mSv) / C3 (k€) / C4 (t) / C5 / C6 / C7 / C8:
A1: 100 / 4 / 0 / 0 / 0 / 1 / 0 / 3
A2: 16 / 0.1 / 240 / 3.6 / 3 / 1 / 2 / 1
A3: 16 / 0.3 / 17 / 4.6 / 3 / 1 / 2 / 1
A4: 16 / 0.3 / 27 / 4.6 / 3 / 2 / 2 / 1
A5: 20 / 0.8 / 0 / 1.3 / 2 / 1 / 1 / 2
A6: 20 / 0.8 / 0 / 2.5 / 2 / 2 / 1 / 2

On criteria C9–C11 all actions score the same; therefore they are not mentioned in the table.
Table 4.
Decision makers and decision advisers, as well as practitioners in the field (e.g. from the dairy industry or farmers' unions), contributed to a better understanding of many aspects of the problem considered. For our case study, this process triggered further research in two directions: flexible tools for generating potential actions, and social research in the field of public acceptance of food chain countermeasures.
MCDA can thus be viewed as a bridge between various sciences (decision science, radiation protection, radioecological modelling and social science) and as a useful tool in all emergency management phases.
The research presented here represents one step in an iterative cycle. Further feedback from exercises and workshops will contribute to improving the proposed methodology.
Figure 4. Comprehensive preferences with inter-criteria information.
REFERENCES
Allen, P., Archangelskaya, G., Belayev, S., Demin, V., Drotz-Sjöberg, B.-M., Hedemann-Jensen, P., Morrey, M., Prilipko, V., Ramsaev, P., Rumyantseva, G., Savkin, M., Sharp, C. & Skryabin, A. 1996. Optimisation of health protection of the public following a major nuclear accident: interaction between radiation protection and social and psychological factors. Health Physics 71(5): 763–765.
Belton, V. & Stewart, T.J. 2002. Multiple Criteria Decision Analysis: An integrated approach. Dordrecht: Kluwer.
Carter, E. & French, S. 2005. Nuclear Emergency Management in Europe: A Review of Approaches to Decision Making. Proc. 2nd Int. Conf. on Information Systems for Crisis Response and Management, 18–20 April, Brussels, Belgium, ISBN 9076971099, pp. 247–259.
Dias, L.C. 2006. A note on the role of robustness analysis in decision aiding processes. Working paper, Institute of Systems Engineering and Computers INESC-Coimbra, Portugal. www.inesc.pt.
Decision support systems and software tools for safety and reliability
ABSTRACT: The paper presents a complex assessment system developed for Small and Medium Enterprises (SMEs). It assesses quality, safety and environment on three layers, from basic to complete assessment (including management and organizational culture). It presents the most interesting attributes of the system, together with some of the results obtained by testing it on a statistical lot of 250 Romanian SMEs.
GENERAL ASPECTS

Figure 1. MRAS structure.

Figure 2. ISMU structure.

VULNERABILITY ANALYSIS
in an enterprise infrastructure. In addition, vulnerability analysis can forecast the effectiveness of proposed prevention measures and evaluate their actual effectiveness after they are put into use. Our vulnerability analysis is performed in the basic safety assessment layer and consists mainly of the following steps:
1. Definition and analysis of the existing (and available) human and material resources;
2. Assignment of relative levels of importance to the resources;
3. Identification of potential safety threats for each resource;
4. Identification of the potential impact of each threat on the specific resource (used later in scenario analysis);
5. Development of a strategy for solving the threats hierarchically, from the most important to the least important;
6. Definition of ways to minimise the consequences if a threat materialises (TechDir 2006).
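A minimal sketch of how steps 2–5 above could be chained: resources are ranked by relative importance times the worst potential threat impact, giving the hierarchical order in which threats are addressed. All names and numbers are hypothetical illustrations, not part of the assessment system itself.

```python
# Hypothetical vulnerability-ranking sketch: importance (step 2) combined
# with the worst potential threat impact (steps 3-4) yields the priority
# order for solving threats (step 5).

resources = {
    # resource: (relative importance 0-1, {threat: potential impact 0-5})
    "server room": (0.9, {"fire": 5, "intrusion": 3}),
    "warehouse":   (0.5, {"fire": 4}),
    "office":      (0.3, {"intrusion": 2}),
}

def priority(name):
    importance, threats = resources[name]
    return importance * max(threats.values())

# Address threats hierarchically, most important resource first.
ranked = sorted(resources, key=priority, reverse=True)
print(ranked)  # ['server room', 'warehouse', 'office']
```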
We have considered a mirrored vulnerability (Kovacs 2006b), the analysis being oriented both inwards and outwards of the assessed workplace. Figure 3 shows this aspect.
So, the main attributes for which vulnerability is assessed are:
Table 1. Vulnerability scores.

Mark / Meaning:
0 Non-vulnerable
1 Minimal
2 Medium
3 Severe vulnerability (loss)
4 Severe vulnerability (accidents)
5 Extreme vulnerability

Figure 3.
PRE-AUDIT
Table 2.

Pre-audit score range: minimum corresponding level of safety / level of transition to move to the next level.
0–1: Nothing in place / Identification of basic safety needs, some activities performed.
1–2: Demonstrate a basic knowledge and willingness to implement safety policies/procedures/guidelines / Local safety procedures developed and implemented.
2–3: Responsibility and accountability identified for most safety related tasks / Responsibility documented and communicated for the main safety tasks.
3–4: OHS Policies/Procedures and Guidelines implemented / Significant level of implementation through audit site.
4–5: Comprehensive level of implementation across audit site / Complete implementation across audit site.
5: Pre-audit results used to review and improve safety system; demonstrates willingness for continual improvement / Industry best practice.

Table 3. Pre-audit scores.
Figure 4.
Predict:
predict specific risk actions;
predict component failures;
predict soft spots where risks could materialize more often.
Prevent:
foresee the necessary prevention measures; some of these measures could be taken immediately, others can be postponed to a more favourable period (for example, the acquisition of an expensive prevention means), and others are simply improvements of the existing situation (for example, a training that should include all the workers at the workplace, not just the supervisors).
The actual development of the scenario is performed in a best case/medium case/worst case framework. This framework is shown in Figure 5. Mainly, three essential components are considered:
Figure 5.
OBTAINED RESULTS
Figure 6. Distribution of pre-audit scores: 0: 14%; 1: 14%; 2: 20%; 3: 24%; 4: 16%; 5: 12%.
CONCLUSIONS
MRAS (Multi-Role Assessment System) was developed initially as a self-audit tool which could allow
SME to have an objective and realistic image regarding their efforts to assure the continuous improvement
of quality and maintain a decent standard of safety
and environment protection. However, on the development period we have seen a serious interest from
control organs (like the Work Inspection) so we have
developed the system so that it can be used for selfaudit or it can be used by an external auditor. The
control teams are interested to have a quick referential
in order to be able to check quickly and optimally the
safety state inside a SME. In this respect our system
was the most optimal.
Romanian SME are progressing towards the full
integration into the European market. In this respect
they need to abide to the EU provisions, especially regarding safety and environment. Considering
this, the system assures not just the full conformity
with European laws but also an interactive forecastplan-atact-improve instrument. The compliance audits
included in the system together with the management
and culture audits are opening Romanian SMEs to the
European Union world.
C. Pragliola
ANSALDO STS, Ansaldo Segnalamento Ferroviario S.p.A., Naples, Italy
ABSTRACT: Critical Infrastructure Protection (CIP) against potential threats has become a major issue in
modern society. CIP involves a set of multidisciplinary activities and requires the adoption of proper protection
mechanisms, usually supervised by centralized monitoring systems. This paper presents the motivation, the
working principles and the software architecture of DETECT (DEcision Triggering Event Composer & Tracker),
a new framework aimed at the automatic and early detection of threats against critical infrastructures. The
framework is based on the fact that non-trivial attack scenarios are made up of a set of basic steps which have to
be executed in a predictable sequence (with possible variants). Such scenarios are identified during Vulnerability
Assessment which is a fundamental phase of the Risk Analysis for critical infrastructures. DETECT operates
by performing a model-based logical, spatial and temporal correlation of basic events detected by the sensorial
subsystem (possibly including intelligent video-surveillance, wireless sensor networks, etc.). In order to achieve
this aim, DETECT is based on a detection engine which is able to reason about heterogeneous data, implementing
a centralized application of data fusion. The framework can be interfaced with or integrated in existing
monitoring systems as a decision support tool or even to automatically trigger adequate countermeasures.
1 INTRODUCTION
Critical Infrastructure Protection (CIP) against terrorism and any form of criminality has become
a major issue in modern society. CIP involves a
set of multidisciplinary activities, including Risk
Assessment and Management, together with the
adoption of proper protection mechanisms, usually supervised by specifically designed Security
Management Systems (SMS)1 (see e.g. (LENEL
2008)).
One of the best ways to prevent attacks and disruptions is to stop perpetrators before they strike.
This paper presents the motivation, the working principles and the software architecture of DETECT
(DEcision Triggering Event Composer & Tracker),
or radiological material in underground stations, combined attacks with simultaneous multiple train halting
and railway bridge bombing, etc. DETECT has proven
to be particularly suited for the detection of such articulated scenarios using a modern SMS infrastructure
based on an extended network of cameras and sensing
devices. With regard to the underlying security infrastructure, a set of interesting technological and research
issues can also be addressed, ranging from object
tracking algorithms to wireless sensor network integration; however, these aspects (mainly application
specific) are not in the scope of this work.
DETECT is a collaborative project carried out by
the Business Innovation Unit of Ansaldo STS Italy
and the Department of Computer and System Science
of the University of Naples Federico II.
The paper is organized as follows: Section 2
presents a brief summary of related works; Section 3
introduces the reference software architecture of the
framework; Section 4 presents the language used to
describe the composite events; Section 5 describes the
implementation of the model-based detection engine;
Section 6 contains a simple case-study application;
Section 7 draws conclusions and provides some hints
about future developments.
2 RELATED WORKS
Figure 1. Reference architecture of the DETECT framework: Model(s) Generator, EDL Repository, Event History, Model Solver, Model Executor, Model Updater, Model Feeder, Detection Model(s), Detection Engine, Output Manager, and the interface to the SMS/SCADA system (alarms, configuration, countermeasures).
(e.g. update of a threshold parameter), without regenerating the whole model (whenever supported by the modeling formalism).
Output Manager (single), which stores the output of the model(s) and/or passes it to the interface modules.
The Model Generator and the Model Manager depend on the formalisms used to express the models constituting the Detection Engine. In particular, the Model Generator and the Model Feeder work in synergy to implement the detection of the events specified in EDL files: while the Detection Engine undoubtedly plays a central role in the framework, many important aspects are delegated to the way the query on the database is performed (i.e. the selection of proper events). As an example, if the Detection Engine is based on Event Trees (a combinatorial formalism), the Model Feeder should be able to pick the set of the last N consecutive events fulfilling some temporal properties (e.g. total time elapsed since the first event of the sequence < T), as defined in the EDL file. In the case of Event Graphs (a state-based formalism), instead, the model must be fed by a single event at a time.
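The two feeding policies just described can be sketched as follows. The event representation ((timestamp, label) tuples) and the function names are illustrative assumptions for this sketch, not the DETECT API.

```python
# Sketch of the two Model Feeder behaviours: Event Tree models receive the
# last N events whose total elapsed time stays below T; Event Graph models
# receive a single event at a time.

def feed_event_tree(history, n, t_max):
    """Last n consecutive events, kept only if total elapsed time < t_max."""
    window = history[-n:]
    if window and window[-1][0] - window[0][0] < t_max:
        return window
    return []

def feed_event_graph(history):
    """Yield one event at a time for state-based models."""
    for event in history:
        yield event

history = [(0, "badge"), (5, "door_open"), (7, "camera_alert")]
print(feed_event_tree(history, 2, 10))  # last two events, elapsed 2 < 10
```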
Besides these main modules, there are others which
are also needed to complete the framework with useful,
though not always essential, features (some of which
can also be implemented by external tools or in the
SMS):
Scenario GUI (Graphical User Interface) used to
draw attack scenarios using an intuitive formalism
and a user-friendly interface (e.g. specifically
tagged UML Sequence Diagrams stored in the
standard XMI2 format (Object Management Group
UML 2008)).
EDL File Generator, translating GUI output into
EDL files.
Event Log, which stores information about composite events, including detection time, scenario type, alarm level and likelihood of attack (whenever applicable).
Countermeasure Repository, associating to each
detected event or event class a set of operations to
be automatically performed by the SMS.
Specific drivers and adapters needed to interface
external software modules, possibly including anti-intrusion and video-surveillance subsystems.
2 XMI: XML (eXtended Markup Language) Metadata Interchange.
3 OLE for Process Communication (OPC).
internal nodes (including the root) represent EDL language operators. Figure 2 shows an example Event
Tree representing a composite event.
After the user has sketched the Event Tree, the Scenario GUI module parses the graph and provides the
EDL expression to be added to the EDL Repository.
The parsing process starts from the leaf nodes representing the primitive events and ends at the root
node. Starting from the content of the EDL Repository,
the Model Generator module builds and instantiates
as many Event Detector objects as there are composite
events stored in the database. The detection algorithm
implemented by such objects is based on Event Graphs
and the objects include the functionalities of both the
Model Solver and the Detection Engine.
In the current prototype, after the insertion of attack
scenarios, the user can start the detection process on
the Event History using a stub front-end (simulating
the Model Executor and the Output Manager modules). A primitive event is accessed from the database
by a specific Model Feeder module, implemented by a
single Event Dispatcher object which sends primitive
event instances to all Event Detectors responsible for
the detection process.
The Event Dispatcher considers only some event occurrences, depending on a specific
policy defined by the parameter context. The policy
is used to define which events represent the beginning (initiator) and the end (terminator) of the scenario. The parameter context states which component
event occurrences play an active part in the detection process. Four contexts for event detection can be
defined:
Recent: only the most recent occurrence of the
initiator is considered.
AN EXAMPLE SCENARIO
Figure 3.

REFERENCES
Alferes, J.J. & Tagni, G.E. 2006. Implementation of a Complex Event Engine for the Web. In Proceedings of IEEE Services Computing Workshops (SCW 2006), September 18–22, Chicago, Illinois, USA.
Buss, A.H. 1996. Modeling with Event Graphs. In Proc. Winter Simulation Conference, pp. 153–160.
Jain, A.K., Mao, J. & Mohiuddin, K.M. 1996. Artificial Neural Networks: A tutorial. In IEEE Computer, Vol. 29, No. 3, pp. 56–63.
Jones, A.K. & Sielken, R.S. 2000. Computer System Intrusion Detection: A Survey. Technical Report, Computer Science Dept., University of Virginia.
Krishnaprasad, V. 1994. Event Detection for Supporting Active Capability in an OODBMS: Semantics, Architecture and Implementation. Master's Thesis, University of Florida.
LENEL OnGuard 2008. http://www.lenel.com.
Lewis, F.L. 2004. Wireless Sensor Networks. In Smart Environments: Technologies, Protocols, and Applications, ed. D.J. Cook and S.K. Das. John Wiley, New York.
Lewis, T.G. 2006. Critical Infrastructure Protection in Homeland Security: Defending a Networked Nation. John Wiley, New York.
Object Management Group UML, 2008. http://www.omg.org/uml.
OLE for Process Communication. http://www.opc.org.
Remagnino, P., Velastin, S.A., Foresti, G.L. & Trivedi, M. 2007. Novel concepts and challenges for the next generation of video surveillance systems. In Machine Vision and Applications (Springer), Vol. 18, Issue 3–4, pp. 135–137.
Roman, R., Alcaraz, C. & Lopez, J. 2007. The role of Wireless Sensor Networks in the area of Critical Information Infrastructure Protection. In Information Security Tech. Report, Vol. 12, Issue 1, pp. 24–31.
Tzafestas, S.G. 1999. Advances in Intelligent Autonomous Systems. Kluwer.
ABSTRACT: This paper introduces an integrated framework and software platform that uses a three-layer approach to modeling complex systems. The multi-layer PRA approach implemented in IRIS (Integrated Risk Information System) combines the power of Event Sequence Diagrams and Fault Trees for modeling risk scenarios and system risks and hazards with the flexibility of Bayesian Belief Networks for modeling non-deterministic system components (e.g. human, organizational). The three types of models combined in the IRIS integrated framework form a Hybrid Causal Logic (HCL) model that addresses deterministic and probabilistic elements of systems and quantitatively integrates system dependencies. This paper describes the HCL algorithm and its implementation in IRIS by use of an example from aviation risk assessment (a risk scenario model of aircraft taking off from the wrong runway).
INTRODUCTION
2.1
The hybrid causal logic methodology extends conventional deterministic risk analysis techniques to
include soft factors including the organizational and
regulatory environment of the physical system. The
HCL methodology employs a model-based approach
to system analysis; this approach can be used as the
foundation for addressing many of the issues that are
commonly encountered in system safety assessment,
hazard identification analysis, and risk analysis. The
integrated framework is presented in Figure 1.
ESDs form the top layer of the three layer model,
FTs form the second layer, and BBNs form the bottom
layer. An ESD is used to model temporal sequences of
events. ESDs are similar to event trees and flowcharts;
an ESD models the possible paths to outcomes, each
of which could result from the same initiating event.
ESDs contain decision nodes where the paths diverge
based on the state of a system element. As part of the
hybrid causal analysis, the ESDs define the context
or base scenarios for the hazards, sources of risk, and
safety issues.
The ESD shown in Figure 2 models the probability
of an aircraft taking off safely, stopping on the runway, or overrunning the runway. As can be seen in
the model, the crew must reject the takeoff and the
speed of the aircraft must be lower than the critical
speed beyond which the aircraft cannot stop before the
Figure 1.
Figure 2. Case study top layer: ESD for an aircraft using the wrong runway (Roelen et al. 2002).
Figure 4. Partial NLR model for takeoff from wrong runway. The flight plan node is fed by Figure 5, and the crew
decision/action error is fed by additional human factors.
Figure 5.
BBN nodes are quantified in conditional probability tables. The size of the conditional probability table for each node depends on the number of parent nodes leading into it. The conditional probability table requires the analyst to provide a probability value for each state of the child node for every possible combination of the states of the parent nodes. The default number of states for a BBN node is 2, although additional states can be added as long as the probabilities of all states sum to 1. Assuming the child and its n parent nodes all have 2 states, this requires 2^n independent probability values (one per combination of parent states; the second child state follows by complementation).
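The table sizing can be checked with a few lines; the helper names below are illustrative, not part of IRIS.

```python
# Conditional-probability-table size for a BBN node: one distribution over
# the child's states per combination of parent states. With a binary child
# and n binary parents, 2^n independent values remain after normalisation.

from math import prod

def cpt_entries(child_states, parent_states):
    """Total table entries (child states x parent-state combinations)."""
    return child_states * prod(parent_states)

def cpt_free_values(child_states, parent_states):
    """Independent values once each distribution must sum to 1."""
    return (child_states - 1) * prod(parent_states)

print(cpt_free_values(2, [2, 2, 2]))  # 8 = 2^3
```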
In order to quantify the hybrid model it is necessary to convert the three types of diagrams into a set
of models that can communicate mathematically. This
is accomplished by converting the ESDs and FTs into
Reduced Ordered Binary Decision Diagrams (BDDs).
The reduced ordered BDDs for a model are all unique, and the order of variables along each path from root node to end node is identical. Details on the algorithms used to convert ESDs and FTs into BDDs have
been described extensively (Bryant 1992, Brace et al.
1990, Rauzy 1993, Andrews & Dunnett 2000, Groen
et al. 2005).
3.1 Importance measures
Figure 6.
Probability values and cut sets for the base wrong runway scenario.
Figure 7.
p(e|S) = p(e ∩ S) / P(S)    (1)

p(Si|e) = p(e|Si) p(Si) / Σi p(e|Si) p(Si)    (2)
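Equation 2 is ordinary Bayesian updating over the set of scenarios. A short sketch with made-up priors and likelihoods (the numbers are illustrative, not from the IRIS case study):

```python
# Posterior probability of each scenario S_i given evidence e:
# p(S_i|e) = p(e|S_i) p(S_i) / sum_i p(e|S_i) p(S_i)

def posterior(priors, likelihoods):
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

priors = [0.7, 0.2, 0.1]        # p(S_i)
likelihoods = [0.1, 0.5, 0.9]   # p(e | S_i)
print(posterior(priors, likelihoods))
```

Evidence that is unlikely under a high-prior scenario shifts probability mass towards the scenarios that better explain it, which is exactly how the updated flight 5191 results below are obtained from the base case.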
Risk indicators
Figure 8.

Risk impact

Figure 9. Updated scenario results for the runway overrun with information about flight 5191 specified.

Figure 10. Fault tree results showing the probability of taking off from the wrong runway for the base case (top) and the case reflecting flight 5191 factors (bottom).

CONCLUSION
REFERENCES
Andrews, J.D. & Dunnett, S.J. 2000. Event Tree Analysis using Binary Decision Diagrams. IEEE Transactions on Reliability 49(2): 230–239.
Brace, K., Rudell, R. & Bryant, R. 1990. Efficient Implementation of a BDD Package. The 27th ACM/IEEE Design Automation Conference, IEEE 0738.
Bryant, R. 1992. Symbolic Boolean Manipulation with Ordered Binary Decision Diagrams. ACM Computing Surveys 24(3): 293–318.
Eghbali, G.H. 2006. Causal Model for Air Carrier Maintenance. Report Prepared for Federal Aviation Administration. Atlantic City, NJ: Hi-Tec Systems.
Fussell, J.B. 1975. How to Hand Calculate System Reliability and Safety Characteristics. IEEE Transactions on Reliability R-24(3): 169–174.
Groen, F. & Mosleh, A. 2008 (In Press). The Quantification of Hybrid Causal Models. Submitted to Reliability Engineering and System Safety.
Groen, F., Smidts, C. & Mosleh, A. 2006. QRAS: the quantitative risk assessment system. Reliability Engineering and System Safety 91(3): 292–304.
Groth, K. 2007. Integrated Risk Information System Volume 1: User Guide. College Park, MD: University of Maryland.
Groth, K., Zhu, D. & Mosleh, A. 2008. Hybrid Methodology and Software Platform for Probabilistic Risk Assessment. The 54th Annual Reliability and Maintainability Symposium, Las Vegas, NV.
Mandelapu, S. 2006. Causal Model for Air Carrier Maintenance. Report Prepared for Federal Aviation Administration. Atlantic City, NJ: Hi-Tec Systems.
Mosleh, A. et al. 2004. An Integrated Framework for Identification, Classification and Assessment of Aviation Systems Hazards. The 9th International Probabilistic Safety Assessment and Management Conference. Berlin, Germany.
Mosleh, A., Wang, C. & Groen, F. 2007. Integrated Methodology For Identification, Classification and Assessment of Aviation Systems Hazards and Risks Volume 1: Framework and Computational Algorithms. College Park, MD: University of Maryland.
National Transportation Safety Board (NTSB) 2007. Aircraft Accident Report NTSB/AAR-07/05.
Rauzy, A. 1993. New Algorithms for Fault Trees Analysis. Reliability Engineering and System Safety 40: 203–211.
Roelen, A.L.C. & Wever, R. 2004a. A Causal Model of Engine Failure, NLR-CR-2004-038. Amsterdam: National Aerospace Laboratory NLR.
Roelen, A.L.C. & Wever, R. 2004b. A Causal Model of A Rejected Take-Off. NLR-CR-2004-039. Amsterdam: National Aerospace Laboratory NLR.
Roelen, A.L.C. et al. 2002. Causal Modeling of Air Safety. Amsterdam: National Aerospace Laboratory NLR.
Wang, C. 2007. Hybrid Causal Methodology for Risk Assessment. PhD dissertation. College Park, MD: University of Maryland.
Zhu, D. et al. 2008. A PRA Software Platform for Hybrid Causal Logic Risk Models. The 9th International Probabilistic Safety Assessment and Management Conference. Hong Kong, China.
ABSTRACT: This paper surveys the current status of the Spanish Nuclear Safety Council (CSN) work to establish an Integrated Safety Analysis (ISA) methodology, supported by a simulation framework called SCAIS, to independently check the validity and consistency of many assumptions used by the licensees in their safety assessments. This diagnostic method is based on advanced dynamic reliability techniques on top of classical Probabilistic Safety Analysis (PSA) and deterministic tools, and allows checking at once many aspects of the safety assessments, making effective use of regulatory resources. Apart from the theoretical approach at the basis of the method, application of ISA requires a set of computational tools. Development of ISA started with a suitable software package, SCAIS, which makes intensive use of code-coupling techniques to join typical TH analysis, severe accident and probability calculation codes. The final goal is to dynamically generate the event tree that stems from an initiating event, improving the conventional static PSA approach.
Important examples are, for instance, the analyses justifying the PSA success criteria and operating technical specifications, which are often based on potentially outdated base calculations made in older times, in a different context and with another spectrum of applications in mind.
This complex situation generates a parallel need in regulatory bodies that makes it mandatory to increase their technical expertise and capabilities in this area. Technical Support Organizations (TSOs) have become an essential element of the regulatory process, providing a substantial portion of its technical and scientific basis via computerized safety analysis supported by available knowledge and analytical methods/tools.
TSO tasks cannot have the same scope as their industry counterparts, nor is it reasonable to expect
In recent years, efforts have been devoted to clarifying the relative roles of the deterministic and probabilistic types of analysis with a view to their harmonization, in order to benefit from their strengths and to remove identified shortcomings, normally related to inter-phase aspects such as the interaction between the evolution of process variables and its influence on probabilities. Different organizations (Hofer 2002, Izquierdo 2003a) have undertaken initiatives in different contexts, with claims ranging from the need to integrate probabilistic safety analysis into the safety assessment, up to a risk-informed decision-making process, as well as proposals of verification methods for applications that comply with the state of the art in science and technology.
These initiatives should progressively evolve into a sound and efficient interpretation of the regulations that may be confirmed via computerized analysis. It is not so much a question of new regulations from the risk assessment viewpoint, but of ensuring compliance with existing ones in the new context by verifying the consistency of individual plant assessment results through a comprehensive set of checks. Its development can then be considered a key and novel research topic within nuclear regulatory agencies/TSOs.
More precisely, issues that require an integrated approach arise when considering:
the process by which the insights from these complementary safety analyses are combined, and
Figure 1.

Figure 2.
Currently under development are:
New developments in sequence dynamics. A generalization of the transfer function concepts for sequences of events, with potential for PSA application as generalized dynamic release factors, is under investigation.
New developments about classical PSA aspects.
They include rigorous definitions for concepts like
available time for operations or plant damage states.
ACKNOWLEDGMENTS
NOMENCLATURE
CONCLUSIONS
REFERENCES
Esperón, J. & Expósito, A. (2008). SIMPROC: Procedures simulator for operator actions in NPPs. Proceedings of ESREL 08.
Herrero, R. (2003). Standardization of code coupling for integrated safety assessment purposes. Technical meeting on progress in development and use of coupled codes for accident analysis. IAEA, Vienna, 26–28 November 2003.
Muñoz, R. (1999). DENDROS: A second generation scheduler for dynamic event trees. M & C 99 Conference, Madrid.
Queral, C. (2006). Incorporation of stimulus-driven theory of probabilistic dynamics into ISA (STIM). Joint project of UPM, CSN and ULB funded by the Spanish Ministry of Education & Science (ENE2006 12931/CON), Madrid.
Siu, N. & Hilsmeier, T. (2006). Planning for future probabilistic risk assessment research and development. In G.S. Zio (Ed.), Safety and Reliability for Managing Risk, ISBN 0-415-41620-5. Taylor & Francis Group, London.
SMAP Task Group (2007). Safety Margins Action Plan. Final Report. Technical Report NEA/CSNI/R(2007)9, Nuclear Energy Agency, Committee on the Safety of Nuclear Installations. http://www.nea.fr/html/nsd/docs/2007/csnir2007-9.pdf.
Using GIS and multivariate analyses to visualize risk levels and spatial
patterns of severe accidents in the energy sector
P. Burgherr
Paul Scherrer Institut (PSI), Laboratory for Energy Systems Analysis, Villigen PSI, Switzerland
ABSTRACT: Accident risks of different energy chains are analyzed by comparative risk assessment, based on
the comprehensive database ENSAD established by the Paul Scherrer Institut. Geographic Information Systems
(GIS) and multivariate statistical analyses are then used to investigate the spatial variability of selected risk
indicators, to visualize the impacts of severe accidents, and to assign them to specific geographical areas.
This paper demonstrates by selected case studies how geo-referenced accident data can be coupled with other
socio-economic, ecological and geophysical contextual parameters, leading to interesting new insights. Such an
approach can facilitate the interpretation of results and complex interrelationships, enabling policy makers to
gain a quick overview of the essential scientific findings by means of summarized information.
INTRODUCTION
Categorizing countries by their risk levels due to natural hazards has become a standard approach to assess,
prioritize and mitigate the adverse effects of natural
disasters. Recent examples of coordinated international efforts include studies like the World Disasters
Report (IFRC 2007), Natural Disaster Hotspots
(Dilley et al. 2005), and Reducing Disaster Risk
(UNDP 2004), as well as studies by the world's leading reinsurance companies (Berz 2005; Munich Re 2003; Swiss Re 2003). Applications to the impacts of man-made (anthropogenic) activities range from ecological
risk assessment of metals and organic pollutants (e.g.
Critto et al. 2005; Zhou et al. 2007) to risk assessment of contaminated industrial sites (Carlon et al.
2001), conflicts over international water resources
(Yoffe et al. 2003), climate risks (Anemüller et al.
2006), and environmental sustainability assessment
(Bastianoni et al. 2008).
Accidents in the energy sector rank second (after transportation) among all man-made accidents (Hirschberg et al. 1998). Since the 1990s, data on energy-related accidents have been systematically collected, harmonized and merged within the integrative Energy-related Severe Accident Database (ENSAD), developed and maintained at the Paul Scherrer Institut
(Burgherr et al. 2004; Hirschberg et al. 1998).
Results of comparative risk assessment for the different energy chains are commonly expressed in a
quantitative manner such as aggregated indicators (e.g.
damage rates), cumulative risk curves in a frequency-consequence (F-N) diagram, or external cost estimates
to provide an economic valuation of severe accidents (Burgherr & Hirschberg 2008; Burgherr &
Hirschberg, in press; Hirschberg et al. 2004a).
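Aggregated indicators such as F-N curves can be computed directly from a list of accident records. The sketch below builds a cumulative frequency-consequence curve from fatality counts; the records and observation period are invented for illustration, not ENSAD data.

```python
# Sketch: building a frequency-consequence (F-N) curve from accident records.
# The fatality counts and observation period below are illustrative.
def fn_curve(fatalities_per_accident, years_observed):
    """Return sorted (N, F) pairs: frequency per year of accidents
    with at least N fatalities."""
    points = []
    for n in sorted(set(fatalities_per_accident)):
        count = sum(1 for f in fatalities_per_accident if f >= n)
        points.append((n, count / years_observed))
    return points

accidents = [5, 12, 7, 40, 5, 130, 9]           # fatalities per severe accident
curve = fn_curve(accidents, years_observed=36)  # e.g. a 1970-2005 window
for n, f in curve:
    print(f"N >= {n:4d}: F = {f:.3f} accidents/year")
```

By construction the frequency F is non-increasing in the consequence N, which is what makes the curve readable on a log-log F-N diagram.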
Recently, ENSAD was extended to enable
geo-referencing of individual accident records at
different spatial scales including regional classification schemes (e.g. IPCC world regions; subregions of oceans and seas; international organization
participation), sub-national administrative divisions
(ADMIN1, e.g. state or province; ADMIN2, e.g.
county), statistical regions of Europe (NUTS) by Eurostat, location name (PPL, populated place), latitude
and longitude (in degrees, minutes and seconds), or
global grid systems (e.g. Marsden Squares, Maidenhead Locator System). The coupling of ENSAD
with Geographic Information Systems (GIS) and geostatistical methods allows one to visualize accident
risks and to assign them to specific geographical areas,
as well as to calculate new risk layers, i.e. to produce
illustrative maps and contour plots based on scattered
observed accident data.
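As a minimal illustration of turning scattered geo-referenced observations into a continuous risk layer, the sketch below uses inverse-distance weighting; the paper itself uses ordinary kriging, which additionally models spatial covariance. All coordinates and values are invented.

```python
import math

# Sketch: interpolating a continuous risk layer from scattered accident
# observations. Inverse-distance weighting stands in for the ordinary
# kriging used in the paper; samples are illustrative (lon, lat, value).
def idw(x, y, samples, power=2.0):
    """samples: list of (sx, sy, value) geo-referenced observations."""
    num = den = 0.0
    for sx, sy, v in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return v                      # exactly on an observation
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

spills = [(10.0, 54.0, 3.2), (12.5, 55.1, 1.1), (9.0, 53.2, 7.8)]
grid_value = idw(11.0, 54.5, spills)      # one cell of the prediction map
```

Evaluating `idw` on a regular grid yields the kind of filled-contour prediction map shown in Figure 3, though kriging would also provide a prediction variance.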
In the course of decision and planning processes
policy makers and authorities often rely on summarized information to gain a quick overview of a
thematic issue. However, complex scientific results
are often very difficult to communicate without appropriate visualization. Methods for global, regional or
local mapping using GIS methods are particularly useful because they reveal spatial distribution patterns
and link them to administrative entities, which allow
planning and implementation of preventive measures,
legal intervention or mitigation at the appropriate
administrative level.
METHODS
2.1
the European Free Trade Association (EFTA). A multivariate risk score was then calculated for individual
countries to analyze their differences in susceptibility
to accident risks. The proposed risk score consists of
four indicators:
[Table residue: numbers of severe accidents (Acc) and fatalities (Fat) per energy chain (oil, gas, LPG, total) for several country groups (EU 27, non-OECD, world total); the column alignment was lost in extraction. World total: Acc 1588, Fat 31939.]
Figure 1. Individual countries are shaded according to their total numbers of severe accident fatalities in fossil energy chains for the period 1970-2005. Pie charts designate the ten countries with most accidents, whereas bars indicate the ten most deadly accidents. Country boundaries: ESRI Data & Maps (ESRI 2006a).
Figure 2. Locations of all fatal oil chain accidents and country-specific risk scores of EU 27, accession candidate and EFTA countries for the period 1970-2005. Country boundaries: © EuroGeographics for the administrative boundaries (Eurostat 2005).
Figure 3. Individual geo-referenced oil spills for the period 1970-2005 are represented by different-sized circles corresponding to the number of tonnes released. Regional differences in susceptibility to accidents were analyzed by ordinary kriging, resulting in a prediction map of filled contours. The boundaries of the Large Marine Ecosystems (LME) are also shown. Country boundaries: © EuroGeographics for the administrative boundaries (Eurostat 2005).
Figure 4. For individual provinces in China, average fatalities per Mt of produced coal are given for severe (≥ 5 fatalities) accidents in large and small mines for the period 1994-1999. Provinces were assigned to three distinct levels of mechanization, as indicated by their shading. Locations of major state-owned coal mines are also indicated on the map. Administrative boundaries and coal mine locations: U.S. Geological Survey (USGS 2004).
CONCLUSIONS
ACKNOWLEDGEMENTS
The author thanks Drs. Stefan Hirschberg and Warren
Schenler for their valuable comments on an earlier
version of this manuscript. This study was partially
performed within the Integrated Project NEEDS (New
Energy Externalities Development for Sustainability, Contract No. 502687) of the 6th Framework
Programme of the European Community.
REFERENCES
Anemüller, S., Monreal, S. & Bals, C. 2006. Global Climate Risk Index 2006. Weather-related loss events and
their impacts on countries in 2004 and in a long-term
comparison, Bonn/Berlin, Germany: Germanwatch e.V.
Bastianoni, S., Pulselli, F.M., Focardi, S., Tiezzi, E.B.P. &
Gramatica, P. 2008. Correlations and complementarities
in data and methods through Principal Components Analysis (PCA) applied to the results of the SPIn-Eco Project.
Journal of Environmental Management 86: 419-426.
Berz, G. 2005. Windstorm and storm surges in Europe: loss
trends and possible counter-actions from the viewpoint
of an international reinsurer. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 363(1831): 1431-1440.
Burgherr, P. 2007. In-depth analysis of accidental oil spills
from tankers in the context of global spill trends from all
sources. Journal of Hazardous Materials 140: 245-256.
Burgherr, P. & Hirschberg, S. 2007. Assessment of severe
accident risks in the Chinese coal chain. International
Journal of Risk Assessment and Management 7(8): 1157-1175.
Burgherr, P. & Hirschberg, S. 2008. Severe accident risks
in fossil energy chains: a comparative analysis. Energy 33(4): 538-553.
Burgherr, P. & Hirschberg, S. in press. A comparative analysis of accident risks in fossil, hydro and nuclear energy
chains. Human and Ecological Risk Assessment.
Burgherr, P., Hirschberg, S., Hunt, A. & Ortiz, R.A. 2004.
Severe accidents in the energy sector. Final Report to the
European Commission of the EU 5th Framework Programme New Elements for the Assessment of External
Costs from Energy Technologies (NewExt), Brussels,
Belgium: DG Research, Technological Development and
Demonstration (RTD).
Carlon, C., Critto, A., Marcomini, A. & Nathanail, P. 2001.
Risk based characterisation of contaminated industrial site using multivariate and geostatistical tools. Environmental Pollution 111: 417-427.
Critto, A., Carlon, C. & Marcomini, A. 2005. Screening
ecological risk assessment for the benthic community in
the Venice Lagoon (Italy). Environment International 31: 1094-1100.
Dilley, M., Chen, R.S., Deichmann, U., Lerner-Lam, A.L.,
Arnold, M., Agwe, J., Buys, P., Kjekstad, O., Lyon, B. &
Yetman, G. 2005. Natural disaster hotspots. A global
risk analysis. Disaster Risk Management Series No.
5, Washington D.C., USA: The World Bank, Hazard
Management Unit.
ABSTRACT: At industrial facilities where the legislation on major accident hazards is enforced, near misses, failures and deviations, even when they have no consequences, should be recorded and analyzed for an early identification of factors that could precede accidents. In order to provide duty-holders with tools for capturing and managing these weak signals coming from operations, a software prototype named NOCE has been developed. A client-server architecture has been adopted, with a palmtop computer connected to an experience database at the central server. Operators record any non-conformance via the palmtop, building up a database of plant operational experience. Non-conformances are matched against the safety system in order to find breaches in it and eventually remove them. A digital representation of the plant and its safety systems is developed, step by step, by exploiting data and documents that are already required by the legislation for the control of major accident hazards, so no extra work is required of the duty holder. The plant safety digital representation is used for analysing non-conformances. The safety documents, including the safety management system, safety procedures, the safety report and the inspection programme, may then be reviewed in the light of plant operational experience.
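A minimal sketch of the kind of record and matching step the abstract describes might look as follows; the field names, tags and matching rule are assumptions for illustration, not NOCE's actual schema.

```python
from dataclasses import dataclass, field

# Sketch of a non-conformance record an operator might enter via the
# palmtop, plus a naive match of events against safety-system elements.
# All names are illustrative assumptions.
@dataclass
class NonConformance:
    plant_item: str
    description: str
    tags: set = field(default_factory=set)

def match_breaches(events, safety_items):
    """safety_items: dict safety-system element -> set of plant-item ids."""
    breaches = {}
    for name, covered in safety_items.items():
        hits = [e for e in events if e.plant_item in covered]
        if hits:
            breaches[name] = hits        # element touched by recorded events
    return breaches

events = [NonConformance("P-101", "seal leak"), NonConformance("V-3", "valve stuck")]
safety = {"pressure protection": {"V-3", "PSV-7"}, "containment": {"P-101"}}
found = match_breaches(events, safety)
```

The point of such a mapping is that every weak signal lands on an element of the safety digital representation, so the affected parts of the safety report can be reviewed.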
INTRODUCTION
Figure 2.
Figure 4.
4.3
Desktop application
Figure 5.
A case study
The NOCE prototype has been tested at a small-to-medium sized facility, which had a good historical collection of anomalies and near misses. These
data have been used to build an adequate experience
database. The following documents have been found
in the safety reports:
The list of near misses, failures and deviations, as presented in the NOCE graphical user interface.
Figure 6. A. The tree plant representation. B. The event data, as recorded via palmtop. C. The computation of Mond FE&T
index. D. Basic representation of the top event. E. The paragraph of the Safety Report that has been tagged. F. The lesson
learnt from the non-conformance.
CONCLUSIONS

ACKNOWLEDGEMENTS

The authors wish to thank Mr Andrea Tosti for his professional support in implementing the palmtop software.
REFERENCES
Lewis, D.J. (1979). The Mond fire, explosion and toxicity index: a development of the Dow Index. AIChE Loss Prevention Symposium, New York.
Uth, H.J. & Wiese, N. (2004). Central collecting and evaluating of major accidents and near-miss-events in the Federal Republic of Germany: results, experiences, perspectives. J. of Hazardous Materials 111(1-3), 139-145.
Beard, A.N. (2005). Requirements for acceptable model use. Fire Safety Journal 40(5), 477-484.
Zhao, C., Bhushan, M. & Venkatasubramanian, V. (2005).
Phasuite: An automated HAZOP analysis tool for
chemical processes. Process Safety and Environmental Protection 83(6B), 509-548.
Fabbri, L., Struckl, M. & Wood, M. (Eds.) (2005). Guidance on the Preparation of a Safety Report to meet the requirements of Dir 96/82/EC as amended by Dir 03/105/EC. EUR 22113, Luxembourg: EC.
Sonnemans, P.J.M. & Korvers, P.M.W. (2006). Accidents in the chemical industry: are they foreseeable? J. of Loss Prevention in the Process Industries 19, 1-12.
OECD Working Group on Chemical Accidents (2006). Survey on the use of safety documents in the control of major accident hazards. ENV/JM/ACC 6, Paris.
Agnello, P., Ansaldi, S., Bragatto, P. & Pittiglio, P. (2007). The operational experience and the continuous updating of the safety report at Seveso establishments. In: Dechy, N. & Cojazzi, G.M. (eds.), Future Challenges of Accident Investigation, 33rd ESReDA seminar, EC-JRC-IPS.
Bragatto, P., Monti, M., Giannini, F. & Ansaldi, S. (2007). Exploiting process plant digital representation for risk analysis. J. of Loss Prevention in the Process Industries 20, 69-78.
Santos-Reyes, J.A. & Beard, A.N. (2008). A systemic approach to managing safety. J. of Loss Prevention in the Process Industries 21(1), 15-28.
Dynamic reliability
ABSTRACT: In this paper, we introduce a novel and simple fault rate classification scheme in hardware. It is based on the well-known threshold scheme, counting ticks between faults. The innovation is to introduce variable threshold values for the classification of fault rates and a fixed threshold for permanent faults. In combination with field data obtained from 9728 processors of an SGI Altix 4700 computing system, this yields a proposal for the frequency-over-time behavior of faults, experimentally justifying the assumption of dynamic and fixed threshold values. Pattern matching classifies the fault rate behavior over time, and from this behavior a prediction is made. Software simulations show that fault rates can be forecast with 98% accuracy. The scheme is able to adapt to and diagnose sudden changes of the fault rate, e.g. a spacecraft passing a radiation-emitting celestial body. By using this scheme, fault coverage and performance can be dynamically adjusted during runtime. For validation, the scheme is implemented using different design styles, namely Field Programmable Gate Arrays (FPGAs) and standard cells. Different design styles were chosen to cover different economic demands. The implementations yield characteristics such as critical path length, capacity and area consumption.
INTRODUCTION
The performance requirements of modern microprocessors have increased in proportion to their growing number of applications. This has led to clock frequencies of up to 4.7 GHz (2007) and to feature sizes below 45 nm (2007). The Semiconductor Industry Association roadmap forecasts a minimum feature size of 14 nm by 2020 [6]. Below 90 nm, a serious issue arises at sea level that was previously known only from aerospace applications [1]: the increasing probability that neutrons cause Single-Event Upsets in memory elements [1, 3]. The measurable radiation at sea level consists of up to 92% neutrons from outer space [2]. The peak value is 14400 neutrons/cm2/h [4]. Figure 1 shows the number of neutron impacts per hour per square centimeter for Kiel, Germany (data from [5]).
This work deals with the handling of faults after
they have been detected (fault diagnosis). We do not
use the fault rate for the classification of fault types
such as transient, intermittent or permanent faults. We
classify the current fault rate and forecast its development. On this basis the performance and fault coverage
can be dynamically adjusted during runtime. We call
this scheme History Voting.
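The core idea, classifying the current fault rate from the tick distance between consecutive faults, can be sketched as follows; the concrete threshold values are illustrative assumptions, whereas the paper derives variable thresholds plus a fixed one for permanent faults.

```python
# Sketch of the threshold idea behind History Voting: classify the current
# fault rate from the tick distance between the last two detected faults.
# Threshold values below are illustrative assumptions.
def classify(delta, t_normal=10_000, t_increased=1_000, t_permanent=10):
    """delta: ticks between the last two detected faults."""
    if delta <= t_permanent:
        return "permanent"   # faults virtually back-to-back
    if delta <= t_increased:
        return "increased"
    if delta <= t_normal:
        return "elevated"
    return "normal"

print(classify(50_000))      # faults far apart
print(classify(5))           # immediate re-occurrence
```

Making `t_normal` and `t_increased` adjustable at runtime is what lets the scheme track sudden fault-rate changes instead of misclassifying them.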
The rest of this work is organized as follows: in
Section 2, we present and discuss observations on fault
rates in real-life systems. In Section 3, we discuss
OBSERVATIONS
Figure 2. [Chart residue: y-axis "Number of systems", scale 20-100.]
[Chart residue: y-axis "Number of faults", scale 10-30; x-axis monthly dates from 1 May 99 to 1 Jul 00.]
RELATED WORK
Figure 4.
Bennett [16] develops and analyzes a flexible majority voter for TMR systems, in which the most reliable module is determined from the history of faults. In [18], the processor that will probably fail in the future is determined from the list of processors which took part in a redundant execution scheme; to this end, processors are assigned weights. Like [9], we distinguish between techniques which incur interference from outside and mechanisms based on algorithms. The latter are used in this work.
4
The comparison of two counter values a and b with a threshold τ is coded by the function

i : N × N → {0, 1},  i(a, b) := 0 if τ ≤ Δ(a, b); 1 if Δ(a, b) < τ

with Δ : N × N → N and Δ(a, b) := |a − b|.  (1)
Table 1 shows the coding of i(a, b) and its consequence for the trust value: "≫" represents a bitwise shift of the value to the right, "++" an increase of the value. If a fault cannot be tolerated, the trust is decremented until the minimum (zero) is reached.
If Δ is substantially greater than expected, this could mean that the diagnosis unit does not respond. In this case, tests should be carried out to prevent further faulty behavior.
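A direct transcription of the comparison function and the trust update of Table 1 might read as follows; the threshold value and the trust range are assumptions, since the paper's symbol definitions were partly lost in this extract.

```python
# Sketch of eq. (1) and the trust update of Table 1: a right shift of the
# trust on a normal fault rate (i = 0), an increment when the rate
# increases (i = 1). tau and max_trust are illustrative assumptions.
def i(a, b, tau):
    return 1 if abs(a - b) < tau else 0   # delta(a, b) := |a - b|

def update_trust(trust, a, b, tau, max_trust=255):
    if i(a, b, tau) == 0:
        return trust >> 1                 # "normal": bitwise right shift
    return min(trust + 1, max_trust)      # "increase": trust++

t = update_trust(8, a=100, b=5000, tau=1000)   # faults far apart
```

Working with shifts and increments keeps the update implementable in a few gates, matching the paper's goal of a hardware scheme.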
4.2

Table 3. Symbols used in the scheme (the symbol column was lost in extraction):
- trust of unit i
- upper threshold: normal fault rate
- mid-threshold: increased fault rate
- lower (fixed) threshold: permanent fault
- quality of prediction
- quality-threshold: if the quality exceeds it, trust can be adjusted
- value of the cycle-counter when detecting fault i
- H[i]: entry i in the history
- maximal number of entries in the history
- prediction of the fault rate from the history (see Figure 5)
- pattern matching to forecast the fault (see Figure 5)

Table 1. Coding of i(a, b):
i(a, b) | Fault rate | Consequence | Prediction
0       | Normal     | ≫          |
1       | Increase   | ++         | Predict
Figure 5.
EXPERIMENTAL RESULTS
Table 5.
Table 4.
Figure 6.
CONCLUSION
In this work we presented a novel fault classification scheme in hardware. Apart from other schemes,
the developed History Voting classifies the fault rate
behavior over time. From this, a prediction is made.
Only if the prediction quality exceeds a known value,
the trust in units can be adjusted. From the implementations a proof of concept is made. From the results, we
see that the scheme is relatively slow. Since faults
apart from permanent onesoccur seldom in time,
this will not appeal against the scheme. It will easily fit even on small FPGAs. For the scheme, many
application areas exist. Depending on the size of the
final implementation, it could be implemented as an
additional unit on a space probe, adjusting the system behavior during the flight. Another application
area are large-scale systems, equipped with application specific FPGAs e.g. to boost the performance of
cryptographic applications. Here, the scheme could
be implemented to adjust the performance of a node,
e.g. identify faulty cores using their trust or automatically generate warnings if a certain fault rate is
reached.
REFERENCES
[1] E. Normand. Single-Event Upset at Ground Level.
IEEE Trans. on Nuclear Science, vol. 43, no. 6, part 1,
pp. 2742-2750, 1996.
[2] T. Karnik et al. Characterization of Soft Errors caused
by Single-Event Upsets in CMOS Processes. IEEE
Trans. on Dependable and Secure Computing, vol.
1, no. 2, pp. 128-143, 2004.
[3] R. Baumann, E. Smith. Neutron-induced boron fission
as a major source of soft errors in deep submicron
SRAM devices. In Proc. of the 38th Intl. Reliability
Physics Symp., pp. 152-157, 2000.
[4] F.L. Kastensmidt, L. Carro, R. Reis. Fault-tolerance Techniques for SRAM-based FPGAs. Springer-Verlag, ISBN 0-387-31068-1, 2006.
[Implementation results residue: current 6.88 mA, power 12.39 mW; area: 188 slices, 200 slice flip-flops, 233 4-input LUTs, 4 IOBs, gate count 3661.]
ABSTRACT: In dynamic reliability, the evolution of a system is governed by a piecewise deterministic Markov process, which is characterized by different input data. Assuming such data to depend on some parameter p ∈ P, our aim is to compute the first-order derivative with respect to each p ∈ P of some functionals of the process, which may help to rank input data according to their relative importance, in view of sensitivity analysis. The functionals of interest are expected values of some function of the process, cumulated on some finite time interval [0, t], and their asymptotic values per unit time. Typical quantities of interest hence are the cumulated (production) availability, or the mean number of failures on some finite time interval, and similar asymptotic quantities. The computation of the first-order derivative with respect to p ∈ P is made through a probabilistic counterpart of the adjoint point method from the numerical analysis field. Examples are provided, showing the good efficiency of this method, especially in the case of a large P.
INTRODUCTION
values in a Borel set V ⊂ Rd and stands for the environmental conditions (temperature, pressure, . . .). The process (It, Xt)t≥0 jumps at countably many random times, and both components interact with each other, as required for models from dynamic reliability: for a jump from (It−, Xt−) = (i, x) to (It, Xt) = (j, y) (with (i, x), (j, y) ∈ E × V), the transition rate between the discrete states i and j depends on the environmental condition x just before the jump and is a function x ↦ a(i, j, x). Similarly, the environmental condition just after the jump, Xt, is distributed according to some distribution μ(i,j,x)(dy), which depends on both components just before the jump (i, x) and on the after-jump discrete state j. Between jumps, the discrete component It is constant, whereas the evolution of the environmental condition Xt is deterministic, solution of a set of differential equations which depends on the fixed discrete state: given that It(ω) = i for all t ∈ [a, b], we have (d/dt) Xt(ω) = v(i, Xt(ω)) for all t ∈ [a, b], where v is a mapping from E × V to V.
Contrary to the general model from (Davis 1984), we do not take into account here jumps of (It, Xt)t≥0 possibly entailed by reaching the frontier of V.
Given such a PDMP (It, Xt)t≥0, we are interested in different quantities linked to this process, which may be written as cumulated expectations on some time interval [0, t] of some bounded measurable function h of the process:
R0(t) = E0 [ ∫0^t h(Is, Xs) ds ]
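Such cumulated expectations can be estimated by Monte Carlo simulation of the process. The sketch below uses a toy two-state process with constant jump rates and constant drifts (all values are illustrative assumptions), with h counting the time spent in state 0; the paper's method avoids this sampling, but the simulation shows what R0(t) measures.

```python
import random

# Sketch: Monte Carlo estimate of R0(t) = E[ integral_0^t h(I_s, X_s) ds ]
# for a toy two-state process. Rates, drifts and h are illustrative.
def simulate_R0(t_end, n_runs=20_000, seed=1):
    rng = random.Random(seed)
    rates = {0: 0.5, 1: 2.0}     # jump rate out of each discrete state
    drift = {0: 1.0, 1: -1.0}    # v(i, x): here constant in x
    h = lambda i, x: 1.0 if i == 0 else 0.0   # cumulate time in state 0
    total = 0.0
    for _ in range(n_runs):
        t, i, x, acc = 0.0, 0, 0.0, 0.0
        while t < t_end:
            dwell = rng.expovariate(rates[i])
            step = min(dwell, t_end - t)
            acc += h(i, x) * step          # h is constant between jumps here
            x += drift[i] * step           # deterministic flow between jumps
            t += step
            i = 1 - i                      # jump to the other state
        total += acc
    return total / n_runs

est = simulate_R0(10.0)
```

With mean dwell times of 2 in state 0 and 0.5 in state 1, roughly 80% of the horizon is spent in state 0, so the estimate sits near 8 for t = 10.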
ASSUMPTIONS
IFp(t) = (p / R0(t)) · ∂R0(t)/∂p,

IFp(∞) = lim(t→+∞) (p / (R0(t)/t)) · ∂(R0(t)/t)/∂p,

where, asymptotically, R0(t)/t tends to the stationary expectation Σ(i∈E) ∫ h(p)(i, x) πs(p)(i, dx).
TRANSITORY RESULTS

For f in a suitable domain DH0, the generator of the process writes

H0 f(i, x) = Σ(j∈E) a(p)(i, j, x) ∫ f(j, y) μ(i,j,x)(dy) + v(p)(i, x) · ∇x f(i, x),

where we set a(p)(i, i, x) := −Σ(j≠i) a(p)(i, j, x) and μ(i,i,x) := δx.

Let DH be the set of functions f(i, x, s) from E × V × R+ to R such that for all i ∈ E the function (x, s) ↦ f(i, x, s) is bounded and continuously differentiable on V × R+, and such that the function

x ↦ ∂f/∂s (i, x, s) + v(p)(i, x) · ∇x f(i, x, s)  (1)

is bounded on V × R+. For f ∈ DH, we define

H(p) f(i, x, s) = Σ(j∈E) a(p)(i, j, x) ∫ f(j, y, s) μ(i,j,x)(dy),  (2)

whose derivative with respect to p acts as

∂H(p)/∂p f(i, x, s) = Σ(j∈E) ∂a(p)/∂p (i, j, x) ∫ f(j, y, s) μ(i,j,x)(dy) + Σ(j∈E) a(p)(i, j, x) (∂/∂p) ( ∫ f(j, y, s) μ(i,j,x)(dy) ).  (3)

The derivative of R0 with respect to pl then writes

∂R0(P)/∂pl (t) = ∫0^t πs(P)( ∂h(P)/∂pl ) ds + ∫0^t πs(P)( (∂H(P)/∂pl) ρ(P)(·, ·, s) ) ds  (4)

which only requires, beyond the solving itself, the new input
data ∂h(P)/∂pl and ∂H(P)/∂pl (see (4)), which is done simultaneously with the solving.
This has to be compared with the usual finite differences method, for which the evaluation of ∂R0(P)/∂pl (t) for one single pl requires the computation of R0(P) for two different families of parameters (P, and P with pl substituted by some pl + ε). The computation of ∂R0(P)/∂pl (t)
H0(p) Uh(p)(i, x) = π(p)(h(p)) − h(p)(i, x)  (7)

for all (i, x) ∈ E × V. Any other element of DH0 solution of (7) is of the shape Uh(p) + C, where C is a constant.
The function Uh(p) is called the potential function associated with h(p).
The following theorem provides an extension to PDMP of the results from (Cao and Chen 1997).
Theorem 1.2 Let us assume μ(i,j,x) and v(i, x) to be independent of p and H1, H2 to be true. Then the following limit exists and we have:
ASYMPTOTIC RESULTS

lim(t→+∞) (1/t) ∂R0(p)/∂p (t) = π(p)( ∂h(p)/∂p ) + π(p)( (∂H0(p)/∂p) Uh(p) )  (8)

where we set:

(∂H0(p)/∂p) ρ0(i, x) := Σ(j∈E) (∂a(p)/∂p)(i, j, x) ∫ ρ0(j, y) μ(i,j,x)(dy)  (5)

Uh(p)(i, x) := ∫0^∞ ( E(i,x) h(p)(Is, Xs) − π(p)(h(p)) ) ds
A FIRST EXAMPLE
The stationary density writes

f(p)(x) = F̄(p)(x) / E(T1(p))  (9)

where F̄ is the survival function (F̄(t) = P(T1 > t)). Then there are some C < +∞ and 0 < ρ < 1 such that the corresponding bound holds. [The remaining expressions of this example, involving ∂λ(p)/∂p and integrals of exp(−∫0^v λ(u) du), were garbled in extraction.]
5.2 Numerical results
λ(α,β)(x) = αβ x^(β−1) if x < x0; P(α,β,x0)(x) if x0 ≤ x < x0 + 2,

where (α, β) ∈ O, x0 is chosen such that T1 > x0 is a rare event (P0(T1 > x0) = e^(−α x0^β) small) and P(α,β,x0)(x) is some smoothing function which makes x ↦ λ(α,β)(x) continuous on R+. For such a failure rate, it is then easy to check that assumptions H1 and H2 are true, using Proposition 6.
Taking (α, β) = (10^−5, 4) and x0 = 100 (which ensures P0(T1 > x0) ≈ 5 × 10^−435), we are now able to compute IF(t) and IF(∞) for t → ∞. In order to validate our results, we also compute such quantities by finite differences (FD) using:

∂Q(t)/∂p ≈ (1/ε) ( Q(p+ε)(t) − Q(p)(t) )
[Table residue: IF(∞) computed by finite differences for ε = 10^−2 . . . 10^−10 against the present (EMR) method; the FD values converge to the EMR results 2.500 × 10^−1 and 2.821 as ε decreases.]
[Figure residue: IF(∞) estimated by FD for ε = 10^−2 . . . 10^−8 against EMR, plotted versus x from 0 to 50.]

∫0^∞ du / ri(u) = +∞ for i = 0, 1, and
f0(x) = (K / r0(x)) exp( −∫(R/2)^x ( λ1(u)/r1(u) − λ0(u)/r0(u) ) du )  (10)

f1(x) = (K / r1(x)) exp( −∫(R/2)^x ( λ1(u)/r1(u) − λ0(u)/r0(u) ) du )  (11)
A SECOND EXAMPLE

6.1 Presentation and theoretical results
we set:

Q1(t) = (1/t) E0 [ ∫0^t 1{R/2 − a ≤ Xs ≤ R/2 + b} ds ]
= (1/t) ∫0^t Σ(i=0,1) ∫(R/2−a)^(R/2+b) ρs(i, dx) ds
= (1/t) ∫0^t ρs h1 ds  (12)

Q2(t) = (1/t) E0 [ Σ(0<s≤t) 1{Is− = 0 and Is = 1} ]
= (1/t) E0 [ ∫0^t λ0(Xs) 1{Is = 0} ds ]
= (1/t) ∫0^t ∫0^R λ0(x) ρs(0, dx) ds
= (1/t) ∫0^t ρs h2 ds  (13)

6.3 Numerical example

λ0(x) = xλ0;  λ1(x) = (R − x)λ1;  r0(x) = (R − x)ρ0;  r1(x) = xρ1;
λ0 = 1.2;  λ1 = 1.10;  R = 1;  a = 0.2;  b = 0.2.

The potential functions solve

vi(x) (d/dx) ( Uhi0(i, x) ) + λi(x) ( Uhi0(1 − i, x) − Uhi0(i, x) ) = Qi0(∞) − hi0(i, x)

for i0 = 0, 1, which may be solved analytically. A closed form is hence available for ∂Qi0(∞)/∂p using (10)-(11) and (8).
[Table residue: first-order derivatives of Q1(∞) and Q2(∞) with respect to the parameters λ0, λ1, ρ0, ρ1, a and b, computed by finite differences (FD) and by the present method (EMR); the relative errors range from about 10^−7 to 10^−2.]
ACKNOWLEDGEMENT
The authors would like to thank Anne Barros, Christophe Bérenguer, Laurence Dieulle and Antoine Grall from Troyes University of Technology (Université de Technologie de Troyes) for having drawn their attention to the present subject.
REFERENCES
[Table residue: further FD/EMR derivative comparisons with relative errors; the reference list itself was lost in extraction.]
I. Cañamón
Universidad Politécnica de Madrid, Madrid, Spain
ABSTRACT: The present status of the development of both the theoretical basis and the computer implementation of the risk and path assessment modules in the SCAIS system is presented. These modules are supplementary tools to the classical probabilistic (i.e. fault tree, event tree and accident progression: PSA ET/FT/APET) and deterministic (dynamic accident analysis) tools, able to compute the frequency of exceedance in a harmonized approach, based on a path-and-sequences version (TSD, theory of stimulated dynamics) of the general stimulus-driven theory of probabilistic dynamics. The contribution examines the relation of the approach with classical PSA and accident dynamic analysis, showing how the engineering effort already made in nuclear facilities may be used again, adding to it an assessment of the damage associated with the transients and a computation of the exceedance frequency. Many aspects of the classical safety assessments, from technical specifications to success criteria and event tree delineation/evaluation, may be verified at once, making effective use of the resources of regulatory and technical support organizations.
INTRODUCTION
This paper deals with the development of new capabilities related to risk-informed licensing policies and their regulatory verification (RIR, Risk Informed Regulation), which require a more extended use of risk assessment techniques, with a significant need to further extend PSA scope and quality. The computerized approach (see the ESREL 08 companion paper on the SCAIS system) is inspired by a coherent protection theory that is implemented with the help of a mathematical formalism, the stimulus-driven theory of probabilistic dynamics, SDTPD (ref. 1), an improved version of the TPD (ref. 3) in which its restrictive assumptions are released.
Computer implementation of SDTPD makes it necessary to design specific variants that, starting from the original SDTPD formal specification as formulated in ref. 1, go down the line towards generating a set of hierarchical computing algorithms, most of them being the same as those in the classical safety assessments. They aim to compute the exceedance frequency and the frequency of activating a given stimulus as the main figures of merit in RIR defense-in-depth regulation requirements, in particular the adequate maintenance of safety margins (ref. 2).
2.1
As is well known, the differential semi-Markov equations for the probability πj(t) of staying in state j at time t are

(d/dt) πj(t) = −πj(t) Σ(k≠j) pjk(t) + Σ(k≠j) πk(t) pkj(t)  (1)

with solution

πj(t) = Σk [ exp( ∫τ^t ds A(s) ) ]jk πk(τ)  (2)

where [A(s)]jk = pjk(s) for k ≠ j, and [A(s)]jj = −Σ(k≠j) pjk(s).
They may also be written as integral equations representing the solution of the semi Markov system for
the frequency of entering state j, (ingoing density)
j (t), and its probability j (t) given by
λj(t) = ∫0^t Σ(k≠j) [ λk(τ) + πk(0) δ(τ) ] qjk(t, τ) dτ  (3)

where δ stands for the Dirac delta, and qjk(t, τ) dt is the probability that, being in state k at time τ, event jk takes place at exactly time t; it is then given by

qjk(t, τ) = pkj(t) exp( −∫τ^t Σ(l≠k) pkl(s) ds )  (4)

Expanding over the number of experienced events,

λj(t) = Σ(n≥1) λjn(t),  λjn(t) = ∫0^t Σ(k≠j) λk,n−1(τ) qjk(t, τ) dτ  (5)

Grouping the contributions of each sequence of events seq j = (j1, j2, . . . , jn, j), the ingoing density may also be written as

λj^seq(t) = Σ(seq j) πj1(0) ∫ Vn,j(τ̄n < t) dτ̄n Qj^seq(t/τ̄n)  (8)
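For constant transition rates, the kernel qjk of eq. (4) has a simple closed form that is easy to evaluate; the rates below are illustrative assumptions.

```python
import math

# Sketch: semi-Markov kernel q_jk(t, tau) of eq. (4) for constant rates:
# density of leaving state k for state j at time t, having entered k at
# time tau. The rate values are illustrative.
def q(j, k, t, tau, rates):
    """rates[k][l]: transition rate from state k to state l."""
    total_out = sum(r for l, r in rates[k].items() if l != k)
    return rates[k][j] * math.exp(-total_out * (t - tau))

rates = {0: {1: 0.3, 2: 0.2}, 1: {0: 1.0}, 2: {0: 0.5}}
density = q(1, 0, t=2.0, tau=0.0, rates=rates)   # leave state 0 for state 1
```

Summed over destination states and integrated over t, the kernel accounts for the full exit probability from state k, which is the closure property used in eq. (6a)-(6b).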
2.2
where n stands for the number of experienced events. Note that the qs satisfy a strong closure relation: for all τ1 < τ2 < t,

qk,j(t, τ2) [ 1 − ∫τ1^τ2 dv Σ(l≠j) ql,j(v, τ1) ] + ∫τ1^τ2 dv Σ(l≠j) qk,l(t, v) ql,j(v, τ1) = qk,j(t, τ1)  (6a)

which ensures that at all times

Σj πj(t) = 1  (6b)

The kernel of a whole sequence is the product of event kernels:

Qj^seq(t/τ̄n) = qj,jn(t, τn) qjn,jn−1(τn, τn−1) · · · qj2,j1(τ2, τ1),  τ1 < · · · < τn < t,  τ̄n := (τ1, . . . , τn)  (7)
Between dynamic events, the plant state evolves deterministically:

x = x(t, τ̄j) = gjn(t − τn+, ūn),  τn < t < τn+1,
ūn = gjn−1(τn − τn−1+, ūn−1),  x̄0 = gi(0, x̄0)  (9)-(11)
2.3 Extension to stimuli: modeling the plant states
Each transition kernel is multiplied by the probability that the stimulus of the next event is activated:

Q̃jn,jn−1^(Jn−1, seq j)(τn, τn−1) ≡ qjn,jn−1(τn, τn−1) π(Jn, τn / Jn−1, τn−1+)  (12)

Computing this stimuli probability is then a typical classical-PSA binary Markov problem, with the stimuli matrix STn associated with the (+ activated, − deactivated) state of each stimulus (i.e. A ≡ STn in eq. (2)) and the compound-stimuli state vector J, once a dynamic path is selected; i.e. the solution is

πJ(t) = Σ(Kn) [ exp( ∫τn+^t STn(u) du ) ]J,Kn πKn(τn+)  (13)

Matrix [Φn+] models the potential reshuffling of the state of the stimuli that may take place as a result of the occurrence of dynamic event n, i.e. some activate, some deactivate, some stay as they were. It also incorporates the essential feature of SDTPD that stimulus Gn, responsible for the nth dynamic transition, should be activated for the transition to take place. Then the equivalent of eq. (8) may be applied with:

Q̃jn,jn−1^(Jn,Jn−1, seq j)(τn, τn−1) = qjn,jn−1(τn, τn−1) Σ(Kn) [ exp( ∫ STn(u) du ) ]Jn,Kn [Φn+]Kn,Jn−1(τn)  (14)

Q̃j^(J, seq j)(t/τ̄) = Σ(J1,J2,...,Jn) Q̃j,jn^(J,Jn, seq j)(t, τn) Q̃jn,jn−1^(Jn,Jn−1, seq j)(τn, τn−1) · · · Q̃j2,j1^(J2,J1, seq j)(τ2, τ1) πJ1(0+)  (15)

with

Q̃j,jn^(J,Jn, seq j)(t, τn) ≡ qj,jn(t, τn) Σ(Kn) [ exp( ∫ STn(u) du ) ]J,Kn [Φn+]Kn,Jn  (16)

3.1 Example
Q̃j^(I, seq j)(t/τ̄n) = δG1+(J̄) δG2+(J̄) · · · δGn+(J̄) Qj^seq(t/τ̄n)  (22)

where Qj^seq(t/τ̄n) is given in eq. (7) and each stimulus factor is

δGn(I) = 1 if Gn ∈ I is activated; 0 if Gn ∈ I is deactivated  (17)

Then

STn = 0 ⇒ exp( ∫ STn(u) du ) = Identity matrix  (18)

[Φn+]I,J(τn) ≡ Σ(Gn) δG+n(J) δI,J(τn)  (19)

For n > N all stimuli would be deactivated and then the [Φn+]I,J contribution would be zero, as would any sequence with repeated headers.
In other words, the result reduces to a standard semi-Markov case, but in the solution equation (8) only sequences of non-repeated events, up to N of them, are allowed. This academic case illustrates the reduction in the number of sequences generated by the stimuli activation condition, which limits the sequence explosion problem of the TPD in a way consistent with explicit and implicit PSA customary practice.
Results of this example for the case N = 2 are shown in sections 4-5 below, mainly as a way of checking the method for integration over the dynamic times, which is the subject of the next section.
damage
(t) =
j ,JN
jJ11 (0)
d
J damage
N,
Qj,seq
j
(t/
)
Vn,j (
<t)
1 for J1 = (+, +, +N ) all stimuli activated
J1 +
(0 ) =
0 all other states
(23)
(20)
and
I (1+ < t < 2 ) = I (1+ ) =
=
G+1 J I ,J J (1 )
1 for I = JG1 = (, +, +N )
0 all other states
G1
(21)
Table 1. tests.

4.1
A(X_i X_j) = (t_AD − t_ini)^2 / 2,  (i, j = 1, 2; i ≠ j)

V(X_i X_j X_k) = (t_AD − t_ini)^3 / 6,  (i, j, k = 1, 2, 3; i ≠ j ≠ k)

and the generalization of this equation to N dimensions:

V(X_{i_1} X_{i_2} · · · X_{i_N}) = (t_AD − t_ini)^N / N!,  (i_1, i_2, . . . , i_N = 1, 2, . . . , N; i_1 ≠ i_2 ≠ . . . ≠ i_N)   (24)
We can associate to any damage point in the N-dimensional sampling domain an incremental volume of the form ΔV = Π_{i=1}^N Δt_i, where the incremental times on each axis have to be determined.
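The closed form (t_AD − t_ini)^N / N! of eq. (24) is the volume of the region where N event times fall in increasing order inside the interval. A quick Monte Carlo check of that fact (a verification sketch, not part of the paper's integration routines):

```python
import random
from math import factorial

def simplex_volume_mc(t_ini, t_ad, n, samples=200_000, seed=0):
    """Monte Carlo estimate of the volume of the ordered region
    {t_ini < t_1 < t_2 < ... < t_n < t_ad}, to be compared with the
    closed form (t_ad - t_ini)**n / n! of eq. (24)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        ts = [rng.uniform(t_ini, t_ad) for _ in range(n)]
        # a uniform point of the hypercube lies in the ordered region
        # with probability exactly 1/n!
        if all(ts[i] < ts[i + 1] for i in range(n - 1)):
            hits += 1
    cube = (t_ad - t_ini) ** n
    return cube * hits / samples

def exact(t_ini, t_ad, n):
    return (t_ad - t_ini) ** n / factorial(n)
```

For n = 3 on the interval (0, 2) the estimate should agree with 8/6 up to sampling noise.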
4.2

Figure 3. Sequence [1 2 4]: sampled points in the SIS-COMB plane (panels a and b).
damage domain, i.e. if a neighbor of an existing damage point is not a damage point, then a new point is sampled between them and evaluated. Figure 4b shows the result of this stage for the same example sequence [1 2 4].

Seeding stage: a random seeding of new sampling points over the whole domain has been included here, in order to discover new damage zones disjoint from the previous ones. Several parameters control the stopping criteria in this stage. In particular, we stop when a number of seeding points proportional to the refinement of the initial sampling mesh grid has been reached. Figure 4c shows the result of this stage for the actual example.

Growing stage: at this stage, the algorithm extends the sampling through the whole interior zone of the damage domain. With the combination of the refining stage and the growing stage, we optimize the number of new points being sampled while refining everything inside the damage domain. Figure 4d shows the result of this stage in the example sequence.
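The refining stage above can be sketched in one dimension: wherever a damage point has a non-damage neighbor, a midpoint is evaluated, sharpening the boundary of the damage domain. The `is_damage` indicator below is a hypothetical stand-in for a full accident-dynamics simulation, and the 1-D grid is a simplification of the paper's 2-D sampling mesh.

```python
def refine(points, is_damage, rounds=5):
    """Bisect every damage/non-damage neighbor pair to sharpen the
    estimate of the damage-domain boundary."""
    pts = sorted(points)
    for _ in range(rounds):
        new = []
        for a, b in zip(pts, pts[1:]):
            # a boundary lies between two neighbors that disagree
            if is_damage(a) != is_damage(b):
                new.append((a + b) / 2.0)
        pts = sorted(pts + new)
    return pts

# Hypothetical damage domain: the interval [0.37, 0.62].
is_damage = lambda t: 0.37 <= t <= 0.62
grid = [i / 10.0 for i in range(11)]          # coarse initial mesh
refined = refine(grid, is_damage)
```

Each round halves the bracketing interval around each boundary, so five rounds locate the edges of the damage domain to within a few thousandths with only a handful of extra evaluations.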
VERIFICATION TESTS
To verify the integration routines we took the simple case of the section 3.2 example, already used in
Figure 4. Stages of the adaptive search algorithm in the sequence [1 2 4]: a. Initial stage; b. Refining stage; c. Seeding stage; d. Growing stage.
Sequence   Probability
[1]        0.1250
[1 3]      0.1250
[1 2]      0.3750
[1 3 2]    0.1875
[1 2 3]    0.1875

Sequence   # Total paths   Probability
[1]        1               0.1250
[1 3]      16              0.1231
[1 2]      16              0.3717
[1 3 2]    136             0.1632
[1 2 3]    136             0.2032

Sequence   # Total paths   Probability
[1]        1               0.1250
[1 3]      64              0.1251
[1 2]      32              0.3742
[1 3 2]    2080            0.1672
[1 2 3]    2080            0.2091
5.1 Numerical results

Figure 5. Sequence probabilities p_j as a function of time for the sequences [1], [1 3], [1 2], [1 2 3] and [1 3 2].
j probability) can be observed, as well as the fulfilment of the normalization condition given by eq. (6b), further confirmed in Fig. 5, which shows a bar diagram with the probabilities at all times.
new TSD methodology based on it is feasible and compatible with existing PSA and accident dynamic tools. It is able to find the damage exceedance frequency, which is the key figure of merit in any strategy for safety margins assessment. We have discussed some issues associated with the development of its computer implementation, and given results of preliminary verification tests.
Table 5. Damage exceedance relative frequency, TSD computation of the verification example; stopping criterion (φ_exc^i − φ_exc^{i−1})/φ_exc^i ≤ 5%.

Sequence   # Damage paths   # Total paths   Exceed. freq.
[1]        0                1               0.0000
[1 3]      9                15              0.1718
[1 2]      14               15              0.5188
[1 3 2]    3155             3308            0.0700
[1 2 3]    1178             1314            0.0350
The second part of the verification tests computes the relative exceedance frequency φ_exc/φ(0) of the damage stimulus 4 (entering the flammability region for H2 combustion) for the same example. Table 5 shows the values of φ_exc/φ(0) for each possible sequence; the total relative exceedance frequency is the sum of them, totalling 0.7956.
CONCLUSIONS
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
D. Ginestar
Department of Applied Mathematics, Universidad Politécnica de Valencia, Valencia, Spain
S. Martorell
Department of Chemical and Nuclear Engineering, Universidad Politécnica de Valencia, Valencia, Spain
ABSTRACT: Faults in Heating, Ventilating and Air-Conditioning (HVAC) systems can play a significant role in energy efficiency loss, performance degradation, and even environmental implications. Since the chiller is one of the most important components of an HVAC system, the present work focuses on it. A lumped model is proposed to predict the chiller fault-free performance using data easily obtained from an industrial facility. This model predicts the chilled water temperature, the operating pressures and the system overall energy performance. The fault detection methodology is based on comparing actual and fault-free performances, using the proposed model, and thresholds on the operating variables. The technique has been successfully applied for fault detection in a real installation under different faulty conditions: refrigerant leakage, and water reduction in the condenser and in the secondary circuit.
INTRODUCTION
For the development of the fault detection methodology, a monitored vapour-compression chiller, which develops a simple compression cycle, has been used. This facility consists of the four basic components: an open-type compressor driven by a variable speed electric motor; an isolated shell-and-tube (1-2) evaporator, where the refrigerant flows inside the tubes, using a brine (water-glycol mixture, 70/30% by volume) as secondary fluid; an isolated shell-and-tube (1-2) condenser, with the refrigerant flowing along the shell, where water is used inside the tubes as secondary fluid; and a thermostatic expansion valve. A scheme of the installation is shown in Figure 1.
In order to introduce modifications in the chiller operating conditions, we use the secondary fluid loops, which help to simulate the evaporator and condenser conditions in chillers. The condenser water loop consists of a closed-type cooling system (Fig. 2), which allows controlling the temperature of the water and its mass flow rate.
The cooling load system (Fig. 3) also regulates the secondary coolant temperature and mass flow rate using a set of immersed electrical resistances and a variable speed pump.
Figure 1. Scheme of the installation.

Figure 2. Condenser water loop.

Figure 3. Cooling load system.
Figure 4. Structure of the model: physical model implemented in MATLAB using REFPROP, with a user interface.

Figure 5. Residual generation scheme: the actual performance x_k of the system is compared with the expected performance x̃_k given by the model, and the residuals are evaluated.
A typical fault on both the evaporator and the condenser circuits is the reduction of the mass flow rate of the secondary fluids through the heat exchangers. These reductions can prevent the installation from meeting its objectives and, on the other hand, can cause an energy efficiency reduction, since a lower mass flow rate of the secondary fluids at the condenser and evaporator produces an increase of the compression rate.
Thus, in the following, we will analyze these three kinds of failures. The first step for the correct identification of each type of fault is to analyze the variations produced by each kind of faulty operation, followed by the selection of a reduced group of variables that show a significant variation during the fault and can therefore be used to detect it.
In order to identify these variables, some variations have been forced in the facility in such a way that they simulate those failures. Then the correlation of each variable with each fault has been calculated, both in the short term and in the medium term (quasi-steady state) after it.
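The variable-selection step just described amounts to ranking monitored variables by the magnitude of their correlation with a fault indicator. A minimal sketch, with entirely hypothetical numbers (the series below are illustrative, not measurements from the facility):

```python
import statistics

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Binary fault indicator (1 while the forced fault is active) and two
# hypothetical variable records sampled at the same instants.
fault  = [0, 0, 0, 1, 1, 1]
p_evap = [4.00, 3.99, 4.01, 3.70, 3.65, 3.62]   # drops under the fault
p_cond = [15.2, 15.3, 15.2, 15.2, 15.3, 15.2]   # barely reacts
```

A |correlation| close to 1 flags the evaporator pressure as a good indicator for this fault, while the near-zero value for the condenser pressure rules it out.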
Figure 6. Model scheme. Inputs: m_b, T_b,in, m_w, T_w,in, N and the geometric characteristics of the system; outputs: T_b,out, T_w,out, p_e, p_k, P_C and COP.

Figure 7. Scheme of the cycle: compressor, expansion valve, superheated vapour lines, and the water and air secondary circuits.
Variable   Initial    ST var (%)   MT var (%)
Pk (bar)   15.18      1.46         1.96
Pe (bar)   4.00       0.22         0.22
Tki (K)    347.63     0.08         0.73
Tko (K)    308.61     0.02         0.14
Toi (K)    265.09     0.04         0.23
T (K)      266.51     0.02         0.01
Twi (K)    291.00     0.00         0.49
Two (K)    297.68     0.01         0.05
Tbi (K)    286.89     0.11         0.15
Tbo (K)    -          0.02         0.00
4.2

As shown in Table 2, both the pressure at the condenser and the pressure at the evaporator are suitable variables to be monitored. Furthermore, both have a clear reaction in the transient state and after it; however, the evaporator pressure shows an approximately three times higher deviation than the pressure at the condenser and will thus be a more reliable indicator for the failure detection. The response of the pressure in the condenser is shown in Fig. 9.
4.3 Refrigerant leakage
Figure 9. Response of the condenser pressure (from about 14.9 to 15.7 bar, between t = 1700 s and t = 3100 s).
Table 2.

Variable   Initial    ST var (%)   MT var (%)
Pk (bar)   16.28      1.02         1.20
Pe (bar)   3.60       3.77         3.38
Tki (K)    352.97     0.00         0.05
Tko (K)    311.26     0.04         0.16
Toi (K)    263.45     0.27         0.34
T (K)      266.52     0.09         0.55
Twi (K)    295.89     0.03         0.03
Two (K)    302.03     0.03         0.10
Tbi (K)    284.44     0.01         0.06
Tbo (K)    352.97     0.00         0.05
Variable   Initial    Var (%)
Pk (bar)   15.21      1.35
Pe (bar)   3.89       4.60
Tki (K)    347.57     0.00
Tko (K)    308.48     0.02
Toi (K)    264.91     0.15
T (K)      266.41     0.21
Twi (K)    291.05     0.01
Two (K)    297.66     0.05
Tbi (K)    287.07     0.00
Tbo (K)    347.57     0.00
Evolution of the evaporator pressure (from about 3.45 to 3.85 bar between t = 17300 s and t = 19300 s, and from about 3.80 to 4.10 bar between t = 7700 s and t = 8900 s).
RESULTS
Figure 11. Evolution of the residuals of the condenser pressure with the ±3σ thresholds.
As discussed above, the best variable for predicting a fault in the condenser circuit is the condenser pressure. In order to verify that the fault would be detected when it takes place, we force this fault in the facility by partially closing the valve of the condenser circuit. The evolution of the residuals associated with the condenser pressure is shown in Fig. 11.
As can be seen in this figure, the residuals of the condenser pressure remain inside the thresholds until the fault takes place. This shows that the selected variable and the proposed methodology would detect the fault without causing false alarms during fault-free operation.
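The detection logic described above can be sketched as follows: residuals between measured values and the model expectation are compared against a ±3σ band calibrated on a fault-free window. This is a minimal sketch, assuming a fixed calibration window; the function name and the data below are illustrative, not the paper's implementation.

```python
import statistics

def detect_faults(measured, expected, calibration=50, k=3.0):
    """Flag sample indices whose residual (measured - model expectation)
    leaves the +/- k*sigma band, with mean and sigma estimated on an
    initial fault-free calibration window."""
    residuals = [m - e for m, e in zip(measured, expected)]
    mean = statistics.fmean(residuals[:calibration])
    sigma = statistics.pstdev(residuals[:calibration])
    lo, hi = mean - k * sigma, mean + k * sigma
    return [i for i, r in enumerate(residuals) if not (lo <= r <= hi)]
```

During fault-free operation the residuals stay inside the band (no false alarms); a sustained deviation such as the pressure drift after a forced fault is flagged immediately.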
5.2
Figure 13. Evolution of the residuals during the refrigerant leakage, with the ±3σ thresholds.
CONCLUSIONS
R. Winther
Østfold University College, Halden, Norway
ABSTRACT: It is inevitable that software systems contain faults. If one part of a system fails, this can affect other parts and result in partial or even total system failure. This is why critical systems utilize fault tolerance and isolation of critical functions. However, there are situations where several parts of a system need to interact with each other. With today's fast computers, many software processes run simultaneously and share the same resources. This motivates the following problem: can we, through an automated source code analysis, determine whether a non-critical process can cause a critical process to fail when they both run on the same computer?
INTRODUCTION
In this paper we report on the results from the practical application of a method that was first presented
at ESREL 2007 (Sarshar et al. 2007). The presented
method identified failure modes that could cause error
propagation through the usage of the system call interface of Linux. For each failure mode, its characteristics
in code were determined so it could be detected when
analyzing a given source code. Existing static analysis
tools mainly detect language specific failures (Koenig
1988), buffer overflows, race conditions, security vulnerabilities and resource leaks, and they make use of
a variety of analysis methods such as control and data
flow analysis. An examination of some of the available tools showed that no existing tool can detect all
the identified failure modes.
The purpose of this paper is to present and evaluate a
conceptual model for a tool that allows automatic analysis of source code for failure modes related to error
propagation. Focus is on the challenges of applying
such analysis automatically. An algorithm is proposed,
and a prototype implementation is used to assess this
algorithm. In contrast to existing tools, our checks specifically focus on failure modes that can cause error propagation between processes during runtime. As we show by applying the tool to a case, most of the failure modes related to error propagation can be detected automatically.
The paper is structured as follows: Section 2
describes the background and previous work. Section 3
describes the analysis method (Sarshar et al. 2007).
In Section 4, existing tools are considered and in
Section 5 a method for automatically assessing source
BACKGROUND
The errors identified in this approach were erroneous values in the variables passed to the system
call interface and errors caused when return, or modified, pointer variables were not handled properly.
From the analysis we know not only which functions behave non-robustly, but also the specific input
that results in errors and exceptions being thrown
by the operating system. This simplifies identification of the characteristics a failure mode has in
source code.
Our proposed approach of analyzing error propagation between processes concerns how the process
of interest can interact with and affect the environment (the operating system and other processes).
A complementary approach could be to analyze how
a process can be affected by its (execution) environment. In (Johansson et al. 2007), the authors inject
faults in the interface between drivers and the operating system, and then monitor the effect of these faults in the application layer. This is an example where processes in the application layer are affected by their
execution environment. Comparing this method to our
approach, it is clear that both methods make use of
fault injection to determine different types of failure
effects on user programs. However, the examination
in (Johansson et al. 2007) only concerns incorrect values passed from the driver interface to the operating
system. Passing of incorrect values from one component to another is a mechanism for error propagation
and relates to problems with intended communication channels. Fault injection is just one method to evaluate a process's robustness with regard to incorrect values
in arguments. In (Sarshar 2007), the failure effects
of several mechanisms were examined: passing of
arguments and return value, usage of return value,
system-wide limitations, and sequential issues. These
methods complement each other. Brendan Murphy,
co-author of (Johansson et al. 2007), from Microsoft Research1, pointed out his worries at ISSRE 2007: driver developers do not use the system calls correctly; for instance, they do not use the return values from the system calls. There is nothing wrong with the API; it is the developer who lacks the knowledge about how to use the system calls.
Understanding the failure and error propagation mechanisms in software-based systems (Fredriksen & Winther 2006) (Fredriksen & Winther 2007) will provide the knowledge to develop defences and avoid such mechanisms in software. It is therefore important to be aware of the limitations of the proposed approach. This analysis only identifies failure modes related to the use of system calls in source code. Other mechanisms for

1 Cambridge, UK.

error propagation that do not involve usage of the system call interface will not be covered by this approach. An eternal loop structure in code is an example of a failure mode that does not make use of system calls. This failure mode can cause error propagation because it consumes a lot of CPU time.
3 ANALYSIS
Table 1.

Ref       Failure mode                                                  Check
F.29.1.D  Parameter key is less than type key_t                         Value
F.29.1.E  Parameter key is greater than type key_t                      Value
F.29.1.F  Parameter key is of wrong type                                Type
F.29.1.G  IPC_PRIVATE specified as key when it should not be            Value
F.29.2.F  Parameter size is of wrong type                               Type
F.29.3.B  Parameter shmflg is not legal                                 Value
F.29.3.C  Parameter shmflg is of wrong type                             Type
F.29.3.D  Permission mode is not set for parameter shmflg               Value
F.29.3.E  Access permission is given to all users instead of user only  Value
F.29.3.F  Permission mode is write when it should have been read        Value
F.29.3.G  Permission mode is read when it should have been write        Value
F.29.4.A  Return value is not used                                      Variable usage
Table 2. Failure mode characteristics in code related to sequential issues for shared memory.

Ref      Function   Failure mode                                    Check
F.shm.A  shmdt()    Target segment is not attached                  Line numbers
F.shm.B  shmat()    Segment to attach is not allocated              Line numbers
F.shm.C  shmctl()   Target segment is not identified                Line numbers
F.shm.D  shmat()    Segment is not detached prior to end of scope   Line numbers

EXISTING TOOLS

Table 3.

Tool              Rel.
Coverity          Some
Klockwork         Some
PolySpace         No
Purify            Some
Flexelint         Some
LintPlus          Unkn.
CodeSonar         Some
Safer C toolset   Some
DoubleCheck       Some
Sotoarc           No
Astree            Some
Mygcc             Some
Splint (LC-Lint)  Some
RATS              Unkn.
Sparce            No

Checks reported for these tools include: passing of arguments and return values; ignored return values; passing of arguments; user-defined checks; and passing of arguments, ignored return values and sequential issues.
1. Preliminary part:

Figure 1.

3. Report part
This part is integrated with the analysis part in this prototype. Printed messages are in different colors depending on their contents: warnings are printed in orange and errors are printed in red. As a test program, consider the code of shm.c in Listing 1.
Listing 1. shm.c

#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>

extern void perror();

int main() {
    // key to be passed to shmget
    int key = 100;
    // shmflg to be passed to shmget
    int shmflg = 00001000;
    // return value from shmget
    int shmid;
    // size to be passed to shmget
    int size = 1024;
    // shmaddr to be passed to shmat
    char *shmaddr = 00000000;
    // returned working address
    const char *workaddr;
    int ret;

    // Create a new shared memory segment
    if ((shmid = shmget(key, size, shmflg)) == -1) {
        perror("shmget failed");
        return -1;
    } else {
        (void) fprintf(stdout, "shmget returned %d\n", shmid);
    }

    // Make the detach call and report the results
    ret = shmdt(workaddr);
    perror("shmdt");

    // Make the attach call and report the results
    workaddr = shmat(shmid, shmaddr, size);
    if (workaddr == (char *)(-1)) {
        perror("shmat failed");
        return -1;
    } else {
        (void) fprintf(stdout, "shmat returned successfully");
    }

    return 0;
}
4 This representation was inspired by the SIMPLE representation proposed in the McCAT compiler project at McGill University for simplifying the analysis and optimization of imperative programs.
Figure 2.
DISCUSSION
In the examination of the system call interface we identified failure modes that could cause error propagation. For each failure mode, a check was described to determine its presence in code. The focus was on detecting these failure modes when analyzing a given source code. The analysis process was then automated using static analysis, and we created a prototype tool that managed to detect these failure modes when analyzing source code.
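One of the simplest checks of this kind, failure mode F.29.4.A from Table 1 ("return value is not used"), can be illustrated with a naive line-based scan of C source. This is only a sketch, not the authors' prototype: a real implementation would use proper parsing and data-flow analysis rather than regular expressions.

```python
import re

# Shared-memory system calls whose result should be assigned or tested.
CALLS = ("shmget", "shmat", "shmdt", "shmctl")

def ignored_return_values(source):
    """Return (line number, call) pairs where a system call's result is
    neither assigned nor used in a condition on the same line."""
    warnings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for call in CALLS:
            if re.search(rf"\b{call}\s*\(", line):
                # crude test: an '=', 'if', 'while' or 'return' preceding
                # the call on the same line counts as "result used"
                used = re.search(rf"(=|if|while|return)[^;]*\b{call}\b",
                                 line)
                if not used:
                    warnings.append((lineno, call))
    return warnings

code = """\
shmid = shmget(key, size, shmflg);
shmdt(workaddr);
if (shmctl(shmid, IPC_RMID, 0) == -1) { }
"""
```

On this fragment only the bare `shmdt` call on line 2 is flagged, mirroring the kind of warning the prototype emits for shm.c.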
Static analysis tools look for a fixed set of patterns, or compliance/non-compliance with respect to
rules, in the code. Although more advanced tools allow
new rules to be added over time, the tool will never
detect a particular problem if a suitable rule has not
been written. In addition, the output of a static analysis tool still requires human evaluation. It is difficult
for a tool to know exactly which problems are more
REFERENCES
Fredriksen R. & Winther R. 2006. Error Propagation: Principles and Methods. Halden Internal Report, Norway, HWR-775, OECD Halden Reactor Project.
Fredriksen R. & Winther R. 2007. Challenges Related to Error Propagation in Software Systems. In Safety and Reliability Conference, Stavanger, Norway, June 25-27. Taylor & Francis, 2007, pp. 83-90.
Johansson A., Suri N. & Murphy B. 2007. On the Impact of Injection Triggers for OS Robustness Evaluation. In Proceedings of the 18th IEEE International Symposium on Software Reliability Engineering (ISSRE 2007), November 2007, pp. 127-136.
Koenig A. 1988. C Traps and Pitfalls. Reading, Mass., Addison-Wesley.
Sarshar S. 2007. Analysing Error Propagation between Software Processes in Source Code. Master's thesis, Norway, Østfold University College.
Sarshar S., Simensen J.E., Winther R. & Fredriksen R. 2007. Analysis of Error Propagation Mechanisms between Software Processes. In Safety and Reliability Conference, Stavanger, Norway, June 25-27. Taylor & Francis, 2007, pp. 91-98.
ABSTRACT: In the maintenance field, prognostics is recognized as a key feature, as the estimation of the remaining useful life of equipment allows avoiding inopportune maintenance spending. However, it can be difficult to define and implement an adequate and efficient prognostic tool that includes the inherent uncertainty of the prognostic process. Within this frame, neuro-fuzzy systems are well suited for practical problems where it is easier to gather data (online) than to formalize the behavior of the system being studied. In this context, and according to real implementation restrictions, the paper deals with the definition of an evolutionary fuzzy prognostic system for which no assumption on its structure is necessary. The proposed approach outperforms classical models and is well fitted to perform a priori reliability analysis and thereby optimize maintenance policies. An illustration of its performance is given by making a comparative study with another neuro-fuzzy system that emerges from the literature.
INTRODUCTION
step, even impossible. Intelligent Maintenance Systems must however take it into account. 2) On the
other hand, in many cases, it is not too costly to equip
dynamic systems with sensors, which allows gathering real data online. Furthermore, monitoring systems
evolve in this way.
According to all this, neuro-fuzzy (NF) systems
appear to be very promising prognostic tools: NFs
learn from examples and attempt to capture the subtle
relationship among the data. Thereby, NFs are well
suited for practical problems, where it is easier to
gather data than to formalize the behavior of the system being studied. Recent developments confirm the interest of using NFs in forecasting applications (Wang
et al. 2004; Yam et al. 2001; Zhang et al. 1998). In this context, the paper deals with the definition of an evolutionary fuzzy prognostic system for which no assumption on its structure is necessary. This model is well adapted to perform a priori reliability analysis and thereby optimize maintenance policies.
The paper is organized in three main parts. In the first part, prognostics is briefly defined and positioned within the maintenance strategies, and the relationship between prognostics, prediction and online reliability is explained. Following that, the use of Takagi-Sugeno neuro-fuzzy systems in prognostic applications is justified and the ways of building such models are discussed; thereby, a NF model for prognostics is proposed. In the third part, an illustration of its performance is given by making a comparative study with another NF system that emerges from the literature.
2
2.1
Figure 1.
(1)
Figure 2.
be expressed as follows:

R(t) = 1 − Pr[y(t) ≥ y_lim] = 1 − ∫_{y_lim}^∞ g(y/t) dy   (3)
(3)
The remaining useful life (RUL) of the system can finally be expressed as the remaining time between the time at which the prediction is made (t_p) and the time at which the reliability falls below a limit (R_lim) fixed by the practitioner (see Fig. 2).
These explanations can be generalized to a multidimensional degradation signal; see (Chinnam and Pundarikaksha 2004) or (Wang and Coit 2004) for more details. Finally, the a priori reliability analysis can be performed if an accurate prognostic tool is used to approximate and predict the degradation of an equipment. This is the purpose of the next sections of this paper.
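The RUL definition above can be made concrete with a small numerical sketch. Assuming (purely for illustration, this is not a model from the paper) a Gaussian degradation forecast with a hypothetical linear mean, R(t) from eq. (3) is evaluated on a time grid and the RUL is the first instant after t_p where it falls below R_lim:

```python
import math

def reliability(t, mean_deg, sigma, y_lim):
    """R(t) = 1 - Pr[y(t) >= y_lim] for a Gaussian degradation forecast
    with mean mean_deg(t) and spread sigma (eq. 3)."""
    z = (y_lim - mean_deg(t)) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def rul(t_p, mean_deg, sigma, y_lim, r_lim, horizon=1000.0, step=0.1):
    """Remaining useful life: time from the prediction instant t_p until
    R(t) first falls below the practitioner's limit r_lim."""
    t = t_p
    while t < t_p + horizon:
        if reliability(t, mean_deg, sigma, y_lim) < r_lim:
            return t - t_p
        t += step
    return None  # reliability never crossed the limit within the horizon

# Hypothetical linear degradation: y grows 0.1 unit per hour from 2.0.
mean_deg = lambda t: 2.0 + 0.1 * t
```

With y_lim = 5.0 and R_lim = 0.5, the mean degradation crosses the limit at t = 30, so the RUL computed from t_p = 0 is about 30 hours.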
3
3.1
major ANN drawback (lack of knowledge explanation) while preserving their learning capability. In this way, neuro-fuzzy systems are well adapted. More precisely, first order Takagi-Sugeno (TS) fuzzy models have shown improved performances over ANNs and conventional approaches (Wang et al. 2004). Thereby, they can perform the degradation modeling step of prognostics.
3.2 Takagi-Sugeno models: Principles
a) The inference principle
A first order TS model provides an efficient and
computationally attractive solution to approximate a
nonlinear input-output transfer function. TS is based
on the fuzzy decomposition of the input space. For
each part of the state space, a fuzzy rule can be constructed to make a linear approximation of the input.
The global output approximation is a combination
of the whole rules: a TS model can be seen as a
multi-model structure consisting of linear models that
are not necessarily independent (Angelov and Filev
2004).
Consider Fig. 3 to explain the first order TS model.
In this illustration, two input variables are considered,
two fuzzy membership functions (antecedent fuzzy
sets) are assigned to each one of them, and the TS
model is finally composed of two fuzzy rules. That
said, a TS model can be generalized to the case of n
inputs and N rules (see here after).
The rules perform a linear approximation of the inputs as follows:

R_i: if x_1 is A_1^i and . . . and x_n is A_n^i THEN y_i = a_{i0} + a_{i1} x_1 + · · · + a_{in} x_n   (4)
Figure 3.
μ_j^i(x_j) = exp(−4 ‖x_j − x_j^{i*}‖^2 / (σ_j^i)^2)   (5)

where (σ_j^i)^2 is the spread of the membership function and x^{i*} is the focal point (center) of the ith rule antecedent.
The firing level of each rule can be obtained by the product fuzzy T-norm:

τ_i = μ_1^i(x_1) × · · · × μ_n^i(x_n)   (6)

λ_i = τ_i / Σ_{j=1}^N τ_j   (7)

y = Σ_{i=1}^N λ_i y_i = Σ_{i=1}^N λ_i x_e^T θ_i   (8)
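Eqs (5)-(8) can be walked through with a toy model: two rules, one input, Gaussian antecedents and linear consequents. The rule parameters below are illustrative (they approximate y = |x| around two focal points), not values from the paper.

```python
import math

def ts_predict(x, rules):
    """First-order Takagi-Sugeno inference for a single input x.
    Each rule is (center, spread, a0, a1)."""
    # eq. (5)/(6): firing level of each rule (one input, so tau_i = mu_i)
    taus = [math.exp(-4.0 * (x - c) ** 2 / s ** 2) for c, s, _, _ in rules]
    # eq. (7): normalized firing levels
    total = sum(taus)
    lambdas = [t / total for t in taus]
    # eq. (8): weighted sum of the linear rule consequents
    return sum(l * (a0 + a1 * x) for l, (_, _, a0, a1) in zip(lambdas, rules))

# Two rules approximating y = |x| around the focal points -1 and +1:
rules = [(-1.0, 1.0, 0.0, -1.0),   # if x is "negative" then y = -x
         (+1.0, 1.0, 0.0, +1.0)]   # if x is "positive" then y = +x
```

Near either focal point one rule dominates and the output follows that rule's linear model; in between, the output blends the two linear models, which is exactly the multi-model reading of a TS system.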
3.4

P_k(z_k) = (k − 1) / [(k − 1) + Σ_{i=1}^{k−1} Σ_{j=1}^{n+m} (z_i^j − z_k^j)^2]   (9)

P_k(z*) = (k − 1) P_{k−1}(z*) / [k − 2 + P_{k−1}(z*) + P_{k−1}(z*) Σ_{j=1}^{n+m} (z_j* − z_{k−1}^j)^2]   (10)

P_min < P_k(z_k) < P_max   (11)

where P_max = max_{i=1}^N {P_i(z*)} is the highest density/potential, P_min = min_{i=1}^N {P_i(z*)} is the lowest density/potential, and N is the number of cluster centers (x^{i*}, i = 1, . . . , N) formed at time k.
Step 4. If the new data point has a potential between the boundaries (11), no modification of the rules is necessary. Otherwise, there are two possibilities:

ŷ_{k+1} = Σ_{i=1}^N λ_i y_i = Σ_{i=1}^N λ_i x_e^T θ_i = ψ_k^T θ_k   (12)
θ_k = θ_{k−1} + C_k ψ_k (y_k − ψ_k^T θ_{k−1}),  k = 2, 3, . . .   (13)

C_k = C_{k−1} − (C_{k−1} ψ_k ψ_k^T C_{k−1}) / (1 + ψ_k^T C_{k−1} ψ_k)   (14)

C_1 = Ω I   (15)
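The recursive least squares update of eqs (13)-(15) can be sketched on synthetic data. The sketch below estimates the two parameters of a noiseless linear model y = 2 + 3x with ψ = [1, x], using plain lists instead of a linear-algebra library; the data and the function name are illustrative assumptions, not the paper's setup.

```python
def rls(samples, omega=1000.0):
    """Recursive least squares for y = psi^T theta with psi = [1, x]."""
    theta = [0.0, 0.0]
    C = [[omega, 0.0], [0.0, omega]]            # eq. (15): C_1 = Omega*I
    for x, y in samples:
        psi = [1.0, x]
        Cpsi = [sum(C[i][j] * psi[j] for j in range(2)) for i in range(2)]
        denom = 1.0 + sum(psi[i] * Cpsi[i] for i in range(2))
        # eq. (14): covariance update
        C = [[C[i][j] - Cpsi[i] * Cpsi[j] / denom for j in range(2)]
             for i in range(2)]
        # eq. (13): parameter update using the freshly updated C_k
        err = y - sum(psi[i] * theta[i] for i in range(2))
        gain = [sum(C[i][j] * psi[j] for j in range(2)) for i in range(2)]
        theta = [theta[i] + gain[i] * err for i in range(2)]
    return theta
```

After a handful of samples the estimate settles near [2, 3]; the large Ω makes the prior weak, so the recursion reproduces the batch least-squares solution almost exactly.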
4
4.1
Figure 4.
Air temperature in a mechanical system. The second data set comes from a hair dryer; it has been contributed by W. Favoreel2 from the KU Leuven. This data set contains 1000 samples. The air temperature of the dryer is linked to the voltage of the heating device.
For the simulations, both ANFIS and exTS have been used with five input variables. Predictions concern the air temperature, and the TS models were built as follows:

inputs 1 to 4: air temperature at times (t − 3) to (t),
input 5: x5(t), the voltage of the heating device,
output: y(t + h), the predicted air temperature.
4.3 Simulations and results
In order to extract more solid conclusions from the
comparison results, the same training and testing data
1 ftp://ftp.esat.kuleuven.ac.be/sista/data/process_industry
2 ftp://ftp.esat.kuleuven.ac.be/sista/data/mechanical
Table 1. Simulation results.

Industrial dryer           ANFIS      exTS
t + 1     Rules            32         18
          RMSE             0.12944    0.01569
          MASE             16.0558    2.16361
t + 5     Rules            32         17
          RMSE             0.84404    0.05281
          MASE             114.524    7.38258
t + 10    Rules            32         17
          RMSE             1.8850     0.18669
          MASE             260.140    27.2177

Air temperature            ANFIS      exTS
t + 1     Rules            32         4
          RMSE             0.01560    0.01560
          MASE             0.4650     0.47768
t + 5     Rules            32         6
          RMSE             0.13312    0.12816
          MASE             2.01818    1.97647
t + 10    Rules            32         6
          RMSE             0.23355    0.22997
          MASE             3.66431    3.66373
sets were used to train and test both models. Predictions were made at (t + 1), (t + 5) and (t + 10) in order to measure the stability of the results in time. The prediction performance was assessed using the root mean square error (RMSE), which is the most popular prediction error measure, and the Mean Absolute Scaled Error (MASE), which, according to (Hyndman and Koehler 2006), is the most adequate way of comparing prediction accuracies.
For both data sets, the learning phase was stopped after 500 samples and the remaining data served to test the models. Results are shown in Table 1.
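The two error measures used in Table 1 are simple to compute. A minimal sketch with illustrative numbers follows; note that, as a simplification, the MASE scaling term (the MAE of the naive one-step forecast) is computed here on the same series, whereas Hyndman and Koehler define it on the in-sample training data.

```python
import math

def rmse(actual, pred):
    """Root mean square error of a forecast."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred))
                     / len(actual))

def mase(actual, pred):
    """Mean Absolute Scaled Error: MAE of the forecast scaled by the
    MAE of the naive forecast (each value predicted by the previous one)."""
    mae = sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)
    naive = sum(abs(a - b) for a, b in zip(actual[1:], actual[:-1])) \
            / (len(actual) - 1)
    return mae / naive

actual = [1.0, 1.2, 1.1, 1.4, 1.3, 1.6]
pred   = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5]
```

A MASE below 1 means the forecast beats the naive predictor on average, which gives the values in Table 1 a direct interpretation.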
4.4 Discussion

Figure 5. Predictions, industrial dryer, t + 1.

Figure 6.

a) Accuracy of predictions
According to the results of Table 1, exTS yields better predictions than the ANFIS model. Indeed, for the industrial dryer (data set 1), both RMSE and MASE are smaller with exTS than with ANFIS. An illustration is given in Fig. 5.
However, in the case of the air temperature data set, exTS does not provide better results than ANFIS (RMSE and MASE are quite the same). Moreover, as shown in Fig. 6, the error spreads of both models are very similar. Yet, one can point out that exTS only needs 6 fuzzy rules to capture the behavior of the studied phenomenon (against 32 for the ANFIS model). This leads us to consider the complexity of the structure of both prediction systems.

Table 2.

                          ANFIS              exTS
nb inputs                 5                  5
nb rules                  32                 6
type of mf                Gaussian           Gaussian
mf per input              2                  = nb rules = 6
tot. nb of mf             2 × 5 = 10         6 × 5 = 30
parameters per mf         2                  2
antecedent parameters     2 × 2 × 5 = 20     2 × 6 × 5 = 60
parameters per rule       6 (5 inputs + 1)   6
consequent parameters     6 × 32 = 192       6 × 6 = 36
total parameters          20 + 192 = 212     60 + 36 = 96
CONCLUSION
REFERENCES
Angelov, P. and D. Filev (2003). On-line design of Takagi-Sugeno models. Springer-Verlag Berlin Heidelberg: IFSA, 576-584.
Angelov, P. and D. Filev (2004). An approach to online identification of Takagi-Sugeno fuzzy models. IEEE Trans. on Syst., Man and Cybern. Part B: Cybernetics 34, 484-498.
Angelov, P. and X. Zhou (2006). Evolving fuzzy systems from data streams in real-time. In Proceedings of the Int. Symposium on Evolving Fuzzy Systems, UK, pp. 26-32. IEEE Press.
Byington, C., M. Roemer, G. Kacprzynski, and T. Galie (2002). Prognostic enhancements to diagnostic systems for improved condition-based maintenance. In 2002 IEEE Aerospace Conference, Big Sky, USA.
Chinnam, R. and B. Pundarikaksha (2004). A neuro-fuzzy approach for estimating mean residual life in condition-based maintenance systems. Int. J. Materials and Product Technology 20(1-3), 166-179.
Ciarapica, F. and G. Giacchetta (2006). Managing the condition-based maintenance of a combined-cycle power plant: an approach using soft computing techniques. Journal of Loss Prevention in the Process Industries 19, 316-325.
Espinosa, J., J. Vandewalle, and V. Wertz (2004). Fuzzy Logic, Identification and Predictive Control (Advances in Industrial Control). N.Y., Springer-Verlag.
Goebel, K. and P. Bonissone (2005). Prognostic information fusion for constant load systems. In Proceedings of the 7th annual Conference on Fusion, Volume 2, pp. 1247-1255.
Hyndman, R. and A. Koehler (2006). Another look at measures of forecast accuracy. International Journal of Forecasting 22(4), 679-688.
ISO 13381-1 (2004). Condition monitoring and diagnostics of machines - prognostics - Part 1: General guidelines. Int. Standard, ISO.
Iung, B., G. Morel, and J.B. Léger (2003). Proactive maintenance strategy for harbour crane operation improvement. Robotica 21, 313-324.
Jang, J. and C. Sun (1995). Neuro-fuzzy modeling and control. In IEEE Proc., Volume 83, pp. 378-406.
Jardine, A., D. Lin, and D. Banjevic (2006). A review on machinery diagnostics and prognostics implementing condition-based maintenance. Mech. Syst. and Sign. Proc. 20, 1483-1510.
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: Fault detection and diagnosis is an important problem in continuous hot dip galvanizing, and the increasingly stringent quality requirements in the automotive industry have also demanded ongoing efforts in process control to make the process more robust. Multivariate monitoring and diagnosis techniques have the power to detect unusual events while their impact is still too small to cause a significant deviation in any single process variable. Robust methods for outlier detection in process control are a tool for the comprehensive monitoring of the performance of a manufacturing process. The present paper reports a comparative evaluation of robust multivariate statistical process control techniques for process fault detection and diagnosis in the zinc-pot section of a hot dip galvanizing line.
INTRODUCTION
Data process
Figure 2.
[Figures 3 and 4: t[2] vs. t[1] score plot with 90%, 95% and 99% limits, and variable contribution plots (Var1–Var6); Hotelling T²-statistic and SPE control charts over time, with 95% and 99% limits, for batches #7, #18 and #19. Plot values not recoverable.]

Figure 5. Batch 17.

3.3
advantage, because this trend towards abnormal operation may be the start of a serious failure in the process. This paper shows the performance monitoring potential of MSPC and the predictive capability of robust statistical control through their application to an industrial process, providing an overview of an industrial application of multivariate statistical process control based performance monitoring built on robust analysis techniques.
Outliers were identified based on the robust distance. We then remove all detected outliers and repeat the process until a homogeneous set of observations is obtained. The final set of data is the in-control, or reference, data set. The robust estimates of location and scatter were obtained by the MCD method of Rousseeuw.
Hotelling T² and SPE control charts for the reference data are then used for monitoring the multivariate process. Contributions from each fault detected using a PCA model are used in a fault identification approach to identify the variables contributing most to the abnormal situation.
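The iterative outlier-trimming step can be sketched as follows (illustrative Python; the paper uses the LIBRA Matlab library, whereas this sketch uses scikit-learn's MinCovDet implementation of Rousseeuw's MCD and a chi-squared cutoff on the squared robust distances; the function name and the 0.975 quantile are assumptions):

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def robust_reference_set(X, alpha=0.975, max_iter=20):
    """Iteratively drop observations whose squared robust (MCD) Mahalanobis
    distance exceeds the chi-squared cutoff, until a homogeneous
    in-control reference set remains."""
    X = np.asarray(X, float)
    cutoff = chi2.ppf(alpha, df=X.shape[1])
    for _ in range(max_iter):
        mcd = MinCovDet(random_state=0).fit(X)  # robust location and scatter
        d2 = mcd.mahalanobis(X)                 # squared robust distances
        keep = d2 <= cutoff
        if keep.all():
            break
        X = X[keep]
    return X, mcd
```

The returned estimator can then supply the covariance used to build the Hotelling T² chart for the reference data.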
[Contribution plots (Var1–Var6) for batches #7, #18 and #19; values not recoverable.]
CONCLUSIONS
REFERENCES
Dunia, R. and Qin, S.J. 1998a. Subspace approach to multidimensional fault identification and reconstruction. The American Institute of Chemical Engineering Journal 44, 8: 1813–1831.
Dunia, R. and Qin, S.J. 1998b. A unified geometric approach to process and sensor fault identification. Computers and Chemical Engineering 22: 927–943.
Himmelblau, D.M. 1978. Fault Detection and Diagnosis in Chemical and Petrochemical Processes. Elsevier, Amsterdam.
Jackson, J.E. 1991. A User's Guide to Principal Components. Wiley-Interscience, New York.
Nomikos, P. and MacGregor, J.F. 1995. Multivariate SPC charts for monitoring batch processes. Technometrics 37, 1: 41–59.
Rousseeuw, P.J. and Van Driessen, K. 1999. A fast algorithm for the Minimum Covariance Determinant estimator. Technometrics, 41: 212–223.
Tang, N.-Y. 1999. Characteristics of continuous galvanizing baths. Metallurgical and Materials Transactions B, 30: 144–148.
Verboven, S. and Hubert, M. 2005. LIBRA: a Matlab library for robust analysis. Chemometrics and Intelligent Laboratory Systems 75: 127–136.
M.L. Penalva
Fatronik, San Sebastián, Spain
ABSTRACT: The motivation of this paper is to minimize the flatness errors in hard turning (facing) operations of hardened tool steel F-5211 (AISI D2) discs using Polycrystalline Cubic Boron Nitride (PCBN) tools under finishing cutting conditions. To achieve this, two strategies have been developed. The first is an on-line Conditional Preventive Maintenance (CPM) system based on monitoring a parameter that correlates well with the geometrical error, namely the passive component of the force exerted by the workpiece on the tool. The second strategy is more sophisticated and consists of the development of an on-line Error Compensation System that uses the error value estimated by the first strategy to modify the tool trajectory in such a way that flatness errors are kept within tolerances. Moreover, this procedure allows the life of the tools to be extended beyond the limit established by the first CPM system, and also the reduction of part scrap and tool purchasing costs.
INTRODUCTION
FLATNESS ERROR
Figure 1. [Flatness error model: tool, chip and workpiece, with facing error ef(d) and total error Ef; zones (I), (II) and (III).]

Figure 2. [Programmed path vs. real path of the tool over the workpiece.]
TEST CONDITIONS
Two parameters were considered for the estimation of the total facing error Ef (see Figure 3): a) the length cut by the tool, and b) the passive force exerted on the tool. Since both of them are expected to be sensitive
Table 1. [Test conditions: material, cutting passes, workpiece, CBN tool content, operation, cutting conditions, force acquisition, geometrical error, tool wear; values not recoverable.]

[Figure 4 residue: wear curves for tools 1–6 vs. length cut; axis values not recoverable.]
Figure 5. Flank wear for two different tools after having cut
the same length.
[Figures 6–8 residue: tool/workpiece scheme with room temperature, cutting temperature and passive force Fp; regression of flatness error ef (microns) vs. diameter (mm) and vs. Fp (N), with 95% CI and 95% PI. Plot values and Equation (1) not recoverable.]
One step further in the direction of tool use optimization is the development of an Error Compensation System that allows the on-line correction of the tool tip trajectory as it deviates during the facing pass as a result of the tool tip expansion.
Figure 9. [Error compensation set-up: the dynamometer measures the signal Fp(d); a PC estimates the flatness error ef(d) and commands the CNC and the Z-axis motor/carriage; workpiece clamping shown.]
5.1 PC-CNC communication

CONCLUSIONS

ACKNOWLEDGEMENTS

This work was carried out under the CIC marGUNE framework, and the authors would like to thank the Basque Government for its financial support.
REFERENCES
Grzesik, W., Wanat, T. 2006. Surface finish generated in hard turning of quenched alloy steel parts using conventional and wiper ceramic inserts. Int. Journal of Machine Tools & Manufacture.
König, W., Berktold, A., Koch, K.F. 1993. Turning vs grinding – a comparison of surface integrity aspects and attainable accuracies. CIRP Annals 42/1: 39–43.
Lazoglu, I., Buyukhatipoglu, K., Kratz, H., Klocke, F. 2006. Forces and temperatures in hard turning. Machining Science and Technology 10/2: 157–179.
Luce, S. 1999. Choice criteria in conditional preventive maintenance. Mechanical Systems and Signal Processing 13/1: 163–168.
Özel, T., Karpat, Y. 2005. Predictive modelling of surface roughness and tool wear in hard turning using regression and neural networks. Int. Journal of Machine Tools & Manufacture 45/4–5: 467–479.
Penalva, M.L., Arizmendi, M., Díaz, F., Fernández, J. 2002. Effect of tool wear on roughness in hard turning. Annals of the CIRP 51/1: 57–60.
Rech, J., Lech, M., Richon, J. 2002. Surface integrity in finish hard turning of gears. Metal cutting and high speed machining, Ed. Kluwer, pp. 211–220.
Santos, J., Wysk, Richard A., Torres, J.M. 2006. Improving production with lean thinking. John Wiley & Sons, Inc.
Scheffer, C., Kratz, H., Heyns, P.S., Klocke, F. 2003. Development of a tool wear-monitoring system for hard turning. International Journal of Machine Tools & Manufacture, Vol. 43, pp. 973–985.
Schwach, D.W., Guo, Y.B. 2006. A fundamental study on the impact of surface integrity by hard turning on
ABSTRACT: Maintenance policies are driven by specific needs such as availability, time and cost reduction. In order to achieve these targets, the recent maintenance approach is based on a mix of different policies such as corrective, planned, and curative. The significance of predictive activities is becoming more and more important for the new challenges concerning machine and plant health management.
In the present paper we describe our experience in the development of a rule-based expert system for the electric locomotive E402B used in the Italian railway system, in order to carry out an automated prognostic process. The goal of the project was to develop an approach able to improve maintenance performance. In particular, we would like to develop an advanced prognostic tool able to deliver the work orders. This specific issue has been identified and requested by the maintenance operators as the best and only solution that could ensure results.
INTRODUCTION
information about the commercial services of the locomotive. The user of the locomotive fleet, Trenitalia S.p.A., potentially has a lot of information, but it is fragmented across many different databases. Therefore, as shown in paragraph 3, this data and knowledge are exploited very unproductively and inefficiently.
Paragraph 4 presents the analysis of the diagnostic data. Their great number could not be easily managed. Hence, in order to keep only the relevant data carrying useful information, we adopted a customized statistical approach.
In this manner we could remove many of the useless records, for example the ones describing secondary events. We could achieve our aim of fault identification thanks to the analysis of only the truthful diagnostic records.
The prognostic skill, based on the rules logic, was gained by matching maintenance activities with diagnostic records within a restricted time interval and distinguishing a specific fault event.
As illustrated in paragraph 5, in the development of the latter task we faced many problems, such as the quantity and reliability of the data, the ambiguity of the information, and the weak know-how of both the product and the maintenance activities.
In order to take advantage of all the available data and information we suggest, in paragraph 6, a new configuration for the predictive maintenance process.
As a final point, in the last paragraph some considerations are proposed regarding the future perspectives
Table 2. [Data sources: database and description of each data type; contents not recoverable.]

2.1
Thanks to GPS technology, each record is also provided with information about the geographic position of the place where it was generated.
At the same time, some essential information (the event code) is shown directly on the on-board screen of the driver's display, in order to help manage the train while the mission is going on.
The last data type is the daily missions carried out by the locomotive, enriched with some related information. These data come from a software tool made for fleet management which, at the present time, is not integrated with the DRView tool.
The first two kinds of information just mentioned are not available for the whole fleet, since the data logger and the transmission apparatus are installed only on two vehicles (namely numbers 107 and 119). The data from the other locomotives are never downloaded into the databases. The available data recorded on the fixed ground server cover a time interval starting from June 06 until now. In the meanwhile, Trenitalia is carrying out a project to extend the on-board GPS transmitter to 10 more locomotives, in order to make a wider amount of data available.
Figure 1.
As mentioned in paragraph 2, the maintenance warnings can also be generated in the operational phase from the traffic control room. This happens, for instance, when the locomotive has a problem during a mission and the driver signals a specific problem.
Figure 2.
Figure 3.
On the other side of the picture we can see the maintenance data. Unfortunately, their poor quality doesn't allow us to use them to drive the rule construction process, as expected for an ideal system configuration. At the moment the maintenance information can barely be used to receive a confirmation after a diagnosis carried out from the diagnostic codes of the on-board monitoring system.
A diagnostic rule is a proposition composed of two arguments: the hypothesis and the thesis. The hypothesis (XX, YY) is the set of events that must happen in order to generate an event (ZZ) and therefore verify the thesis. If the diagnostic rule is maintenance-oriented, instead of the cause the rule can contain the specific maintenance activity able to interrupt the degradation, or to resolve a breakdown if the fault has already occurred.
In the case of advanced diagnostic systems the rule can be prognostics-oriented. This means that the thesis statement is a prediction of the remaining life to failure of the monitored component.
In the case of condition-based maintenance, the thesis statement contains the prediction of the deadline for performing the appropriate maintenance activities. These actions should be able to cut off, or at least manage, the incoming component failure. The following chart is meant to explain these different rule methods.
In this project one of the goals is to obtain a prognostic approach from the diagnostic system. Possibly it should also be biased toward conducting on-condition maintenance. So the thesis declaration will be generated by matching the user's manual indications, the experts' suggestions and the work order activity descriptions.
Since the hypotheses are the most critical element in the rule, they will be extracted from the on-board diagnostic database by a significant statistical treatment. We used techniques developed for process control in order to manage a huge number of data.
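The hypothesis/thesis structure described above can be expressed as a small data structure (an illustrative Python sketch, not the authors' implementation; the event codes XX, YY and ZZ follow the example in the text):

```python
from dataclasses import dataclass
from typing import Set

@dataclass
class Rule:
    """A diagnostic rule: IF all hypothesis events occur, THEN the thesis holds."""
    hypothesis: Set[str]  # event codes that must all be observed, e.g. {"XX", "YY"}
    thesis: str           # generated event, maintenance activity, or deadline

    def fires(self, observed: Set[str]) -> bool:
        # The rule fires when the observed events contain the whole hypothesis.
        return self.hypothesis.issubset(observed)

# Diagnostic-oriented rule: events XX and YY imply event ZZ.
diagnostic = Rule({"XX", "YY"}, "event ZZ")
# Maintenance-oriented variant: the thesis is the corrective activity instead.
maintenance = Rule({"XX", "YY"}, "activity interrupting the degradation")
```

A prognostics-oriented rule would differ only in its thesis, which would carry a remaining-life prediction or a maintenance deadline rather than an event.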
As mentioned in paragraph 2, at the moment the available data of the on-board diagnostic system are only those of locomotives number (#) 107 and 119. As reported in the following table, the data comparison shows significant differences between the two vehicles. Locomotive #107 has nearly twice the number of codes per kilometer of #119. Conversely, the maintenance activities are distributed with the opposite ratio: #119 has roughly 33% more maintenance operations than #107. The data used in the analysis were collected over five months, from January to May 2007.
Although they are identical in design, in all subsystem characteristics, and in operational and maintenance management, each rolling stock item has its own specific behavior.
So the diagnostic approach, based on statistical methods, can't be developed without considering the specific vehicle. Aging, obviously, causes a natural and unavoidable drift of the functional parameters. This phenomenon is strongly amplified in complex systems, such as a modern electric locomotive, where all the subsystems are dependent on each other. Accordingly, the time dependence of the reference parameters must be taken into account to guarantee a dynamic diagnostic process.
5.1 Filtering
First of all, the most important action of the data treatment was the filtering, which allowed the management of a suitable amount of information.
As reported in Table 3, the initial number of codes was very high: the average production is 1.8 codes·km⁻¹ and 0.8 codes·km⁻¹ respectively for #107 and #119.
The progressive filtering procedure and its results in terms of code identification are reported in the following table. For each step the reason for the choice is also reported.
The first step of the filtering process is due to the indications reported in the user's manuals, where it is shown which codes are useful in supporting the driver in the management of abnormal conditions during missions. The next removal hits the codes whose meaning is completely unknown and
Table 3. [caption truncated]

Figure 4. Filtering process.

Figure 5. Data comparison.

Data                 Loco n.107    Loco n.119
Event codes          0.53 km⁻¹     0.28 km⁻¹
Diagnostic codes     1.2 km⁻¹      0.52 km⁻¹
Maintenance notes    88            129
Work orders          23            44
could be easily clustered in two different sets referring to two subsystems of the locomotive (engineer's cab HVAC and AC/DC power transformation); we will treat them separately without considering any mutual interaction.
Another important element is the logic of code generation by the on-board diagnostic system. As mentioned in paragraph 2, a diagnostic code is generated when a set of specific physical parameters reach their threshold values. This means that abnormal operating conditions have been reached. Each code record has a start time and an end time field, representing respectively the appearance and disappearance of the anomalous conditions. The values of these attributes allow us to suggest a classification of codes based on the duration of the abnormal conditions:
Impulsive signal: a code with the same value for the start time and end time. It means that the duration of the abnormal conditions is less than a second;
Enduring signal: a code with different values for the start time and the end time. The duration has a high variability (from some seconds up to some hours);
Ongoing signal: a code characterized by a value for the start time but without an end time. It means an abnormal condition that is still persistent.
Many codes are generated by different arrangements of signals. Nevertheless, they represent an alteration of the equipment's state, so they are able to describe the states of the locomotive's subsystems.
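The three signal classes can be expressed as a small helper (an illustrative Python sketch; the function and field names are assumptions, not from the paper):

```python
from datetime import datetime
from typing import Optional

def classify_code(start_time: datetime, end_time: Optional[datetime]) -> str:
    """Classify a diagnostic code record by the duration of the abnormal
    condition, following the three classes described in the text."""
    if end_time is None:
        return "ongoing"    # abnormal condition still persistent
    if end_time == start_time:
        return "impulsive"  # duration below one second
    return "enduring"       # from some seconds up to some hours
```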
6

6.1

ū = (Σᵢ uᵢ) / N    (1)

Upper control limit, UCL = ū + 3 √(ū/nᵢ)    (2)

Lower control limit, LCL = max(ū − 3 √(ū/nᵢ), 0)    (3)

Upper warning limit, UWL = ū + 1.5 √(ū/nᵢ)    (4)
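Formulas (1)–(4) can be implemented directly (an illustrative sketch; `u_bar` is the average code rate per unit and `n_i` the subgroup size, and the UCL label for formula (2) is inferred from the standard u-chart, since only its fragments survive in the text):

```python
import math

def u_chart_limits(u_bar, n_i):
    """Control and warning limits for a u-chart (events per unit),
    for a subgroup of size n_i, given the process average u_bar."""
    s = math.sqrt(u_bar / n_i)
    ucl = u_bar + 3.0 * s            # upper control limit, formula (2)
    lcl = max(u_bar - 3.0 * s, 0.0)  # lower control limit, clamped at 0, formula (3)
    uwl = u_bar + 1.5 * s            # upper warning limit, formula (4)
    return ucl, lcl, uwl
```

The max(…, 0) clamp reflects that a count rate cannot go negative, which is why formula (3) carries the explicit 0.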
Figure 6.
FINAL CONSIDERATION
The next step of the analysis
Overview
Human factors
ABSTRACT: In this study, in order to validate the appropriateness of the R-TACOM measure, which can quantify the complexity of tasks included in procedures, an operator's response time, i.e. the elapsed time needed to accomplish a given task, was compared with the associated R-TACOM score. To this end, operator response time data were extracted under simulated Steam Generator Tube Rupture (SGTR) conditions of two reference nuclear power plants. As a result, it was observed that the operator response time data appear to be soundly correlated with the associated R-TACOM scores. Therefore, it is expected that the R-TACOM measure will be useful for quantifying the complexity of tasks stipulated in procedures.
INTRODUCTION
2 BACKGROUND

2.1 The necessity of quantifying task complexity
As stated in the foregoing section, human performance related problems have been regarded as one of the radical determinants of the safety of any human-involved system. It is natural that a great deal of work has been performed to unravel the human performance related problem. As a result, it was revealed that the use of procedures is one of the most effective countermeasures for the human performance related problem (AIChE 1994, IAEA 1985, O'Hara et al. 2000). In other words, procedures are very effective in helping human operators accomplish the required tasks, because the operators can use detailed instructions describing what is to be done and how to do it. However, the use of procedures is a double-edged sword. That is, since procedures strongly govern the physical as well as the cognitive behavior of human operators, it is expected that the performance of human operators would be largely attributable to the complexity of procedures. Actually, the existing literature supports this expectation, since the performance of human operators seems to be predictable when they use procedures (Stassen et al. 1990, Johannsen et al. 1994).
Figure 1.
2.3
R-TACOM MEASURE
Figure 2.
Figure 3.
In order to investigate the appropriateness of the R-TACOM measure, two sets of response time data collected from nuclear power plants (NPPs) were compared with the associated R-TACOM scores. In the case of emergency tasks included in the emergency operating procedures (EOPs) of NPPs, a task performance time can be defined as the elapsed time from the commencement of a given task to its accomplishment. Regarding this, averaged task performance time data for 18 emergency tasks were extracted from the emergency training sessions of the reference NPP (plant 1) (Park et al. 2005). In total, 23 simulations were conducted under steam generator tube rupture (SGTR) conditions.
Similarly, averaged task performance time data for 12 emergency tasks under SGTR conditions were extracted from six emergency training sessions of another reference NPP (plant 2). It is to be noted that, although the nature of the simulated scenario is very similar, the emergency operating procedures of the two NPPs are quite different.
Fig. 4 represents the result of the comparisons between the averaged task performance time data and the associated TACOM as well as R-TACOM scores. For the sake of convenience, equal weights were used to quantify the complexity scores (all five weighting factors were set to 0.2).
Figure 4.
Comparing two sets of response time data with the associated R-TACOM scores.
GENERAL CONCLUSION
In this study, the appropriateness of the R-TACOM measure, which is based on the generalized task complexity model, was investigated by comparing two sets of averaged task performance time data with the associated R-TACOM scores. As a result, it was observed that response time data obtained when human operators accomplished their tasks using different procedures consistently increase in proportion to the increase of the R-TACOM scores. In other words, even though human operators used different procedures, it is expected that their performance would be similar if the complexity of the tasks they face is similar.
REFERENCES
AIChE. 1994. Guidelines for preventing human error in process safety. Center for Chemical Process Safety of the American Institute of Chemical Engineers.
Cohen, J., Cohen, P., West, S.G. and Aiken, L.S. 2003. Applied multiple regression/correlation analysis for the behavioral sciences. Lawrence Erlbaum Associates, Third Edition.
Ghosh, S.T. and Apostolakis, G.E. Organizational contributions to nuclear power plant safety. Nuclear Engineering and Technology, vol. 37, no. 3, pp. 207–220.
Guimaraes, T., Martensson, N., Stahre, J. and Igbaria, M. 1999. Empirically testing the impact of manufacturing system complexity on performance. International Journal of Operations and Production Management, vol. 19, no. 12, pp. 1254–1269.
Harvey, C.M. and Koubek, R.J. 2000. Cognitive, social, and environmental attributes of distributed engineering collaboration: A review and proposed model of collaboration. Human Factors and Ergonomics in Manufacturing, vol. 10, no. 4, pp. 369–393.
Hollnagel, E. Human reliability assessment in context. Nuclear Engineering and Technology, vol. 37, no. 2, pp. 159–166.
IAEA. 1985. Developments in the preparation of operating procedures for emergency conditions of nuclear power
Jan-Erik Holmberg
VTT (Technical Research Centre of Finland), Espoo, Finland
Pekka Pyy
Teollisuuden Voima Oy, Helsinki, Finland
ABSTRACT: The Enhanced Bayesian THERP (Technique for Human Reliability Analysis) method has been successfully used in real PSA studies at Finnish and Swedish NPPs. The method offers a systematic approach to qualitatively and quantitatively analyzing operator actions. In order to better understand its characteristics from a more international perspective, it has been subject to evaluation within the framework of the HRA Methods Empirical Study Using Simulator Data. This paper gives a brief overview of the method together with the major findings from the evaluation work, including identified strengths and potential weaknesses of the method. A number of possible improvement areas have been identified and will be considered in future development of the method.
1 INTRODUCTION

1.1 HRA as part of PSA
The modeling and quantification of human interactions is widely acknowledged as a challenging task
of probabilistic safety assessment (PSA). Methods for
human reliability analysis (HRA) are based on a systematic task analysis combined with a human error
probability quantification method. The quantification
typically relies on expert judgments, which have rarely
been validated by statistical data.
1.2 Scope

1.3 International study
In order to compare different HRA methods, an international study, the HRA Methods Empirical Study Using Simulator Data, has been initiated using actual simulator data as a reference for the comparison (Lois et al. 2007, Dang et al. 2008). The overall goal of the international HRA method evaluation study is to develop an empirically-based understanding of the performance, strengths, and weaknesses of the HRA methods. It is expected that the results of this work will provide the technical basis for the development of improved HRA guidance and, if necessary, improved HRA methods.
As a first step in the overall HRA method evaluation study, a pilot study was conducted to obtain initial data
2
2.1
2.2 Scenarios
In the pilot study, two variants of a steam generator tube rupture (SGTR) scenario were analyzed: 1) a basic case, i.e., a familiar/routinely practiced case, and 2) a more challenging, so-called complex case. In the complex case, the SGTR was masked by a simultaneous steamline break and a failure of all secondary radiation indications/alarms. It could be expected that operators would have difficulties in diagnosing the SGTR. The event sequence involves several operator actions, but this paper is restricted to the first significant operator action of the scenarios, i.e., isolation of the ruptured steam generator (SG).
2.3
In order to facilitate the human performance predictions, the organizers of the experiment prepared an extensive information package for the HRA analysis teams, including descriptions of the scenarios, a description of the simulator and its man-machine interface, the differences between the simulator and the home plant of the crews, the procedures used in the simulator, and a characterization of the crews, their work practices and training. The task of the HRA analysis teams was to predict the failure probabilities of the defined operator actions, e.g., isolation of the ruptured steam generator, and to qualitatively assess which PSFs contribute positively or negatively to the success or failure of the crew. The members of the Enhanced Bayesian THERP team included the authors of this paper.
2.4
Time criterion
Qualitative analysis
Quantitative analysis
The human error probability is derived using the time-dependent human error probability model as follows:

p(t) = min{1, p0(t) · Π(i=1..5) Ki}    (1)
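Formula (1) can be sketched directly (illustrative Python; `p0` stands for the time-dependent base probability p0(t) evaluated at the time of interest, and `K` for the five performance shaping factors K1..K5):

```python
import math

def human_error_probability(p0: float, K: list) -> float:
    """Time-dependent HEP model: the base probability scaled by the five
    performance shaping factors K1..K5, capped at 1 (formula (1))."""
    assert len(K) == 5, "the model uses exactly five shaping factors"
    return min(1.0, p0 * math.prod(K))
```

The cap at 1 is why some predicted values in Table 1 are truncated by the min-function, as its footnote 3 notes.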
[Figure 1 content: SGTR initiating event (5 min time window); manual scram or automatic scram on low pressurizer pressure; feedwater to SGs and safety injection to the primary circuit (auto-functions); identification and isolation of the ruptured SG (valves closed in all outlet and inlet paths); end states: anticipated transient without scram, SG dry-out/major SG rupture, loss of core cooling, and unisolated SGTR with contamination of the secondary side and loss of primary coolant.]
Figure 1. Block diagram of the beginning of the SGTR basic scenario. Light grey boxes are operator actions, white boxes
process events, and dark gray boxes end states of the event sequences. E-0 and E-3 are emergency operating procedures.
Possibility of technical failures is not considered in this analysis (P = 0).
Figure 2. [Time-reliability curves: probability of failure (1E-7 to 1E+0, log scale) vs. time (1 to 10000); Swain 5%, median and 95% curves and the base probability.]
where t_ind is the time of the first indication, t the time for identification and decision making, and t_act the time for action. The following performance shaping factors are used (Pyy & Himanen 1996):
K1–K5: [definitions of the five performance shaping factors and formula (2) lost in extraction]

Kj ∈ {1/5, 1/2, 1, 2, 5}    (3)
Table 1. Predictions for operator failure to isolate the ruptured steam generator in time.

[Flattened in extraction; the column layout is not fully recoverable. Recoverable values: time windows of 12 min¹ (base) and 15 min¹ (complex); base probabilities 7.3E-2 and 4.9E-2; PSF values² between 0.4 and 2.7; predicted values including 5.8E-4, 7.3E-3, 9.1E-2, 9.9E-4, 4.0E-2, 1.7E-1³ and 1.0E+03.]

1 The difference in time window between the base case and the complex case is that, in the complex case, the plant trip is actuated immediately at the beginning of the scenario, while in the base scenario an alarm is received first and the first indication of increasing level in the ruptured SG is received 3 min after the alarm.
2 Interpretation of the scale: 0.2 = very good condition, 0.5 = good condition, 1 = normal, 2 = poor condition, 5 = very poor condition. Note that the full scale is not used for all PSFs.
3 Due to the min-function in the human error probability model, see formula (1).
results from this pilot study did not clarify the actual need for calibration.

5 EVALUATION FINDINGS

5.1 Strengths

5.2 Potential weaknesses

5.3 CONCLUSIONS
REFERENCES
Lois, E. et al. 2007. International HRA Empirical Study – Description of Overall Approach and First Pilot Results from Comparing HRA Methods to Simulator Data. Report HWR-844, OECD Halden Reactor Project, draft, limited distribution.
Dang, V.N. et al. 2008. Benchmarking HRA Methods Against Simulator Data – Design and Organization of the Halden Empirical Study. In: Proc. of the 9th International Conference on Probabilistic Safety Assessment and Management (PSAM 9), Hong Kong, China.
Swain, A.D. & Guttmann, H.E. 1983. Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. NUREG/CR-1278, Sandia National Laboratories, Albuquerque, USA, 554 p.
Pyy, P. & Himanen, R. 1996. A Praxis Oriented Approach for Plant Specific Human Reliability Analysis – Finnish Experience from Olkiluoto NPP. In: Cacciabue, P.C. and Papazoglou, I.A. (eds.), Proc. of the Probabilistic Safety Assessment and Management '96 ESREL'96 – PSAM-III Conference, Crete. Springer-Verlag, London, pp. 882–887.
Holmberg, J. & Pyy, P. 2000. An expert judgement based method for human reliability analysis of Forsmark 1 and 2 probabilistic safety assessment. In: Kondo, S. & Furuta, K. (eds.), Proc. of the 5th International Conference on Probabilistic Safety Assessment and Management (PSAM 5), Osaka, JP. Vol. 2/4. Universal Academy Press, Tokyo, pp. 797–802.
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
B. Reer
Swiss Federal Nuclear Safety Inspectorate (HSK), Villigen-HSK, Switzerland
(until July 2007 with the Paul Scherrer Institut)
ABSTRACT: The International Human Reliability Analysis (HRA) Empirical Study started in late 2006, with
the objective of assessing HRA methods based on comparing their results with data. The focus of the initial phase
is to establish the methodology in a pilot study. In the study, the outcomes predicted in the analyses of HRA teams
are compared with the findings obtained in a specific set of simulator studies. This paper presents the results of
one HRA analysis team and discusses how the predicted analysis compares to the observed outcomes from the
simulator facility. The HRA method used is the quantification module of the Commission Errors Search and
Assessment method (CESA-Q), developed within the HRA research project at the Paul Scherrer Institut (PSI). In
this pilot phase, the main focus of the comparison is on qualitative results: the method predictions, the scenario
features and performance factors that would mostly contribute to failure (or support success). The CESA-Q
predictions compare well with the simulator outcomes. This result, although preliminary, is encouraging, since it gives a first indication of the solidity of the method and of its capability of producing well-founded insights for error reduction. Also, the comparison with empirical data provided input to improve the method, regarding the treatment of the time factor and of knowledge- and training-based decisions. The next phases of the HRA Empirical Study will also address the quantitative aspects of the HRA. It is planned to use further insights from the next phases to a) refine the CESA-Q guidance and b) evaluate the method to see whether additional factors need to be included.
INTRODUCTION
The motivations for the Empirical Study are the differences in the scope, approach, and models underlying
the diversity of established and more recent HRA
methods (Dang et al., 2007; Lois et al., 2008). These
differences have led to a significant interest in assessing the performance of HRA methods. As an initial
step in this direction, this international study has been
organized to examine the methods in light of data,
aiming to develop an empirically-based understanding
of their performance, strengths, and weaknesses. The
focus of the study is to compare the findings obtained
in a specific set of simulator studies with the outcomes
predicted in HRA analyses.
Hosted by the OECD Halden Reactor Project, the
Empirical Study has three major elements:
- predictive analyses, where HRA methods are applied to analyze the human actions in a set of defined scenarios,
- the collection and analysis of data on the performance of a set of operator crews responding to these scenarios in a simulator facility (the Hammlab experimental simulator in Halden), and
- the comparison of the HRA results on predicted difficulties and driving factors with the difficulties and factors found in the observed performances.
The tasks performed in 2007 aimed a) to establish
the methodology for the comparison, e.g. the protocols for interacting with the HRA analyst teams, the
information exchanged, and the methods for the data
analysis and comparison; and b) to test the comparison
methodology with expert teams submitting predictive
Step 1. List the decision points that introduce options for deviating from the path (or set of paths) leading to the appropriate response, and select relevant deviations contributing to the human failure event (HFE) in question.
Step 2. For each decision point, evaluate whether a situational factor (Reer, 2006a) motivates the inappropriate response (deviation from the success path). If not, proceed with step 3. If yes, proceed with step 4.
Step 3. If no situational factor applies, estimate a reliability index (i) in the range from 5 to 9; guidance is provided in (Reer, 2007). Proceed with step 6.
Step 4. For the EFC case, evaluate the adjustment factors which mediate the impact; refer to the guidance presented in (Reer, 2006a).
Step 5. For the EFC case, estimate a reliability index (i) in the range from 0 to 4; refer to the reference cases (from operational events) summarized in (Reer, 2006a).
Step 6. a) Assign an HEP (pF|i value) to each decision point (CESA-Q associates each reliability index with an HEP value, Reer 2006a), and b) determine the overall HEP (as the Boolean sum of the individual HEPs).
Step 7. Evaluate recovery (the option for returning to the correct response path) for the most likely error assigned in step 6a; apply the recovery HEP assessment guidance in Reer (2007).
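Step 6b describes the overall HEP as the Boolean sum of the individual decision-point HEPs. A minimal sketch of that combination follows; the per-point HEP values are hypothetical and are not CESA-Q's actual index-to-HEP mapping:

```python
def overall_hep(heps):
    """Boolean sum (union) of independent decision-point error probabilities:
    the probability that at least one decision error occurs."""
    p_all_correct = 1.0
    for p in heps:
        p_all_correct *= (1.0 - p)  # this decision point is passed correctly
    return 1.0 - p_all_correct

# Hypothetical HEPs for three critical decision points
heps = [1e-3, 5e-3, 2e-2]
print(overall_hep(heps))  # exact union
print(sum(heps))          # rare-event approximation, close for small HEPs
```

For small probabilities the plain sum is a common rare-event approximation of the union; for the values above both evaluate to about 2.6E-2.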
Critical decision points (selected) identified in the CESA-Q analysis of HFE #1B. [The table columns 'Decision point' and 'Consequence' are garbled in the extraction; entries include #1B.12 (E-3, step 3b).]
Situational factor evaluation for one decision point:
Misleading indication or instruction (MI): No
Adverse exception (AE): No
Adverse distraction (AD): Yes
Risky incentive (RI): No

Situational factor evaluation for a second decision point:
Misleading indication or instruction (MI): Yes
Adverse exception (AE): Yes
Adverse distraction (AD): Yes
Risky incentive (RI): No

[Comment entries lost in the extraction.]
Recovery factor evaluation:
RTP (Recovery Timely Possible): Yes
RCA (Recovery Cue Available): Yes
ST (Shortage of Time): Yes
MC (Masked Cue): Probably

[Comment entries lost in the extraction.]
7
7.1
CONCLUSIONS
This paper has presented the results of the HRA performed by the CESA-Q team on one of the nine HFEs addressed by the HRA Empirical Study, and has discussed how the predicted analysis compares to the observed outcomes from the simulator facility.
It should be emphasized that the application of CESA-Q in this study is explorative: the method's development and previous applications have focused on errors of commission, while this study addresses errors of omission.
PSI's CESA-Q method performed well on the qualitative aspects of the exercise, i.e. how well the method predicted which elements of the actions may be challenging. This result, although preliminary, is encouraging, since it gives a first indication of the solidity of the method and of its capability of producing well-founded insights for error reduction. These qualitative aspects were the main emphasis in this phase of the study; currently, the assessment group is planning to treat the more quantitative aspects in the next phase.
The empirical data, consisting of systematic observations of the performances of multiple crews on the same scenarios, have been useful in deriving insights on potential improvements of the CESA-Q method. For instance, in treating time, CESA-Q focuses on the effect of time pressure on the quality of decision-making and accounts for shortage of time in decision error recovery. It seems that the method, in its current version, does not give proper credit to the effect of running out of time while making correct decisions as guided by the procedure.
It should also be investigated whether guidance should be added to base the analysis on multiple expected response paths and to consider knowledge-based and training-based decisions in the definition of the expected response paths and of the critical decision points.
An aspect that makes CESA-Q well suited for comparison against simulator data is that it produces detailed descriptions of crews' behaviors, in the form of paths of response actions and critical decisions taken along the response. These paths and decisions could indeed be observed in the simulators. CESA and CESA-Q share this characteristic with some other recent HRA methods, like EDF's MERMOS and the US NRC's ATHEANA.
Finally, the study can provide an opportunity not only to compare CESA's predictions with empirical data but also to compare HRA methods and their resulting analyses on the same set of actions. In particular, when performed in the context of empirical data, a method comparison has the added value that there is a shared basis (the data) for understanding the scope of each factor considered by a method and how the method treats these factors in detail.
ACKNOWLEDGEMENTS
This work is funded by the Swiss Nuclear Safety
Inspectorate (HSK), under DIS-Vertrag Nr. 82610.
The views expressed in this article are solely those
of the authors.
B. Reer contributed to this work mostly while he
was with the Paul Scherrer Institut (since July 2007,
he is with the HSK).
LIST OF ACRONYMS
CESA: Commission Errors Search and Assessment
CESA-Q: Quantification module of CESA
EFC: Error Forcing Condition
EOP: Emergency Operating Procedures
HAMMLAB: Halden huMan-Machine LABoratory
HFE: Human Failure Event
HRA: Human Reliability Analysis
MSLB: Main Steam Line Break
PSF: Performance Shaping Factor
PSA: Probabilistic Safety Assessment
PSI: Paul Scherrer Institut
SGTR: Steam Generator Tube Rupture
REFERENCES
Dang, V.N. & Bye, A. 2007. Evaluating HRA Methods in Light of Simulator Findings: Study Overview and Issues for an Empirical Test. Proc. Man-Technology Organization Sessions, Enlarged Halden Programme Group (EHPG) Meeting, HPR-367, Vol. 1, paper C2.1, 11-16 March 2007, Storefjell, Norway.
Dang, V.N., Bye, A., Lois, E., Forester, J., Kolaczkowski, A.M. & Braarud, P.Ø. 2007. An Empirical Study of HRA Methods: Overall Design and Issues. Proc. 2007 8th IEEE Conference on Human Factors and Power Plants (8th HFPP), Monterey, CA, USA, 26-31 Aug 2007, CD-ROM (ISBN: 978-1-4244-0306-6).
Dang, V.N., Reer, B. & Hirschberg, S. 2002. Analyzing Errors of Commission: Identification and First Assessment for a Swiss Plant. Building the New HRA: Errors of Commission, from Research to Application, NEA OECD report NEA/CSNI/R(2002)3, 105-116.
Forester, J., Kolaczkowski, A., Dang, V.N. & Lois, E. 2007. Human Reliability Analysis (HRA) in the Context of HRA Testing with Empirical Data. Proc. 2007 8th IEEE Conference on Human Factors and Power Plants (8th HFPP), Monterey, CA, USA, 26-31 Aug 2007, CD-ROM (ISBN: 978-1-4244-0306-6).
Lois, E., Dang, V.N., Forester, J., Broberg, H., Massaiu, S., Hildebrandt, M., Braarud, P.Ø., Parry, G., Julius, J., Boring, R., Männistö, I. & Bye, A. 2008. International HRA empirical study: description of overall approach and first pilot results from comparing HRA methods to simulator data. HWR-844. OECD Halden Reactor Project, Norway (forthcoming also as US NRC report).
Reer, B. 2006a. Outline of a Method for Quantifying Errors of Commission. Paul Scherrer Institut, Villigen PSI, Switzerland, Draft.
J.L. Meliá
University of Valencia, Valencia, Spain
ABSTRACT: Although several researchers have argued that social norms strongly affect health behaviors, the measurement of health and safety norms has received very little attention. In this paper, we report the results of a study designed to: 1) test the reliability and construct validity of a questionnaire devoted to the measurement of social influences on safety behavior; 2) assess the predictive validity of supervisors' and coworkers' descriptive and injunctive safety norms on safety behavior; 3) test a Four-Factor confirmatory factor analysis (CFA) model of social influence on safety behavior. The questionnaire comprises four 11-item scales with a 7-point Likert-type response format. A self-reporting scale of safety behavior was also included. A sample (N = 250) of operational team workers from a Portuguese company participated voluntarily and anonymously in the study. Overall, the results from this study (EFA and CFA) confirmed the questionnaire structure and provided support for a correlated, Four-Factor model of Safety Group Norms. Furthermore, this study demonstrated that coworkers' descriptive and injunctive safety norms were strong predictors of safety behavior.
INTRODUCTION
structure of norms, can be valuable to the understanding of the contextual normative influences on workers' health behaviors.
2 OVERVIEW OF THEORETICAL FRAMEWORK

3 METHOD
3.1 Participants
Participants in this study were operational workers, members of work teams employed in a Portuguese passenger transportation company with high safety standards. The sample consisted of 250 workers, who provided anonymous ratings of the descriptive and injunctive safety norms of their coworkers and supervisors and of their own individual safety behavior. All participants were male, and most respondents (83.2%) were not supervisors and had heterogeneous jobs in the company. Almost half of the participants (48.8%) were between 31 and 40 years old, and most of the remainder (42.5%) were over 41 years old. Finally, 38.4% had between 6 and 15 years of tenure on the job, and 41.6% had more than 15 years.
3.2 Material
In this study, data was collected from individual group
members, using a quantitative methodology (survey
administered in the form of a questionnaire).
Four 11-item scales measured descriptive and
injunctive safety group norms and we also considered
the referent implied in the survey items. Therefore,
descriptive and injunctive norms were assessed in
4 RESULTS
Table 1. Factor loadings for (1A) supervisors' safety norms (items P1g-P1k, P2b-P2f) and (1B) coworkers' safety norms (items P3a-P3k, P4f-P4j). [The loading matrix is garbled in the extraction; loadings on the intended factor ranged from .65 to .90, cross-loadings from .15 to .45, and the scale alphas were .96 and .95 (supervisors' scales) and .86 and .94 (coworkers' scales).]
Table 2. Multiple regression of safety behaviors on safety group norms. [The coefficient columns (B, SE B, Beta) are garbled in the extraction; the significant predictors were coworkers' descriptive and injunctive safety norms, Beta = .29 each.]

Fit indices of the tested CFA models:

Model  GFI  CFI  RMSEA  AIC      ECVI
1      .90  .97  .07    287.8    1.19
2*     .93  .98  .11    105.5    .44
3      .48  .63  .24    1595.95  6.62
Figure 1. Standardized path coefficients of the Four-Factor model (factors SDSN, SISN, CDSN and CISN); the path diagram is not reproducible in the extraction.
Regression analysis was used to assess the predictive validity of safety group norms on safety behavior (the criterion variable). The principal results of this multiple regression are reported in Table 2.
The statistical test of the regression model's ability to predict proactive safety behaviors reveals that the model is statistically significant (F = 24.201; p < .0001); supervisors' and coworkers' descriptive and injunctive safety norms together accounted for 28.1% of the variation in proactive safety practices (Adjusted R2 = .28). Nevertheless, proactive safety practices were only significantly associated with coworkers' descriptive safety norms (Beta = .29, p < .0001) and coworkers' injunctive safety norms (Beta = .29, p < .0001).
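The adjusted R² quoted above follows the standard correction for sample size and number of predictors; a quick generic sketch (the raw R² input is illustrative, not a re-analysis of the paper's data):

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2 for a regression with n observations and k predictors."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

# e.g. four norm predictors (supervisors'/coworkers' x descriptive/injunctive)
# and N = 250 respondents, as in this study; the raw R^2 value is illustrative
print(round(adjusted_r2(0.29, n=250, k=4), 3))
```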
The model for the prediction of compliance safety behaviors was also confirmed (F = 28.819; p < .0001); supervisors' and coworkers' descriptive
The whole set of fit statistics confirms the Four-Factor CFA model (according to Hu and Bentler's 1999 criteria for fit indexes). The examination of the first set of model fit statistics shows that the Chi-square/degrees-of-freedom ratio for the Four-Factor CFA model (2.16) is adequate; the GFI index (.90), the CFI (.97) and the RMSEA (.07) are consistent in suggesting that the hypothesized model represents an adequate fit to the data; finally, the AIC and the ECVI (criteria used in the comparison of two or more models) present smaller values than in the One-Factor CFA model, which indicates a better fit of the hypothesized Four-Factor CFA model.
The standardized path coefficients are portrayed in Figure 1. The parameters relating items to factors ranged between .59 and .95 (p < .0001). Correlations between the four factors were also significant (p < .0001) and ranged between .49 and .79.
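The screening logic applied above (chi-square/df ratio, GFI, CFI and RMSEA against conventional cutoffs) can be sketched as follows; the cutoff values are commonly cited conventions from the SEM literature, used here only as an illustration:

```python
def screen_fit(chi2_df, gfi, cfi, rmsea):
    """Screen CFA fit statistics against conventional cutoffs.
    Returns (overall_ok, per-criterion results)."""
    checks = {
        "chi2/df <= 3.0": chi2_df <= 3.0,
        "GFI >= .90": gfi >= 0.90,
        "CFI >= .95": cfi >= 0.95,
        "RMSEA <= .08": rmsea <= 0.08,
    }
    return all(checks.values()), checks

# Values reported for the Four-Factor model
ok, detail = screen_fit(chi2_df=2.16, gfi=0.90, cfi=0.97, rmsea=0.07)
print(ok)  # True
```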
5 CONCLUSIONS
ACKNOWLEDGEMENTS
This research was supported by Fundação para a Ciência e Tecnologia, Portugal (SFRH/BDE/15635/2006) and by Metropolitano de Lisboa.
REFERENCES
Ajzen, I. 1991. The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50: 179-211.
Ajzen, I. 2005. Laws of human behavior: Symmetry, compatibility and attitude-behavior correspondence. In A. Beauducel, B. Biehl, M. Bosniak, W. Conrad, G. Söhnberger & D. Wagener (eds.), Multivariate Research Strategies: 3-19. Aachen, Germany: Shaker Verlag.
Armitage, C.J. & Conner, M. 2001. Efficacy of the theory of planned behaviour: a meta-analytic review. British Journal of Social Psychology, 40: 471-499.
Burke, M.J., Sarpy, S.A., Tesluk, P.E. & Smith-Crowe, K. 2002. General safety performance: A test of a grounded theoretical model. Personnel Psychology, 55.
Cialdini, R.B., Kallgren, C.A. & Reno, R. 1991. A focus theory of normative conduct. Advances in Experimental Social Psychology, 24: 201-234.
Cialdini, R.B., Reno, R. & Kallgren, C.A. 1990. A focus theory of normative conduct: Recycling the concept of norms to reduce littering in public places. Journal of Personality and Social Psychology, 58(6): 1015-1026.
Cialdini, R.B., Sagarin, B.J., Barrett, D.W., Rhodes, K. & Winter, P.L. 2006. Managing social norms for persuasive impact. Social Influence, 1(1): 3-15.
Cialdini, R.B. & Trost, M.R. 1998. Social influence: Social norms, conformity and compliance. In D.T. Gilbert, S.T. Fiske & G. Lindzey (eds.), The Handbook of Social Psychology (4th ed., Vol. 2): 151-192. New York: McGraw-Hill.
Conner, M. & McMillan, B. 1999. Interaction effects in the theory of planned behavior: Studying cannabis use. British Journal of Social Psychology, 38: 195-222.
Conner, M., Smith, N. & McMillan, B. 2003. Examining normative pressure in the theory of planned behaviour: Impact of gender and passengers on intentions to break the speed limit. Current Psychology: Developmental, Learning, Personality, Social, 22(3): 252-263.
Deutsch, M. & Gerard, H.B. 1955. A study of normative and informational social influences upon individual judgment. Journal of Abnormal and Social Psychology, 51: 629-636.
Fekadu, Z. & Kraft, P. 2002. Expanding the Theory of Planned Behavior: The role of social norms and group identification. Journal of Health Psychology, 7(1): 33-43.
Hämäläinen, P., Takala, J. & Saarela, K.L. 2006. Global estimates of occupational accidents. Safety Science, 44: 137-156.
Hofmann, D.A., Morgeson, F.P. & Gerras, S.J. 2003. Climate as a moderator of the relationship between LMX and content specific citizenship behavior: Safety climate as an exemplar. Journal of Applied Psychology, 88(1): 170-178.
Hu, L.-T. & Bentler, P.M. 1999. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6: 1-55.
Johnson, S. & Hall, A. 2005. The prediction of safe lifting behavior: An application of the theory of planned behavior. Journal of Safety Research, 36: 63-73.
Linnan, L., Montagne, A., Stoddard, A., Emmons, K.M. & Sorensen, G. 2005. Norms and their relationship to behavior in worksite settings: An application of the Jackson Return Potential Model. American Journal of Health Behavior, 29(3): 258-268.
Rivis, A. & Sheeran, P. 2003. Descriptive norms as an additional predictor in the Theory of Planned Behaviour: A meta-analysis. Current Psychology: Developmental, Learning, Personality, Social, 22(3): 218-233.
Tesluk, P. & Quigley, N.R. 2003. Group and normative influences on health and safety: perspectives from taking a broad view on team effectiveness. In D.A. Hofmann & L.E. Tetrick (eds.), Health and Safety in Organizations: A multilevel perspective: 131-172. San Francisco: John Wiley & Sons.
Zohar, D. 2000. A group-level model of safety climate: Testing the effect of group climate on microaccidents in manufacturing jobs. Journal of Applied Psychology, 85: 587-596.
Zohar, D. 2002. The effects of leadership dimensions, safety climate, and assigned priorities on minor injuries in work groups. Journal of Organizational Behavior, 23: 75-92.
Zohar, D. & Luria, G. 2005. A multilevel model of safety climate: Cross-level relationships between organization and group-level climates. Journal of Applied Psychology, 90(4): 616-628.
ABSTRACT: This paper is devoted to selected aspects of knowledge-based layer of protection analysis (LOPA) of industrial hazardous systems with regard to human and organizational factors. The issue is discussed in the context of functional safety analysis of the control and protection systems to be designed and operated according to the international standards IEC 61508 and IEC 61511. The layers of protection can include, for instance, the basic process control system (BPCS), the human-operator (HO) and the safety instrumented system (SIS). Such layers should be independent; however, due to some factors involved, dependencies can occur. This may increase the risk level of accident scenarios identified in the process of risk analysis. The method is illustrated with the example of the control system (TCS) and protection system (TPS) of a turbo-generator set, which has to be shut down in some situations of internal or external disturbances. The required risk reduction is distributed between the TCS and the TPS, to be designed for an appropriate safety integrity level (SIL).
INTRODUCTION
disturbances, faults and accidents as well as the diagnostic activities, the functionality and safety integrity
tests, maintenance actions and repairs after faults.
The operators supervise the process and make decisions using some alarm panels within the operator support system (OSS), which should be designed carefully
for abnormal situations and accidents, also for cases of
partial faults and dangerous failures within the electric,
electronic and programmable electronic (E/E/PE) systems (IEC 61508) or the safety instrumented systems
(SIS) (IEC 61511). The OSS, when properly designed, will contribute to reducing the human error probability and lowering the risk of potential accidents.
The paper outlines the concept of using a
knowledge-based method for the layer of protection
analysis (LOPA) of industrial hazardous systems with
regard to the influence of human and organizational
factors (H&OF). Various layers of protection can
be distinguished in the context of identified accident scenarios, including e.g. basic process control
system (BPCS), human-operator (HO) and safety
instrumented system (SIS) designed according to
requirements and probabilistic criteria given in functional safety standards: IEC 61508 and IEC 61511.
The protection layers should be independent. However, due to some factors involved, dependencies can occur. This can result in a significant increase of
the risk level of accident scenarios identified in the
risk evaluating process. The problem should be carefully analyzed at the system design stage to consider
relevant safety related functions of appropriate SILs.
To reduce the human error probability (HEP), an advanced OSS should be designed. For dynamic processes with a short permitted HO reaction time, the HEP can be high, close to 1. The paper emphasizes the importance of context-oriented human reliability analysis (HRA) within functional safety management and the necessity to incorporate the more important influencing factors in a systematic way.
2
2.1
Modern industrial systems are extensively computerised and equipped with complex programmable control and protection systems. In the design of control and protection systems, a functional safety concept (IEC 61508) is more and more widely implemented in various industrial sectors, e.g. the process industry (IEC 61511) and the machine industry (IEC 62061).
The aim of functional safety management is to reduce the risk of a hazardous system to an acceptable or tolerable level by introducing a set of safety-related functions (SRFs) to be implemented using the
SIL   PFDavg            PFH [1/h]
4     [10^-5, 10^-4)    [10^-9, 10^-8)
3     [10^-4, 10^-3)    [10^-8, 10^-7)
2     [10^-3, 10^-2)    [10^-7, 10^-6)
1     [10^-2, 10^-1)    [10^-6, 10^-5)
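The SIL bands above (the low-demand PFDavg targets of IEC 61508) can be looked up programmatically; a minimal sketch:

```python
def sil_from_pfd_avg(pfd_avg):
    """Return the safety integrity level whose low-demand PFDavg band
    (IEC 61508) contains the given value, or None if out of range."""
    bands = [
        (1e-5, 1e-4, 4),
        (1e-4, 1e-3, 3),
        (1e-3, 1e-2, 2),
        (1e-2, 1e-1, 1),
    ]
    for low, high, sil in bands:
        if low <= pfd_avg < high:
            return sil
    return None

print(sil_from_pfd_avg(5e-4))  # 3
```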
2.2
and systems (Kosmowski 2007). It will be demonstrated that a functional safety analysis framework gives additional insights into HRA.
Several traditional HRA methods have been used in PSA practice, including the THERP method (Swain & Guttmann 1983), developed for the nuclear industry but applied also in various other industrial sectors. Other HRA methods more frequently used in practice are: the Accident Sequence Evaluation Procedure - Human Reliability Analysis Procedure (ASEP-HRA), the Human Error Assessment and Reduction Technique (HEART), and the Success Likelihood Index Method (SLIM); see the description and characterization of HRA methods in (Humphreys 1988, COA 1998).
The first two methods (THERP and ASEP-HRA) are decomposition methods, based on a set of data and rules for evaluating human error probabilities (HEPs). HEART consists of generic probabilistic data and a set of influence factors for correcting a nominal human error probability. SLIM enables the analyst to define a set of influence factors, but requires data for calibrating the probabilistic model.
In the publication by Byers et al. (2000), five HRA methods were selected for comparison on the basis of either relatively widespread usage or recognized contribution as a newer contemporary technique:
functional test (FT); PFDavg^AT, the probability of subsystem failure on demand detected in an automatic test (AT); PFDHE, the probability of failure on demand due to human error (HE). Depending on the subsystem and situation considered, the human error can be a design error (hardware or software related) or an operator error (activities of the operator in the control room or as a member of a maintenance group).
The E/E/PE safety-related system in designing consists of subsystems: sensors/transducers/converters
(STC), programmable logic controllers (PLC) and
equipment under control (EUC). Each of these subsystems can be generally treated as KooN architecture,
which is determined during the design. Each PLC
comprises the central unit (CPU), input modules (digital or analog) and output modules (digital or analog).
The average probability of failure on demand PFDavg^SYS of the E/E/PE safety-related system (SYS) is evaluated as the sum of the probabilities for these subsystems (assuming small values of the probabilities) from the formula

PFDavg^SYS = PFDavg^STC + PFDavg^PLC + PFDavg^EUC    (4)
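Formula (4)'s small-probability summation can be sketched as follows; the subsystem PFDavg values are hypothetical:

```python
# Subsystem PFDavg values for STC, PLC and EUC (hypothetical)
pfd = {"STC": 4e-4, "PLC": 1e-4, "EUC": 3e-4}

# Formula (4): system PFDavg as the sum over the subsystems in series
pfd_sys = sum(pfd.values())

# Exact series combination for comparison; for small probabilities the
# difference lies only in higher-order terms
exact = 1.0
for p in pfd.values():
    exact *= (1.0 - p)
pfd_sys_exact = 1.0 - exact

print(pfd_sys, pfd_sys_exact)
```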
- independent of the initiating event and of the components of any other IPL already claimed for the same scenario,
- auditable, i.e. the assumed effectiveness in terms of consequence prevention and PFD must be capable of validation (by documentation, review, testing, etc.).

An active IPL generally comprises the following components:
A: a sensor of some type (instrument, mechanical, or human),
B: a decision-making element (logic solver, relay, spring, human, etc.),
C: an action (automatic, mechanical, or human).
Such an IPL can be designed as a Basic Process Control System (BPCS) or an SIS (see layers 2 and 4 in Fig. 1). These systems should be functionally and structurally independent; however, this is not always possible in practice. Figure 2 illustrates the functional relationships of the three protection layers 2, 3 and 4 shown in Figure 1. An important part of such a complex system is the man-machine interface (MMI) (GSAChP 1993, Gertman & Blackman 1994). Its functionality and quality are often included as an important PSF in HRA (Kosmowski 2007).
3.1

Figure 1. [Caption truncated in the extraction.] Protection layers of the plant/processes, including the installation/process, the Basic Process Control System (BPCS) and the Safety Instrumented System (SIS).

Figure 2. Functional relationships between the equipment under control (EUC), the safety-related system (SRS) with its sensors/transducers/converters (STC), and the SIS; the diagram's state control, testing/supervision, information/interface and decisions/control links are not reproducible in the extraction.
3.2

[Figure residue: protection layers PL1 (BPCS), PL2 (HO), PL3 (SIS) and, for the turbo-generator case, PL1 (TCS), PL2 (HO), PL3 (TPS). Equations (5) and (6) are not recoverable from the extraction.]

3.3

... = (1 - H) QA QB + H QA    (7)
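Formula (7) can be read as combining two protection layers A and B (failure probabilities QA, QB) with a dependency fraction H: with probability 1 - H the layers fail independently, while with probability H a failure of layer A defeats layer B as well. This reading is an assumption based on the formula's form; a sketch:

```python
def pfd_with_dependency(q_a, q_b, h):
    """Formula (7): PFD of two protection layers with dependency fraction h.
    (1 - h): layers fail independently -> q_a * q_b
    h: a common factor defeats both layers at once -> q_a"""
    return (1.0 - h) * q_a * q_b + h * q_a

print(pfd_with_dependency(q_a=1e-2, q_b=1e-3, h=0.0))  # fully independent
print(pfd_with_dependency(q_a=1e-2, q_b=1e-3, h=0.1))  # 10% dependency
```

Even a small dependency fraction dominates the result, which is why the paper stresses analyzing layer independence at the design stage.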
Figure 5. Subsystems of the turbine control system: A (STC), B (E/E/PES) and C (EUC, valve V-TCS), with SIL assignments (SIL1, SIL2, SIL3 indicated in the diagram; not reproducible in the extraction).

Figure 6. Subsystems of the turbine protection system (TPS): A (STC), B (E/E/PES) and C (EUC, valve V-TPS), with SIL assignments (not reproducible in the extraction).

[Further residue: protection layers PL1 (TCS, SIL1), PL2 (HO, HEP ~ 1), PL3 (TPS, SIL2 -> SIL3).]
CONCLUSIONS
ACKNOWLEDGMENTS
The author wishes to thank the Ministry for Science
and Higher Education in Warsaw for supporting the
research and the Central Laboratory for Labour Protection (CIOP) for co-operation in preparing a research
programme concerning the safety management of
hazardous systems including functional safety aspects.
REFERENCES
Byers, J.C., Gertman, D.I., Hill, S.G., Blackman, H.S., Gentillon, C.D., Hallbert, B.P. & Haney, L.N. 2000. Simplified Plant Risk (SPAR) Human Reliability Analysis (HRA) Methodology: Comparisons with Other HRA Methods. INEEL/CON-00146. International Ergonomics Association and Human Factors & Ergonomics Society Annual Meeting.
Carey, M. 2001. Proposed Framework for Addressing Human Factors in IEC 61508. Prepared for the Health and Safety Executive (HSE). Contract Research Report 373. Warrington: Amey Vectra Ltd.
COA 1998. Critical Operator Actions: Human Reliability Modeling and Data Issues. Nuclear Safety, NEA/CSNI/R(98)1. OECD Nuclear Energy Agency.
Dougherty, E.M. & Fragola, J.R. 1988. Human Reliability Analysis: A Systems Engineering Approach with Nuclear Power Plant Applications. A Wiley-Interscience Publication, New York: John Wiley & Sons Inc.
Dougherty, E. 1993. Context and human reliability analysis. Reliability Engineering and System Safety, 41: 25-47.
Embrey, D.E. 1992. Incorporating Management and Organisational Factors into Probabilistic Safety Assessment. Reliability Engineering and System Safety, 38: 199-208.
Gertman, I.D. & Blackman, H.S. 1994. Human Reliability and Safety Analysis Data Handbook. New York: A Wiley-Interscience Publication.
Gertman, D., Blackman, H., Marble, J., Byers, J. & Smith, C. 2005. The SPAR-H Human Reliability Analysis Method. Idaho Falls: Idaho National Laboratory. NUREG/CR-6883, INL/EXT-05-00509.
GSAChP 1993. Guidelines for Safe Automation of Chemical Processes. New York: Center for Chemical Process Safety, American Institute of Chemical Engineers.
Hickling, E.M., King, A.G. & Bell, R. 2006. Human Factors in Electrical, Electronic and Programmable Electronic Safety-Related Systems. Warrington: Vectra Group Ltd.
Hollnagel, E. 2005. Human reliability assessment in context. Nuclear Engineering and Technology, 37(2): 159-166.
Humphreys, P. 1988. Human Reliability Assessor's Guide. Wigshaw Lane: Safety and Reliability Directorate.
IEC 61508:2000. Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems, Parts 1-7. Geneva: International Electrotechnical Commission.
IEC 61511:2003. Functional Safety: Safety Instrumented Systems for the Process Industry Sector, Parts 1-3. Geneva: International Electrotechnical Commission.
IEC 62061:2005. Safety of Machinery: Functional Safety of Safety-Related Electrical, Electronic and Programmable Electronic Control Systems. Geneva: International Electrotechnical Commission.
Kosmowski, K.T. (ed.) 2007. Functional Safety Management in Critical Systems. Gdansk: Gdansk University of Technology.
LOPA 2001. Layer of Protection Analysis: Simplified Process Risk Assessment. New York: Center for Chemical Process Safety, American Institute of Chemical Engineers.
Rasmussen, J. & Svedung, I. 2000. Proactive Risk Management in a Dynamic Society. Karlstad: Swedish Rescue Services Agency.
Reason, J. 1990. Human Error. Cambridge University Press.
Swain, A.D. & Guttmann, H.E. 1983. Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Application. NUREG/CR-1278.
ABSTRACT: Modern large-scale technology systems, like power grids, rely considerably on information systems. Such systems employ not one but a range of different information systems. This creates three important, interdependent challenges for safe and reliable operations. The first is the sheer volume of systems, which ties up organisational members in bureaucratic work, removing them from operational tasks and thus introducing additional stress. The second challenge is that employees must be willing to speak out their previously tacit knowledge, rules of thumb and know-how, not written in formal job instructions, and enter this information into the systems, at the risk of losing personal assets relating to career and workplace identity. The third challenge relates to data quality: without valid and reliable data the systems will not have any real value, and the systems rely on the quality of key information entered by organisational members.
INTRODUCTION
The introduction of ICT systems creates three important interdependent challenges tied to work practices
and processes for safe and reliable grid operations. The
first is the understanding of the interaction between
technology and organisation, identifying actual as
opposed to formal work practice in the adaptation and
implementation of ICT. Second, employees must be
willing to change their occupational identity and share
their tacit knowledge, rules of thumb and know-how
and enter this information into the systems, which
might have consequences for their careers. The third is
the shift in balance between operational and administrative tasks for the individual. This shift forces the
individual to work more with computers. ICT systems rely on input from people codifying and storing
information. Thus the systems tie up employees in
bureaucratic work, removing them from operational tasks.
FINDINGS
The two companies we have studied have both chosen to invest heavily in the same large scale Net
Information System (NIS). They have the same goal
for their investment, more cost-efficient and reliable
power grid operations. In this chapter we will describe
the two companies, the work practice of their planners and installers, as well as their ICT reality and
implementation strategy.
3.1
Modern power grid operations are completely dependent on complex and secure ICT solutions as tools for
safe and reliable functioning, and for becoming more
cost-efficient. The quality of information is vital in
order to ensure safe operations, involving trustworthiness, relevance, clarity, opportunities and availability.
The quality of the information flow is equally important, including the attitude to learning, cooperative
climate, process and competencies. Data quality thus
has prime priority and is a condition for ICT-based
cooperation: the data systems in use must be correct
and continually updated. ICT makes this multi-actor
reality possible.
NIS is installed and in use today among planners and management in the companies, but a key
challenge is the installers' lack of competencies
in computers and data systems. Another important
challenge is the installers' attitude to the use of ICT-based tools as an inherent part of their routines and
work practices, showing the identity issues lying
between working physically on the grid and increasingly working with a system. The question is which
organisational elements need to be considered in order
to facilitate this change in work practice, intensifying
the interplay between technology and organisation.
3.2.1 The planners' use of ICT
The planners and the management of the two enterprises use NIS in their daily work. An area where
use of NIS has been successful is authority reporting.
NIS has simplified and automated several of the steps
in yearly reports. Such reporting was earlier a manual,
cumbersome and time-consuming process
(Næsje et al. 2005).
NIS is also used as a tool for early-stage projecting when a new project is started, usually before it is
decided when or by whom the project will be carried
out. Once that is determined, more detailed planning is
done. However, for this more detailed planning work
NIS is considered awkward and not up to date.
The planners state: "We use NIS as a first step for
simple and overall calculations. We don't use NIS for
detailed project planning. To do so is inconvenient;
NIS lacks basic features, and data is unreliable. Once
you get under the ground everything looks different
anyway."
Thus, the project planners want to get out in the
field, to have a look at it and discuss it with the "doers".
While the goal of top management is that planners
should be able to do all the planning without leaving
their office desks, the planners themselves believe in
visiting the project site, discussing with the installers,
and relying on the installers' local knowledge as far as
possible. They simply do not trust the data in NIS to
be good enough to carry out project work without
calibrating against the physical reality.
Although NIS is in use among the planners, they
are aware of the system's shortcomings regarding data
quality, and in order to carry out projects smoothly
and establish reliable grids they prefer hands-on local
knowledge to decontextualised information in
an ICT system.
3.2.2 The installers' use of ICT
Most installers do not use NIS other than in the
most limited way possible. Some of them manage to
print out topographic instructions entered by planners,
while most get a printed copy from their project leader,
which is used as a reference out in the field. Where
the NIS print-outs do not concur with conditions in the field,
they deal with the unforeseen problem physically: digging holes in the
ground, checking the situation themselves, and calling
their employers, contractors and subcontractors.
DISCUSSION
CONCLUSION
aspects of NIS were seen as attainable through an implementation of the system at all organisational levels:
from management, using data to analyse the reliability and efficiency
of grid operations, to installers, using data to locate
exactly where they should operate geographically out
in the field and reporting back standard work data, as well
as inconsistencies between the information systems
and the actual field, in order to continuously update
the systems. ICT is thus significant both as a grid-controlling
and as an operating device.
As the two companies see it, they also need to
become efficient ICT-based organisations so as to be
able to respond to the increased complexity of their
working environment and organisational structures, in
order to continue to carry out reliable grid operations.
But because of too little focus on strategy and on the necessary conditions for continuous use and functioning,
the implementation process has more or less stagnated. Results show that while planners are coping
fairly well, based on previous experience with computers and constant interaction with management and
support functions, the installers have been given
only one to two hours of training in using their
new equipment, which has basically been their only
interaction with other organisational layers regarding the implementation and use of the new
technology.
Recent research on organisational change puts
emphasis on the importance of well-conducted change
processes; important elements are taking existing
social norms into account, being aware of workforce
diversity, making conflicts constructive, clear distribution of roles and responsibilities, and management's availability (Saksvik et al. 2007). Orlikowski
(1996) views organisational change as enacted through
the situated practices of organisational actors as they
improvise, innovate and adjust their work routines.
As ICT engenders automating as well as informating capacities (Zuboff 1988), implementation of NIS
requires a process where dialogue and negotiation are
key. Technology transfer, as the learning of new ICT,
must be recognised as a collective achievement and
not top-down decision-making, or grid reliability will
remain random and difficult to control.
The studied companies learned somewhat the hard way
the necessity of rethinking several aspects of information handling
in the process of becoming ICT-based. The systems are implemented but remain
disintegrated, and thus data quality is poor. Some problems,
intrinsically tied to the lack of focus on system
specifications before the start of the implementation
processes, were the incompatibility of the old systems with the new ones, as well as poor routines for
data quality assurance and data maintenance. The lack
of synchronisation between the development of the
software and the functionalities actually
needed in order to use it as an operating device is also a
REFERENCES
Alvesson, M. (1993). Organizations as rhetoric: Knowledge-intensive firms and the struggle with ambiguity. Journal
of Management Studies, 30(6), 997–1015.
Argyris, C. & Schön, D. (1978). Organizational learning:
A theory of action perspective. USA: Addison Wesley
Publishing.
Barley, S.R. (1996). Technicians in the workplace: Ethnographic evidence for bringing work into organization
studies. Administrative Science Quarterly 41(3), 404.
Barley, S.R. & Kunda, G. (2001). Bringing work back in.
Organization Science 12(1): 76.
Incorporating simulator evidence into HRA: Insights from the data analysis
of the international HRA empirical study
S. Massaiu, P.Ø. Braarud & M. Hildebrandt
OECD Halden Reactor Project, Norway
ABSTRACT: The observation of nuclear power plant operating crews in simulated emergency scenarios
reveals a substantial degree of variability in the timing and execution of critical safety tasks, despite the extensive
use of emergency operating procedures (EOPs). Detailed analysis of crew performance shows that crew factors
(e.g. leadership style, experience as a team, crew dynamics) are important determinants of this variability.
Unfortunately, these factors are problematic in most Human Reliability Analysis (HRA) approaches, since most
methods provide guidance neither on how to take them into account nor on how to treat them in predictive
analyses. In other words, factors clearly linked to the potential for errors and failures, and information about
these factors that can be obtained from simulator studies, may be neglected by the HRA community. This paper
illustrates several insights from the pilot phase of the International HRA Empirical Study on the analysis,
aggregation and formatting of the simulator results. Suggestions for exploiting the full potential of simulator
evidence in HRA are made.
1 INTRODUCTION
Table 1. HRA methods and analysis teams.

Method                      Team                            Country
ATHEANA                     NRC staff + consultants         USA
CBDT                        EPRI (Scientech)                USA
CESA                        PSI                             Switzerland
CREAM                       NRI                             Czech Republic
Decision Trees + ASEP       NRI                             Czech Republic
HEART                       Ringhals + consultants          Sweden/Norway
KHRA                        KAERI                           South Korea
MERMOS                      EDF                             France
PANAME                      IRSN                            France
SPAR-H                      NRC staff + consultants, INL    USA
THERP                       NRC staff + consultants         USA
THERP (Bayesian enhanced)   VTT/TVO/Vattenfall              Finland/Sweden
Simulations
IDAC                        University of Maryland          USA
MICROSAINT                  Alion                           USA
QUEST-HP                    Risø                            Denmark
                            Politecnico di Milano           Italy
2.3
Figure 1. Data analysis process: raw observations (audio/video, operational story, simulator logs, observed influences (PSFs), observed difficulties interpreted in light of other types of raw data, on-line performance ratings, on-line comments, crew self-ratings, observer ratings, interviews, OPAS) are condensed into 14 crew-level performances, and further into 2 scenario-level performances (over all crews): 2 operational expressions and 2 sets of driving factors with ratings.
Table 2.

Crew  Scenario  Time¹  SG level²      Crew  Scenario  Time¹  SG level²
M     Base      10:23  20             L     Complex   19:59  78
H     Base      11:59  10             B     Complex   21:10  100³
L     Base      13:06  6              I     Complex   21:36  70
B     Base      13:19  21             M     Complex   22:12  81
A     Base      13:33  17             G     Complex   23:39  88
I     Base      13:37  31             N     Complex   24:37  86
E     Base      14:22  40             H     Complex   24:43  91
K     Base      15:09  39             K     Complex   26:39  64
D     Base      16:34  55             D     Complex   27:14  100
J     Base      17:38  44             A     Complex   28:01  100
G     Base      18:38  39             C     Complex   28:57  99
F     Base      18:45  73             F     Complex   30:16  100
C     Base      18:53  57             J     Complex   32:08  100
N     Base      21:29  75             E     Complex   45:27  98

¹ From
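The variability the abstract describes is visible directly in these numbers. As a quick illustration, the completion times in Table 2 can be summarised in a few lines of plain Python (the times are transcribed from the table; the helper names are ours, not the study's):

```python
# Completion times (mm:ss) transcribed from Table 2, base and complex scenarios.
BASE = ["10:23", "11:59", "13:06", "13:19", "13:33", "13:37", "14:22",
        "15:09", "16:34", "17:38", "18:38", "18:45", "18:53", "21:29"]
COMPLEX = ["19:59", "21:10", "21:36", "22:12", "23:39", "24:37", "24:43",
           "26:39", "27:14", "28:01", "28:57", "30:16", "32:08", "45:27"]

def to_seconds(mmss):
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

def spread(times):
    """Return (fastest, slowest, range), all in seconds."""
    secs = sorted(to_seconds(t) for t in times)
    return secs[0], secs[-1], secs[-1] - secs[0]

print(spread(BASE))     # (623, 1289, 666)
print(spread(COMPLEX))  # (1199, 2727, 1528)
```

In the base scenario the slowest crew takes roughly twice as long as the fastest; in the complex scenario the range is even wider, despite all crews following the same EOPs.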
EXPERIMENT RESULTS
Table 3.

Crew  Procedure progression
C     E-0 step 21
G     E-0 step 21
L     E-0 step 21
N     E-0 step 21
A     E-0 step 21 → ES-1.1 foldout page
M     E-0 step 21 → ES-1.1 foldout page
E     E-0 step 21 → ES-1.1 → E-0 step 19
F     E-0 step 21 → ES-1.1 → E-0 step 19
I     E-0 step 21 → ES-1.1 → E-0 step 19
H     E-0 step 21 → ES-1.1 → FR-H5 → E-0 step 19
B     E-0 step 24
D     E-0 step 24–25
J     E-0 (second loop) step 14 → E-2 step 7
K     E-0 step 19
little integration of crew characteristics in most methods, an integration that should ideally also account for the
relations between the two kinds of crew factors: crew
characteristics and crew interaction.
3. Aggregated driving PSFs (based on driving factor summaries for the best and worst performing
crews and operational stories).
These presentation formats were chosen to allow the
comparison of HRA method predictions with observed
simulator performance. The response times were necessary in order to assess the performance of the HFEs
of the study. The aggregated stories were written in
order to summarize the performance of the 14 different
crews (in the two scenarios) into single operational
expressions, which are the typical level of representation of HRA analyses (as a discretization of all possible
scenario variants). The same goes for the summary
of the driving PSFs, which could be considered
the PSFs of the aggregated stories, as opposed to
the various configurations of context in the individual
scenario runs.
Concerns could be raised about the level of accuracy and completeness of the empirical information
reached through this process of aggregation and formatting, concerns which regard all three types of results
output.
4.1
Crew performance in the pilot study was operationalized as crew performance of the HFEs, in terms of
completion time and ruptured SG level at isolation
(the lower the better). When the crews were evaluated
on the performance of the HFEs, a strong emphasis was
laid on time, with the "best" crews being the fastest to
isolate and the "worst" the slowest. This is a consequence of defining the HFE on a time criterion,
although the time criterion has a strong relation to several functional goals, including the PSA-relevant one
of avoiding filling up the steam generators (Broberg
et al. 2008b).
On the fine-grained level of a simulator trial,
however, the speed of action can only be one of
several indicators of good performance, and one
that can never be isolated from the other indicators. For instance, a crew can act very fast when
a shift supervisor takes an extremely active role,
decides strategies without consultation and orders the
crew to perform steps from procedures, although the
latter is a reactor operator responsibility. This performance would not be optimal in terms of other
indicators: first, such behavior would not be considered in accordance with the shift supervisor function
and the training received. Further, it would reduce the
possibility for second checks, with one person centralizing all diagnosis and planning functions. Third,
it may disrupt subsequent team collaboration, as the
reactor operator would feel dispossessed of his/her functions and could assume either a passive or an antagonistic
position.
- scenario descriptions
- the nature of the simulated plant responses, procedures, and interface
- the plant-specific work practices of the participating crews.
4.3
CONCLUSIONS
Insights from the HRA international empirical study: How to link data
and HRA with MERMOS
H. Pesme, P. Le Bot & P. Meyer
EDF R&D, Clamart, France
ABSTRACT: MERMOS is the reference method used by Electricité de France (EDF) for Human Reliability
Assessment (HRA), to assess the operation of nuclear reactors during incidents and accidents in EDF Probabilistic Safety Assessment (PSA) models. It is one of the second-generation HRA methods that have participated
in the international HRA empirical study organised by the NRC and the Halden Reactor Project. This international study is not finished, but it has already been an opportunity to debate relevant HRA issues during the
workshops in Halden and in Washington in 2007.
In this paper we focus on the nature and meaning of predictive HRA, compared to the nature of data
(from observations on simulators or from small real incidents). Our point of view on this subject is illustrated
with an example of a MERMOS analysis implemented for the international study. Predictive HRA exists when
failure cannot be observed: it is a way to explore and reduce uncertainty regarding highly reliable socio-technical
systems. MERMOS is a method supported by a model of accidents that makes it possible to describe the risk
and to link it to data. Indeed, failure occurs when a way of operating (that usually leads to success) happens to
be inappropriate in a very specific context. Data, in fact knowledge, is then needed to describe two things: the
different operating ways (for example, focusing for a while on the recovery of a system), and specific situations
(a serious failure of this system combined with a problem in identifying it). The HRA analyst then has to find which
combinations of operating habits and very specific situations could mismatch and lead to a serious failure of the
human action required to mitigate the consequences of the accident.
These links between operating habits, small incidents and big potential accidents, which we try to describe
in this paper, should be understood for decision making in the fields of safety, human factors and organisation;
for example, changing a working situation might be very risky with regard to the whole range of situations
modelled in a PSA. HRA should thus help the decision-making process in the Human Factors field, alongside
ergonomic and sociological approaches.
INTRODUCTION
2
2.1
4.1
[Figure: the same way of operating (CICA 1) leads to success in context A and to failure in context B.]
Operational expressions observed during the dedicated experiments in Halden (HFE 1A):
- trained transient
- easy to diagnose transient

Operational expressions predicted in MERMOS (HFE 1B), with the operational expressions observed during the dedicated experiments in Halden (HFE 1B):

Predicted: The system does not perform the procedural steps fast enough and does not reach the isolation step within the allotted time.
Observed: Supported by the evidence: a crew was delayed in E-0 due to their early manual steam line break identification and isolation. Also: they use some time to check if sampling is open.

Predicted: Identification of the SGTR by checking steam generator levels can cause problems or time wastage. The absence of radioactivity does not facilitate diagnosis or enable other hypotheses to be developed for the event in progress.
Observed: This is strongly supported by the empirical evidence and is in fact a dominant operational issue (when combined with the procedural guidance's reliance on this indication and an apparent lack of training on the alternative cues).

Predicted: The ARO takes time to check that the level in SG#1 is rising uncontrollably. This is probable (assigned p = 0.3). The ARO will not be fast because this check is not often trained during SGTR scenarios [which rely more strongly on other cues].
Observed: This is similar to the second operational expression above. Note that the lack of training on checking of alternative cues for SGTR is supported strongly by the empirical data.

Observed: SS abruptly to RO: "you can go to FR-H5" (one is not allowed to enter this procedure before step 22).
Last, it is important to underline that some of the meaningful operational stories in MERMOS could not be
observed in the Halden dedicated simulations. Indeed,
the MERMOS analyses make the most of all the simulations that we know (from our EDF simulators) and
that could be extrapolated for this study.
Here are some examples of those data:
CONCLUSION
In this paper we have focused on the nature of predictive HRA, compared to the nature of data (from
simulations or from small real incidents). Predictive
HRA exists when failure cannot be observed: it is
a way to explore and reduce uncertainty regarding
highly reliable socio-technical systems. MERMOS is a
method supported by a model of accidents that
makes it possible to describe the risk and to link it to data, as
we have seen through examples from the international
study; the HRA analyst then has to find which combinations of operating habits and very specific situations
could mismatch and lead to a serious failure of the
human action required to mitigate the consequences
of the accident, as considered in PSAs.
These links between operating habits, small incidents and big potential accidents, which we have tried
to describe in this paper, should be understood for
decision making in the fields of safety, human factors and organisation; for example, changing
a working situation might be very risky with regard to the
whole range of situations modelled in a PSA. HRA
should thus help the decision-making process in the
Human Factors field, alongside ergonomic and sociological approaches, even if research is still needed to
push back its boundaries.
Operational expressions predicted in MERMOS (HFE 1A and HFE 1B), with the operational expressions observed on EDF simulators:

Predicted: local actions may cause delay. Observed: yes (and this is simulated by the trainers).
Predicted: run through the procedures step by step. Observed: yes.
Predicted: the SS does not incite the operators to accelerate the procedural path. Observed: yes.
Predicted: the SS does not worry about the effective performance of the action. Observed: yes.
Predicted: delegation of . . . to the other operator. Observed: yes.
Predicted: the operator makes a mistake in reading . . . Observed: yes.
Predicted: the shift supervisor leads or agrees the strategy of the operators. Observed: yes.
Predicted: suspension of operation. Observed: yes (in order to gain a better understanding of the situation).

REFERENCES
[1] Lois E., Dang V., Forester J., Broberg H., Massaiu
S., Hildebrandt M., Braarud P.Ø., Parry G., Julius J.,
Boring R., Männistö I., Bye A. International HRA
Empirical Study: Description of Overall Approach and
First Pilot Results from Comparing HRA Methods
to Simulator Data, HWR-844, OECD Halden Reactor Project, Norway (forthcoming also as a NUREG
report, US Nuclear Regulatory Commission, Washington, USA), 2008.
[2] Bieder C., Le Bot P., Desmares E., Cara F., Bonnet
J.L. MERMOS: EDF's New Advanced HRA Method,
PSAM 4, 1998.
[3] Meyer P., Le Bot P., Pesme H. MERMOS, an extended
second generation HRA method, IEEE/HPRCT 2007,
Monterey CA.
[4] Pesme H., Le Bot P., Meyer P., HRA insights from the
International empirical study in 2007: the EDF point of
view, 2008, PSAM 9, Hong Kong, China.
[5] Le Bot P., Pesme H., Meyer P. Collecting data for MERMOS using a simulator, 2008, PSAM 9, Hong Kong,
China.
Operators' response time estimation for a critical task using the fuzzy
logic theory
M. Konstandinidou & Z. Nivolianitou
Institute of Nuclear Technology-Radiation Protection, NCSR Demokritos, Aghia Paraskevi, Athens, Greece
G. Simos
Hellenic Petroleum S.A., Aspropyrgos, Athens, Greece
ABSTRACT: A model for the estimation of the probability of an erroneous human action in specific industrial
and working contexts, based on the CREAM methodology, has been created using fuzzy logic theory. The
expansion of this model, presented in this paper, also covers operators' response time data related to critical
tasks. A real-life application, which is performed regularly in a petrochemical unit, has been chosen to test
the model. The reaction time of the operators in the execution of this specific task has been recorded through
an indication reported in the control room. For this specific task the influencing factors with a direct impact
on the operators' performance have been evaluated, and a tailor-made version of the initial model has been
developed. The new model provides estimations that are in accordance with the real data coming from the
petrochemical unit. The model can be further expanded and used in different operational tasks and working
contexts.
INTRODUCTION
Many factors influence human performance in complex man-machine systems like the industrial context,
but not all of them influence the response time of
operators, at least not with the same importance.
Many studies have been performed to estimate operators' response time, mainly for Nuclear Power Plants
(Boot & Kozinsky 1981, Weston & Whitehead 1987).
Those were dynamic simulator studies with the objective of recording the response time of operators under abnormal events (Zhang et al. 2007) and also of providing
estimates for human error probabilities (Swain &
Guttmann 1983). A fuzzy regression model has also
been developed (Kim & Bishu 1996) in order to assess
operators' response time in NPPs. The work presented
in this paper is an application for the assessment of
operators' response times in the chemical process
industry.
A model for the estimation of the probability
of an erroneous human action in specific industrial
and working contexts has been created using
fuzzy logic theory (Konstandinidou et al. 2006b).
The fuzzy model developed has been based on the
CREAM methodology for human reliability analysis and includes nine input parameters similar to the
common performance conditions of the method and
A fuzzy logic system for the estimation of the probability of a human erroneous action given specific
industrial and working contexts has been previously
developed (Konstandinidou et al. 2006b). The fuzzy
logic modeling architecture has been selected on
account of its ability to address qualitative information and subjectivity in a way that resembles the
human brain, i.e. the way humans make inferences and
take decisions. Although fuzzy logic has been characterized as controversial by mathematicians,
it is acknowledged that it offers a unique feature: the
concept of the linguistic variable. The concept of a linguistic variable, in association with the calculi of fuzzy
if-then rules, has a position of centrality in almost all
applications of fuzzy logic (Zadeh 1996).
According to L. Zadeh (2008), who first introduced
fuzzy logic theory, today fuzzy logic is far less controversial than it was in the past. There are over 50,000
papers and 5,000 patents, which represents a significant
metric of its impact. Fuzzy logic has emerged as a very
useful tool for modeling processes which are too
complex for conventional methods, or for which the available information is qualitative, inexact or uncertain
(Vakalis et al. 2004).
The Mamdani type of fuzzy modeling has been
selected, and the development of the system has been
completed in four steps:
i. selection of the input parameters
ii. development of the fuzzy sets
iii. development of the fuzzy rules
iv. defuzzification.
The new model features a new output parameter, namely operators' response time, which provides
the needed estimations. In order to maintain the connection with the initial model, the same names and
notions were used in the output parameters. The output fuzzy sets correspond to the four control modes
of COCOM, the cognitive model
used in CREAM (Hollnagel 1998). Those modes are:
the "strategic" control mode, the "tactical" control
mode, the "opportunistic" control mode, and the
"scrambled" control mode.
For the application of the ORT fuzzy model, the
four control modes were used to define the time intervals within which the operator would act to complete
a critical task. Hence quick and precise actions that
are completed within a very short time are compatible
with the "strategic" control mode; the "tactical" control mode includes actions within short time intervals,
slightly broader than the previous one; the "opportunistic" control mode corresponds to slower reactions
that take longer; while the "scrambled" control mode includes more sparse and time-consuming
reactions.
The relevant time intervals as defined for the four
control modes in the ORT fuzzy model are presented in table 1. A graphical representation of the four
fuzzy sets is given in figure 1. The range of the four
fuzzy sets is equivalent to the range used in the probability intervals of action failure probabilities in the
initial model (Konstandinidou et al. 2006b), expressed
in logarithmic values.
In order to produce estimates for the response time of operators in an industrial context, the fuzzy model for Human
Reliability Analysis has been used. With this model
as a basis, the fuzzy model for Operators' Response
Time (ORT) estimation has been built.
The functional characteristics of the initial model
remained as they were defined. That means that the
same nine input parameters with the same defined
fuzzy sets have been used. The phrasing and the linguistic variables have remained the same too. This was
very helpful in order to have a correspondence between
the two models. For more details concerning the functional characteristics of the initial model please refer
to (Konstandinidou et al. 2006b).

Table 1. Time intervals for the four control modes in the ORT fuzzy model (minutes).

Control mode    Time interval
Strategic       0 < t < 0.1
Tactical        0.01 < t < 1
Opportunistic   0.1 < t < 5
Scrambled       1 < t < 10

Figure 1. The four fuzzy sets (Strategic, Tactical, Opportunistic, Scrambled) over the time interval 0–10.
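To illustrate how the overlapping intervals of Table 1 behave as fuzzy sets rather than crisp bins, here is a minimal sketch in plain Python. Only the supports come from Table 1; the trapezoid breakpoints inside each support, and all names, are our assumptions for illustration, not the paper's membership functions:

```python
# Trapezoidal membership over response time t (minutes); a..d is the support,
# b..c the plateau. Breakpoints inside each support are assumed, not given.
def trapmf(t, a, b, c, d):
    if t <= a or t >= d:
        return 0.0
    if b <= t <= c:
        return 1.0
    return (t - a) / (b - a) if t < b else (d - t) / (d - c)

CONTROL_MODES = {
    "strategic":     (0.0,  0.0, 0.01, 0.1),   # support 0 < t < 0.1
    "tactical":      (0.01, 0.1, 0.5,  1.0),   # support 0.01 < t < 1
    "opportunistic": (0.1,  1.0, 2.5,  5.0),   # support 0.1 < t < 5
    "scrambled":     (1.0,  5.0, 10.0, 10.0),  # support 1 < t < 10
}

def classify(t):
    """Membership of a crisp time t in each control mode."""
    return {mode: trapmf(t, *p) for mode, p in CONTROL_MODES.items()}
```

At t = 0.5 minutes the response is fully "tactical" but also partially "opportunistic"; at t = 0.05 it is partly "strategic" and partly "tactical". This overlap is exactly what distinguishes the fuzzy sets from crisp time bins.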
For the development of this tailor-made Operators' Response Time (ORT) short model, the Mamdani
type of fuzzy modeling has been selected, and the
development of the system has been completed in four
steps.
i. Selection of the input parameters
Three input parameters have been chosen
according to the conclusions stated in the previous
section. These input parameters are:
a. The number of simultaneous goals
b. The adequacy of training and experience
c. The time of the day
The Operators' Response Time was defined as the
unique output parameter.
ii. Development of the fuzzy sets
In the second step, the number and characteristics
of fuzzy sets for the input variables and for the output
parameter were defined. The definition of the fuzzy
sets was made according to the observations from the
real data and the comments of the key personnel as
stated previously.
Number of simultaneous goals: for the first input
parameter three fuzzy sets were defined, namely "Normal operation", "Maintenance" and "Emergency
Situation".
Adequacy of training and experience: for the
second input parameter two fuzzy sets were defined,
namely "Poor Level of Training and Experience" and
"Good Level of Training and Experience".
Time of the day: for the last input parameter two
fuzzy sets were distinguished, corresponding to "Day"
and "Night".
Operators' response time: the output parameter
had to cover the time interval between 0 and 10 minutes. Five fuzzy sets were defined to better depict small
differences in reaction time, and the equivalent time
range was expressed in seconds. The fuzzy sets, with
the time intervals each of them covers, are presented
in table 2. More precisely, operators' response time is
"Very good" from 0 to 20 seconds, "Good" from 10
to 110 seconds, "Normal" from 60 to 180 seconds,
"Critical" from 120 to 360 seconds and "Very critical"
from 270 to 1170 seconds. A graphical representation
of the five fuzzy sets is given in figure 2 in order to
visualize the range of each time set.
iii. Development of the fuzzy rules
Two statements follow from the observed data:
a. Time of the day (day/night) does not affect operators' response time during normal operations.
b. Time of the day (day/night) does not affect operators' response time for operators with a good level of
training and experience.
According to the observed data, and by taking into
account the above statements, 8 fuzzy rules
were defined for the short ORT fuzzy model:
Rule 1: If the number of goals is equivalent to normal
operation and the adequacy of training and experience
is good, then operators' response time is very good.
Rule 2: If the number of goals is equivalent to normal
operation and the adequacy of training and experience
is poor, then operators' response time is good.
Rule 3: If the number of goals is equivalent to maintenance and the adequacy of training and experience is
good, then operators' response time is good.
Rule 4: If the number of goals is equivalent to maintenance, the adequacy of training and experience is
poor and the time is during the day shift, then operators'
response time is normal.
Rule 5: If the number of goals is equivalent to maintenance, the adequacy of training and experience is
poor and the time is during the night shift, then operators'
response time is critical.
Rule 6: If the number of goals is equivalent to emergency
and the adequacy of training and experience is good,
then operators' response time is normal.
Rule 7: If the number of goals is equivalent to emergency,
the adequacy of training and experience is poor and
the time is during the day shift, then operators' response
time is critical.
Rule 8: If the number of goals is equivalent to emergency,
the adequacy of training and experience is poor and
the time is during the night shift, then operators' response
time is very critical.
iv. Defuzzification
Since the final output of the fuzzy system modeling should be a crisp number for the operators' response time, the fuzzy output needs to be defuzzified. This is done through the centroid defuzzification method (Pedrycz 1993), as in the previously developed fuzzy models.
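The rule evaluation and the centroid defuzzification step can be sketched end to end. The self-contained sketch below uses min for rule activation and max for aggregation (a common Mamdani-style choice) and a discretised centroid; the paper specifies the set supports (Table 2) and the centroid method, but not the set shapes or the inference operators, so those are illustrative assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function: support [a, c], peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Output sets: supports from Table 2 (seconds); peaks are assumptions.
ORT = {"very_good": (0, 10, 20), "good": (10, 60, 110),
       "normal": (60, 120, 180), "critical": (120, 240, 360),
       "very_critical": (270, 720, 1170)}

# The eight rules: (goals, training, time of day or None) -> output set.
RULES = [(("normal", "good", None), "very_good"),
         (("normal", "poor", None), "good"),
         (("maintenance", "good", None), "good"),
         (("maintenance", "poor", "day"), "normal"),
         (("maintenance", "poor", "night"), "critical"),
         (("emergency", "good", None), "normal"),
         (("emergency", "poor", "day"), "critical"),
         (("emergency", "poor", "night"), "very_critical")]

def short_ort(goals, training, tod):
    """Mamdani-style inference followed by centroid defuzzification.

    goals, training, tod are dicts mapping linguistic labels to
    membership degrees, e.g. goals={"emergency": 1.0}.
    """
    # Rule activation: min over the antecedents; aggregation: max per set.
    strength = {}
    for (g, tr, td), out in RULES:
        w = min(goals.get(g, 0.0), training.get(tr, 0.0),
                1.0 if td is None else tod.get(td, 0.0))
        strength[out] = max(strength.get(out, 0.0), w)
    # Discretised centroid over the 0-1170 s universe, 0.1 s steps.
    num = den = 0.0
    for i in range(11701):
        t = i / 10.0
        mu = max((min(w, tri(t, *ORT[s])) for s, w in strength.items()),
                 default=0.0)
        num += t * mu
        den += mu
    return num / den if den else 0.0
```

With the assumed shapes a crisp emergency/poor/night input defuzzifies to the centre of the "very critical" set; the paper's own numbers (Table 3) come from the actual Figure 2 shapes and therefore differ.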
The fuzzy logic system has been built in accordance
with the real data coming from the petrochemical unit.
The testing of the model and its comparison with
the full version will be presented in the section that
follows.
[Figure 2. Graphical representation of the five fuzzy sets (Very good, Good, Normal, Critical, Very critical) for operators' response time, plotted over the 0–600 second range.]
Table 2. The fuzzy sets for operators' response time and the time interval each of them covers.

Fuzzy set       Time interval (s)
Very good       0 < t < 20
Good            10 < t < 110
Normal          60 < t < 180
Critical        120 < t < 360
Very critical   270 < t < 1170
Table 3. Input values for MMI and organization, working conditions, procedures and plans, operational support, number of simultaneous goals, available time, time of day, adequacy of training and experience and crew collaboration, with the resulting operators' response time (sec). [The body of the table is not recoverable from the extracted text.]
Comparison of the operators' response times predicted by the full ORT model and the short ORT model:

Situation            Training  Time of day  ORT model (sec)  ORT short (sec)
Normal operation     Good      Day           59               13
Maintenance          Good      Day           59               60
Emergency situation  Good      Day           59              120
Normal operation     Good      Night         59               13
Maintenance          Good      Night         59               60
Emergency situation  Good      Night         59              120
Normal operation     Poor      Day           59               60
Maintenance          Poor      Day           59              120
Emergency situation  Poor      Day          276              240
Normal operation     Poor      Night        294               60
Maintenance          Poor      Night        294              240
Emergency situation  Poor      Night        294              570
CONCLUSIONS
REFERENCES
Bott, T.F. & Kozinsky, E. 1981. Criteria for safety-related nuclear power plant operator actions. NUREG/CR-1908, Oak Ridge National Laboratory, US Nuclear Regulatory Commission.
Embrey, D.E. 1992. Quantitative and qualitative prediction of human error in safety assessments. Major Hazards Onshore and Offshore, Rugby: IChemE.
Hollnagel, E. 1998. Cognitive reliability and error analysis method (CREAM). Elsevier Science Ltd.
Isaac, A., Shorrock, S.T. & Kirwan, B. 2002. Human error in European air traffic management: the HERA project. Reliability Engineering & System Safety 75(2): 257–272.
Kim, B. & Bishu, R.R. 1996. On assessing operator response time in human reliability analysis (HRA) using a possibilistic fuzzy regression model. Reliability Engineering & System Safety 52: 27–34.
Kontogiannis, T. 1997. A framework for the analysis of cognitive reliability in complex systems: a recovery centred approach. Reliability Engineering & System Safety 58: 233–248.
Konstandinidou, M., Kiranoudis, C., Markatos, N. & Nivolianitou, Z. 2006a. Evaluation of influencing factors transitions on human reliability. In Guedes Soares & Zio (eds), Safety and Reliability for Managing Risk, Estoril, Portugal, 18–22 September 2006. London: Taylor & Francis.
Konstandinidou, M., Nivolianitou, Z., Kiranoudis, C. & Markatos, N. 2006b. A fuzzy modelling application of CREAM methodology for human reliability analysis. Reliability Engineering & System Safety 91(6): 706–716.
Pedrycz, W. 1993. Fuzzy Control and Fuzzy Systems, second extended edition. London: Research Studies Press Ltd.
Swain, A. & Guttmann, H. 1983. Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. NUREG/CR-1278, US Nuclear Regulatory Commission.
Safety, Reliability and Risk Analysis: Theory, Methods and Applications – Martorell et al. (eds)
© 2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: This paper describes the process of investigating, defining and developing measures for
organizational supportiveness in employment situations. The methodology centres on a focus group of people of diverse age, gender, grade and commercial and industrial disciplines that met many times over a period
of several weeks. The focus group contribution was developed into a large questionnaire that was pilot tested
on a general population sample. The questionnaire was analysed using factor analysis techniques to reduce it
to a final scale of 54 items, which was evaluated by a team of judges, and was then field tested in a nuclear
power station. The analyses revealed a supportiveness construct containing eight factors, namely: communication, helpfulness, empowerment, barriers, teamwork, training, security, and health and safety. These factors differ from other support-related measures, such as commitment, by the inclusion of a barriers factor. The findings are evaluated with an assessment of the host company results and opportunities for further research.
INTRODUCTION
with our family, friends, neighbours and work colleagues (Palmer, 2001). In the work context, new
technology and increasing competition have forced
businesses to examine their business strategies and
working practices and evolve new and innovative
means of maintaining their competitive advantage in
the face of stiff competition, particularly from emerging economies. This has resulted in the globalisation
of markets, the movement of manufacturing bases,
the restructuring of organizations, and changes to the
employer/employee relationship. Also, the changing nature of traditional gender roles that has occurred in the latter half of the 20th century (Piotrkowski et al., 1987; Moorhead et al., 1997) and an increase in the popularity of flexible working policies (Dalton & Mesch, 1990; Friedman & Galinsky, 1992) are impacting significantly upon the traditional psychological contract. Wohl (1997) suggests that downsizing and the associated reduction in loyalty and secure employment have caused people to re-evaluate their priorities in respect of their working and private lives, whilst Moses (1997) states:
BACKGROUND
DEFINING A SUPPORTIVENESS CONSTRUCT
Table 1.

No.  Gender  Age    Industry              Job type
1    Female  31–40  Power industry        Senior manager
2    Male    51–60  Media                 Employee
3    Female  51–60  Leisure               Employee
4    Female  51–60  Education             Middle manager
5    Female  <20    Commerce              Clerical employee
6    Male    21–30  Student               n/a
7    Male    21–30  Construction          Employee
8    Male    31–40  Research/development  Professional
9    Male    41–50  Engineering           Technician
10   Male    41–50  Local Govt            Accountant
11   Female  21–30  Retailing             Assistant
12   Female  31–40  Health                Nursing sister
13   Male    >61    Shipping              Employee
14   Female  31–40  Power industry        Clerical
15   Male    51–60  Off-shore             Technician
4 METHODOLOGY
4.1 The focus groups
4.2 The brief
demonstrate their support to their workforce. In considering this task I would like you to include examples
from your own experience of:
Company policies, Temporary initiatives, Manager
and co-worker traits, Personal aspirations that you may
have held but not directly experienced. Include also
any other factor that you consider could have an impact
on this concept. It is equally important that you include
both positive and negative examples and ideas of organizational supportiveness in the workplace in order to
assess not only what a supportiveness concept is, but
also, what is unsupportive.
4.3
5 RESULTS
5.1 The 8-factor solution
The results from the two independent datasets, the initial pilot test in the general population (N = 103) and the field test in a host organisation (N = 226), were compared and found to be in 91% statistical agreement. The two datasets were therefore combined into one (N = 329) and subjected to another exploratory factor analysis, where they demonstrated 96% agreement with the field test result. The factors of this second exploratory factor analysis, with alpha coefficients for the host organisation (N = 226), are shown in Table 2. The eighth factor was identified as health & safety and conditions of employment. Since this involves clear lines of statutory enforcement and corporate responsibility, which are largely outwith the control of the local workforce, it is omitted henceforth from the analysis.
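The alpha coefficients reported for each factor are Cronbach's alpha, the standard internal-consistency statistic for a multi-item scale. A minimal sketch of the computation, with invented item scores for illustration:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale given as a list of item-score columns.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k, n = len(items), len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Invented example: three 7-point items answered by five respondents.
items = [[5, 6, 4, 5, 6],
         [4, 6, 4, 5, 5],
         [5, 7, 3, 5, 6]]
print(round(cronbach_alpha(items), 3))  # -> 0.913
```

Values above roughly .7 are conventionally read as acceptable internal consistency, which is the benchmark against which the tabulated coefficients can be judged.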
Further analysis of each of the 7 factors in respect of age, gender, grade, length of service and shiftwork/daywork effects showed no significant difference from the generalised test results, with the exception of length of service effects for employees with less than 2 years' service, who demonstrated lower alpha coefficients.
Table 2. The 8-factor solution of the field test in the host organization, excluding factor 8 (N = 226).
Factor 1 COMMUNICATION (α = .866)
1 My Manager communicates too little with us.
2 I am told when I do a good job.
3 I can speak to my Manager at anytime.
4 My Manager operates an open door policy.
5 My Manager is always open to my suggestions.
6 My Manager will always explain anything we need
to know.
7 My Manager only tells me what I have done wrong,
not what should be done to improve things.
8 I sometimes feel I am wrongly blamed when things
go wrong.
9 I can speak to my Manager and tell him/her what I
would like.
Table 3. Mean scores and Cronbach coefficients for the seven factors, by length of service (years).

Mean score
Factor          <1     1–2    2–5    5–10   10–20  >20
Communication   5.097  5.232  4.421  4.222  5.003  4.833
Helpfulness     4.908  4.948  4.350  3.978  4.711  4.600
Empowerment     4.304  4.335  3.929  4.190  4.777  4.758
Barriers        4.634  4.438  4.104  3.881  4.110  4.097
Teamwork        5.000  5.630  5.792  5.583  5.666  5.599
Training        4.917  5.014  4.625  3.444  4.617  4.920
Security        3.958  3.174  3.625  3.333  4.042  4.102
N               8      23     24     3      80     88

Cronbach coefficient
Factor          <1     1–2    2–5    5–10   10–20  >20
Communication   .790   .861   .907   .967   .844   .855
Helpfulness     .900   .921   .953   .986   .909   .933
Empowerment     .929   .764   .920   .841   .909   .905
Barriers        .803   .887   .946   .849   .893   .926
Teamwork        .880   .317   .837   .793   .722   .852
Training        .786   .840   .831   .627   .600   .681
Security        .730   .341   .795   .60    .738   .740
N               8      23     24     3      80     88
Correlations (ordinal scale) of the supportiveness factors F1 Communications and F4 Barriers with the normative, continuance and affective commitment scales:

Factor             Normative  Continuance  Affective
F1 Communications  .282       .066         .545
F4 Barriers        .498       .041         .592
6 CONCLUSION
REFERENCES
ABSTRACT: There are several theories that relate to drivers' risk-taking behaviour and why they might choose
to increase the level of risk in any particular situation. Focus groups were run to identify transient factors that might
affect driver risk-taking; these were recorded, transcribed and content-analysed to obtain causal attributions. Five
main themes emerged, which could then be sub-divided; these themes are discussed in light of existing theories.
It was found that the attribution to self was the most frequent, but that causal explanations were consistent with
the theories of zero-risk, risk homeostasis and flow.
1.1 Behavioural adaptation
In a seminal paper Taylor (1964) measured the galvanic skin response (GSR) of drivers in the following
road types or conditions: urban shopping streets; winding country roads; arterial dual carriageways; peak and
off-peak; day and night and found that GSR, taken to
be a measure of subjective risk or anxiety, was evenly
distributed over time over the range of roads and conditions studied. His results suggest that driving is a
self-paced task governed by the level of emotional tension or anxiety that drivers wish to tolerate (Taylor,
1964). It is now accepted that drivers adapt to conditions and state of mind: Summala (1996, p. 103) stated that "the driver is inclined to react to changes in the traffic system, whether they be in the vehicle, in the road environment, or in his or her own skills or states".
Csikszentmihalyi's (2002) flow model represents the following two important dimensions of experience: challenges and skills. When individuals are
METHODOLOGY
RESULTS
DISCUSSION
Table 1. Statements where the cause was coded under other road users, vehicle and roads authority.
Attributional statement
[a] For other road users as a cause
"I'd accept a smaller gap, if I was waiting to get out"
"(the old bloody biddies driving like idiots) really, really gets up my nose"
"there was no traffic, at all (on M1) so you just put your foot down"
"then I saw a straight and took a risk that I wouldn't usually have taken"
"If I'm in a traffic jam I turn the radio up"
[b] For vehicle as a cause
"driving the big car is helped because I've got a much better view"
"they all assume that I'm a builder or plasterer and therefore behave in a lewd way"
[c] For roads authority as a cause
"I would definitely stick to the speed limit if it was wet"
"I'm unwilling to overtake on those roads because partly they remind you how many people have been killed there"
"Beach Road (I think that's a forty limit) is one of my favourites for (breaking speed limits); it's ... in effect, a three lane but it's configured as a two-lane road"
[Identity code and frequency columns of Table 1 are not recoverable from the extracted text; codes included large vehicle, white van, 60 mph limit and road safety measures.]
Table 2. Statements where the cause was coded under physical environment, road condition, and road familiarity.
Attributional statement
[a] Physical environment theme
"I fell asleep on the motorway"
"I drive with smaller safety margins in London"
"You just go round them faster (as you drive along the route)"
"In Newcastle old ladies are terribly slow and that irritates me"
"Trying to overtake on those two stroke three lane roads is quite nerve racking"
"I'd accept a smaller gap, if I was in a hurry"
"(I can enjoy the driving when on) familiar rural roads (in Northumberland)"
"Driving round streets where people live, (I'm much more cautious)"
"I got to the end of the Coast Road, and thought, I don't actually remember going along the road"
"Put me on a bendy country road: I'll ... rise to the challenge"
"I was a bit wound up so I was sort of racing, traffic light to traffic light"
"you've got 3-lane 30 mph roads; I think it is safe to drive there at 60 mph"
[b] Road condition theme
"(there was black ice) and I went up the bank and rolled the car"
[c] Road familiarity theme
"(the old people that come into the city centre here) changes the way I drive"
[Code identity and frequency columns of Table 2 are not recoverable from the extracted text; codes included dual/motorway, London, many roundabouts, Newcastle centre, open rural roads, priority junction, rural roads, slow urban roads, adverse and familiar.]
Table 3. Statements where the cause was coded under the umbrella theme "self".
Attributional statement
[a] Perception theme
"In adverse weather I drive more slowly"
"because I knew it really well, I would drive faster"
[b] Capability theme
"you drive differently in the small car"
"(I can drive faster when alone in the car because no distractions)"
[c] Journey type theme
"I find I can push out into traffic when driving a mini bus"
"I actually unwound driving home"
"When I'm on a long journey I drive faster"
"I can enjoy the driving when it's pretty rolling countryside"
[d] Time pressure theme
"you have to (drive more aggressively (in London))"
"(I was driving calmly) partly because there wasn't a set time I had to be home"
[e] Extra motives theme
"Everyone expects you to abuse a minibus a bit, like U-turns in the road"
"I drive my fastest when I'm on my own"
"(GPS) encourages me to keep the speed up"
"Yes, (I drive at a speed I'm comfortable at regardless of the speed limit)"
"if I'm following someone who is more risky (I will raise my risk threshold)"
"If somebody's driving really slowly I'll judge whether I can get past safely"
"(Sometimes I find if you turn it up) you're just flying along"
"I was really upset, and I drove in a seriously risky manner"
"I know I can get distracted if I've got passengers in the car"
"I only drive slowly with you (because) I'd want you to see me as a good driver"
"I'll just tear down there (in order to impress my friend)"
"I saw the traffic jams as a means of winding down and actually unwound"
"I would hang back (if I saw risky driving in front of me)"
"I drive fast on bendy country roads"
"I just wanted to show him he was an idiot (so chased him)"
"I was starting to get really sleepy and I blasted the radio up really loud"
[f] Flow state theme
"In adverse weather I drive more slowly"
"(If I'm in a traffic jam and it's boring) I annoy other drivers by ... (singing)"
[g] Experience theme
"Almost hitting the slow car in front makes me concentrate more for a while"
[Code identity and frequency columns of Table 3 are not recoverable from the extracted text; codes included feels risky/feels safe, reduced/increased, journey types (club trip, commute home, long journey, leisure trip), yes/no, and anxious/bored.]
CONCLUSIONS
REFERENCES
Arnett, J.J., Offer, D., et al. (1997). Reckless driving in adolescence: state and trait factors. Accident Analysis & Prevention 29(1): 57–63.
Björklund, G.M. Driver irritation and aggressive behaviour. Accident Analysis & Prevention (in press, corrected proof).
Csikszentmihalyi, M. (2002). Flow: The classic work on how to achieve happiness. Rider.
DfT (2007). Road Casualties Great Britain: 2006 Annual Report. The Stationery Office.
Fuller, R. (2000). The task-capability interface model of the driving process. Recherche Transports Sécurité 66: 47–57.
L. Jammes
Schlumberger Carbon Services, Paris La Défense, France
ABSTRACT: Carbon Capture and Storage (CCS) is a promising technology to help mitigate climate change
by way of reducing atmospheric greenhouse gas emissions. CCS involves capturing carbon dioxide (CO2 ) from
large industrial or energy-related sources, transporting, and injecting it into the subsurface for long-term storage.
To complement the limited knowledge about the site and experience with the operational and long-term behavior of injected CO2, a massive involvement of expert judgment is necessary when selecting the most appropriate
site, deciding the initial characterization needs and the related measurements, interpreting the results, identifying
the potential risk pathways, building risk scenarios, estimating event occurrence probabilities and severity of
consequences, and assessing and benchmarking simulation tools. The paper sets the basis for the development
of an approach suited for CCS applications and its role in the overall framework of CO2 long-term storage
performance management and risk control. The work is carried out in the frame of an internal research and
development project.
INTRODUCTION
a massive involvement of expert judgment is necessary when selecting the most appropriate site, deciding
the initial characterization needs and related measurements, interpreting results, identifying potential risk
pathways, building risk scenarios, estimating event
occurrence probabilities and severity of consequences,
and assessing and benchmarking simulation tools.
The study of CO2 injection and fate currently has
very few technical experts, and there is a need to capitalize on the expertise from other domains, such as
oil and gas, nuclear waste disposal, chemical, etc. to
obtain relevant and valuable judgments.
Formalized methods, able to infer the required information and data from individual expertise in a well-defined field, and to analyze and combine the different experts' opinions into sound, consistent final judgments, seem the best way to achieve this.
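A common formal combination rule is the weighted linear pool. The sketch below averages experts' quantile assessments using performance-based weights; the experts, values and weights are entirely invented, and averaging quantiles is only one simple pooling choice (Cooke's classical model, cited later, pools full distributions).

```python
# Hypothetical example: three experts give 5th/50th/95th percentile
# estimates for an uncertain quantity (all values invented).
experts = {
    "A": {"q05": 0.1, "q50": 1.0, "q95": 10.0},
    "B": {"q05": 0.5, "q50": 2.0, "q95": 8.0},
    "C": {"q05": 0.2, "q50": 0.8, "q95": 4.0},
}
# Performance-based weights, e.g. derived from calibration questions.
weights = {"A": 0.5, "B": 0.3, "C": 0.2}

def linear_pool(experts, weights):
    """Weight-average the experts' quantile assessments.

    Averaging quantiles is one simple pooling rule; pooling the full
    distributions is the more general approach.
    """
    total = sum(weights.values())
    return {q: sum(weights[e] * experts[e][q] for e in experts) / total
            for q in ("q05", "q50", "q95")}

pooled = linear_pool(experts, weights)
# pooled["q50"] == 0.5*1.0 + 0.3*2.0 + 0.2*0.8 = 1.26
```

The weighting scheme itself is a design choice; equal weights are the simplest alternative when no calibration data are available.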
Several bibliographic references concerning expert judgment elicitation processes exist in the literature (Simola et al., 2005), as well as practical applications in some industrial sectors, such as oil and gas,
nuclear, aerospace, etc. The approaches presented,
quite generic and in principle almost independent from
the industrial area, represent a valuable background
for the development of expert judgment elicitation
processes dedicated to CO2 storage studies.
These general methodologies need to be made more
specific to be applied in the domain of CO2 geological storage, whenever relevant data are lacking or
CHALLENGES
Using expert judgment to help overcome insufficient, missing or poor data is challenged by the following:
always be required to interpret and assess site-specific data. Extrapolations to or from other sites are normally risky and shall be appropriately documented.
These challenges influence the way the formal
process for expert judgment elicitation is built.
The internal project is currently in the phase of setting the requirements for the development of an expert
judgment elicitation process able to:
– Address the challenges described above;
– Capture the most relevant information and data from the experts;
– Be well balanced between time and costs and quality of captured information;
– Address specific needs and objectives of expert judgment use;
– Be reproducible.
The formal process will be based on existing practices and methods (Cooke & Goossens, 1999; Bonano et al., 1989), which will then be customized to be implemented on CO2 storage projects.
While building the process, it is important to define
the domains it will be applied to, and ensure it covers
all of them.
The following domains are identified:
– Measurements interpretation (e.g. CO2 plume size, mass in place, etc.);
– Physical parameters estimation (e.g. permeability, fault/fracture extent, etc.);
– Risk-related parameters estimation (e.g. scenario identification, likelihood and severity estimation, etc.);
– Conceptual and mathematical models selection and validation (e.g. CO2 transport model, mechanical model, etc.).
Across all these domains, the experts are also required to qualify or quantify uncertainty, e.g. in qualitative assessments, in numerical values for key parameters, in predictions, etc.
The process will be deployed into the following four
main steps.
4.1 Experts' selection
4.2
[Figure 2. Schematic of a CO2 storage site, showing the structure, injector well, monitoring well, abandoned well, potable water, fault, fractures, caprock and deep saline formation.]
Modeling is the core of the CO2 storage site performance management and risk control methodology. Its main purpose is to capture the main features of the overall site and simulate the most important processes induced by CO2 injection, so as to represent reality in the most truthful way. Simulation models are used, for example, to quantitatively evaluate the risk pathways/scenarios. To take into account the limited knowledge of today, the models have to be probabilistic and include uncertainty analysis. Experts are involved in the process of selecting the most appropriate and representative models.
The quality of the results depends on the quality of the simulation models and tools, and the role of experts in benchmarking the tools is significant. The decision diagram for structured expert elicitation process selection, as it is conceived today, does not address methods for model assessment, which are of a different nature; a dedicated approach will be investigated.
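Probabilistic modelling with uncertainty analysis is typically realised by Monte Carlo propagation: sample the uncertain inputs, evaluate the model, and report percentiles of the output. A sketch follows, with a deliberately toy stand-in for the simulator; the "model" and the input distributions are invented for illustration and carry no physical meaning.

```python
import random

random.seed(1)  # reproducibility

def leak_rate(permeability, pressure_diff):
    """Toy stand-in for a reservoir simulator: NOT a physical model,
    only a placeholder so the propagation loop can be shown."""
    return permeability * pressure_diff

# Uncertain inputs described by (assumed) probability distributions.
N = 10_000
samples = sorted(
    leak_rate(random.lognormvariate(0.0, 1.0),   # permeability factor
              random.uniform(1.0, 3.0))          # pressure difference
    for _ in range(N)
)

# Percentiles of the output distribution summarise the uncertainty.
p50, p95 = samples[N // 2], samples[int(0.95 * N)]
```

In practice each sample would be one run of the site simulator, and expert judgment enters through the choice of input distributions and of the model itself.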
EXAMPLES OF POTENTIAL
APPLICATIONS
CONCLUSIONS
REFERENCES
Barlet-Gouédard, V., Rimmelé, G., Goffé, B. & Porcherie, O. 2007. Well technologies for CO2 geological storage: CO2-resistant cement. Oil & Gas Science and Technology – Rev. IFP, Vol. 62, No. 3, pp. 325–334.
Benson, S.M., Hepple, R., Apps, J., Tsang, C.-F. & Lippmann, M. 2002. Lessons Learned from Natural and Industrial Analogues for Storage of Carbon Dioxide in Deep Geological Formations. Lawrence Berkeley National Laboratory LBNL-51170.
Bérard, T., Jammes, L., Lecampion, B., Vivalda, C. & Desroches, J. 2007. CO2 storage geomechanics for performance and risk management. SPE paper 108528, Offshore Europe 2007, Aberdeen, Scotland.
Bonano, E.J., Hora, S.C., Keeney, R.L. & von Winterfeldt, D. 1989. Elicitation and use of expert judgment in performance assessment for high-level radioactive waste repositories. NUREG/CR-5411; SAND89-1821. Washington: US Nuclear Regulatory Commission.
Bowden, A.R. & Rigg, A. 2004. Assessing risk in CO2 storage projects. APPEA Journal, pp. 677–702.
Cooke, R.M. & Goossens, L.H.J. 1999. Procedures guide for structured expert judgment. EUR 18820. Brussels: Euratom.
Gérard, B., Frenette, R., Auge, L., Barlet-Gouédard, V., Desroches, J. & Jammes, L. 2006. Well integrity in CO2 environments: performance & risk, technologies. CO2 Site Characterization Symposium, Berkeley, California.
Goossens, L.H.J., Cooke, R.M. & Kraan, B.C.P. 1998. Evaluation of weighting schemes for expert judgment studies. In Mosleh, A. & Bari, A. (eds), Probabilistic Safety Assessment and Management. Springer, Vol. 3, pp. 1937–1942.
Helton, J.C. 1994. Treatment of uncertainty in performance assessments for complex systems. Risk Analysis, Vol. 14, No. 4, pp. 483–511.
Hoffman, S., Fischbeck, P., Krupnick, A. & McWilliams, M. 2006. Eliciting information on uncertainty from heterogeneous expert panels. Discussion Paper, Resources for the Future, RFF DP 06-17.
ABSTRACT: This work advocates an architecture approach for an all-hazards risk model, in support of
harmonized planning across levels of government and different organizations. As a basis for the architecture, a taxonomic scheme has been drafted, which partitions risk into logical categories and captures the
relationships among them. Provided that the classifications are aligned with the areas of expertise of various departments/agencies, a framework can be developed and used to assign portions of the risk domain to those
organizations with relevant authority. Such a framework will provide a structured hierarchy where data collection and analysis can be carried out independently at different levels, allowing each contributing system/organization to meet its internal needs, but also those of the overarching framework into which it is set. In the end, the proposed taxonomy will provide a blueprint for the all-hazards risk domain, to organize and harmonize seemingly different risks and allow a comparative analysis.
INTRODUCTION
UNDERSTANDING RISK
In simple terms, risks are about events that, when triggered, cause problems. Because risks refer to potential
problems in the future, often there is a great deal of
uncertainty with regard to how and to what degree
such events may be realized. If an organization's interests are potentially affected, processes are set up to
manage the uncertainty and strategies are developed
to minimize the effect of such future events on desired
Figure 1. Generic risk management process, showing a typical sequence of steps commonly followed, as well as the review
and feedback processes.
Figure 2. Impact taxonomy.
Figure 3.
Based on the initial categories included in the risk taxonomy, the team of analysts proceeded to build a Risk Domain Architecture. To that end, a survey was prepared and administered in order to collect
relevant information from various risk communities
within the Canadian federal government. The survey
was structured around three main parts:
1. A standard first part allowed the respondents to
self-identify and communicate their role as risk
practitioners;
2. A second part consisted of a suite of questions
developed to elicit specific information for use
in the risk architecturelargely based on the risk
categories in the taxonomy;
3. The survey ended with a series of open questions
aimed at gaining an overall impression of the state of
Figure 4.
Finally, based on the survey responses, it was possible to identify risk practitioners who volunteered
to provide information on tools, methods or specific
assessments. Based on the material collected from
these respondents, and an extensive review of the
research literature on the topic, the team of analysts
at CSS hopes to develop a harmonized methodology capable of sustaining an all-hazards scope. The
next section goes back to the AHRA taxonomy and
shows how it can be used to guide methodology
development.
Risk = Magnitude of occurrence × Consequence
This is the fundamental equation in risk management, and it would be difficult to carry out authentic
risk assessment without basing the overall process on
this relationship. And while the choice of assessment processes, scaling/calibration schemes, or even the set of parameters must accommodate specific risks, as will be further discussed, this basic equation provides the common framework required for comparative analysis of different risks, albeit at a fairly high level.
This being said, the next paragraph will discuss how
the taxonomy can be used to understand the different requirements in treating each of the major risk
categories.
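Once likelihood and consequence are expressed on common scales, the basic equation supports a side-by-side ranking of heterogeneous risks. A minimal sketch follows; the events, scales and scores are invented for illustration only.

```python
# Common 1-5 ordinal scales for likelihood and consequence (all of the
# events and scores below are invented for illustration).
events = [
    ("Pandemic influenza",     2, 5),   # (name, likelihood, consequence)
    ("Urban flooding",         4, 3),
    ("Cyber attack on a bank", 4, 2),
    ("Industrial accident",    3, 3),
]

# Risk = magnitude (likelihood) of occurrence x consequence.
ranked = sorted(((lk * c, name) for name, lk, c in events), reverse=True)
for score, name in ranked:
    print(f"{score:>2}  {name}")  # highest risk score (12) first
```

The simplicity is the point: disparate hazards become comparable only because the scales and the scoring rule are shared across the contributing organizations.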
The bottom part of Figure 3 showcases the discriminating principle that separates the two main classes of risk events: under "Malicious Threats", risk events originate in intentionally malicious acts of enemy actors, who can be individuals, groups, or foreign states. The division hints at one major difference in the methodological approaches employed in the
Figure 5.
Figure 6.
Figure 7.
Figure 8. Risk assessment tetrahedron: a graphical representation of the common risk equation.
man-made disasters. Also, the calculations of Consequences could be quite different, although this
paper strongly advocates a common approach for Consequence assessment. A modular AHRA would,
however, need to provide commonality in the way
in which the final results are presented to decision
makers. Figure 7 illustrates a possible breakdown into risk assessment modules, while Figure 8 gives a graphical representation of the risk equation, showing how the modules can be brought together to provide a common picture of different assessments.
To end this section, a last look at the AHRA taxonomy is in order. Figure 3 also illustrates a challenge in
finding the right treatment for risks that do not seem
amenable to the same kind of tidy subdivision. The
sub-categories Ecological Disasters and Emerging Technologies remain somewhat ill-defined and
unwieldy. The difficulty with these two sub-categories
originates in one shortcoming of the current scheme:
the classification principle does not consider the time
sequence in the realization of risk events, as discussed
in section 2.2. From a time perspective, these two
groups of risk would naturally fall under the gradual type; many of the other sub-categories belong
to the sudden occurrence category, although some
of the boxes in Figure 3 will break under the new
lens. This last point highlights the challenges in the
ambitious enterprise of tackling all-risk in one battle, particularly the difficulty of bringing the time
CONCLUDING REMARKS
This paper proposes a taxonomic scheme that partitions the All-Hazards Risk Domain into major event
categories based on the nature of the risk sources, and
discusses the advantages of using this approach, as
well as the shortcomings of the proposed scheme. The
taxonomy enables an architecture approach for an all-hazards risk model, in support of harmonized planning
across levels of government and different organizations. Provided that the classifications are aligned with
the areas of expertise of various departments/agencies,
a framework can be developed and used to assign portions of the risk domain to those organizations with
relevant authority. Essential actors, who are active
in conducting assessments and/or performing functions that need to be informed by the assessments,
are often invisible from the point of view of authority structures. Such a framework provides a structured
hierarchy where data collection and analysis can be
carried out independently at different levels, allowing each contributing system/organization to meet
its internal needs, but also those of the overarching
framework into which it is set.
Finally, based on the survey responses, it was possible to identify risk practitioners who volunteered
to provide information on tools, methods or specific
assessments. Based on the material collected from
these respondents, and an extensive review of the
research literature on the topic, the team of analysts
at CSS hopes to develop a harmonized methodology
capable of sustaining an all-hazards scope.
The paper also shows how the AHRA taxonomy
can be used to guide methodology development. The
taxonomy can be used to understand the different
requirements in treating distinct risk categories, which
in turn guides the choice of assessment processes,
scaling/calibration schemes, or the set of parameters in order to accommodate specific risks. As a
consequence, a national all-hazards risk assessment
needs a fold-out, modular structure, that reflects and
is able to support the different levels of potential
management decisions. At the same time, the taxonomy pulls together the different components and
provides the common framework required for harmonized assessment and comparative analysis of different
risks, albeit at a fairly high level.
A noteworthy quality of the risks identified in the
taxonomy is that they do not exist, and cannot be
identified and assessed, in isolation. Many are interconnected, not necessarily in a direct, cause-and-effect
relationship, but often indirectly, either through common impacts or mitigation trade-offs. The better the
understanding of interconnectedness, the better one
can design an integrated risk assessment approach and
recommend management options. But this remains
methodologically and conceptually difficult, due to
the inherent complexity of the domain and our limited
ability to represent it adequately.
The above considerations add to the methodological hurdles around the representation of interconnectedness within and between risk domains. In addition, one
cannot leave out the global risk context, which is both
more complex and more challenging than ever before,
according to the World Economic Forum (Global Risks 2008).
REFERENCES
DoD Architecture Framework Working Group. 2004. DoD
Architecture Framework Version 1.0, Deskbook., USA.
Department of Defence.
Global Risks 2008. A Global Risk Network Report, World
Economic Forum.
Coccia, M. 2007. A new taxonomy of country performance
and risk based on economic and technological indicators.
Journal of Applied Economics, 10(1): 29–42.
Cohen, M.J. 1996. Economic dimensions of Environmental and Technological Risk Events: Toward a Tenable
Taxonomy. Organization & Environment, 9(4): 448–481.
Haimes, Y.Y. 2004. Risk Modeling, Assessment and Management, John Wiley & Sons.
Keown, M. 2008. Mapping the Federal Community of Risk
Practitioners, DRDC CSS internal report (draft).
Lambe, P. 2007. Organising Knowledge: Taxonomies,
Knowledge and Organisational Effectiveness. Oxford:
Chandos Publishing.
Rowe, W.D. 1977. An Anatomy of Risk, John Wiley & Sons.
Verga, S. 2007. Intelligence Experts Group All-Hazards
Risk Assessment Lexicon, DRDC CSS, DRDC-Centre for
Security Science-N-2007-001.
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: There exist many discipline-oriented perspectives on risk. Broadly categorised, we may distinguish between technical, economic, risk perception, social theories and cultural theory. Traditionally, these
perspectives have been viewed to represent different frameworks, and the exchange of ideas and results has been
difficult. In recent years several attempts have been made to integrate these basic perspectives to obtain more
holistic approaches to risk management and risk governance. In this paper we review and discuss some of these
integrated approaches, including the IRGC risk governance framework and the UK Cabinet Office approach.
A structure for comparison is suggested, based on the attributes risk concepts and risk handling.
INTRODUCTION
To analyze and manage risk an approach or framework is required, defining what risk is, and guiding
how risk should be assessed and handled. Many
such approaches and frameworks exist. To categorize
these, Renn (1992, 2007) introduces a classification
structure based on disciplines and perspectives. He
distinguishes between:
Statistical analysis (including the actuarial approach), Toxicology, Epidemiology, Probabilistic risk
analysis, Economics of risk, Psychology of risk, Social
theories of risk, Cultural theory of risk.
To solve practical problems, several of these perspectives are required. Consider a risk problem with
a potential for extreme consequences and where the
uncertainties are large. Then we need to see beyond
the results of the probabilistic risk analyses and the
expected net present value calculations. Risk perception and social concerns could be of great importance
for the risk management (governance).
But how should we integrate the various perspectives? In the literature, several attempts have been
made to establish suitable frameworks for meeting
this challenge, integrating two or more of these perspectives. In this paper we restrict attention to four of
these: the HSE framework, the UK Cabinet Office framework, the IRGC risk governance framework and the consequence-uncertainty framework.
The aim of this paper is to compare these integrated perspectives by looking at their differences and
commonalities. The comparison is based on an evaluation of how risk is defined and how risk is handled
(managed).
AN OVERVIEW OF SELECTED
FRAMEWORKS
The main purpose of this approach is to set out an overall framework for decision-making by the HSE (Health
and Safety Executive) and explain the basis for the HSE's
decisions regarding the degree and form of regulatory
control of risk from occupational hazards. However,
the framework is general and can also be used for
other types of applications. This approach consists of
five main stages: characterising the issue, examining
the options available for managing the risks, adopting
a particular course of action, implementing the decisions and evaluating the effectiveness of actions. Risk
is reflected as both the likelihood that some form of
harm may occur and a measure of the consequence.
The framework utilises a three-region approach
(known as the tolerability of risk, TOR), based on
the categories acceptable region, tolerable region and
unacceptable region. Risks in the tolerable region
should be reduced to a level that is as low as reasonably
practicable (ALARP). The ALARP principle implies
what could be referred to as the principle of reversed
Figure 1. [Figure showing risk in terms of an activity, events and consequences (outcomes), uncertainty and values at stake] (Aven & Renn 2008a).
are many thousands each year. Clearly, this definition of risk fails to capture an essential aspect, the
consequence dimension. Uncertainty cannot be isolated from the intensity, size, extension etc. of the
consequences. Take an extreme case where only two
outcomes are possible, 0 and 1, corresponding to 0
and 1 fatality, and the decision alternatives are A and
B, having uncertainty (probability) distributions (0.5,
0.5), and (0.0001, 0.9999), respectively. Hence for
alternative A there is a higher degree of uncertainty
than for alternative B, meaning that risk according to
this definition is higher for alternative A than for B.
However, considering both dimensions, both uncertainty and the consequences, we would of course judge
alternative B to have the highest risk as the negative
outcome 1 is nearly certain to occur.
The IRGC framework defines risk by C. See
Figure 1.
According to this definition, risk expresses a state
of the world independent of our knowledge and perceptions. Referring to risk as an event or a consequence,
we cannot conclude on risk being high or low, or compare options with respect to risk. Compared to standard
terminology in risk research and risk management, it
leads to conceptual difficulties that are incompatible
with the everyday use of risk in most applications, as
discussed by Aven & Renn (2008a) and summarised
in the following.
The consequence of a leakage in a process plant
is a risk according to the IRGC definition. This
consequence may for example be expressed by the
number of fatalities. This consequence is subject to
uncertainties, but the risk concept is restricted to the
consequence; the uncertainties, and how people judge
the uncertainties, are a different domain. Hence a risk
assessment according to this definition cannot conclude for example that the risk is high or low, or that
option A has a lower or higher risk than option B, as it
makes no sense to speak about a high or higher consequence, since the consequence is unknown. Instead the
assessment needs to conclude on the uncertainty or the
probability of the risk being high or higher. We conclude that any judgement about risk needs to take into
Figure 2. [Figure showing risk in terms of an activity, events and consequences (outcomes), uncertainty, severity and values at stake.]

Figure 3. Procedures for handling uncertainty when assessing risks (HSE 2001). [The figure runs from conventional risk assessment, relying on past experience of the generic hazard, towards ignorance, where putative consequences and scenarios are considered, with increased emphasis on consequences, e.g. if serious/irreversible or if societal concerns need to be addressed.]
HSE framework
The main steps of the risk management process are presented in section 2.1. The steps follow to a large extent
the standard structure for risk management processes,
see e.g. AS/NZS 4360 (2004). However, the framework has some distinctive features at a more detailed
level, of which the following are considered to be of
particular importance:
Weight to be given to the precautionary principle in
the face of scientific uncertainty. The precautionary principle describes the philosophy that should
be adopted for addressing hazards subject to high
scientific uncertainties. According to the framework the precautionary principle should be invoked
where:
a. There is good reason to believe that serious harm
might occur, even if the likelihood of harm is
remote.
b. The uncertainties make it impossible to evaluate the conjectured outcomes with sufficient
confidence.
The tolerability framework (the TOR framework)
and the ALARP principle. This framework is based
on three separate regions (acceptable region, tolerable region and unacceptable region).
An acknowledgment of the weaknesses and limitations of risk assessments and cost-benefit analysis.
These analyses could provide useful decision support, but need to be seen in a wider context, reflecting that there are factors and aspects to consider
beyond the analysis results. For many types of situations, qualitative assessment could replace detailed
quantitative analysis. In case of large uncertainties,
other principles and instruments are required, for
example the precautionary principle.
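The three-region TOR logic described above can be sketched as a simple classifier. The numerical thresholds below are illustrative assumptions only, expressed as individual risk of death per year; they are not values quoted from the framework.

```python
# Illustrative sketch of the TOR three-region logic; threshold values are
# assumptions for demonstration, not figures taken from the HSE framework.
def tor_region(risk_per_year, broadly_acceptable=1e-6, tolerable_limit=1e-3):
    if risk_per_year < broadly_acceptable:
        return "acceptable"
    if risk_per_year < tolerable_limit:
        # In the tolerable region, risk must be reduced ALARP:
        # as low as reasonably practicable.
        return "tolerable (reduce ALARP)"
    return "unacceptable"

for r in (1e-7, 1e-4, 1e-2):
    print(r, tor_region(r))
```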
The procedures (approaches) for handling uncertainties are illustrated in Figure 3. The vertical axis
Figure 4. [Figure showing the risk management cycle: assess risks, evaluate risks against risk appetite, identify suitable responses to risks, and gain assurance about control.]
Table 1. Risk problem category "uncertainty induced": example and implications for risk management (Aven and Renn 2008b, adapted from Renn (2005)). [Table partially recoverable: for uncertainty-induced risk problems the management strategy is risk-informed and robustness/resilience focused (risk absorbing system), with appropriate instruments specified for this category.]
At the strategic level decisions involve the formulation of strategic objectives including major external
threats, significant cross-cutting risks, and longer term
threats and opportunities. At the programme level, the
decision-making is about procurement, funding and
establishing projects. And at the project and operational level, decisions will be on technical issues,
managing resources, schedules, providers, partners
and infrastructure. The level of uncertainty (and hence
risk) will decrease as we move from the strategic level to
the programme and then the operational level.
In addition, the focus is on risk appetite, i.e. the
quantum of risk that one is willing to accept in pursuit
of value. There is a balance to be struck between innovation and change on the one hand, and avoidance
of shocks and crises on the other. Risk management
is often focused on risk reduction, without recognition
of the need to take risks to add value.
IRGC framework
On a high level the framework is similar to the two
other frameworks presented above. However on a more
detailed level, we find several unique features. One
is related to the distinction between different types
of situations (risk problems) being studied, according to the degree of complexity (simple versus complex),
uncertainty and ambiguity (Aven & Renn 2008b):
Simplicity is characterised by situations and problems
with low complexity, uncertainties and ambiguities.
Complexity refers to the difficulty of identifying
and quantifying causal links between a multitude
of potential causal agents and specific observed
effects.
Uncertainty refers to the difficulty of predicting the
occurrence of events and/or their consequences
based on incomplete or invalid data bases, possible changes of the causal chains and their context
conditions, extrapolation methods when making
inferences from experimental results, modelling
inaccuracies or variations in expert judgments.
Uncertainty may result from an incomplete or inadequate reduction of complexity, and it often leads to
expert dissent about the risk characterisation.
Ambiguity relates to i) the relevance, meaning and
implications of the decision basis; or ii) the values to
be protected and the priorities to be made.
For the different risk problem categories, the IRGC
framework specifies a management strategy, appropriate instruments and stakeholder participation, see
Table 1 which indicates the recommendations for the
category uncertainty.
The consequence-uncertainty framework
The framework follows the same overall structure
as the other frameworks and is characterised by the
following specific features:
It is based on a broad semi-quantitative perspective on risk, in line with the perspective described
in section 3.1, with focus on predictions and highlighting uncertainties beyond expected values and
probabilities, allowing a more flexible approach
than traditional statistical analysis. It acknowledges
that expected values and probabilities could produce
poor predictionssurprises may occur.
Risk analyses, cost-benefit analyses and other types
of analyses are placed in a larger context (referred
to as a managerial review and judgment), where the
framework stresses the importance of reflecting consequences, likelihoods and uncertainties.

DISCUSSION

The adjusted
definition, "uncertainty about and severity of the consequences of an activity with respect to something that
humans value" (Aven and Renn 2008a), can be seen
as a reformulation of the original one that better reflects
the intention.
As another example, the Cabinet Office (2002)
refers to risk as uncertainty, which means that risk
is considered low if one expects millions of fatalities,
as long as the uncertainties are low. Risk management certainly needs to have a broader perspective on
risk, and this is of course also recognised by the Cabinet
Office framework. The terminology may however be
challenged.
When referring to the likelihood of an event we
mean the same as the probability of the event. However, the term probability can be interpreted in different ways as discussed in Section 3.1 and this would also
give different meanings of likelihood. With the exception
of the consequence-uncertainty framework, none of
the frameworks has specified the probabilistic basis.
In the consequence-uncertainty framework probability means subjective probabilities. Hence there is
no meaning in discussing uncertainties in the probabilities and likelihoods. If such a perspective is
adopted, how can we then understand for example
Figure 2, which distinguishes between uncertainties about likelihoods (probabilities) and uncertainties
about consequences?
The former types of uncertainties are referred to
as epistemic uncertainties and are also called second-order probabilities. This notion is based on the idea that there
exist some true probabilities out there, based on
the traditional relative frequency approach, that risk
analysis should try to accurately estimate. However
this view can be challenged. Consider for example the
probability of a terrorist attack, i.e. P(attack occurs).
How can this probability be understood as a true probability, by reference to a thought-constructed repeated
experiment? It does not work at all. It makes no sense
to define a large set of identical, independent attack
situations, where some aspects (for example related
to the potential attackers and the political context)
are fixed and others (for example the attackers motivation) are subject to variation. Say that the attack
probability is 10%. Then in 1000 situations, with
the attackers and the political context specified, the
attackers will attack in 100 cases. In 100 situations
the attackers are motivated, but not in the remaining
900. Motivation for an attack in one situation does not
affect the motivation in another. For independent random situations such thought experiments are meaningful,
but not for more complex situations such as this attack case.
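The thought experiment can be mimicked numerically (our sketch, not the authors'): only for a genuinely repeatable, independent experiment does the 10% figure emerge as a relative frequency of roughly 100 attacks in 1000 trials, and it is precisely this ensemble that does not exist for a one-off geopolitical situation.

```python
# Sketch of the frequentist thought experiment that the text rejects for
# the attack case: the 10% figure only shows up as a relative frequency
# when the same situation can be replayed many times, independently.
import random

random.seed(42)          # fixed seed for reproducibility
p_attack = 0.10
trials = 1000
attacks = sum(random.random() < p_attack for _ in range(trials))
print(attacks)           # close to 100, as the relative-frequency view requires
```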
Alternatively, we may interpret the likelihood
uncertainties in Figure 2 by reference to the level of
FINAL REMARKS
ABSTRACT: The aim of this paper is to ensure the development of design projects in an environment with
limited resources (material, human, know-how, etc.) and therefore to satisfy the strategic performance objectives
of an organization (cost, quality, flexibility, lead time, deadlines, etc.). This paper also sets out the problems
of a real integration between risk management and design system management. With this intention, a
new paradigm for the Risk Management Process (RMP) is proposed and then illustrated via an industrial case. Such
an RMP includes the tasks of establishing the context, identifying, analysing, evaluating, treating, monitoring
and communicating risks resulting from design system dysfunctions. It also takes into account risks caused
by the domino effect of design system dysfunctions, and completes the risk management methodologies provided by
companies that don't consider this aspect.
INTRODUCTION
With this intention, a new paradigm for the Risk Management Process (RMP) is proposed and then illustrated via
an industrial case. Such an RMP includes the tasks of
establishing the context, identifying, analysing, evaluating, treating, monitoring and communicating risks
resulting from design system dysfunctions.
This paper is organized as follows. In section 2, general points of risk management are introduced. Then,
the RMP relative to design projects is constructed.
Sections 3, 4 and 5 detail the different phases of the
RMP. Risks are examined according to three points of
view (functional, organic and operational), making it
possible to determine their impacts on the objectives
of the organization. These impacts are evaluated and
quantified thanks to the FMECA methodology, which
makes it possible to identify potential failure modes for a product or process before problems occur, to assess the
risk associated with those failure modes and to identify and carry out measures to address the most serious
concerns. An industrial case illustrates the
methodology in section 6. Some concluding remarks
and discussions are provided in the last section.
Figure 1. [Figure showing the Risk Management Process: risk identification, risk analysis and risk evaluation (together forming risk assessment), followed by risk treatment (corrective actions or no corrective action) and the choice of design scenarios, all accompanied by monitoring and review and by communication and consulting.]
may be simply stopped due to uncontrolled parameters and unidentified constraints arising from various
processes within the project such as the product design
process or the supply chain design (identification of
suppliers and subcontractors for example). The consequence: no risk, no design. Risk processes do not
require a strategy of risk avoidance but an early diagnosis and management (Keizer et al., 2002). Nevertheless, most project managers perceive risk management
processes as extra work and expenses. Thus, risk management processes are often expunged if a project
schedule slips (Kwak et al., 2004). In a general way,
main phases of risk management are (Aloini et al.,
2007): context analysis (1), risk identification (2), risk
analysis (3), risk evaluation (4), risk treatment (5),
monitoring and review (6) and communication and
consulting (7). In agreement with such a methodology,
we propose to use the following process to manage the
Figure 2. Design system modeling, interactions between factors influencing the design system (Robin et al., 2007). [Figure showing the design system at the intersection of an actor axis (actor, enterprise, process, organisation), a technological axis (scientific and technological knowledge, product) and an environment axis (external and internal environments), connected by links 1 to 6.]
Modeling contributes to the development and structuring of ideas, and can be used as a support for reasoning
and simulation. Design management requires an understanding of the design process context in order to adapt
actors' work if this turns out to be necessary. Hence,
in their generic model of design activity performance, O'Donnell and Duffy insist on the necessity
of identifying the components of the design activity and their
relationships (O'Donnell et al., 1999).
The design system can be defined as the environment where design projects (product, system or
network design) take place. We have identified three
factors influencing the design system which have
to be considered to follow and manage suitably the
Figure 3. [Figure showing the global factors (internal environment, external environment, scientific and technological knowledge) and local factors (actor, product, process, organization) of the design system, each examined from the functional, organic and operational points of view.]
Figure 4. [Figure showing project engineering (initialization, functional definition, organic definition, operational definition) and project reengineering loops triggered by risk evaluation, with four cases: case 1, slightly disturbing; case 2, fairly disturbing; case 3, strongly disturbing; case 4, fatal.]

RPN = O × G × D    (1)
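Equation (1) above can be sketched in code. The reading of O, G and D as occurrence, gravity (severity) and detection ratings on a 1-10 scale follows common FMECA practice, since the paper's own scale definitions are not reproduced here, and the example events and ratings are hypothetical.

```python
# Sketch of equation (1), RPN = O * G * D. Assumption: O = occurrence,
# G = gravity (severity), D = detection, each rated on a 1-10 scale as
# in common FMECA practice; the paper's exact scales are not shown here.
def rpn(occurrence, gravity, detection):
    for v in (occurrence, gravity, detection):
        if not 1 <= v <= 10:
            raise ValueError("ratings assumed to lie on a 1-10 scale")
    return occurrence * gravity * detection

# Two hypothetical dysfunction events: the higher RPN is treated first
# (cf. the ranking of events E1 and E2 in section 6.3).
e1 = rpn(7, 8, 6)   # frequent, severe, hard to detect
e2 = rpn(3, 8, 4)
print(e1, e2)       # 336 96 -> treat E1 first
```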
Figure 5. [Criticality classes, levels of risk and decisions:
C1 (case 1, Fig. 4): no action or modification at the operational level; follow-up, monitoring and review; risk assessment.
C2 (case 2, Fig. 4): modification at the organic level; follow-up, monitoring and review; risk assessment.
C3 (case 3, Fig. 4): difficult to tolerate; modification at the functional level; follow-up, monitoring and review; risk assessment.
C4 (case 4, Fig. 4): unacceptable; change of strategy; total reorganization of the project.]
6 INDUSTRIAL CASE

6.1 Introduction
The company produces ice cream for mass-market retail. A characteristic of this branch of industry is
that production is seasonal. The off season corresponds to a relatively reduced production. At
the height of the season, the company must
have perfect control of its production equipment
(very high rate of production), because any material
loss or overconsumption could have a strong impact on
productivity. Therefore, the appearance of events
liable to modify the functionality, the structure
and/or the operational scenarios of the production system is extremely damaging for the company, and it is
imperative to analyze the criticality of these events
in order to rapidly launch adequate corrective
actions.
Innovation has a key role to play in the performance
of such a firm. Considered as a solution for growth and
competitiveness, it is used by managers to create new
sources of value.
6.2 Risk management
Figure 7. [Figure showing the evaluation of risks R1 and R2, with occurrence ratings (rare 0.25, recurrent 0.50, permanent 0.75) assessed at the functional, organic and operational levels, and the resulting RPN values 412.5 and 325.]
6.3 Conclusion
The Risk Priority Number of E1 is higher than that of E2. Therefore, in order to launch the corrective actions efficiently, it will be necessary to treat
the dysfunctions due to E1 first. Moreover, the impacts of the
risks on the design system have been quantified, which
will make it possible to adjust the design strategy. Events leading
to operational modifications of the company are common, and are an integral part of everyday life. With
this intention, the company has launched a continuous improvement initiative (preventive and autonomous
maintenance) in order, on the one hand, to prevent slightly disturbing events and, on the other hand, to solve all
the dysfunctions which can exist in production workshops
(dysfunctions related to the workers' environment). Such a
step aims at extending the life span of equipment and
decreasing corrective maintenance times.
[Figure 7 legend: occurrence denotes the appearance frequency of a dysfunction; causes correspond to the evolution of global inductors (internal/external environments, actors, scientific and technological knowledge); consequences correspond to the evolution of local inductors (product, process, organization); each is assessed at the functional, organic and operational levels.]

7 CONCLUSION
occur. The functional, organic and operational models (or definitions) of the design system should then
be tuned accordingly to support reengineering reasoning. The methodological guidelines are based on
event criticality analysis. A classification of events
was made to guide the analysts towards appropriate model tuning, such that the representation of the
system remains permanently in conformity with the system despite the continuous modifications encountered
by the system during its life-cycle. A risk management methodology is also provided in order to take
into account risks caused by the domino effect of design
system dysfunctions. The Risk Management Process
includes the tasks of establishing the context, identifying, analysing, evaluating, treating, monitoring
and communicating risks resulting from design system
dysfunctions.
REFERENCES
Aloini, D., Dulmin, R., Mininno, V. (2007). Risk
management in ERP project introduction: Review
of the literature, in: Information & Management,
doi:10.1016/j.im.2007.05.004.
Balbontin, A., Yazdani, B.B., Cooper, R., Souder, W.E.
(2000). New product development practices in American
and British firms, in: Technovation 20, pp. 257–274.
Gero, J.S. (1998). An approach to the analysis of design
protocols, in: Design Studies 19 (1), pp. 21–61.
Kececioglu, D. (1991). Reliability Engineering Handbook,
Volume 2. Prentice-Hall Inc., Englewood Cliffs, New Jersey, pp. 473–506.
ABSTRACT: Currently, several decision-support methods are being used to assess the multiple risks faced
by a complex industrial-based society. Amongst these, risk analysis is a well-defined method used in the
nuclear, aeronautics and chemical industries (USNRC, 1998; Haimes, 2004). The feasibility of applying
the Probabilistic Risk Assessment approach (USNRC, 1983) in the nuclear field (PRA-Nuc) for some new
applications has been already demonstrated by using an integrated risk model of internal and external events
for a Generation IV nuclear power plant (Serbanescu, 2005a) and an integrated risk model of random technical and intentional man-made events for a nuclear power plant (Serbanescu, 2007). This paper aims to
show how such experiences and results can be extended and adapted to the non-nuclear sectors. These extensions have been shown to trigger two main methodological novelties: (i) more extensive use of subjective
probability evaluations in the case of non-nuclear applications and (ii) inclusion of hierarchical systems
theory in the PRA modelling. The main aspects of the results and conclusions of the above-mentioned
cases, along with insights gained during this analysis, are presented and discussed in this paper. In particular, this paper is a synthesis of insights gained from modelling experiences in extending PRA-Nuc to new
applications.
INTRODUCTION
Risk analysis methods are used to support decisionmaking for complex systems whenever a risk assessment is needed (USNRC, 1983). These methods are
well-defined and routinely employed in the nuclear,
aeronautics and chemical industries (USNRC, 1998;
Haimes, 2004; Jaynes, 2003). This paper will describe
experiences gained and lessons learnt from four applications: (i) use of PRA-like methods combined with
decision theory and energy technology insights in
order to deal with the complex issue of SES (Security
of Energy Supply); (ii) use of PRA for new applications in the nuclear field (PRA-Nuc); (iii) use of PRA
for modelling risks in non-nuclear energy systems,
with special reference to hydrogen energy chains and
(iv) use of PRA for modelling risks in photovoltaic
energy production. An important experience gained
in this research is the discovery that the problems
encountered in the use of this approach in nuclear and
non-nuclear applications share common features and
solutions that can be grouped into a set of paradigms,
which are presented below.
CONSIDERATIONS ON EXTENDED PRA USE

The next paragraphs describe the seven problem-solution paradigms common to all four applications.
2.1 System description

Figure 2. Representation of a CAS model to be implemented in PRA-Nuc codes (e.g. Risk Spectrum).
Figures 4 and 5 illustrate the general process of generating ES by combining the scenarios defined in the
Table 1. End states: effect levels and harm effects, with parameters and codes.

Effect levels:
- Overpressure: 10 mbar (EFOV4); 500 mbar (EFOV4).
- Thermal: 3 kW/m2 (FTH1); 35 kW/m2 (EFTH3).
- Gas cloud: not dangerous (EFGC1); dangerous in size (EFGC2).

Harm effects:
- Overpressure: window break (HOVDG1); building collapse (HOVDG2); eardrum rupture (HOVINJ); fatalities 1% (HOVFAT1); fatalities 100% (HOVFAT3).
- Thermal: burns degree 1 (HTHBU1); burns degree 2 (HTHBU2); glass/window fail (HTHFIRE); fatalities 1% (HTHFAT1); fatalities 100% (HTHFAT3).
- Fire: fatalities 1% (HFIFAT1); fatalities 100% (HFIFAT3).
- Explosions: fatalities 1% (HEXFAT1); fatalities 100% (HEXFAT3).
S_inf = − Σ_i x_i ln x_i ,
of vessels/cylinders in storage cabinet unit); (ii) Leaks
(e.g. leak from the hydrogen inlet ESD); (iii) Overpressurization (e.g. overpressurization of 10 mbar in
PD Unit); (iv) Thermal challenges to installation (e.g.
thermal impact of 3 kW/m2 in PD Unit); (v) Fire,
explosions, missiles (e.g. fire generated by installation failures in PD unit); (vi) Support systems failure
(e.g. loss of central control); (vii) External event (e.g.
external Flood in UND); (viii) Security threats (e.g.
security challenge in area close to units).
A specific feature of the CAS model level 3 for security of energy supply (SES) is that the end states are SES risk-related criteria. In this case, scenarios leading to various SES categories are ranked on the basis of their risk and importance, in accordance with PRA-Nuc methodology. It is also important to note that for the SES case, a set of economic (e.g. energy price variations) and socio-political initiators (e.g. the result of a referendum on a given energy source) were also defined (Serbanescu 2008).
2.4
xi ,
(1)
(2)
In formulas (1) and (2), X stands for any risk metric of a given PRA level (e.g. probability of failure or risk) and is used for the limitations imposed on those parameters (e.g. an optimization target for the acceptable probability of a given ES, or risk values). The solution of equations (1) and (2) is represented in the form of a Lagrangean Function (LF).
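The Lagrangean Function approach mentioned above can be illustrated on a toy constrained-optimization problem; the objective and constraint below are invented placeholders for illustration, not the paper's actual risk metrics:

```python
# Hypothetical example: minimize a toy "risk metric" f(x, y) = x**2 + 2*y**2
# subject to a resource constraint g(x, y) = x + y = 1.
# Lagrangean: L(x, y, lam) = x**2 + 2*y**2 + lam * (x + y - 1)
# Stationarity conditions: 2x + lam = 0, 4y + lam = 0, x + y = 1,
# which give x = 2/3, y = 1/3, lam = -4/3.
x, y, lam = 2.0 / 3.0, 1.0 / 3.0, -4.0 / 3.0

# Verify that the stationarity conditions hold at the candidate optimum:
assert abs(2 * x + lam) < 1e-12
assert abs(4 * y + lam) < 1e-12
assert abs(x + y - 1) < 1e-12
print(x, y)
```

The multiplier lam plays the role of the constraint on the risk metric: it measures how much the optimal objective changes if the constraint limit is relaxed.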
In order to analyse possible changes to the risk metrics, a tool is needed to evaluate the resulting impacts
so that decisions can be taken on what course of action
to follow. The proposed tool to carry this out is based
on Perturbation Theory (Kato, 1995). This approach is
used for matrices built with event trees and fault trees
in PRA specialized computer codes. Therefore, in
order to evaluate the impact of challenges on the CAS models and/or modifications and/or changes related to sensitivity analyses, linearity is assumed within the perturbed limits of the CAS model. This linearity is related to the logarithmic risk metrics set of results.
For any type of CAS (nuclear, chemical etc), constraints exist in terms of risk. These constraints are
usually represented in the form of a linear dependency
1 Shannon
(3)
One of the main uses of risk analyses is to support the decision-making process in various areas. The graphical representation of commonly used decision analysis methods was introduced by Howard (1984). PRA-Nuc has been used to support the decision-making process both as guiding information (Risk Informed Decision Making, RIDM) or
O = R_G2 ( R_G1 ( R_P (P, U(P)), R_D (D, U(D)) ), R_F (F, U(F)) )    (4)
Figure 9. Sample representation of areas of applicability for decision making of deterministic and probabilistic
approaches (Serbanescu 2007b).
Table 2. Sample representation of the reasoning operators from formula (4) used in decision making statements.

Columns (cases): Case 1, optimistic trust in risk results; Case 2, pessimistic trust in risk results; Case 3, neutral trust attitude on risk results; Case 4, overoptimistic trust of risk results; Case 5, overpessimistic trust attitude in risk results.

Rows (impact functions): RP, U(P), TOTAL P; RD, U(D), TOTAL D; RF, U(F), TOTAL F; R_G1; R_G2; O (total objective function). Each cell holds a qualitative rating: L (low), M (medium) or H (high).
Solutions to paradoxes
P8 Management of the risk model leads to managerial/procedural control in order to limit the uncertainty in the real process of CAS evaluation. However, this action itself creates new systematic assumptions and errors, and overshadows the ones accumulated up to this phase.
P9 The completion of a nine-phase CAS cycle and its implementation reveal the need to restart the process with a better theory.
Second step: the beliefs identified as the main features of each of the steps are defined:
P1 It is assumed that there is a unique definition
for risk and the risk science has a unique and unitary
approach to give all the answers.
P2 It is assumed that well-established scientific facts, covering both random and deterministic data used in a scientific manner, could provide support for certitude through risk assessments.
P3 It is assumed that in the case of risk analyses
a scientific method of universal use exists to evaluate
severity of risks by judging them according to their
probability and the outcomes/damages produced.
P4 It is assumed that by using carefully chosen
experience and model results one can derive objective
results proving the validity of results for the given CAS
model.
P5 It is assumed that by using educated guesses
and experiments scientists can find and evaluate any
significant risk due to the objectivity and other specific
features of science.
P6 It is assumed that, based on the objectivity of science and the approach in risk analyses of weighing risks against benefits, the results could be used as such in the decision-making process.
P7 It is assumed that by using a scientific method, which is honest and objective, and by applying risk-reducing measures in all sectors of society (any type of CAS model), the combined use of well-proven tools of analysis (deterministic) and synthesis (probabilistic) approaches assures success in CAS modeling.
P8 It is assumed that science is more procedural
than creative (at least for this type of activity) and the
decisions themselves have to be made by trained staff
and scientists.
P9 It is assumed that science evolves based on laws which appear through the transformation of hypotheses into theories, which in turn become laws; hence, for any CAS in this case, if there is a real risk then the scientists will find it.
Last step: a set of actions to prevent generation
of paradoxes is defined:
P1 Model the diversity of objective functions for CAS metrics and use a hierarchy for their structure, looking for an optimum at each hierarchical level.
CONCLUSIONS
REFERENCES
Colli A. & Serbanescu D., 2008. PRA-Type Study Adapted to the Multi-crystalline Silicon Photovoltaic Cells Manufacture Process. ESREL 2008, in press.
Descartes R., 1637. Discours de la méthode [Discourse on Method]. Paris: Garnier-Flammarion, 1966 edition.
Haimes Y.Y., 2004. Risk Modeling, Assessment and Management, 2nd Edition. Wiley & Sons, New Jersey.
Hansson S.O., 2000. Myths on Risk. Talk at the conference "Stockholm thirty years on. Progress achieved and challenges ahead in international environmental co-operation", Swedish Ministry of the Environment, June 17-18, 2000, Royal Institute of Technology, Stockholm.
Howard R.A. & Matheson J.E. (editors), 1984. Readings on the Principles and Applications of Decision Analysis, 2 volumes. Menlo Park, CA: Strategic Decisions Group.
Jaynes E.T., 2003. Probability Theory: The Logic of Science. Cambridge University Press, Cambridge, UK.
Kato T., 1995. Perturbation Theory for Linear Operators. Springer Verlag, Germany, ISBN 3-540-58661.
McComas W., 1996. Ten Myths of Science: Reexamining What We Know. School Science & Mathematics, vol. 96, 01-01-1996, p. 10.
Peirce C.S., 1931-1958. Collected Papers of Charles Sanders Peirce, 8 vols. Edited by Charles Hartshorne, Paul Weiss and Arthur Burks. Harvard University Press, Cambridge, Massachusetts. http://www.hup.harvard.edu/catalog/PEICOA.html
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: This paper introduces the fundamental characteristics of weather derivatives and points out the relevant differences in comparison to classical insurance contracts. Above all, this paper presents the results of a survey conducted among Austrian companies which aims at investigating the objectives of weather derivative usage and at analysing concerns regarding their application. The survey was conducted via face-to-face interviews among 118 firms from different sectors facing significant weather exposure, such as energy and construction companies, beverage producers and baths. As no other survey has put a focus on weather derivatives so far, this paper aims at filling a gap in relevant information regarding weather derivative practices. The results will grant a deeper insight into the risk management practices and the needs of potential customers. This facilitates the development of target-group-specific weather risk management solutions which may enhance the usage of weather derivatives in various industries.
INTRODUCTION
THEORETICAL BACKGROUND
2.1 Weather risk and weather exposure
THE SURVEY
3.1 Research methodology and respondents' profile
Figure 1. [Pie chart of self-assessed weather exposure: very high 49%, moderate 21%, high 13%, no 12%, low 5%.]

3.3
Figure 2. [Bar chart of the share of revenue exposed to weather: 1-10 percent 26%; >50 percent 21%; 11-20 percent 14%; 21-30 percent 11%; 41-50 percent 11.5%; 31-40 percent 10%; unknown 7%.]
Figure 3. [Stacked bar chart of how regularly firms manage their weather exposure (never, sometimes, regularly), broken down by share of revenue exposed to weather.]
3.4
Figure 4. [Chart of mean, median and mode ratings of the reasons for not using weather derivatives; mean values range from 1.95 to 3.92.]

3.5
The reason "image of derivatives", which was important for weather derivative users, has a subordinate position. A potentially negative image of weather derivatives, or their promotion as bets on the weather, seems to have no major influence on the decision not to use them.
In summary, the results indicate that the majority of companies are not aware of weather derivatives as a risk management tool, as they either do not know the instrument or have never considered using it. Another important factor seems to be a general lack of knowledge regarding derivatives and their application. This includes the instrument itself but also areas such as the quantification of weather exposure. Because of this lack of knowledge, companies also seem very sceptical in appreciating the potential benefits of weather derivatives.
The findings imply that there are some predominant reasons which are mainly responsible for the decision not to use weather derivatives. Therefore, a factor analysis was conducted to identify the significant factors. Factors which strongly correlate with each other are reduced to one factor. A Kaiser-Meyer-Olkin test yields 0.84; this is above the eligible level of 0.8 and therefore qualifies the sample as suitable for factor analysis.
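The Kaiser-Meyer-Olkin statistic quoted above compares zero-order and partial correlations; a minimal NumPy sketch with synthetic illustrative data (not the survey responses) might look as follows:

```python
import numpy as np

def kmo(data: np.ndarray) -> float:
    """Kaiser-Meyer-Olkin measure of sampling adequacy.

    data: (n_samples, n_vars) matrix. Returns the overall KMO in [0, 1];
    values above ~0.8 are commonly taken to justify a factor analysis.
    """
    corr = np.corrcoef(data, rowvar=False)            # zero-order correlations
    inv = np.linalg.inv(corr)                          # precision matrix
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                                 # partial correlations
    np.fill_diagonal(corr, 0.0)
    np.fill_diagonal(partial, 0.0)
    r2 = (corr ** 2).sum()
    p2 = (partial ** 2).sum()
    return r2 / (r2 + p2)

# Synthetic data: 6 observed variables driven by 2 latent factors plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))
observed = latent @ rng.normal(size=(2, 6)) + 0.5 * rng.normal(size=(500, 6))
print(round(kmo(observed), 2))
```

The statistic is large when variables correlate through shared factors (large zero-order, small partial correlations), which is exactly the situation factor analysis presupposes.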
Table 2. [Factor loadings of the reasons for not using weather derivatives. The fifteen items (difficulty of evaluating hedge results; uncertainty about accounting treatment; uncertainty about tax and legal treatment; difficulty of pricing and valuing derivatives; concerns about perception of derivative use; difficulty of quantifying the firm's exposure; no benefits expected; instrument does not fit the firm's needs; insufficient weather exposure; company policy not to use derivatives; exposure effectively managed by other means; costs of hedging exceed the expected benefits; lack of expertise in derivatives; never considered using weather derivatives; weather derivatives unknown) load on a small number of factors, with loadings between 0.455 and 0.902.]

3.6
Expecting a low number of active weather derivative users, we also asked the respondents to indicate whether they could imagine using weather derivatives or comparable instruments in the future. The results show that roughly one-fourth (26.7%) of firms are generally willing to apply weather derivatives. Given the actual state, this proportion indicates a significant market potential for weather derivatives.
The question demonstrates that many respondents are open-minded about weather risk management with weather derivatives. Of course, it has to be investigated which firms can finally use these instruments, as basis risk and large contract sizes may have unfavourable impacts. The high cost of small weather derivative transactions could be reduced via bundling schemes in which different companies located in one region share a derivative contract to reduce costs and achieve reasonable contract sizes.
Another possibility is the integration of weather
derivatives in loans. This structure enables a company
to enter into a loan agreement with a higher interest
rate that already includes the weather derivatives premium which the bank pays to the counterparty. In case
of an adverse weather event, the company pays only a fraction, or nothing, of the usual loan due, thus receiving financial alleviation in an economically critical situation. Moreover, weather-indexed loans would be less likely to default, which is also favourable for the bank itself as it strengthens the bank's portfolio and
loans seem especially applicable for SMEs as they are
financed to a large extent by loans. This facilitates the
access for banks and offers considerable cross selling
potential. The potential use of a weather-linked credit
by companies was tested in question 24.
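The repayment mechanics described above can be sketched in a few lines; the trigger index, payment and reduction schedule below are invented for illustration, not taken from an actual product:

```python
def loan_due(base_payment: float, weather_index: float,
             trigger: float, reduction: float) -> float:
    """Weather-indexed loan: if the weather index crosses the trigger
    (an adverse weather event), the borrower's payment is reduced.
    All parameter values are hypothetical."""
    if weather_index >= trigger:
        return base_payment * (1.0 - reduction)
    return base_payment

# Normal year: the full payment is due; adverse year: only a fraction.
print(loan_due(100_000, weather_index=80, trigger=120, reduction=0.5))   # → 100000
print(loan_due(100_000, weather_index=135, trigger=120, reduction=0.5))  # → 50000.0
```

The weather derivative premium paid by the bank to the counterparty would, as the text notes, be recovered through the higher interest rate embedded in `base_payment`.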
The results show that 17% of respondents can imagine using a weather-indexed loan. In comparison to question 23, it does not seem as attractive to potential users as weather derivatives. On the one hand, this could be attributed to the more complex product structure. On the other hand, potential users could simply prefer a stand-alone product instead of buying some sort of bundling scheme. Further, the comments of respondents on questions 23 and 24 indicate that there is a general interest in weather derivatives but additional information is requested. This highlights again that instrument awareness has to be improved and the lack of knowledge reduced.
4
CONCLUSION
REFERENCES
M. Siegrist
Institute of Environmental Decisions, Consumer Behavior, ETH, Zurich, Switzerland
ABSTRACT: Nanoparticulate materials (NPM) pose many new questions on risk assessment that are not completely answered, and concerns have been raised about their potential toxicity and life cycle impacts. Voluntary industrial initiatives have often been proposed as one of the most promising ways to reduce potential negative impacts on human health and the environment from nanomaterials. We present a study whose purpose was to investigate how the NPM industry in general perceives precaution, responsibility and regulations, how companies approach risk assessment in terms of internal procedures, and how they assess their own performance. The survey shows that industry does not convey a clear opinion on responsibility and regulatory action, and that the majority of companies do not have standardized procedures for changes in production technology, input substitution, process redesign, and final product reformulation as a result of a risk assessment. A clear majority of the companies found their existing routines regarding these procedures to be sufficient.
INTRODUCTION
found to be inadequate in dealing with the novel properties of NPM (Davis 2007). Therefore there is an
ongoing discussion regarding assessing and managing
the risks derived from NPM properties, the methodological challenges involved, and the data needed
for conducting such risk assessments (EPA 2007;
Morgan 2005). However, given that NPM may cause
harm and that there are currently no regulations that
take the specific properties of NPM into account,
the responsibility for safe production and products is
mostly left with industry. Risk assessment procedures
and precautionary measures initiated by industry are
therefore vital to managing the environmental health
and safety of nanomaterials (Som et al. 2004). These
aspects have been reflected in previous cases involving
other emerging technologies as well.
The objectives of this paper were to investigate
how NPM industry in general perceives precaution,
responsibility and regulation, how they approach risk
assessment in terms of internal procedures, and how
they assess their own performance. To this end, we
first introduce the survey methodology. Secondly the
results of the survey are presented and finally we will
discuss the implications of the results.
2
METHODS
RESULTS
Regarding the industrial interpretations of the precautionary principle, a clear majority of the responders
found that all emissions should be kept As Low As
Reasonably Achievable (ALARA) and that measures
should be taken if specific criteria of potential irreversibility are fulfilled. However no majority opinion
was found regarding whether the burden of proof
should be on the proposing actor. A principal component analysis identified only one factor, which explained approximately 87.55% of the total variance; one can therefore conclude that the respondents answered all the questions in a consistent manner.
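A principal component analysis of the kind reported above boils down to an eigendecomposition of the correlation matrix; a minimal NumPy sketch with synthetic data (not the survey responses) illustrates how one dominant component signals consistent answering:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic answers to 5 questions, all driven by one underlying attitude:
attitude = rng.normal(size=(200, 1))
answers = attitude @ np.ones((1, 5)) + 0.3 * rng.normal(size=(200, 5))

corr = np.corrcoef(answers, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]   # sorted descending
explained = eigvals / eigvals.sum()                  # explained variance ratios

# A single large first ratio indicates one common factor behind the answers.
print(round(explained[0], 2))
```

With real questionnaire data, a first component explaining close to 88% of total variance, as in the survey, would correspond to `explained[0]` near 0.88.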
In the production phase of the life cycle, most of the
companies felt responsible for potential environmental
health impacts that may occur, whereas 2 thought this
responsibility should be shared with the government.
In the use phase, 24 companies opined that the responsibility should be borne mainly by industry, whereas
only 8 thought the government or the consumer should
DISCUSSION
CONCLUSIONS
REFERENCES
Ashford, N.A. & Zwetsloot, G. 2000. Encouraging inherently safer production in European firms: a report from the field. Journal of Hazardous Materials, 78: 123-144.
Davis, J.M. 2007. How to assess the risks of nanotechnology: learning from past experience. Journal of Nanoscience and Nanotechnology, 7: 402-409.
Helland, A., Kastenholz, H., Thidell, A., Arnfalk, P. & Deppert, K. 2006. Nanoparticulate materials and regulatory policy in Europe: An analysis of stakeholder perspectives. Journal of Nanoparticle Research, 8: 709-719.
Helland, A., Scheringer, M., Siegrist, M., Kastenholz, H., Wiek, A. & Scholz, R.W. 2008. Risk assessment of engineered nanomaterials: Survey of industrial approaches. Environmental Science & Technology, 42(2): 640-646.
Helland, A. & Kastenholz, H. 2007. Development of Nanotechnology in Light of Sustainability. Journal of Cleaner Production, online: doi:10.1016/j.jclepro.2007.04.006.
Morgan, K. 2005. Development of a preliminary framework for informing the risk analysis and risk management of nanoparticles. Risk Analysis, 25(6): 1621-1635.
Nel, A., Xia, T., Mädler, L. & Li, N. 2006. Toxic potential of materials at the nanolevel. Science, 311: 622-627.
Nowack, B. & Bucheli, T.D. 2007. Occurrence, behavior and effects of nanoparticles in the environment. Environmental Pollution, 250: 5-22.
Oberdörster, G., Oberdörster, E. & Oberdörster, J. 2005. Nanotoxicology: An emerging discipline evolving from studies of ultrafine particles. Environmental Health Perspectives, 113(7): 823-839.
Reijnders, L. 2006. Cleaner nanotechnology and hazard reduction of manufactured nanoparticles. Journal of Cleaner Production, 14: 124-133.
Terje Aven
University of Stavanger, Stavanger, Norway
INTRODUCTION
THE METHOD
SIL    Probability of failure on demand
4      10^-5 to 10^-4
3      10^-4 to 10^-3
2      10^-3 to 10^-2
1      10^-2 to 10^-1
D = d / (p T)    (1)

D ≤ 1,    (2)

i.e. d_m = p T.
Next we address the case of redundant safety systems, i.e. safety functions consisting of two or more
components in parallel such that one failure does
not immediately cause the safety function to fail. It
is observed that the dominant reason for failures of
redundant safety functions is Common Cause Failures
(CCF). To incorporate these failures into the method,
we use a modifying factor , the so-called -factor
(see IEC (2000) and IEC (2003)). Typical values for
range from 1% to 10%. For the present study, a
-factor of 10% was chosen and the degree of redundancy is taken into account as 0r where 0 = 10%
and r is the degree of redundancy. For example, r = 1
means that the safety function tolerates one failure,
and 0r = 0.1. It follows that the expected additional
delay time due to the deferred maintenance equals
d 0r , and hence Equation (1) can be written
D=
d
r
pT 0
(3)
A system might for example include a safety function rated at SIL 2 with redundancy of 1. The function
contains two sensors which can both detect a high
pressure. Over one year the delayed maintenance of
the sensors has amounted to 8.7 days. Using again
p = 0.005 the delay factor becomes D = 0.48. The
maintenance delay should be acceptable.
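The worked example above is straightforward to reproduce; a minimal Python sketch (the function and argument names are ours, not from the paper):

```python
def delay_factor(d_days: float, p: float, horizon_days: float = 365.0,
                 beta0: float = 0.10, r: int = 0) -> float:
    """Delay time factor D = d * beta0**r / (p * T).

    d_days: accumulated maintenance delay for the safety function,
    p: its required failure probability (from the SIL rating),
    r: degree of redundancy (r tolerated failures), beta0: CCF beta-factor.
    Values of D above 1 indicate the maximum acceptable delay is exceeded.
    """
    return d_days * beta0 ** r / (p * horizon_days)

# Worked example from the text: SIL 2 function (p = 0.005), redundancy r = 1,
# 8.7 days of delayed maintenance over one year:
print(round(delay_factor(8.7, 0.005, 365.0, r=1), 2))  # → 0.48
```

Without redundancy (r = 0) the same delay would give D well above 1, which matches the intuition that deferring maintenance on a non-redundant safety function is far more serious.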
As will be demonstrated in Section 3, observed
delay time factors Di can be readily calculated from
maintenance data for all safety functions i. Based on
these we can produce overall indices and diagrams
providing the management and the authorities with
a picture of the maintenance backlog status. These
indices and diagrams can be used as a basis for
concluding on whether the maintenance of safety functions is going reasonably well or whether safety is
eroding away. An example of an approach for visualising the status is the following: Calculate all delay
time factors Di for all safety functions i which have
failed in the time interval T and present a graph showing the associated cumulative distribution. At D = 1
a dividing line is drawn which marks the maximum
delay time. The number of safety functions to the right
of this line should be small, i.e. centre of the histogram
should be well to the left of this line.
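The visualisation just described (the cumulative distribution of the Di with a dividing line at D = 1) can be sketched as follows; the delay factors are invented illustrative values, not field data:

```python
import numpy as np

# Hypothetical delay time factors Di for failed safety functions over interval T:
di = np.array([0.1, 0.2, 0.3, 0.48, 0.6, 0.7, 0.9, 1.1, 2.4])

frac_over_limit = np.mean(di > 1.0)   # share beyond the D = 1 dividing line
d_mean = di.mean()                    # centre of gravity of the distribution

# Empirical cumulative distribution: fraction of functions with Di <= x
xs = np.sort(di)
ecdf = np.arange(1, len(xs) + 1) / len(xs)

print(round(frac_over_limit, 2), round(d_mean, 2))  # → 0.22 0.75
```

Here most of the mass lies well to the left of D = 1, so by the criterion in the text the maintenance backlog would look acceptable even though two functions exceed the limit, which is exactly the caveat raised below about relying on the mean alone.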
If several systems are to be compared it would
be interesting to calculate a single number which
summarises the information on the maintenance performance with respect to deferred maintenance of
safety systems. One could readily calculate the mean
value D of all Di* , with respect to the safety functions
which have failed in the time interval T . The mean
value calculates the centre of gravity of the distribution of the Di s, and as will be shown in Section 3,
the mean value provides a good overview over the
maintenance backlog status. Moreover, the mean value
D can easily be plotted as a function of time, which
provides an overview over the evolution of the maintenance backlog in time. The mean value, D , could
also be plotted as an accumulated function of time,
which provides an overview of total delay time factor
over a period of time. Care must, however, be taken when using D as a basis for judgments about the acceptability of the maintenance backlog. For example, requiring D ≤ 1 would allow many large delay times (far beyond the limit 1), as long as the average
Figure 1. [Cumulative distribution of the delay time factors Di.]
Figure 2. [Mean delay time factor plotted month by month (jan-des) for all SILs, SIL 1 and SIL 2.]
CONCLUSIONS
A new method for prioritisation of maintenance backlog with respect to maintenance of failed safety functions has been proposed. The method is linked to the established Safety Integrity Level regime from IEC 61508, which specifies the reliability requirements of safety functions, and is thus inherently risk based. It furnishes a variety of risk indices that give an overview of the status of the maintenance backlog of safety functions. The indices can be used to determine acceptance levels which tell the user whether the maintenance of safety functions is going reasonably well or whether safety is eroding away. The indices can also be used to monitor the maintenance performance through time, find important contributors to delayed maintenance, prioritise maintenance on a day-to-day basis, or compare the maintenance performance of several systems.
As part of future work it is planned to include preventive maintenance delays in the methodology, i.e. an analogous delay time factor originating from preventive maintenance being performed later than the planned time.
REFERENCES
ABSTRACT: Environmental Health Risks (EHRs) have traditionally been dealt with in a hierarchical and technocratic manner. Preferably based on scientific expertise, the standards set are uniform and based on legal regulations. However, this approach has encountered implementation problems and deadlocks, particularly in cases where scientific knowledge is at best incomplete and the interests of powerful stakeholders conflict. Many new approaches to manage EHRs have been implemented, which share two characteristics: an increased integration of (a) cost-benefit and other considerations, (b) the public and other stakeholders, and (c) EHR objectives in other sectoral policies; and an increased differentiation of EHR standards (partly as a consequence of the former characteristic). Still, little systematic empirical research has been conducted on the experiences with these shifts in EHR governance, in particular in the light of the shortcomings of the traditional approach to EHR governance. This paper proposes an analytical framework for analyzing, explaining and evaluating different categories of, and shifts in, EHR governance regimes. We illustrate our paper with the trends in EHR governance described above.
INTRODUCTION
2
2.1
2.2
EHR governance regimes can be defined as the complex of institutional geography, rules, practice, and
animating ideas that are associated with the regulation
of a particular risk or hazard (Hood et al., 2004: 9).
Only limited research has been conducted on the classification of risk regimes (Hood et al., 2004). Below
we discuss a few such attempts.
In comparing the ways in which risks were regulated
in different countries, ORiordan (1985) identified
four (partly overlapping) styles of risk regulation that
can be considered as four risk governance regimes.
These regimes primarily differ in the way in which
decision-making is organized (top-down, interactive
etc.) and the extent to which consensus among stakeholders is possible.
Hood et al. (2004) consider risk governance regimes
as systems consisting of interacting or at least related
parts and identify nine distinct configurations or risk
governance regimes, which they identify by means of
two dimensions:
Basic control system components: ways of gathering information; ways of setting standards; and
ways of changing behavior in order to meet the
standards (i.e. policy instruments);
The instrumental and institutional elements of
regulatory regimes: the regulatory regime context
(different types of risks at hand, the nature of public
preferences and attitudes over risk and the ways in
which stakeholders are organized) and the regime
content (the policy setting, the configuration of
state and other organizations directly engaged in risk
regulation and their attitudes, belief and operating
conventions).
The IRGC takes a more normative approach. It
advocates a framework for risk governance that aims
at, among other things, enhanced cost-effectiveness,
equal distribution of risks and benefits, and consistency in risk assessment and management of similar
risks (Bunting, 2008; Renn, 2006). The IRGC framework takes into account the following elements:
The structure and function of various actor groups in
initiating, influencing, criticizing or implementing
risk decisions and policies;
Risk perceptions of individuals and groups;
Individual, social and cultural concerns associated
with the consequences of risk;
2.3
2.4
What explains the presence of particular EHR governance regimes and how can we understand the
contribution of these regimes to the reduction of
EHRs? Hisschemöller and Hoppe (2001) and Hoppe (2002) adopted a useful theoretical model that links governance regimes to two specific characteristics of the policy problem at issue: certainty of the knowledge basis and the extent to which norms and values converge (see Figure 1). In contrast to the authors discussed in the preceding Section, Hisschemöller and Hoppe (2001) and Hoppe (2002) explicitly consider
these characteristics as independent variables relevant
to the appropriateness and performance of governance
regimes. Interactions and negotiations with, and input
from, stakeholders are assumed to be necessary when
stakes of the various actors involved are high, norms
and values diverge, and when there is high uncertainty about causes of the policy problem or impacts of
alternative policy programs i.e. when unstructured
policy problems are at issue. This unstructured problem category is similar to the post normal science type
of risk assessment proposed by Funtowicz and Ravetz
(1993). In these situations stakeholder involvement is
required in all stages of policy-making, including analysis, both in order to get access to relevant information
and to create support for policy-making. Examples
of this mode of problem solving are discussed in
Kloprogge and Van der Sluijs (2006) for the risks of
climate change. Structured policy problems, on the contrary, can be solved in a more hierarchical way. Here,
policy can be left to public policy-makers; involvement
of stakeholders is not needed for analysis or successful problem solving. In this case policy-making has
a technical character and is often heavily based on
scientific knowledge. In the case of moderately structured 'means' problems, stakeholder involvement is not required for recognition of the problem at issue,
Figure 1. [Typology of policy problems: structured problems; moderately structured 'means' problems; moderately structured 'goals' problems (e.g. abortion); unstructured problems (e.g. car mobility).]
Regarding the role of stakeholders: usually government agencies set standards in hierarchical ways.
(Formal) public participation is limited or absent.
Informal influences by stakeholder groups however
do exist (see below);
Regarding the role of science: scientists are the
logical providers of knowledge on the nature and
severity of EHRs and levels at which health risks
are acceptable;
Regarding principles underlying EHR standards:
standards are based primarily on estimated health
impacts, preferably based on scientific risk assessments. Cost-benefit and other considerations usually do not play a major role in an early stage of
EHR governance, i.e. in the definition of standards;
Regarding standards: Environmental health risks are dealt with on an individual basis. Where possible, quantified standards are formulated that are generic, i.e. they apply to all of the situations specified and do not discriminate between groups of people. This results in detailed standards, which is one of the reasons why this approach is often considered technocratic.
Compensation of health risks geographically or
between groups of people (either concerning one
specific risk type or between different risk types)
usually is not allowed;
Regarding policy instruments: a linear approach to
EHR governance is taken: implementation and the
selection of instruments (of which legislation and
licenses are typical) follows after risk assessment
and standard-setting and does not play a large role
in earlier stages of EHR policy.
Today this approach, which can be called 'specialized' due to its focus on individual EHRs, can still be observed worldwide. In Europe, for instance, this
approach is visible in the EU Directives regulating
concentrations of particulate matter and other EHRs.
3.1.2 Explaining specialized EHR regimes
The technocratic and hierarchical character of EHR
regimes reflects wider ideas on the role of governments in society, which were dominant in Western,
liberal societies until some two decades ago. (Central) governments were considered to have a strong and
leading role in addressing social problems and there
was much faith in the contribution that science could
have in enhancing the effectiveness and efficiency of
government policy (Fischer, 1997; Van de Riet, 2003).
This has become institutionalized in very specialized
(and growing) bureaucracies and an important role of
(institutionalized) science-based knowledge providers
(policy analysis).
3.1.3 EHR governance outcomes and explanations
The specialized approach has resulted in the reduction of various health risks. For instance, in the
CONCLUSIONS
Given the limited attention being paid to EHR governance regimes in risk research, our aim was to develop
an analytical framework for characterizing, explaining
and evaluating such regimes. Based on a review of relevant literature we developed a framework, which we
illustrated by means of some macro trends we observed
in some Western countries.
The framework seems to be useful for guiding
research into the above area, as it allows for a systematic examination of relevant elements and possible
relationships. In the analysis of recent shifts in EHR
governance that we discussed as an illustration of our
framework, not all elements were elaborated in much
detail. Cultural influences on governance regimes,
for instance, may be identified more explicitly in an
international comparative study.
We suggest that further research be conducted in
order to classify EHR governance regimes with the aid
of our framework. What distinct configurations can be
found in practice, next to the (perhaps artificially constructed) specialized approach we discussed? And
what relations exist between the various elements? The
shifts towards integration and differentiation identified in the quick-scan survey conducted by Soer et al.
(2008) may act as a starting point. Three remarks
should be made here. One, in this study only a
(probably non-representative) sample of countries was
included. Two, the shifts identified were not observed
with the same intensity in each of the sample countries.
Three, the survey focused on the main characteristics of national EHR governance regimes; no in-depth
research was conducted into specific EHRs. However,
many of the trends that we observed are similar to
those reported by, for instance, Amendola (2001), De
Marchi (2003), Heriard-Dubreuil (2001) and Rothstein
et al. (2006).
REFERENCES
Alcock, R.E. & Busby, J. 2006. Risk migration and scientific advance: the case of flame-retardant compounds. Risk Analysis 26(2): 369–381.
Amendola, A. 2001. Recent paradigms for risk informed decision making. Safety Science 40(1–4): 17–30.
Bunting, C. 2008. An introduction to the IRGC's risk governance framework. Paper presented at the Second RISKBASE General Assembly and Second Thematic Workshop WP-1b. May 15–17. Budapest: Hungary.
De Marchi, B. 2003. Public participation and risk governance. Science and Public Policy 30(3): 171–176.
Douglas, M. & Wildavsky, A. 1982. How can we know the risks we face? Why risk selection is a social process. Risk Analysis 2(2): 49–51.
EEA. 2001. Late lessons from early warnings: the precautionary principle 1896–2000. Copenhagen: European Environmental Agency.
Kersbergen, K. van & Waarden, F. van. 2001. Shifts in governance: problems of legitimacy and accountability. The Hague: Social Science Research Council.
Kingdon, J.W. 1995. Agendas, alternatives, and public policies. New York: Harper Collins College.
Klinke, A. & Renn, O. 2002. A new approach to risk evaluation and management: risk-based, precaution-based, and discourse-based strategies. Risk Analysis 22(6): 1071–1094.
Kloprogge, P. & Sluijs, J.P. van der. 2006. The inclusion of stakeholder knowledge and perspectives in integrated assessment of climate change. Climatic Change 75(3): 359–389.
Neumann, P. & Politser, R. 1992. Risk and optimality. In F. Yates (ed). Risk-taking Behaviour: 27–47. Chichester: Wiley.
Open University. 1998. Risico's: besluitvorming over veiligheid en milieu (Risks: decision-making on safety and the environment; in Dutch; course book). Heerlen: Open University.
O'Riordan, T. 1985. Approaches to regulation. In: H. Otway & M. Peltu. Regulating industrial risks. Science, hazards and public protection: 20–39. London: Butterworths.
Rayner, S. & Cantor, R. 1987. How fair is safe enough? The cultural approach to societal technology choice. Risk Analysis 7(1): 3–9.
Renn, O. 2006. Risk governance. Towards an integrative approach. Geneva: International Risk Governance Council.
Riet, O. van de. 2003. Policy analysis in multi-actor policy settings. Navigating between negotiated nonsense and superfluous knowledge. Ph.D. thesis. Delft: Eburon Publishers.
Robinson, P. & MacDonell, M. 2006. Priorities for mixtures health effects research. Environmental Toxicology and Pharmacology 18(3): 201–213.
Rothstein, H., Irving, Ph., Walden, T. & Yearsley, R. 2006. The risks of risk-based regulation: insights from the environmental policy domain. Environment International 32(8): 1056–1065.
Runhaar, H.A.C., Driessen, P.P.J. & Soer, L. 2009. Sustainable urban development and the challenge of policy integration. An assessment of planning tools for integrating spatial and environmental planning in the Netherlands. Environment and Planning B 36(2) (forthcoming).
Sluijs, J.P. van der. 2007. Uncertainty and precaution in environmental management: insights from the UPEM conference. Environmental Modelling and Software 22(5): 590–598.
Soer, L., Bree, L. van, Driessen, P.P.J. & Runhaar, H. 2008. Towards integration and differentiation in environmental health-risk policy approaches: An international quick-scan of various national approaches to environmental health risk. Utrecht/Bilthoven: Copernicus Institute for Sustainable Development and Innovation/Netherlands Environmental Assessment Agency (forthcoming).
Stoker, G. 1998. Governance as theory: five propositions. International Social Science Journal 50(155): 17–28.
Sunstein, C.R. 2002. Risk and reason. Safety, law, and the environment. New York: Cambridge University Press.
U.K. H.M. Treasury. 2005. Managing risk to the public: appraisal guidance. London: H.M. Treasury.
U.K. House of Lords. 2006. Economic Affairs, Fifth Report. London: House of Lords.
UNESCO COMEST. 2005. The precautionary principle. Paris: UNESCO.
U.S. EPA. 2004. Risk assessment principles and practices. Washington: United States Environmental Protection Agency.
VROM. 2004. Nuchter omgaan met risico's. Beslissen met gevoel voor onzekerheden (Dealing sensibly with risks; in Dutch). The Hague: Department of Housing, Spatial Planning, and the Environment.
Wheeler, S.M. & Beatley, T. (eds.) 2004. The sustainable urban development reader. London/New York: Routledge.
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: The term safety margin has become a keyword when discussing the safety of nuclear
plants, but there is still much confusion about the use of the term. In this paper the traditional concept
of safety margin in nuclear engineering is described, and the need to extend the concept to
out-of-design scenarios is expressed. A probabilistic definition of safety margin (PSM) is adopted for scalar safety outputs at
a scenario-specific level. The PSM is easy to generalize (to initiating events, multiple safety outputs, analytical
margins) and, combined with frequencies of initiators and accidents, makes up the plant risk. Both deterministic
and probabilistic approaches to safety assessment find easy explanation in terms of PSM. The role of the
probabilistic margins in the safety assessment of plant modifications is discussed.
INTRODUCTION
The introduction of safety margins in traditional engineering is a protection design technique aimed at providing some additional protection capability beyond
what is considered strictly necessary. The benefit of using safety margins is two-fold. On one hand,
they make it possible to accommodate tolerances for little-known
phenomena, uncertainties in model data, variabilities
in initial or boundary conditions, and so on. On the
other, they result in a significant simplification of
the design methods, as they allow the design to be split
into several decoupled stages where the applicable criteria are not too closely linked to the details of the
phenomenology considered in the design analyses.
This approach, essentially deterministic, is considered conservative in most cases and, therefore,
provides confidence that the protection is able to cope
with or to mitigate challenging situations, including
some that were not considered in the design analyses.
In addition, it is convenient for developing safety regulations. This idea was also applied from the beginning
in the nuclear industry and, in particular, in the analysis
of Design Basis Transients and Accidents (DBT&A,
from now on referred to as DBT) where the capabilities
of the protection are assessed. A set of well defined,
enveloping scenarios, classified into a few frequency
classes, are taken as design basis for the protection
and a set of safety variables are used as damage indicators or as indicators of challenges to the protection
barriers. For this limited set of design basis scenarios
Figure 1.
it is possible to define class-specific acceptance criteria in terms of extreme allowed values of the safety
variables, also called safety limits.
In this context, the concept of safety margin is
applied on a scenario-specific basis and its meaning
can be agreed without much difficulty. However, even
at single scenario level, a great variety of margins
appear and all of them can be properly called safety
margins. Figure 1 tries to represent these margins and
how they relate to each other.
In this figure, the two left-most columns represent
the barrier analysis and the two on the right side represent the analysis of radiological consequences. In the
first column, a particular safety variable in a particular
DBT is represented. Since the DBT is an enveloping
scenario, the extreme value of the safety variable in the
enveloped transients will stay below the value of the
same variable in the DBT which should, indeed, stay
below the acceptance criterion or safety limit. There
will be as many left-most columns as the number of
safety variables times the number of DBT. This is indicated in Figure 1 by the dashed ellipse entitled 'Other
S.V. and Acc. Crit.'. In every one of these columns
there will be an Analytical Margin and a Licensing
Margin.
Each safety variable and its corresponding safety
limit are selected to prevent a particular failure mode of
a protection barrier. However, the safety limit is not a
sharp boundary between safety and failure. Exceeding
the safety limit means that there are non-negligible
chances of a given failure mode but, in most cases,
there is a margin (the Barrier Margin in Figure 1)
between the safety limit and the actual failure. A given
According to our previous discussion, the safety margin should be defined as a measure of the distance
from a calculated safety output to some limit imposed
on it. In general, both elements are calculated variables, which are uncertain magnitudes because they
inherit the uncertainty of the inputs to the calculations
and, moreover, incorporate the uncertainty of the
predictive models being used. We will adopt in
this paper the classical representation of uncertain
magnitudes as random variables.
Let us consider a transient or accident A in an industrial or technological facility, and a scalar safety output
V calculated for A. V will be, in general, an extreme
value (maximum or minimum) of a safety variable in
a certain spatial domain during the transient. We will
symbolize by V the random variable representing such
safety output as calculated with a realistic model M.
Let us also suppose an upper safety limit L for V, and
assume that L is time-independent. In this case, V will
be the maximum value of a safety variable. This simple setting will be considered throughout the paper;
the treatment for other settings (e.g. when L is a lower
limit and V is a minimum during the transient) is
completely analogous.
In general L can be considered a random variable
as well, because it can have some uncertainty. For
instance, L can be a damage threshold of some
safety barrier, obtained directly from experimental
measurements or from calculations with a model. The
variable

D ≡ L - V    (1)
defines the distance from the safety output to its limit, and the probabilistic safety margin is the probability that this distance is positive:

PSM(V; A) ≡ PR{D > 0 / A} = PR{V < L / A}    (2)

For an initiating event IE giving rise to accident sequences Ai, i = 1, . . ., S, the safety output is the mixture

V = Vi with probability pi ≡ PR(Ai / IE)    (4)

Denoting by fV and FV the density and distribution function of V, and by fL the density of L, the margin conditioned to A is

PR{V < L / A} = ∫ fL(s) [∫_{z<s} fV(z; A) dz] ds    (7)

= ∫ fL(s) FV(s; A) ds    (8)
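The scenario-level margin PR{V < L / A} lends itself to a small Monte Carlo sketch. The distributions, numbers and function name below are purely illustrative assumptions, not values from the paper: they only show how sampling both the uncertain safety output V and the uncertain limit L yields an estimate of the margin.

```python
import numpy as np

def psm_monte_carlo(sample_v, sample_l, n=200_000, seed=0):
    """Estimate PSM(V; A) = PR{V < L / A} by sampling the two
    uncertain magnitudes: safety output V and safety limit L."""
    rng = np.random.default_rng(seed)
    v = sample_v(rng, n)   # calculated safety output for accident A
    l = sample_l(rng, n)   # uncertain safety limit
    return float(np.mean(v < l))

# Hypothetical example: a peak safety variable against a damage threshold.
psm = psm_monte_carlo(
    lambda rng, n: rng.normal(1100.0, 60.0, n),  # V: mean 1100, sd 60
    lambda rng, n: rng.normal(1350.0, 40.0, n),  # L: mean 1350, sd 40
)
print(round(psm, 4))
```

For independent normal V and L the estimate can be checked against the exact value, the standard normal CDF evaluated at the standardized mean distance.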
PSM(V; IE) = Σ_{j=1..S} pj PSM(V; Aj)    (11)

PSM(B; A) = Π_{k=1..F} PSM(Vk; A)    (13)
This is another example of how probabilistic margins can merge into another margin.
Now let us focus again on the scalar safety output V
and consider all the initiating events IEi , i = 1, . . . M
that can start accidents challenging V. The frequency
of V exceeding the limit L is:
Λ(V > L) = Σ_{i=1..M} λi (1 - PSM(V; IEi))    (14)
where λi is the frequency of IEi. In (14) the frequencies of initiators combine with the exceedance
probabilities:

1 - PSM(V; IEi) = Σ_{j=1..S} pj (1 - PSM(V; Aj))    (15)
That is, the margin for the initiator is a weighted
average of the margins for the sequences, the weights being
the conditional probabilities. This is an example of how
probabilistic margins combine. The same expression
holds for the exceedance probabilities.
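The sequence-to-initiator combination (11) and the exceedance frequency (14) can be read off directly in a few lines. The probabilities, margins and initiator frequency below are invented for illustration only:

```python
# Hypothetical sequence probabilities PR(A_j / IE) and sequence margins PSM(V; A_j).
p = [0.7, 0.2, 0.1]
psm_seq = [0.999, 0.95, 0.60]

# (11): the initiator margin is the probability-weighted average of sequence margins.
psm_ie = sum(pj * mj for pj, mj in zip(p, psm_seq))

# (14): with an initiator frequency (per year), the exceedance frequency follows.
freq_ie = 1.0e-2
exceed_freq = freq_ie * (1.0 - psm_ie)
print(psm_ie, exceed_freq)
```

Note how a single poorly protected sequence (margin 0.60) dominates the complement 1 - PSM even when its conditional probability is small.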
Now, let us suppose a safety barrier B having several
failure modes, the i-th failure mode being typified by
a safety output Vi with an upper safety limit Li , i =
1, . . ., F. A safety margin can be assigned to the barrier,
conditioned to the accident A:
PSM(B; A) ≡ PR{ ∩_{k=1..F} (Vk < Lk) / A }    (12)
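If the F safety outputs were statistically independent given A (an extra assumption, which the joint-probability definition of the barrier margin itself does not require), the barrier margin would factorize into the per-mode margins. A toy illustration with invented numbers:

```python
import math

# Hypothetical per-failure-mode margins PSM(V_k; A) for a barrier with F = 3 modes.
psm_modes = [0.999, 0.98, 0.995]

# Under independence, PR{ all V_k < L_k / A } is the product of per-mode margins.
psm_barrier = math.prod(psm_modes)
print(round(psm_barrier, 6))
```

The product is necessarily smaller than each factor, so adding failure modes can only erode the barrier margin.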
CALCULATION OF PROBABILISTIC
SAFETY MARGINS
6.1 Conservative methodologies

With Vb the bounding value of the safety output obtained from the conservative calculation,

PR{L > Vb} = ∫_{Vb} fL(s) ds    (16)

PR{V < L/IE} = ∫ fL(s) FV(s; IE) ds > FV(Vb; IE) ∫_{Vb} fL(s) ds    (17)
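A simple bound of the kind used in conservative methodologies can be checked numerically: if V and L are independent, then PR{V < L} >= PR{V <= Vb} * PR{L > Vb} for any bounding value Vb, since V <= Vb < L implies V < L. The normal distributions and the value of Vb below are illustrative assumptions only:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative distributions: V ~ N(1100, 60), L ~ N(1350, 40); bounding value Vb.
mu_v, sd_v = 1100.0, 60.0
mu_l, sd_l = 1350.0, 40.0
vb = 1250.0

# Conservative lower bound on the margin from the single bounding value Vb.
bound = phi((vb - mu_v) / sd_v) * (1.0 - phi((vb - mu_l) / sd_l))
# Exact PR{V < L} for independent normals, for comparison.
exact = phi((mu_l - mu_v) / math.hypot(sd_v, sd_l))
print(round(bound, 4), round(exact, 4))
```

The bound sits below the exact margin, which is the price paid for replacing the full distribution of V by one bounding number.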
Figure 3.
Λ(V > L) < Σ_{k=1..M} λk (1 - PR{Vbk < L})    (20)

where Vbk is the value of V calculated for the k-th
design basis transient. (20) trivially transforms into

Λ(V > L) < Σ_{k=1..M} λk - Σ_{k=1..M} λk PR{Vbk < L}    (21)
6.2 Realistic methodologies
CONCLUSIONS
REFERENCES
ANS (American Nuclear Society) 1983. Nuclear Safety Criteria for the Design of Stationary Pressurized Water Reactor Plants. ANSI/ANS-51.1-1983.
Atwood, C.L. et al. 2003. Handbook of Parameter Estimation for Probabilistic Risk Assessment. Sandia National Laboratories / U.S. Nuclear Regulatory Commission. NUREG/CR-6823.
Boyack, B. et al. 1989. Quantifying Reactor Safety Margins. Prepared for U.S. Nuclear Regulatory Commission. NUREG/CR-5249.
IAEA. 2001. Safety Assessment and Verification for Nuclear Power Plants. Safety Guide. Safety Standard Series No. NS-G-1.2, 2001.
Izquierdo, J.M. & Cañamón, I. 2008. TSD, a SCAIS suitable variant of the SDTDP. Presented to ESREL 2008.
Martorell, S. et al. 2005. Estimating safety margins considering probabilistic and thermal-hydraulic uncertainties. IAEA Technical Meeting on the Use of Best Estimate Approach in Licensing with Evaluation of Uncertainties. Pisa (Italy), September 12–16, 2005.
Mendizábal, R. et al. 2007. Calculating safety margins for PSA sequences. IAEA Topical Meeting on Advanced
M. Strömgren
Swedish Rescue Services Agency, Karlstad, Sweden
Division of Public Health Sciences, Karlstad University, Karlstad, Sweden
ABSTRACT: This study gives an overview of approaches for risk management used within the Swedish
Rescue Services Agency (SRSA). The authority's commission covers a broad spectrum of safety issues. Group
interviews were performed within different sectors of the organisation. The results show that several perspectives
on accidents and different understandings of safety terms exist within the SRSA. How the organisation uses risk
analyses and carries out risk evaluations differs among the sectors. The safety work includes various types
of accidents, injuries and incidents. The SRSA also uses a variety of strategies for safety based on tradition,
legislation and political direction. In such an extensive safety authority, it is not unproblematic to coordinate,
govern and regulate safety issues. Different safety paradigms and risk framings have created problems. But these
differences can also give opportunities to form new progressive strategies and methods for safety management.
INTRODUCTION
METHOD
Table 1. The table shows which types of event the different sectors work with. Note that sector 12 (Supervision) has been
excluded in this table. The types of events are: fall, storm, explosion, fire, exposure to heat, exposure to cold, drowning,
suffocation, landslide, flooding, collision/crash, trapped in, lightning, and cuts.
certain regulation. Fulfilling requirements from insurance companies or from the organisation itself has
also been stated. Another common category of purpose is that the risk analyses shall constitute some
kind of basis for decisions for different stakeholders
and public authorities. Yet another common category
is the purpose of verifying or showing something. Examples
of this are to verify that the regulation is fulfilled, that the
safety level is adequate or that the risk level is acceptable. Other examples are to show that it is safe, that more
actions do not have to be taken, that the risk is assessed in
a proper manner or that a certain capacity or skill is
obtained. A less common category of purpose is to display a risk overview or a comprehensive risk picture.
An outline for a generic structuring of the different purposes of a risk analysis based on the received answers
could hence be:
1. Formal requirements
a. Legal based
b. Non-legal based
2. Basis for decision-making
a. Public decisions
b. Non-public decisions
3. Verification
a. Design of detail, part of a system or a system
b. Risk level
4. Risk overview/comprehensive risk picture
3.4
thought that adequate safety was not primarily a question of achieving a certain safety level or safety goal;
instead the focus should be on the characteristics of
the safety work.
The principles and directions used for evaluation of
risk in the authority's operation were surveyed. Fifteen
types of principles and directions were referred to by the
sectors as starting-points when doing risk evaluation.
Some examples are: risk comparison, economic considerations, zero risk target (vision zero), experience
and tradition (practice), lowest acceptable or tolerable risk level, national goals, the principle of avoiding
catastrophes, the principle that a third party should not be affected by
the accidents of others, and balanced consideration
and compromises between competing interests. Some
of the sectors did not have a clear picture of how the
principles were utilized in real cases or situations. In
a couple of cases the sectors did not even know which
principles governed the evaluation of risk within
their domain.
Based on some of the results presented above, an
outline for classification of risk criteria was made. The
intention of the outline is to display some of the different aspects that the criteria, and hence also
the evaluations, focus on. The outline has five main classes
of criteria: 1) Safety actions and system design, 2)
Rights-based criteria, 3) Utility-based criteria, 4)
Comparisons, 5) Comprehensive assessments. More
about the outline for classification of risk, and other
results on how the SRSA conceives of and performs evaluation,
is found in Harrami et al. (in press).
4 DISCUSSION
CONCLUSIONS
FUTURE WORK
Evaluation of risk and safety issues at the Swedish Rescue Services Agency
O. Harrami
Swedish Rescue Services Agency, Karlstad, Sweden
Department of Fire Safety Engineering and Systems Safety, Lund University, Lund, Sweden
U. Postgård
Swedish Rescue Services Agency, Karlstad, Sweden
M. Strömgren
Swedish Rescue Services Agency, Karlstad, Sweden
Division of Public Health Sciences, Karlstad University, Karlstad, Sweden
ABSTRACT: This study investigates how evaluations of risk and safety are conceived and managed in
different parts of the Swedish Rescue Services Agency (SRSA). Group interviews were performed within
twelve different sectors of the organisation. The results show that some of the representatives do not consider
themselves to be doing evaluations of risk, even though they take daily decisions and standpoints that incorporate
evaluation of risk. In most sectors profound reflection and discussion about these issues had only been carried
out to a very limited extent. The different sectors had great difficulties expressing or describing how to assess
adequate, sufficient, acceptable or tolerable safety. There is a need for the SRSA to develop a more substantiated
foundation for evaluation of risk and safety issues in order to achieve better internal and external understanding of its
decisions, a more transparent process, easier and clearer communication of decisions and standpoints, and better
support to decision-makers.
INTRODUCTION
METHOD
3.1 Dangerous goods
The objective of the regulatory work for the transport of dangerous goods is to ensure safe transport.
The regulatory work is done within the framework of
the UN Economic & Social Council (ECOSOC) and
is based on negotiations between the member states.
39 countries are involved in the work with the rules
for road transport, and 42 countries work with the
rules for rail transport. There is no set safety target
level, but a general principle used in the assessment is that the more dangerous the substance, the
higher the required safety level. The rules are a
balance between enabling transportation and ensuring
safety. This means that the diverse conditions in the
member states (weather, density of population, road
quality, economy etc.) may play an important role in
the negotiations. Since the regulatory work is based
on negotiation the political aspects are dominant and
this may also be reflected in the outcome. Cost-effect
analyses are utilized to some extent. The rules mainly
address the design of the technical system and to some
extent organisational issues. It is difficult to assess if
adequate safety is achieved.
3.3
Flammables
Explosives
should be protected from the surroundings. The surroundings should be protected from the explosives. It
is very difficult for the sector to assess if the desired
safety level is achieved. The absence of accidents is
interpreted as an indication that the safety level is probably good. The applied criteria are absolute, but some
consideration of the cost of actions may be taken.
3.5 Natural disasters
This sector is both comprehensive and situation-dependent, and focuses on two aspects of risk evaluation: general planning of the fire and rescue work in
the municipalities (task, size, location, equipment etc.)
and safety for the personnel during a rescue operation.
The evaluations done in connection with the general
planning vary between municipalities and are
based on different information and statistics. Statistics
on incidents, injuries and accidents are common information sources. During the last years the SRSA has been
developing and promoting other methods, which some
municipalities have adopted, e.g. cost-effect methods
as well as different measures, indicators and key-ratios. Evaluations done during a rescue operation are
to a certain degree directed by regulation, e.g. prerequisites for conducting a rescue operation and directions
for certain activities (e.g. fire fighting with breathing apparatus and diving). The rescue commander has
to make fast and sometimes difficult assessments and
evaluations based on limited information.
3.9
The work of this sector is largely done in the municipalities, and is based on political decisions that give
directions for the safety work. The priority given to
safety issues differs between municipalities. A common guiding principle is that saving life is prioritised
compared to saving property, and saving property is
3.12 Supervision
Supervision is made within the framework of four pieces of legislation, and is based on general information about
the operation, risk analyses and other documents. It
is not possible to process and consider everything in
an operation; strategic choices are made on what
to study. With improved experience the supervisors
learn how to obtain a good picture of the status of
different operations and how to assess the safety
level. Supervision is made up of two main parts: the
administrative organisation and the hands-on operation. The main starting-point for the assessments is
the intentions of the legislation. Hence the interpretation of the legislation becomes crucial for the safety
assessment.
4.2
– it can be shown that a product or an activity/operation does not have a higher risk level than other similar products or activities/operations
– all experiences acquired by the involved parties are used as far as possible
– it has been shown that the quantitative safety/risk level is lower than established criteria
– it has been shown that certain required equipment or functions exist
– a process involving certain parties and including certain stages has preceded a safety decision
– a decision becomes a legal case, goes to trial and becomes a precedent
– a decision on safety is determined in relation to other factors (adequate safety is relative)
– all the parties are content and nobody complains about the safety level or actions taken.
Some of the sectors answered that it is not possible
to know or determine if adequate safety is achieved.
One sector thought that adequate safety was not primarily a question of achieving a certain safety level or
safety goal. Instead the focus should be on the characteristics of the safety work. An operation or activity
that works systematically and strives for continuous
safety improvements should be considered to have
adequate safety.
The results show that the sectors have interpreted
the question in several different ways. The sectors
have either answered how requirements in regulation
are satisfied (legal view) or how adequate safety is
assessed from a scientific or from a philosophical
point of view. Also the answers given on this question
could be summarised in three types of main discussions (discourses), of which types 1 and 2 were
the most common (Fig. 1).
Type 1: Quantitative risk measures (outcomes and
expectations) were discussed and contrasted with each
other. In most cases the discussion also included some
questions or criticisms of these risk measures (by some
referred to as 'the reality'). Type 2: Different safety
actions, necessary functions and ways of managing
safety were discussed and contrasted. In most cases
the discussion also included issues on how to assess
the reasonableness of required safety actions. Type 3:
In a couple of cases the two earlier types of discussions
(type 1 and type 2) were contrasted with each other.
4.3
Figure 1. The three main types of discussions: type 1 concerns outcomes and expectation values ('the reality'), type 2 concerns actions, functions and way of work ('reasonableness'), and type 3 contrasts the two.
4.5
CONCLUSIONS
FUTURE STUDIES
E. Albrechtsen
SINTEF Technology and Society/Norwegian University of Science and Technology, Trondheim, Norway
ABSTRACT: The paper compares how information security in critical infrastructures is regulated in two different sectors and how the regulations can influence organizational awareness. It also compares how organizational
information security measures are applied in the sectors, and discusses how the sectors can learn from each
other. The findings document considerable differences in legal framework and supervision practices, in the use
of organizational information security measures and in top management engagement. Enterprises belonging
to the finance sector make more widespread use of organizational security measures, and their respondents are
also more satisfied with the management engagement and the organization's performance against the legal
requirements. The paper argues that information security audits by authorities can be one important contribution
to information security awareness and top management commitment to security, and that the sectors can learn
from each other by sharing information on how they deal with information security.
INTRODUCTION
Information security has developed from a strict technological discipline to become a multidisciplinary
responsibility for top management (Lobree 2002,
Sundt 2006). Williams (2007) claims that even the
board needs to be assured of effective risk management and the sharing of responsibility for information security by a number of individuals within the
organization.
Information security law places responsibility for
information security on the management and the
boards. Dealing with authorities is a typical management task. Thus, supervision from authorities may be
one way to raise top management's awareness. We
expect that an engaged top management is important
for how the organization adopts information security measures and complies with the law. Studies of
safety have documented that management involvement is important for the safety work within companies
(Simonds 1973, Simonds & Shafai-Sharai 1977). It
is reasonable to believe that this experience could be
transferred to the information security field.
3 METHOD
COMPARATIVE ANALYSES

5.1 The answers
Of the 87 received answers, 21 were from hydro electric enterprises and 13 from savings banks. Table 1
identifies the distribution of the answers and the
respondents of the questionnaire.
The data shows that within the finance sector more
managers answered the questionnaire, compared with
the hydro electric power industry in which responses
from IT personnel dominated.
About half of the respondents report that they outsource some or all IT operations. This corresponds with
the findings in other surveys (Hagen 2007).
5.2
Table 1. Respondents (N = 34).

              Electric power   Finance
Manager             1             4
IT                 17             1
Economy             0             2
Security            2             3
Advisor             1             2
Total counts       21            12
                                          Electric power   Finance   Sig.
Engaged top management                         3.33          4.62    0.001
Info.sec. is frequently on the agenda          2.29          3.08    0.057
Legal requirements are satisfied               4.14          4.69    0.035
Supervision increases the top
  management's engagement                      3.95          4.25    0.399
Looking at the share of enterprises experiencing security incidents (Table 5), we see that a larger number
of electric power supply enterprises report incidents
typically caused by insiders, compared with financial enterprises. A subsequent hypothesis may be that
there is a relationship between high organizational
security awareness and a low probability of security
breaches by an organization's own employees. The data set is unfortunately too small to conduct a chi-square test of this
hypothesis.
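For tables this small, an exact test is the usual alternative to a chi-square test. A sketch of a two-sided Fisher exact test, with invented 2x2 counts since the per-cell counts are not reproduced here:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables (with the same margins)
    that are no more likely than the observed one."""
    n, row1, col1 = a + b + c + d, a + b, a + c
    def p_table(x):  # probability of top-left cell = x, margins fixed
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

# Hypothetical counts: enterprises reporting / not reporting insider incidents,
# electric power (top row) versus finance (bottom row).
p_value = fisher_exact_two_sided(8, 13, 2, 11)
print(round(p_value, 3))
```

The exact test needs no minimum expected cell count, which is what makes the chi-square test inapplicable here.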
5.3 How could the sectors learn from each other?
Within the Norwegian electricity industry, the focus
placed upon information security has increased during the last years, both through national information
security strategy initiatives and research on protection
of critical infrastructure (Nystuen & Hagen, 2003).
The traditional security focus in the sector has been
on national security, physical security and emergency preparedness
in case of natural disasters and war. Information security has become important in particular during the last
10–15 years, and is now a critical input in process
operation and trade.
In the finance sector, however, information security has been close to core business ever since money
became electronic signals. If the IT systems are
down or the IT services not secure, this would not
Table 3. Percentage who have applied different organizational security measures in the electric power sector (N = 21) and the financial sector (N = 13). The measures include risk analysis, internal audit and external audit; the applied shares range from 38% to 80% in the electric power sector and from 54% to 100% in the finance sector.
Table 5. Share of enterprises experiencing security incidents in the electric power sector and the finance sector (N = 13). The incident types include abuse of IT systems, illegal distribution of non-disclosure material, unauthorized deletion/alteration of data, and unintentional use violating security. A related comparison of mean scores gives 3.05, 3.10 and 2.62 for electric power against 3.77, 3.85 and 3.46 for finance (sig. 0.003, 0.002 and 0.016).
DISCUSSIONS

6.1 The differences in legal framework and supervision practice
management should be aware of the use of the organizational information security measures examined in the
survey, as these measures would influence their work
in some way. It is our view that the study produces
valid and reliable results, despite the small number of responses and
the variation in the identity of the respondents to the
questionnaire.
CONCLUSIONS
ACKNOWLEDGEMENT
The authors would like to acknowledge Jan Hovden,
Pål Spilling, Håvard Fridheim, Kjetil Sørlie, Åshild
Johnsen and Hanne Rogan for their contribution to the
work reported in this paper.
REFERENCES
COBIT, 2008. Available at www.isaca.org/cobit
Hagen, J.M. 2003. Securing Energy Supply in Norway: Vulnerabilities and Measures. Presented at the conference NATO-Membership and the Challenges from Vulnerabilities of Modern Societies, The Norwegian Atlantic Committee and the Lithuanian Atlantic Treaty Association, Vilnius, 4th–5th December 2003.
Hagen, J.M., Albrechtsen, E. & Hovden, J. Unpubl. Implementation and effectiveness of organizational information security measures. Information Management & Computer Security, accepted with revision.
Hagen, J.M., Norden, L.M. & Halvorsen, E.E. 2007. Tilsynsmetodikk og måling av informasjonssikkerhet i finans- og kraftsektoren. In Norwegian. [Audit tool and measurement of information security in the finance and power sector.] FFI/Rapport-2007/00880.
Hagen, J.M. 2007. Evaluating applied information security measures: An analysis of the data from the Norwegian Computer Crime Survey 2006. FFI-report-2007/02558.
Hole, K.J., Moen, V. & Tjøstheim, T. 2006. Case study: Online Banking Security. IEEE Security & Privacy, March/April 2006.
Kredittilsynet (The Financial Supervisory Authority of Norway). Risiko- og sårbarhetsanalyse (ROS) 2004. In Norwegian. [Risk and vulnerability analysis 2004.]
Lobree, B.A. 2002. Impact of legislation on Information Security Management. Security Magazine Practices, November/December 2002: 41–48.
NS-EN ISO 19011. Retningslinjer for revisjon av systemer for kvalitet og/eller miljøstyring. [Guidelines for auditing quality and/or environmental management systems.]
Nystuen, K.O. & Fridheim, H. 2007. Sikkerhet og sårbarhet i elektroniske samfunnsinfrastrukturer: refleksjoner rundt regulering av tiltak. [Security and vulnerability in electronic societal infrastructures: reflections on the regulation of measures.] FFI-Report 2007/00941.
Nystuen, K.O. & Hagen, J.M. 2003. Critical Information Infrastructure Protection in Norway. CIP Workshop, Informatik, Frankfurt a.M., 29.09–02.10.2003.
Simonds, R.H. & Shafai-Sharai, Y. 1977. Factors Apparently Affecting Injury Frequency in Eleven Matched Pairs of Companies. Journal of Safety Research 9(3): 120–127.
Simonds, R.H. 1973. OSHA Compliance: Safety is good business. Personnel, July–August 1973: 30–38.
Sundt, C. 2006. Information Security and the Law. Information Security Technical Report 11(1): 2–9.
Williams, P. 2007. Executive and board roles in information security. Network Security, August 2007: 11–14.
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: Countervailing risks have important implications for risk regulation, as they suggest that gains in
environmental and human health may be coming at a significant cost elsewhere, or even that in some situations,
our cures may be doing more harm than good. This paper expands the prevailing explanation of why risk
reducing measures frequently lead to harmful side-effects which render them uneconomic or even perverse.
Our expansion is threefold: we highlight how confirmation bias and various pathologies of complex problem
solving may constrain regulators from foreseeing or even considering the harmful side-effects of proposed risk
reduction measures; argue that limited incentives and capacities for regulatory learning constrain the detection
and correction of harmful side-effects of interventions; and contend that the adversarial nature characterising
many risk conflicts systematically gives rise to perverse trade-offs. We conclude that adaptive, stakeholder-based
forms of regulation are best positioned to reform these pathologies of the regulatory state, and to produce a form
of governance best suited for recognising and addressing the need to make risk trade-offs in what are often highly
charged, contested situations.
the often latent second and third-order effects of decisions and actions (Dörner, 1996). For various reasons,
we seem ill-suited to this task.
This incongruity has led some of our most prominent social theorists and philosophers to focus on the
unforeseen and unintended consequences of human
actions, in contexts ranging from economics, to politics, to social policy (e.g. Popper, 1961; Merton,
1936; Hayek, 1973). The driving impetus was the
idea that unintended consequences should be a central concern of the social sciences, as they constrain
the ability to predict and therefore control the consequences of social interventions. In the past generation,
the baton has been passed to scholars of risk regulation,
a small but influential group of whom have focussed on
the harmful, unintended consequences of risk reducing measures in the public and environmental health
arenas (e.g. Graham and Wiener, 1995; Sunstein,
1990; Viscusi, 1996). For example, airbags may protect adults but kill children; gas mileage standards
may protect the environment at the cost of thousands
of lives annually, as they encourage manufacturers
to sacrifice sturdiness for fuel-efficiency; drug-lags
stemming from stringent testing requirements may
protect the public from potentially adverse effects of
un-tested pharmaceuticals, whilst at the same time
diminishing the health of those who urgently need
them; bans on carcinogens in food-additives may lead
consumers to use non-carcinogenic products which
nevertheless carry even greater health risks, and so on.
These countervailing risks have important implications for risk regulation, as they suggest that recent
gains in environmental and human health may be coming at a significant cost elsewhere, or even that in some
situations, our cures may be doing more harm than
good. This paper expands the prevailing explanation of
why risk reducing measures frequently lead to harmful
side-effects (e.g. Graham and Wiener, 1995; Wiener,
1998), and argues that stakeholder-based forms of
adaptive management are the most effective regulatory arrangements for making optimal risk trade-offs
in what are often highly charged, contested situations
requiring tragic choices to be made under conditions
of ignorance.
explores individual and group decision making in simulated environments defined by the characteristics of
complexity, opaqueness, and dynamism (e.g. natural
resource management). Given these characteristics,
such simulations are largely analogous to the dilemmas facing regulators tasked with protecting public
and environmental health.
Unsurprisingly, this programme has revealed that
people generally struggle to deal with complex problems; however, the errors committed by the participants were far from random, instead being indicative
of general weaknesses or pathologies in reasoning
and perception when dealing with complex, opaque,
dynamic systems. Those of greatest relevance to us are
(Dörner, 1996; Brehmer, 1992): the common failure
to anticipate the side-effects and long-term repercussions of decisions taken; the tendency to assume that
an absence of immediate negative effects following
system interventions serves as validation of the action
taken; and the habit of paying little heed to emerging needs and changes in a situation, arising from
over-involvement in subsets of problems. Rather than
appreciating that they were dealing with systems composed of many interrelated elements, participants all
too often viewed their task as dealing with a sequence
of independent problems (Dörner, 1996; Brehmer,
1992).
One conclusion that can be drawn from this is
that people's mental models often fail to incorporate
all aspects of complex tasks, and so making inferences
about side-effects and latent consequences of actions
is often a bridge too far (Brehmer, 1992). Although
we are to some extent stuck with the limitations of our
cognitive capacities, it is worth bearing in mind that
mental models are not solely internal psychological
constructs, but are to an extent socially constructed.
It thus seems reasonable to assume that, in the context of risk regulation, the incorporation of a broad
range of values, interests, and perspectives within the
decision making process would help to counteract the
tendency to frame the pros and cons of proposed risk
reduction measures in an unduly narrow, isolated manner (i.e. expand regulators' mental models). Moreover,
enhancing the mechanisms and incentives for regulatory agencies to monitor and evaluate the impacts
of their interventions could ameliorate the tendency
to neglect latent outcomes of decisions (i.e. lengthen
regulators' mental models). We return to this later.
We hypothesise that a further contributory factor
to the phenomenon of perverse risk trade-offs is the
well established phenomenon of confirmation bias.
This refers to the tendency to seek out and interpret
new information in a manner which confirms one's
preconceived views, and to avoid information and interpretations which question one's prior convictions.
The reader may, quite rightly, point out that there is a
veritable litany of psychological biases that have been
3.2
that seek to address the problem of perverse risk tradeoffs should look with one eye towards error prevention,
whilst casting the other towards error detection and
correction.
3.3
We now turn to consider what a sociological perspective can tell us about the phenomenon of perverse
risk trade-offs. Our central point is that the polarised
nature of debates over proposed regulations, both in
the public arena and within the lobbying process,
is a key precondition for the generation of perverse
trade-offs. For simplicity, consider the case where a
regulatory measure for restricting the use of brominated flame retardants is proposed. Paradigmatically,
we would see various groups lobbying the architects of
the regulatory process (i.e. administrators, bureaucrats,
legislators): NGOs, industry groups, think tanks, and
so forth. Those favouring the proposed measure would
of course highlight the forecast gains in environmental
and human health arising from a ban, whilst those in
opposition would draw attention to the economic costs
and potential countervailing risks to which it may
give rise (e.g. through reducing the level of protection from fires, or through promoting a shift to potentially
hazardous chemical substitutes about which little is
known).
At a fundamental level, these contrasting policy
preferences arise from what are often sharply contrasting beliefs, values, norms and interests of the
disputants. Of course, that people disagree is hardly
revelatory. The problem which arises is that the current
regulatory state systematically encourages adversarial
relations between these groups, through, for example: the legal framing of many environmental conflicts as zero-sum questions of prohibition, authority
and jurisdiction (Freeman and Farber, 2005); often
relying on lobbying as a proxy for stakeholder consultations (when the latter occur, they tend to be
infrequent and superficial); and in their occasional
resort to the court for final resolution of disputes.
This promotion of competitive rather than co-operative
behaviour encourages disputants to exaggerate differences between one another, to ascribe malign
intentions to the positions of others, and to simplify
conflicts through the formation of crude stereotypes
(Fine, 2006; Yaffee, 1997). Thus, in many situations,
disputes over risk trade-offs can resemble prisoners'
dilemmas, where co-operation could lead to a mutually acceptable solution which balances harm against
harm, but a fundamental lack of trust leaves the participants caught in a zero-sum struggle as they fear that
any compromise would not be reciprocated.
Moreover, in such adversarial settings the underlying scientific, technical and economic data on which
risk trade-offs are ostensibly based is often (mis)used
interests as interconnected and thus conceive of disputes as joint problems in which each has a stake
(Innes and Booher, 1999). Here, the malign influences of narrow mental models, of confirmation bias,
of absolutism, of adversarial relationships, and of the
omitted voice (all, in some way, related to absolutism),
may be expected to be in large part ameliorated.
Of course, this is not a foolproof solution, and
harmful, unintended consequences will still arise from
regulatory measures derived from even the most holistic process, in part due to the profound uncertainties
characterising many public and environmental health
risk dilemmas, and the temporal nature of social values, interests, and perspectives. It is not uncommon
for public, governmental or scientific perceptions of
the rationality of past risk trade-offs to migrate over
time, leading to calls for corrective measures or even
a sharp reversal of the path already taken, such that
as observers of the decision process we are left with
a sense of déjà vu, and a feeling that the governance
of risk resembles a Sisyphean challenge. Thus, it is
crucial that the regulatory state adopt a more adaptive
approach (e.g. McDaniels et al., 1999), in the sense
of viewing decision making iteratively, of placing a
strong emphasis on the role of feedback to verify the
efficacy and efficiency of the policies enacted, and of
appreciating the wisdom of learning from successive
choices.
5
CONCLUSIONS
REFERENCES
Brehmer, B. 1992. Dynamic decision making: human control of complex systems. Acta Psychologica 81(3): 211–241.
Dörner, D. 1996. Recognizing and avoiding error in complex situations. New York: Metropolitan Books.
Dryzek, J. 1987a. Ecological rationality. Oxford: Blackwell.
Dryzek, J. 1987b. Complexity and rationality in public life. Political Studies 35(3): 424–442.
Fine, G.A. 2006. The chaining of social problems: solutions and unintended consequences in the age of betrayal. Social Problems 53(1): 3–17.
Freeman, J. and Farber, D.A. 2005. Modular Environmental Regulation. Duke Law Journal 54: 795.
Graham, J.D. and Wiener, J.B. 1995. Risk versus risk: trade-offs in protecting health and the environment. Harvard University Press.
Hayek, F.A. 1973. Law, legislation and liberty. London: Routledge and Kegan Paul.
Hofstetter, P., Bare, J.C., Hammitt, J.K., Murphy, P.A. and Rice, G.E. 2002. Tools for comparative analysis of alternatives: competing or complementary perspectives? Risk Analysis 22(5): 833–851.
Innes, J.E. and Booher, D.E. 1999. Consensus building and complex adaptive systems: a framework for evaluating collaborative planning. Journal of the American Planning Association 65(4): 412–423.
Merton, R.K. 1936. The unanticipated consequences of purposive social action. American Sociological Review 1(6): 894–904.
McDaniels, T.L., Gregory, R.S. and Fields, D. 1999. Democratizing risk management: successful public involvement in local water management decisions. Risk Analysis 19(3): 497–510.
Popper, K.R. 1961. The poverty of historicism. London: Routledge and Kegan Paul.
Sjöberg, L. 2000. Factors in risk perception. Risk Analysis 20(1): 1–12.
Sunstein, C.R. 1990. Paradoxes of the regulatory state. University of Chicago Law Review 57(2): 407–441.
Viscusi, W.K. 1996. Regulating the regulators. University of Chicago Law Review 63(4): 1423–1461.
Weale, A. 1992. The new politics of pollution. Manchester: Manchester University Press.
Wiener, J.B. 1998. Managing the iatrogenic risks of risk management. Risk: Health, Safety and Environment 9(1): 39–82.
Yaffee, S.L. 1997. Why environmental policy nightmares recur. Conservation Biology 11(2): 328–337.
ACKNOWLEDGEMENTS
The authors would like to thank the Leverhulme Trust
for funding this research.
C.A.V. Cavalcante
Federal University of Pernambuco, Brazil
R.W. Dwight
Faculty of Engineering, University of Wollongong, Australia
P. Gordon
Faculty of Engineering, University of Wollongong, Australia
ABSTRACT: This paper considers a hybrid maintenance policy for items from a heterogeneous population.
This class of items consists of several sub-populations that possess different failure modes. There are a substantial
number of papers that deal with appropriate mixed failure distributions for such a population. However, suitable
maintenance policies for these types of items are limited. By supposing that items may be in a defective but
operating state, we consider a policy that is a hybrid of inspection and replacement policies. There are similarities
in this approach with the concept of burn-in maintenance. The policy is investigated in the context of traction
motor bearing failures.
INTRODUCTION
where w(t) = p(1 − F1(t)) / (1 − FMixed(t)).
The lifetime distribution for a mixture of items
from two sub-populations is illustrated in Figure 1.
Mixtures of this kind do not necessarily have an
increasing failure (hazard) rate function. Some examples that support the last conclusion have been given
by several authors (Glaser, 1980; Gupta and Gupta,
1996; Finkelstein and Esaulova, 2001a; Block et al.,
2003). Jiang and Murthy (1998) discuss mixtures
involving two Weibull distributions. Eight different
behaviours for the failure (hazard) rate function for
the mixed distribution are evident, depending on the
parameter values of the underlying Weibull distributions. For the general case, there are five parameters;
the shape of the failure (hazard) rate is dependent
on the values of the two shape parameters, β1 and
β2, the ratio η2/η1 of the two scale parameters (and
not their individual values) and, finally, the mixing
parameter p.
Other authors have explored mixed failure distributions through their mean residual life function
(Abraham and Nair, 2000; Finkelstein, 2002).
The fitting of Weibull distributions to failure data
requires care. It is often the case that the data possess
underlying structure that is not immediately apparent
due to, for example, inspections, left and right censoring, or heterogeneity. It would be unfortunate to
fit a two-parameter Weibull distribution to failures
that arise from a mixture, and then adopt an age-based replacement policy for the items based on the fitted two-parameter
Weibull, since the implied critical replacement age
would be inappropriate for both sub-populations of
items (Murthy and Maxwell, 1981). A full discussion
[Figure 1: probability density of the mixed lifetime distribution against age (years).]
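As a small numerical sketch of the point above (parameter values are hypothetical, not taken from the paper), a mixture of a small weak Weibull subpopulation and a strong one produces a hazard that rises while the weak items fail out, dips once they are gone, and rises again as the strong subpopulation wears out:

```python
import math

def weibull_pdf(t, beta, eta):
    """Weibull density with shape beta and scale eta."""
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))

def weibull_sf(t, beta, eta):
    """Weibull survival function."""
    return math.exp(-((t / eta) ** beta))

def mixture_hazard(t, p, b1, e1, b2, e2):
    """Hazard of the mixed distribution: f_mix(t) / (1 - F_mix(t))."""
    f = p * weibull_pdf(t, b1, e1) + (1 - p) * weibull_pdf(t, b2, e2)
    s = p * weibull_sf(t, b1, e1) + (1 - p) * weibull_sf(t, b2, e2)
    return f / s

# Hypothetical parameters: 10% weak (short-lived) items mixed with strong ones.
p, b1, e1, b2, e2 = 0.1, 2.5, 1.0, 3.0, 10.0
hs = [mixture_hazard(t, p, b1, e1, b2, e2) for t in (0.5, 2.0, 6.0)]
# hs is non-monotone: up, then down, then up again
```

This is one of the eight hazard behaviours discussed by Jiang and Murthy (1998); the dip is why a single fitted Weibull misrepresents both subpopulations.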
As indicated above, there have been many contributions on the theme of mixed distributions. However,
there has been less discussion of maintenance policies
for items with mixed failure distributions. Here we
review the main contributions in this area.
Murthy and Maxwell (1981) proposed two types
of age-replacement policy for a mixture of items
from two sub-populations, 1 and 2. In policy I, it is
assumed that the lifetime distribution of items for each
sub-population is known, but it is not known if
an operating item is of type 1 or 2. In policy II, it is assumed that the decision maker can,
by some test with a given cost per item, determine if
an item is of type 1 or 2, and then subsequently
replace items from sub-population 1 at age T1 or
at failure, and replace items from sub-population
2 at age T2 or at failure, whichever occurs
first. Finkelstein (2004) proposed a minimal repair
model generalized to the case when the lifetime
distribution function is a continuous or a discrete
mixture of distributions. As was stated in the introduction, some adaptations of burn-in models have
been proposed by Drapella and Kosznik (2002) and
Jiang and Jardine (2007). The objective of these
models is to find optimum solutions for a combined
burn-in-replacement policy, in order to take into consideration the change in ageing behaviour of items
from a heterogeneous population, represented by a
mixed distribution. The cost advantage of the combined policy over the separate policies of burn-in and
replacement is quite small. The combined policy is
also much more complex, and therefore it is difficult
to decide if the combined policy is superior (Drapella
and Kosznik, 2002). Jiang and Jardine (2007) argue
that preventive replacement is more effective in the
combined policy.
The burn-in process is not always suitable, since
a long burn-in period may be impractical; early
operational failure of short-lived items then
remains a possibility. For this reason, in the
next section, we propose a model that is similar to a combined burn-in-replacement policy, but
instead of a burn-in phase we propose a phase of
inspection to detect the weak items. This can then
accommodate, during operation, the evolution of
the ageing behaviour of items from a heterogeneous
population.
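The proposed policy can be sketched by simulation. The sketch below is not the paper's analytical model: it assumes perfect inspections, an exponential delay time, and illustrative parameter values, and estimates the long-run cost rate E(U)/E(V) via the renewal-reward argument used later in the paper.

```python
import math
import random

def sample_mixture_weibull(p, b1, e1, b2, e2):
    """Time to defect arrival X from a two-subpopulation Weibull mixture."""
    beta, eta = (b1, e1) if random.random() < p else (b2, e2)
    return eta * (-math.log(random.random())) ** (1.0 / beta)

def cycle(p, b1, e1, b2, e2, mean_delay, K, T, delta, CI, CR, CF):
    """One renewal cycle: inspections at delta, ..., K*delta (perfect),
    age-based replacement at T. Returns (cycle length V, cycle cost U)."""
    x = sample_mixture_weibull(p, b1, e1, b2, e2)   # defect arrival time
    h = random.expovariate(1.0 / mean_delay)        # delay time to failure
    for i in range(1, K + 1):
        t_insp = i * delta
        if x + h <= t_insp:                         # failure before inspection i
            return x + h, (i - 1) * CI + CF
        if x <= t_insp:                             # defect found at inspection i
            return t_insp, i * CI + CR
    if x + h <= T:                                  # failure after last inspection
        return x + h, K * CI + CF
    return T, K * CI + CR                           # preventive replacement at T

def long_run_cost(n=20000, **kw):
    """Renewal-reward estimate of the long-run cost per unit time E(U)/E(V)."""
    random.seed(1)
    tot_v = tot_u = 0.0
    for _ in range(n):
        v, u = cycle(**kw)
        tot_v += v
        tot_u += u
    return tot_u / tot_v

# Illustrative parameter values (hypothetical):
cost_rate = long_run_cost(p=0.05, b1=2.5, e1=2.0, b2=3.0, e2=12.0,
                          mean_delay=0.5, K=5, T=10.0, delta=0.5,
                          CI=0.05, CR=1.0, CF=18.0)
```

A grid search over (K, T, delta) with this estimator would mimic the optimization reported in the numerical example.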
THE MODEL
failures lead to immediate operational failure, inspection is futile. Note that the model might be extended
to consider a mixed population of delay-times, a
proportion of which are zero (Christer and Wang,
1995). This effectively relaxes the perfect inspection
assumption because it implies that a proportion of failures cannot be prevented by inspection. We do not
consider this model here however.
The decision variables in the model are K, T and
the inspection interval Δ. K and T are age-related, so that on replacement the
inspection phase begins again. Thus, the maintenance
model is analogous to age-based replacement. The
as-new replacement assumption implies that we can
use the renewal-reward theorem, and hence the long-run cost per unit time as the objective function.
Within this framework, the length of a renewal
cycle (time between renewals), V , can take different
values, and
V = iΔ, with cost iC_I + C_R, if (i − 1)Δ < X ≤ iΔ and X + H > iΔ (i = 1, . . . , K);   (2a)

V = X + H, with cost (i − 1)C_I + C_F, if (i − 1)Δ < X and X + H ≤ iΔ (i = 1, . . . , K);   (2b)

V = X + H, with cost KC_I + C_F, if X > KΔ and X + H < T;   (2c)

V = T, with cost KC_I + C_R, if X > KΔ and X + H > T.   (2d)

Thus the cycle cost is

U = iC_I + C_R if V = iΔ (i = 1, . . . , K);
U = (i − 1)C_I + C_F if (i − 1)Δ < V < iΔ (i = 1, . . . , K);
U = KC_I + C_F if KΔ < V < T;
U = KC_I + C_R if V = T.

It follows that

E(V) = Σ_{i=1}^{K} iΔ ∫_{(i−1)Δ}^{iΔ} (1 − F_H(iΔ − x)) f_X(x) dx
 + Σ_{i=1}^{K} ∫_{(i−1)Δ}^{iΔ} ∫_{0}^{iΔ−x} (x + h) f_H(h) f_X(x) dh dx
 + ∫_{KΔ}^{T} ∫_{0}^{T−x} (x + h) f_H(h) f_X(x) dh dx
 + T [ ∫_{KΔ}^{T} (1 − F_H(T − x)) f_X(x) dx + ∫_{T}^{∞} f_X(x) dx ],   (3)

and

E(U) = Σ_{i=1}^{K} (iC_I + C_R) ∫_{(i−1)Δ}^{iΔ} (1 − F_H(iΔ − x)) f_X(x) dx
 + Σ_{i=1}^{K} [(i − 1)C_I + C_F] ∫_{(i−1)Δ}^{iΔ} F_H(iΔ − x) f_X(x) dx
 + (KC_I + C_F) ∫_{KΔ}^{T} F_H(T − x) f_X(x) dx
 + (KC_I + C_R) [ ∫_{KΔ}^{T} (1 − F_H(T − x)) f_X(x) dx + ∫_{T}^{∞} f_X(x) dx ].
[Figure 2: Weibull plot, log log{1/(1 − i/N)} against time to failure (years), and frequency histogram of the failure data.]
NUMERICAL EXAMPLE
The objective function is the long-run cost per unit time, E(U)/E(V).

[Figure 3: time to failure (years) for traction motors.]
Table 1. Optimum hybrid policy for various values of cost parameters and failure model parameters. The long-run cost per unit
time of the optimum policy is C∞. Unit cost equal to the cost of preventive replacement (C_R = 1). The time unit here is taken to be one
year, although this is arbitrary.
[Table 1 is flattened in this extraction and its column alignment is not recoverable. Eighteen cases vary β1 (baseline 2.5), β2 (baseline 3), C_I, C_F and the remaining model parameters around a base case; for each case the optimum K, Δ, T and the long-run cost C∞ are reported (C∞ ranging from 0.103 to 0.313, optimum T from 7.51 to 14.56).]
wear-out phase. As β2 becomes smaller and the sub-populations are less distinct, then KΔ → T, and it is
optimum to inspect over the entire life. When the cost
of inspection is varied, the optimum policy behaves as
expected: lower inspection costs lead to more inspections. Also, a longer mean delay time leads to more
inspections, and vice versa, implying that inspections
are only effective if there is sufficient delay between
defect arrival and consequent failure. The effect of
varying the mixing parameter can also be observed in
the table.
CONCLUSION
ACKNOWLEDGEMENTS
REFERENCES
Abraham, B. & Nair, N.U. 2000. On characterizing mixtures of some life distributions. Statistical Papers 42(3): 387–393.
Ascher, H. & Feingold, H. 1984. Repairable Systems Reliability. New York: Marcel Dekker.
Barlow, R.E. & Proschan, F. 1965. Mathematical Theory of Reliability. New York: Wiley.
M.D. Pandey
Department of Civil Engineering, University of Waterloo, Waterloo, Canada
ABSTRACT: This paper investigates the reliability of a structure that suffers damage due to shocks arriving
randomly in time. The damage process is cumulative: a sum of random damage increments due to
shocks. In structural engineering, failure is typically defined as an event in which the structure's resistance,
due to deterioration, drops below a threshold resistance that is necessary for the functioning of the structure.
The paper models the degradation as a compound point process and formulates a probabilistic approach to
compute the time-dependent reliability of the structure. Analytical expressions are derived for costs associated
with various condition- and age-based maintenance policies. Examples are presented to illustrate computation of
the discounted life-cycle cost associated with different maintenance policies.
INTRODUCTION
function given as

G(x) = P(Y1 ≤ x).   (4)

The cumulative damage at time t is

Z(t) = Σ_{j=1}^{N(t)} Yj.   (5)

Using the total probability theorem and independence of the sequence Y1, Y2, . . . and N(t), we can
write the distribution of cumulative damage as

P(Z(t) ≤ x) = Σ_{j=0}^{∞} P(Σ_{i=1}^{j} Yi ≤ x, N(t) = j).   (6)

With Hj(t) = P(N(t) = j) and the j-fold convolution

G^{(j)}(x) = ∫_{0}^{x} G^{(j−1)}(x − y) dG(y), G^{(0)}(x) = 1,   (7)

this becomes

P(Z(t) ≤ x) = Σ_{j=0}^{∞} G^{(j)}(x) Hj(t)   (9)

= 1 + Σ_{j=1}^{∞} [G^{(j)}(x) − G^{(j−1)}(x)] Fj(t),   (10)

where Fj(t), the distribution function of the arrival time Sj of the j-th shock, satisfies

Fj(t) = Σ_{i=j}^{∞} Hi(t).   (3)
Suppose that damage exceeding a limit x_cr causes structural failure; then this equation can be used to compute
the reliability as P(Z(t) ≤ x_cr). This is a fundamental
expression in the theory of the compound point process, and it can be used to derive probabilities of other
events associated with a maintenance policy.
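As a concrete check of this computation (not from the paper), take N(t) to be a Poisson process and the Yj exponential, so that G^{(j)} is an Erlang distribution function; the series then evaluates the reliability directly. Parameter values below are illustrative.

```python
import math

def erlang_cdf(x, j, mu):
    """G^(j)(x): cdf of the sum of j iid Exp(mu) damages (G^(0) = 1)."""
    if j == 0:
        return 1.0
    # 1 - sum_{i<j} e^{-mu x} (mu x)^i / i!, accumulated iteratively
    term = math.exp(-mu * x)
    s = term
    for i in range(1, j):
        term *= mu * x / i
        s += term
    return 1.0 - s

def reliability(t, x_cr, lam, mu, jmax=200):
    """P(Z(t) <= x_cr), the series in eq. (6), for Poisson(lam) shocks."""
    pmf = math.exp(-lam * t)          # P(N(t) = 0)
    total = pmf                       # G^(0)(x_cr) = 1
    for j in range(1, jmax):
        pmf *= lam * t / j            # Poisson pmf recursion avoids overflow
        total += pmf * erlang_cdf(x_cr, j, mu)
    return total

# Illustrative values: shock rate 0.5/yr, mean damage 1/mu = 10, x_cr = 100
r = reliability(t=20.0, x_cr=100.0, lam=0.5, mu=0.1)
```

The reliability decreases in t and increases in x_cr, as expected for cumulative damage.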
2.2 Condition-based maintenance
When the total damage Z(t) exceeds an unacceptable level K, this is referred to as a degradation
failure of the structure. The failure prompts a corrective
maintenance action (CM) involving the replacement
or complete overhaul of the component (as-good-as-new repair). On the other hand, a preventive maintenance action (PM) is triggered when the total damage
exceeds a maintenance threshold level k, k < K. For the
sake of generality, it is also assumed that the component will be replaced after a certain age T0, whichever
occurs first. After a maintenance action the structure
is as good as new. We assume that the costs are cK for CM,
c0T for replacement at time T0 and c0k for PM before
time T0. Let T be the first time at which a maintenance
action has to be carried out and let C denote the associated
cost. Note that T is a random variable, T0 is a
decision variable and t is used to indicate an arbitrary
(non-random) point in time. We need to derive the
joint distribution of the pair (T, C) to calculate the life-cycle cost.
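Before the analytical derivation, the setup can be sketched by Monte Carlo; the shock process and damage law below (Poisson shocks, exponential increments) and the parameter values are purely illustrative. Sampling the first maintenance action (T, C) gives a direct estimate of the long-run cost rate E(C)/E(T) derived analytically below.

```python
import random

def first_action(lam, mu, k, K, T0, c0k, cK, c0T):
    """Sample (T, C): time and cost of the first maintenance action under the
    policy (PM when damage exceeds k, CM when it exceeds K, replacement at T0)."""
    t, z = 0.0, 0.0
    while True:
        t += random.expovariate(lam)        # next shock of a Poisson process
        if t > T0:
            return T0, c0T                  # age-based replacement comes first
        z += random.expovariate(mu)         # Exp(mu) damage increment
        if z > K:
            return t, cK                    # degradation failure -> CM
        if z > k:
            return t, c0k                   # maintenance threshold crossed -> PM

def average_cost(n=50000, **kw):
    """Estimate E(C)/E(T) over n independent cycles."""
    random.seed(2)
    pairs = [first_action(**kw) for _ in range(n)]
    return sum(c for _, c in pairs) / sum(t for t, _ in pairs)

rate = average_cost(lam=0.5, mu=0.1, k=60.0, K=100.0, T0=15.0,
                    c0k=20.0, cK=200.0, c0T=20.0)
```

Such a simulation is a useful cross-check on the series expressions for E(T) and E(C) obtained next.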
Define the event that the total damage first exceeds the
PM threshold k at shock j (j = 1, 2, . . . ) as

Aj = { Σ_{i=1}^{j−1} Yi ≤ k < Σ_{i=1}^{j} Yi },

and split it according to whether the same shock also takes the damage beyond the CM level K:

Aj^PM = Aj ∩ { Σ_{i=1}^{j} Yi ≤ K },   Aj^CM = Aj ∩ { Σ_{i=1}^{j} Yi > K }.

Let αj = P(Aj^PM), βj = P(Aj^CM) and πj = αj + βj. Then

νj = 1 − Σ_{i=1}^{j} πi = G^{(j)}(k)   (11)

is the probability that the threshold k has not been crossed during the first j shocks, and Σ_{j=1}^{∞} (αj + βj) = 1. The first maintenance action is therefore

(T, C) = (Sj, c0k) on Aj^PM ∩ {Sj ≤ T0},
(T, C) = (Sj, cK) on Aj^CM ∩ {Sj ≤ T0},
(T, C) = (T0, c0T) otherwise.   (12)

3.1 General

Consider an unbounded sequence of maintenance cycles with independent and identically distributed pairs

(Tn, Cn) = (T, C),

where (T, C) is defined in Section 2. The times at
which the cycles start will be denoted by Sn = T1 +
· · · + Tn, n ≥ 1, and S0 ≡ 0. Let N(t) denote the total
number of maintenance actions during [0, t]:

N(t) = n  ⟺  Sn ≤ t < Sn+1.   (13)

The total maintenance cost over [0, t] is then

K(t) = Σ_{j=1}^{N(t)} Cj.   (14)
By the renewal-reward theorem,

lim_{t→∞} K(t)/t = E(C)/E(T).   (15)

The probability that the first maintenance action is a CM action before time T0 is

PK = Σ_{n=1}^{∞} P(C = cK; N(T0) = n) = Σ_{n=1}^{∞} Σ_{j=1}^{n} βj Hn(T0) = Σ_{j=1}^{∞} βj Fj(T0).   (16)

The discounted cost over [0, t] at discount rate r is

K(t, r) = Σ_{j=1}^{N(t)} e^{−r Sj} Cj 1{Sj ≤ t},   (17)

and the long-term expected equivalent average cost per unit time is

C(T0, r) = r E(C e^{−rT}) / (1 − E(e^{−rT})).   (18)

3.2 Key results

First,

E(T) = Σ_{j=0}^{∞} νj ∫_{0}^{T0} Hj(x) dx.   (19)

It follows immediately from Eq. (11) that the probability that a maintenance action is necessary before time T0 equals

Pk = P(T < T0) = 1 − Σ_{n=0}^{∞} νn Hn(T0).   (20)

Similarly,

P(C = cK) = Σ_{j=1}^{∞} βj Fj(T0),   (21)

P(C = c0k) = Σ_{j=1}^{∞} αj Fj(T0),   (22)

P(C = c0T) = 1 − Σ_{j=1}^{∞} πj Fj(T0).   (23)

It follows that

E(C) = c0T + Σ_{j=1}^{∞} [cK βj + c0k αj − c0T πj] Fj(T0).   (24)

If we assume c0 = c0k = c0T and cK = c0 + ΔK,
we obtain a simple expression

E(C) = c0 + ΔK PK.   (25)

For the discounted criterion (see the Appendix),

E(C e^{−rT}) = e^{−rT0} Σ_{n=0}^{∞} [c0 + ΔK Cn] Hn(T0) + Σ_{n=1}^{∞} (c0 Bn + cK Cn) ∫_{0}^{T0} Hn(x) r e^{−rx} dx,   (26)

where Bn = Σ_{j=1}^{n} αj and Cn = Σ_{j=1}^{n} βj. It follows that

E(e^{−rT}) = 1 − Σ_{n=0}^{∞} νn ∫_{0}^{T0} Hn(x) r e^{−rx} dx.   (27)

All the expressions derived above are completely general, without any specific assumptions about the form
of the point process N that describes the random
arrival of shocks in time; see Section 2. Note that

lim_{r→0} C(T0, r) = lim_{r→0} r E(C e^{−rT}) / (1 − E(e^{−rT})) = E(C)/E(T) = C∞(T0).

4 MAINTENANCE POLICIES

This section reanalyzes the following three maintenance policies discussed by Ito et al. (2005). For
simplicity, we will always assume in this section that
c0T = c0k = c0 and cK = c0 + ΔK.

1. Model 1. The system undergoes PM at time T0 or at
the first shock S1 producing damage Y1, whichever
occurs first. This means that the PM level k = 0.
2. Model 2. The system undergoes PM at time T0 or
CM if the total damage exceeds the failure level
K, whichever occurs first. This means that the PM
level k = K.
3. Model 3. The system undergoes PM only when the
total damage exceeds the managerial level k. This
means that T0 = ∞.

4.1 Model 1

Since A1^CM = {Y1 > K}, A1^PM = {Y1 ≤ K} and An = ∅
for all n ≥ 2, we get

E(e^{−rT}) = 1 − I(T0), with I(T0) = ∫_{0}^{T0} H0(x) r e^{−rx} dx,

and

E(C e^{−rT}) = c0 (1 − I(T0)) + ΔK (1 − G(K)) [1 − I(T0) − e^{−rT0} H0(T0)].

4.2 Model 2

In this case k = K and Aj^PM = ∅, j ≥ 1, which leads to νj = G^{(j)}(K) and

E(e^{−rT}) = 1 − I(T0), with I(T0) = Σ_{n=0}^{∞} G^{(n)}(K) ∫_{0}^{T0} Hn(x) r e^{−rx} dx,

E(C e^{−rT}) = cK (1 − I(T0)) − ΔK e^{−rT0} Σ_{n=0}^{∞} νn Hn(T0).

4.3 Model 3

Since in this case T0 = ∞,

E(e^{−rT}) = 1 − Σ_{n=0}^{∞} νn ∫_{0}^{∞} Hn(x) r e^{−rx} dx

and

E(C e^{−rT}) = Σ_{n=1}^{∞} (c0 Bn + cK Cn) ∫_{0}^{∞} Hn(x) r e^{−rx} dx.

Using eq. (18), the long-term expected equivalent average cost per unit time follows from these
expressions.
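The limiting relation between the discounted and average criteria can be illustrated numerically with a toy cycle distribution (not the paper's model): as the discount rate r → 0, the expected equivalent average cost approaches E(C)/E(T).

```python
import math
import random

def sample_cycle(lam=0.5, T0=10.0, c0=20.0, cK=200.0, p_fail=0.3):
    """Toy renewal cycle (T, C): exponential time to a maintenance event,
    truncated by replacement at age T0. Purely illustrative."""
    t = random.expovariate(lam)
    if t >= T0:
        return T0, c0                       # replacement at T0
    return t, cK if random.random() < p_fail else c0

def discounted_rate(r, n=50000):
    """Monte Carlo estimate of r E(C e^{-rT}) / (1 - E(e^{-rT}))."""
    random.seed(3)
    num = den = 0.0
    for _ in range(n):
        t, c = sample_cycle()
        num += c * math.exp(-r * t)
        den += 1.0 - math.exp(-r * t)
    return r * num / den

def average_rate(n=50000):
    """Monte Carlo estimate of E(C)/E(T), with the same random seed."""
    random.seed(3)
    pairs = [sample_cycle() for _ in range(n)]
    return sum(c for _, c in pairs) / sum(t for t, _ in pairs)
```

With a small r (say 1e-4) the two estimates agree to within a fraction of a percent, which is the content of the limit noted after eq. (18).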
Suppose the shocks arrive according to a non-homogeneous Poisson process with intensity λ(t) and cumulative intensity

R(t) = ∫_{0}^{t} λ(u) du.   (28)

Then

Hj(t) = [R(t)^j / j!] e^{−R(t)},   j = 0, 1, . . .   (29)

The probability density of the time Sj at which the j-th shock occurs is

fj(u) = [R(u)^{j−1} / (j − 1)!] λ(u) e^{−R(u)} = Hj−1(u) λ(u),   u > 0, j ≥ 1,   (30)

and

Fj(t) = Σ_{i=j}^{∞} Hi(t).   (31)

In the following we also use the incomplete gamma function Γ(a, x) = ∫_{x}^{∞} u^{a−1} e^{−u} du, with Γ(a) = Γ(a, 0).

5.1 Example

Consider the Weibull (power-law) intensity λ(x) = a x^{b−1}, so that

R(x) = (a/b) x^b  and  Hj(x) = (1/j!) [(a/b) x^b]^j exp(−(a/b) x^b).

Then

Fj(T0) = ∫_{0}^{T0} Hj−1(x) λ(x) dx = 1 − Γ(j, (a/b) T0^b) / (j − 1)!,   (32)

∫_{0}^{T0} Hj(x) dx = (1/b)(b/a)^{1/b} (1/j!) [Γ(j + 1/b) − Γ(j + 1/b, (a/b) T0^b)],

and hence, from (19) and (25),

E(T) = Σ_{j=0}^{∞} G^{(j)}(k) (1/b)(b/a)^{1/b} (1/j!) [Γ(j + 1/b) − Γ(j + 1/b, (a/b) T0^b)],

E(C) = c0 + ΔK PK = c0 + ΔK Σ_{j=1}^{∞} βj [1 − Γ(j, (a/b) T0^b)/(j − 1)!],

with βj = ∫_{0}^{k} [1 − G(K − x)] dG^{(j−1)}(x); for k = K this reduces to βj = G^{(j−1)}(K) − G^{(j)}(K). Further,

Σ_{n=0}^{∞} νn Hn(T0) = Σ_{j=0}^{∞} G^{(j)}(K) [(a/b) T0^b]^j exp(−(a/b) T0^b) / j!.

For T0 = ∞, the long-run average cost of the policy with managerial level k is

C∞ = [c0 + ΔK Σ_{j=1}^{∞} ∫_{0}^{k} (1 − G(K − x)) dG^{(j−1)}(x)] / [(1/b)(b/a)^{1/b} Σ_{j=0}^{∞} (G^{(j)}(k)/j!) Γ(j + 1/b)].

For the case a = 2 and b = 2 (so that R(t) = t² and H0(t) = exp(−t²)), with c0 = 20, ΔK = 180, Ḡ(K) = 1 − G(K) = 0.40 and r = 0.04, Model 1 gives

C∞(T0) = E(C)/E(T) = [c0 + ΔK Ḡ(K)(1 − e^{−T0²})] / [(√π/2) erf(T0)],

I(T0) = (√π r/2) e^{r²/4} [erf(T0 + r/2) − erf(r/2)],

I(∞) = (√π r/2) e^{r²/4} [1 − erf(r/2)],

and

C(∞, r) = r (c0 + ΔK Ḡ(K)) (1 − I(∞)) / I(∞).

[Figures 4 and 5: the long-run cost C∞(T0) and the discounted cost rate C(T0, r) plotted against T0 for this case, each approaching its asymptotic value.]
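A quick numerical check of eq. (29) (parameter values illustrative): for the Weibull cumulative intensity R(t) = (a/b)t^b, the probabilities H_j(t) sum to one over j and give E[N(t)] = R(t).

```python
import math

def R(t, a, b):
    """Cumulative intensity of the power-law shock process: R(t) = (a/b) t^b."""
    return (a / b) * t ** b

def H(j, t, a, b):
    """H_j(t) = P(N(t) = j) = R(t)^j e^{-R(t)} / j!  (eq. 29)."""
    r = R(t, a, b)
    return r ** j * math.exp(-r) / math.factorial(j)

a, b, t = 2.0, 2.0, 1.5          # illustrative: R(1.5) = 2.25
probs = [H(j, t, a, b) for j in range(80)]
total = sum(probs)               # should be (numerically) 1
mean_n = sum(j * p for j, p in enumerate(probs))  # should equal R(t)
```

The same check applies to any cumulative intensity R(t), since N(t) is Poisson with mean R(t).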
CONCLUSIONS
APPENDIX

We decompose

E[C e^{−rT}] = Σ_{n=0}^{∞} E[C e^{−rT}; N(T0) = n].

For n = 0 we have

E[C e^{−rT}; N(T0) = 0] = c0T e^{−rT0} H0(T0),

and for n ≥ 1

E[C e^{−rT}; N(T0) = n] = c0T νn e^{−rT0} Hn(T0)
 + c0k Σ_{j=1}^{n} αj E[e^{−r Sj}; N(T0) = n]
 + cK Σ_{j=1}^{n} βj E[e^{−r Sj}; N(T0) = n].

The term E[e^{−r Sj}; Sj ≤ T0] can be expressed in terms of the
distribution functions Fj of Sj as

E[e^{−r Sj}; Sj ≤ T0] = e^{−rT0} Fj(T0) + ∫_{0}^{T0} Fj(x) r e^{−rx} dx,

and, since Fj(t) = Σ_{n=j}^{∞} Hn(t),

E[e^{−r Sj}; Sj ≤ T0] = Σ_{n=j}^{∞} E[e^{−r Sj}; N(T0) = n] = Σ_{n=j}^{∞} hn,

where

hn = e^{−rT0} Hn(T0) + ∫_{0}^{T0} Hn(x) r e^{−rx} dx.

Summing over n and interchanging the order of summation,

E[C e^{−rT}] = c0T e^{−rT0} Σ_{n=0}^{∞} νn Hn(T0) + Σ_{n=1}^{∞} (c0k Bn + cK Cn) hn
 = e^{−rT0} Σ_{n=0}^{∞} (c0T νn + c0k Bn + cK Cn) Hn(T0) + Σ_{n=1}^{∞} (c0k Bn + cK Cn) ∫_{0}^{T0} Hn(x) r e^{−rx} dx,   (34)

where Bn = Σ_{j=1}^{n} αj and Cn = Σ_{j=1}^{n} βj.

If c0 = c0T = c0k and cK = c0 + ΔK, then c0T νn + c0k Bn + cK Cn = c0 + ΔK Cn, and (34) reduces to

E[C e^{−rT}] = e^{−rT0} Σ_{n=0}^{∞} (c0 + ΔK Cn) Hn(T0) + Σ_{n=1}^{∞} (c0k Bn + cK Cn) ∫_{0}^{T0} Hn(x) r e^{−rx} dx.   (35)

In the same way, using

Σ_{n=1}^{∞} ∫_{0}^{T0} Hn(x) r e^{−rx} dx = ∫_{0}^{T0} (1 − H0(x)) r e^{−rx} dx = 1 − e^{−rT0} − ∫_{0}^{T0} H0(x) r e^{−rx} dx,

we obtain

E[e^{−rT}] = e^{−rT0} Σ_{n=0}^{∞} νn Hn(T0) + Σ_{n=1}^{∞} (1 − νn) hn = 1 − Σ_{n=0}^{∞} νn ∫_{0}^{T0} Hn(x) r e^{−rx} dx,   (36)

and, analogously, E(T) = Σ_{n=0}^{∞} νn ∫_{0}^{T0} Hn(x) dx.

REFERENCES
439
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
S. Martorell
Dpto. Ingeniería Química y Nuclear, Universidad Politécnica de Valencia, Spain
ABSTRACT: A considerable number of studies on RAM+C-based optimization under uncertainty have been published in the last decade. They have demonstrated that including uncertainties in the optimization gives the decision maker insight into how uncertain the RAM+C results are, and that this uncertainty matters, as it can change the outcome of the decision-making process. Several methods of uncertainty propagation have been proposed in the literature. In this context, this paper focuses on assessing how the choice of the uncertainty distributions assigned to the input parameters may affect the output results.
INTRODUCTION
Eqns (1)-(4) define the effective age w of the component in terms of the operating time t, the maintenance interval M and the maintenance effectiveness epsilon for the PAS and PAR imperfect-maintenance models, while the general expressions of the reliability function associated with linear and Weibull failure distributions, respectively, are given by Eqns (5) and (6). Substituting Eqns (5) and (6) into Eqns (1) and (2), the reliability functions of the PAS and PAR models are obtained as continuous functions of time. Thus, the expressions corresponding to the PAS approach considering linear and Weibull distributions are given by Eqns (7) and (8).
Reliability function

Eqns (9) and (10) give the reliability function expressed in the effective age w for the linear and Weibull cases, and Eqns (12)-(18) give the associated yearly cost contributions and the average failure rate h over the renewal period for each model. The optimization criterion combines both objectives:

min omega e(C) + (1 - omega) e(R)
subject to: R >= Rr;  C <= Cr   (19)

with the normalized objective functions

e(C) = (C - Cr) / (Cr - Co)   (20)

e(R) = (Rr - R) / (Ro - Rr)   (21)
MULTI-OBJECTIVE OPTIMIZATION PROCEDURE

A Multiobjective Optimization Problem (MOP) considers a set of decision variables x, a set of objective functions f(x) and a set of constraints g(x) based on decision criteria. In our problem, the MOP consists in determining the maintenance intervals, over the replacement period, for each component of the equipment, which maximize the equipment reliability, R, while minimizing its cost, C, subject to restrictions generated by an initial solution (Ci, Ri), usually determined by the values of the current maintenance intervals implemented in the plant.

Applying the so-called weighted-sum strategy, the multiobjective problem of minimizing the vector of objective functions is converted into a scalar problem by constructing a weighted sum of all the objective functions; with the two objective functions above, this yields the scalar problem of Eqn (19).
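As a minimal sketch of the weighted-sum strategy, assume toy reliability and cost functions of a single maintenance interval M (these functions and all numerical values below are hypothetical, not the paper's model); the scalarized objective uses the normalized functions e(C) and e(R) of Eqns (20)-(21).

```python
import math

def R(M):
    # toy equipment reliability: decreases as the maintenance interval grows
    return math.exp(-((M / 40.0) ** 2))

def C(M):
    # toy yearly cost: very frequent maintenance is expensive, so is very late maintenance
    return 3000.0 / M + 20.0 * M

def weighted_sum_opt(w, candidates, Rr, Cr, Ro, Co):
    # restrictions generated by the initial solution: R >= Rr and C <= Cr
    feasible = [M for M in candidates if R(M) >= Rr and C(M) <= Cr]

    def scalar(M):
        eC = (C(M) - Cr) / (Cr - Co)  # normalized cost, Eqn (20)
        eR = (Rr - R(M)) / (Ro - Rr)  # normalized reliability, Eqn (21)
        return w * eC + (1.0 - w) * eR

    return min(feasible, key=scalar)
```

Sweeping the weight w from 0 to 1 traces compromise solutions between the reliability-optimal and cost-optimal intervals.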
Eqns (22) and (23) define the lower and upper tolerance limits Lp and Up from the ordered sample of size N. The confidence beta that such limits cover at least a fraction gamma of the output distribution when the kp-th extreme order statistics are used follows Wilks' formula:

beta = SUM_{j=0}^{N-kp} (N over j) gamma^j (1 - gamma)^{N-j}   (24)
Table 1. Number of runs N needed for tolerance intervals, for usual coverage (gamma) / confidence levels and order kp.

Confidence   kp   gamma = 0.90   gamma = 0.95   gamma = 0.99
0.90         1    22             45             230
0.90         2    38             77             388
0.90         3    52             105            531
0.90         4    65             132            667
0.95         1    29             59             299
0.95         2    46             93             473
0.95         3    61             124            628
0.95         4    76             153            773
0.99         1    44             90             459
0.99         2    64             130            662
0.99         3    81             165            838
0.99         4    97             198            1001
Table 1 shows the number of runs needed to determine tolerance intervals for some usual pairs of coverage/confidence levels.
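The entries of Table 1 can be regenerated from Wilks' formula (24). The sketch below uses the one-sided form with the kp-th extreme order statistic; it is an illustration, not code from the paper.

```python
from math import comb

def wilks_runs(coverage, confidence, kp=1, n_max=5000):
    # smallest N such that the kp-th extreme order statistic bounds at least
    # a fraction `coverage` of the population with probability `confidence`
    for n in range(kp, n_max + 1):
        beta = sum(comb(n, j) * coverage ** j * (1.0 - coverage) ** (n - j)
                   for j in range(0, n - kp + 1))
        if beta >= confidence:
            return n
    raise ValueError("no N found below n_max")
```

For instance, wilks_runs(0.95, 0.95) reproduces the familiar 59 runs of the first-order 0.95/0.95 case.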
5 APPLICATION CASE
The application case focuses on the optimization of the preventive maintenance of motor-operated safety valves of a Nuclear Power Plant. The equipment consists of two main components, an actuator (A) and a valve (V), in serial configuration. The reliability (alpha, beta and lambda) and maintenance effectiveness (epsilon) parameters have been previously estimated using the Maximum Likelihood Estimation (MLE) method. The problem thus considers how the uncertainty associated with the reliability and maintenance-effectiveness parameters affects a maintenance optimization process based on system reliability and cost (R+C) criteria.

Table 2 shows the probability distribution, the imperfect maintenance model and the reliability data for the actuator and the valve necessary to quantify the equipment reliability. These values represent mean values obtained in the estimation process. In addition, the single cost data necessary to quantify the yearly equipment cost are shown in Table 3.

Additionally, maximum likelihood estimators have the property of being asymptotically normally distributed. Thus, it is possible to obtain the joint distribution of the parameters, which is given by:

(beta, alpha, epsilon_A, lambda, epsilon_V) ~ N(mu, C)   (25)

mu being the mean vector and C the variance-covariance matrix. By using the MLE method the following values are obtained:
Table 2. Reliability data.

                Actuator    Valve
Distribution    Weibull     Linear
IM model        PAS         PAR
epsilon         0.8482      0.7584
beta            7.4708      -
alpha           15400       -
lambda          -           1.54E-9

Table 3. Cost data.

Component    cca [EUR]    cma [EUR]    coa [EUR]
Actuator     3120         300          1900
Valve        3120         800          3600
beta ~ N(7.4707, 0.1572)
alpha ~ N(15397, 40808)
epsilon_A ~ N(0.8482, 2.2587e-4)
lambda ~ N(1.7343e-9, 2.7619e-20)
epsilon_V ~ N(0.7584, 8.4261e-4)   (26)

and

     | 0.1572      5.5646      1.6944e-3   0            0          |
     | 5.5646      4.0808e+4   2.3730      0            0          |
C =  | 1.6944e-3   2.3730      2.2587e-4   0            0          |   (27)
     | 0           0           0           2.7619e-20   2.9647e-12 |
     | 0           0           0           2.9647e-12   8.4261e-4  |
The optimization is performed under reliability and cost criteria, y = {R, C}, with the maintenance intervals of each component acting as decision variables. The equipment reliability and the associated cost have been quantified using the analytical models previously introduced. A Sequential Quadratic Programming (SQP) method is used as the optimization algorithm (Biggs, 1975).

Both the equipment reliability and cost functions are deterministic in the sense that, once all the necessary input data of the model are specified, they provide a single value for each output. However, as the inputs of the equipment reliability and cost models fluctuate according to distribution laws reflecting the uncertainty in the parameters, equipment reliability and cost will fluctuate in repeated runs. In this case a multivariate normal distribution, whose parameters are given by Eqns (25) and (27), is used to characterize the uncertainty.

Following the distribution-free tolerance interval approach discussed in Section 4 to address uncertainty, Wilks' equation results in 153 runs to achieve 0.95/0.95 coverage/confidence levels in a two-sided approach.
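The propagation itself can be sketched in a few lines. The snippet below is purely illustrative: it draws 153 parameter sets from the normal marginals of Eqn (26) (independently, i.e. ignoring the off-diagonal terms of C), pushes them through a hypothetical stand-in scalar model (the real R+C models of the paper are not reproduced), and takes the sample extremes as two-sided tolerance limits.

```python
import random

random.seed(1)

def one_run():
    # parameter draws from the marginal distributions of Eqn (26)
    beta = random.gauss(7.4707, 0.1572 ** 0.5)
    alpha = random.gauss(15397.0, 40808.0 ** 0.5)
    eps_a = random.gauss(0.8482, 2.2587e-4 ** 0.5)
    # hypothetical stand-in for the equipment reliability model output
    return eps_a * (1.0 - (8760.0 / alpha) ** beta)

runs = sorted(one_run() for _ in range(153))
lower, upper = runs[0], runs[-1]  # two-sided tolerance limits from the extremes
```

Accounting for the full covariance matrix (27) would require correlated sampling, e.g. via a Cholesky factorization of C.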
Figure 1. R-C plot of uncertain results considering dependence and normally distributed parameters (tolerance limits for the cost function).

Figure 2. R-C plot of uncertain results considering dependence and normally distributed parameters (tolerance limits for the reliability function).

Figure 3. R-C plot of uncertain results considering independence and normally distributed parameters (tolerance limits for the cost function).

Figure 4. R-C plot of uncertain results considering independence and normally distributed parameters (tolerance limits for the reliability function).

Figure 5. R-C plot of uncertain results considering independence and uniformly distributed parameters (tolerance limits for the cost function).

Figure 6. R-C plot of uncertain results considering independence and uniformly distributed parameters (tolerance limits for the reliability function).
CONCLUDING REMARKS
J. Żurek, M. Zieja & G. Kowalczyk
Air Force Institute of Technology
T. Niezgoda
Military University of Technology
ABSTRACT: Forecasting the reliability and life of aeronautical hardware requires recognition of the many and various destructive processes that deteriorate its health/maintenance status. The aging of the technical components of an aircraft as an armament system is of outstanding significance to the reliability and safety of the whole system. The aging process is usually induced by many different factors, such as mechanical, biological, climatic or chemical ones. Aging is an irreversible process and considerably reduces the reliability and life of aeronautical equipment.
INTRODUCTION

Items of aeronautical equipment can be divided into the following groups (Żurek & Tomaszek 2005):

- those with strongly correlated changes in the values of diagnostic parameters with time or amount of operation,
- those with poorly correlated changes in the values of diagnostic parameters with time or amount of operation, and
- those showing no correlation of changes in the values of diagnostic parameters with time or amount of operation.

For the items representing the first group one can predict the time instance at which the diagnostic parameter reaches its boundary condition. One can also predict the time instance of the item's safe shutdown and then plan the appropriate maintenance actions to be carried out.
Eqns (1)-(8) introduce the diagnostic parameter zi, its boundary value zi^g and the rate of changes of the parameter value with operating time, with drift coefficient b and diffusion coefficient a. The density function u(zi, t) of changes in the diagnostic parameter then satisfies a Fokker-Planck-type equation:

du(zi, t)/dt = -b du(zi, t)/dzi + (a/2) d^2 u(zi, t)/dzi^2   (10)

whose solution is the normal density

u(zi, t) = (1 / sqrt(2 pi A(t))) exp(-(zi - B(t))^2 / (2 A(t)))   (13)

where:

A(t) = INT_0^t a dt = at,   B(t) = INT_0^t b dt = bt
the diagnostic parameter can be presented using density functions of changes in the diagnostic parameter (Tomaszek 2001):

Q(t, zi^g) = INT_{zi^g}^infinity (1 / sqrt(2 pi a t)) exp(-(zi - bt)^2 / (2at)) dzi   (14)

Hence the density function of the time of exceeding the boundary condition zi^g is

f(t)|zi^g = dQ(t, zi^g)/dt   (15)

Using the properties of differentiation and integration, the following dependence is arrived at:

f(t)|zi^g = ((zi^g + bt) / (2t sqrt(2 pi a t))) exp(-(zi^g - bt)^2 / (2at))   (17)

The expected value of the time of exceeding the boundary condition is

E[T] = INT_0^infinity t f(t)|zi^g dt   (18)

which evaluates to

E[T] = zi^g / b + a / (2 b^2)   (19)

We also need the dependence that determines the variance of the distribution of the time of exceeding the boundary condition by the diagnostic parameter. In general, this variance is determined with dependence (20):

sigma^2 = INT_0^infinity t^2 f(t)|zi^g dt - (E[T])^2   (20)

Hence

sigma^2 = a zi^g / b^3 + 5 a^2 / (4 b^4)   (21)

The presented method of determining the distribution of the time of exceeding the boundary condition by the diagnostic parameter allows finding the density function of the time of reaching the boundary state. On this basis one can determine the reliability of a given item of aeronautical equipment whose health/maintenance status is estimated by means of the diagnostic parameter under consideration:

R(t) = 1 - INT_0^t f(tau)|zi^g dtau   (22)

The value of time for which the right side of equation (23) equals the left one determines the life of an item of aeronautical equipment under the conditions defined by the above-made assumptions.
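The density (17) and the moment formulas (19) and (21) can be cross-checked numerically. The sketch below uses arbitrary illustrative values of a, b and the boundary value z (not estimates from the paper) and integrates f(t) with a simple rectangle rule.

```python
import math

def f(t, z, a, b):
    # density (17) of the time at which the parameter exceeds the boundary value z
    return (z + b * t) / (2.0 * t * math.sqrt(2.0 * math.pi * a * t)) \
        * math.exp(-(z - b * t) ** 2 / (2.0 * a * t))

z, a, b = 20.0, 0.5, 1.0
dt = 0.001
ts = [dt * i for i in range(1, 80001)]
weights = [f(t, z, a, b) * dt for t in ts]

mass = sum(weights)                                   # should be close to 1
mean = sum(t * w for t, w in zip(ts, weights))        # compare with Eqn (19)
var = sum(t * t * w for t, w in zip(ts, weights)) - mean ** 2  # Eqn (21)

mean_closed = z / b + a / (2.0 * b ** 2)
var_closed = a * z / b ** 3 + 5.0 * a ** 2 / (4.0 * b ** 4)
```

The reliability function (22) then follows by accumulating the same weights up to a given time t.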
We also need to find the dependence that determines the variance of distribution of time of exceeding
the boundary condition by the diagnostic parameter. In general, this variance is determined with
Airborne storage batteries are those items of aeronautical equipment that show strong correlation between
changes in values of diagnostic parameters and time or
amount of operating time. Capacitance Q is a diagnostic parameter directly correlated with aging processes
that take place while operating airborne storage batteries, one which explicitly determines the expiry date
thereof (Fig. 1). The presented method allows of estimating the reliability and residual life of airborne
storage batteries using diagnostic parameters recorded
451
in the course of operating them. A maximum likelihood method was used in order to estimate parameters
a and b in the equation (17). Gained with the hitherto
made calculations for the airborne storage batteries
12-SAM-28 are the following characteristics of the
density function of time of exceeding the boundary
condition by values of the diagnostic parameter f (t)
and the reliability function R(t). They are shown in
Figs. 2 and 3.
With the above-presented method applied, the following values of life T and residual life Tr have been
gained for particular storage batteries 12-SAM-28 (see
Table 1).
Figure 3. Reliability functions R(t) for the particular 12-SAM-28 storage batteries.
CONCLUSIONS
Table 1. Life T and residual life Tr of the 12-SAM-28 storage batteries.

No.    Battery    T [months]    Tr [months]
1      109        27.7          13.76
2      574        29.2          11.28
3      180        34.54         17.5
4      112        41.19         23.47
5      525        50.71         32.17
6      170        47.8          30.38
7      159        26.13         10.61
8      330        23.73         6.98
9      280        29.12         12.32
10     596        20.85         5.7
Figure 1. Characteristic curves of the changes in capacity values Q [Ah] against service life t [months] for the 12-SAM-28 storage batteries.
Figure 2. Density functions f(t) of the time of exceeding the boundary condition for the particular 12-SAM-28 storage batteries.
throughout their operational phase. Determining how the values of the diagnostic parameters and their deviations increase enables determination of the time interval within which a given item remains fit for use (serviceable).

The dependence for the rate of changes in the value of the diagnostic parameter, i.e. equation (7), is of primary significance in this method. The method will not change substantially if other forms of this dependence are used. These different forms may result in changes of the coefficients in the Fokker-Planck equation (10), which in turn will result in changes of the dependences for both the average value and the variance of the density function of changes of the diagnostic parameter. The method also offers a capability of describing aging and wear-and-tear processes within a multi-dimensional system. The above-presented method allows for:

- assessment of the residual life of selected items of aeronautical equipment with the required reliability level maintained,
REFERENCES

Żurek, J. 2006. Żywotność śmigłowców. Warszawa.
Żurek, J. & Tomaszek, H. 2005. Zarys metody oceny niezawodności statku powietrznego z uwzględnieniem uszkodzeń sygnalizowanych i katastroficznych (nagłych). Warszawa.
P.E. Labeau
Université Libre de Bruxelles (U.L.B.), Service de Métrologie Nucléaire, Belgium
ABSTRACT: All systems are subject to aging. When a component ages, its overall performance decreases. In order to reduce the effect of aging on a system, preventive maintenance actions can be performed. Yet the rejuvenation of a unit that can be achieved by these interventions is most of the time not total, and does not correspond to the classical as-good-as-new assumption. Imperfect maintenance models have been proposed in the literature in order to embody the partial efficiency of preventive actions. This paper reviews the approaches available in the literature to model imperfect maintenance. It also proposes to extend and modify some of them, in order to obtain a simple and mathematically exploitable model, associated with a more realistic time-dependent behavior of the component than that corresponding to the application of previous imperfect maintenance models.
INTRODUCTION

All systems are subject to aging. When a component ages, its overall performance decreases. As a consequence of aging, the failure probability increases and the productivity of the operated system is lower. In order to reduce the aging effect on a system, preventive maintenance actions can be performed. The most easily modeled preventive maintenance is the replacement of the system: after maintenance, the system is As Good As New (AGAN). Maintenance policies based on preventive replacement of the system have been studied for a long time, starting from Barlow & Proschan 1965.

The reality is often quite different: preventive maintenance interventions are made to reduce the aging effect or to delay the appearance of these effects, but these actions are not perfect. The goal of imperfect preventive maintenance is to maintain the performance of the system within an acceptable range. Also, the efficiency and the cost of each maintenance intervention depend on which type of action is undertaken. For example, in a car, the oil, brake pads or tires have to be changed periodically, but none of these actions yields a new car. The way they affect the car state, the cost of each action and also the optimal periods to carry them out differ from one action to the other. We also know that, despite all possible maintenance actions, the car ages. When the performance of the system tends to go outside the acceptable range mentioned above, it sooner or later becomes more
MODELS REVIEW
The failure intensity is defined as

lambda_t = lim_{dt->0} (1/dt) P(N_{t+dt} - N_t = 1 | H_t),   (1)

and the initial failure rate is assumed to follow a power-law behavior:

lambda(t) = (beta/alpha) (t/alpha)^{beta-1}.   (2)

In the ARI (Arithmetic Reduction of Intensity) family, a PM at time TM reduces the failure intensity; for the ARI1 model,

lambda_{TM+} = lambda_{TM-} - rho (lambda_{TM-} - lambda_{T(M-1)+}),   (3), (4)

and for the ARI_inf model,

lambda_{Tm+} = lambda_{Tm-} - rho lambda_{Tm-}.   (5)

With periodic PMs of period TPM, Mt interventions having taken place before time t, the reduction of the failure intensity is

rho SUM_{j=0}^{Mt-1} (1 - rho)^j lambda((Mt - j) TPM)   (6)

for the ARI_inf model, and

rho SUM_{j=0}^{min(m-1, Mt-1)} (1 - rho)^j lambda((Mt - j) TPM)   (7)

for the ARIm model.
The ARI1 and ARI_inf models are then particular cases of the ARIm model. In the ARI family of models, the value rho = 0 means that the intervention is ABAO; but, when rho = 1, the intervention is not AGAN, because the evolution of the failure rate with time differs from the evolution of the initial failure rate of the component. This behavior is at the same time an advantage and a drawback of the model: there is a part of the aging, related to the working time, that is unavoidable, but the replacement of a component is not included in the model of impact. Figure 1 gives an example of the resulting failure rate for different maintenance efficiency models, including the ARI1 and ARI_inf models, for a given initial failure rate. We can see that the ARI models result in a piecewise vertical translation of the failure rate and hence keep the wear-out trend.
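A direct transcription of the reduction terms (6)-(7) makes this piecewise behavior easy to inspect. The sketch below assumes periodic PMs and the power-law rate (2); it is an illustration of the reconstructed formulas, with arbitrary parameter values, not code from the paper.

```python
def power_law_rate(t, alpha=10.0, beta=2.0):
    # initial failure rate (2): (beta/alpha) (t/alpha)^(beta-1)
    return (beta / alpha) * (t / alpha) ** (beta - 1.0)

def ari_m_intensity(t, rho, T_pm, lam=power_law_rate, m=None):
    # lam(t) minus the geometrically weighted sum over past PM times; m=None -> ARI_inf
    M_t = int(t // T_pm)  # number of PMs performed before t
    depth = M_t if m is None else min(m, M_t)
    reduction = rho * sum((1.0 - rho) ** j * lam((M_t - j) * T_pm)
                          for j in range(depth))
    return lam(t) - reduction
```

With rho = 0 the intensity reduces to the unmaintained rate (ABAO), and deeper memory (larger m) yields a lower intensity.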
2.1.2 Nakagawa model
Nakagawa 1986, 1988 gives a model where each PM resets the value of the failure rate to 0, but, after a PM, the failure rate evolution is increased by an adjustment factor bigger than 1. In the periodic case we have:

lambda_t = a^{Mt} lambda(t - Mt TPM)   (8)
failure mode, with failure rate lambda(t), and the non-maintainable failure mode, with failure rate h(t). They propose to take into account the dependence between these two competing failure modes. The failure rate of the maintainable failure mode is the sum of the initial failure rate lambda(t) plus a positive value p(t)h(t). They suppose that each PM restores the maintainable failure mode AGAN, while the non-maintainable failure mode remains unchanged.

Under this model the failure rate of the component at time t is given by:

lambda_t = h(t) + lambda(t - Mt TPM) + p(t - Mt TPM) h(t)   (9)
In the ARA (Arithmetic Reduction of Age) family, the PMs reduce the effective age A_t of the component, with lambda_t = lambda(A_t):

A_{TM+} = A_{TM-} - rho A_{TM-},   (11)

with 0 <= rho <= 1. If rho = 1, we have an AGAN intervention and, if rho = 0, we have an ABAO intervention.

The model from Canfield 1986, the Kijima1 model (Kijima 1988) and the Proportional Age Setback (PAS) (Martorell et al., 1999) are all equivalent to the ARA1 model. The model from Malik 1979, the Kijima2 model (Kijima 1988) and the Proportional Age Reduction (PAR) (Martorell et al., 1999) are all equivalent to the ARA_inf model.

It is obvious that, with a failure rate of the form (2) and beta = 2, the ARA models are equivalent to the ARI models.

Note also that, with a constant efficiency rho and a constant maintenance time interval TPM, the ARA1 model gives

A_t = t - rho Mt TPM,   (13)

while for the ARA_inf model the effective age just after the Mt-th intervention is

SUM_{i=1}^{Mt} (1 - rho)^i TPM = ((1 - rho)/rho) (1 - (1 - rho)^{Mt}) TPM,   (14), (16)

which converges to the asymptotic value

((1 - rho)/rho) TPM.   (15)
MODEL EXTENSION

3.1

The component is assumed to have a constant failure rate lambda0 up to an age tau, after which wear-out sets in:

lambda(t) = lambda0                                      if t <= tau
lambda(t) = lambda0 + (beta/alpha)((t - tau)/alpha)^{beta-1}   if t > tau   (17)

The corresponding cumulative distribution function is

F(t) = 1 - e^{-lambda0 t}                                 if t <= tau
F(t) = 1 - e^{-lambda0 t} e^{-((t - tau)/alpha)^beta}     if t > tau   (18)

Eqns (19)-(23) carry this two-regime description over the successive PM cycles, and Eqns (24) and (25) give the failure rate and the conditional failure distribution after an intervention at time ts leaving a residual effective age s+, distinguishing the cases s+ <= tau and s+ > tau.
REFERENCES

Barlow R., Proschan F., 1965, Mathematical Theory of Reliability, SIAM, Philadelphia (reprinted 1996).
Brown M., Proschan F., 1983, Imperfect repair, J. Appl. Prob.; 20:851-860.
Canfield R.V., 1986, Cost optimization of periodic preventive maintenance, IEEE Trans. Reliab.; 35:78-81.
Clavareau J. and Labeau P.E., 2006, Maintenance and replacement policies under technological obsolescence, Proc. of ESREL'06, Estoril (Portugal), 499-506.
Doyen L., Gaudoin O., 2004, Classes of imperfect repair models based on reduction of failure intensity or effective age, Rel. Engng. Syst. Safety; 84:45-56.
Fouathia O., Maun J.C., Labeau P.E., and Wiot D., 2005, Cost-optimization model for the planning of the renewal, inspection, and maintenance of substation facilities in the Belgian power transmission system, Proc. of ESREL 2005, Gdansk (Poland), 631-637.
Kijima M., Morimura H., Suzuki Y., 1988, Periodical replacement problem without assuming minimal repair, Eur. J. Oper. Res.; 37:194-203.
Kumar D. and Klefsjö B., 1994, Proportional hazard model: a review, Rel. Engng. Syst. Safety; 44:177-188.
Lin D., Zuo M.J., Yam R.C.M., 2000, General sequential imperfect preventive maintenance models, Int. J. Reliab. Qual. Saf. Eng.; 7:253-266.
Malik M.A.K., 1979, Reliable preventive maintenance scheduling, AIIE Trans.; 11:221-228.
Martorell S., Sanchez A., Serradell V., 1999, Age-dependent reliability model considering effects of maintenance and working conditions, Rel. Engng. Syst. Safety; 64:19-31.
Nakagawa T., 1986, Periodic and sequential preventive maintenance policies, J. Appl. Probab.; 23:536-542.
Nakagawa T., 1988, Sequential imperfect preventive maintenance policies, IEEE Trans. Reliab.; 37:295-298.
Percy D.F. and Kobbacy K.A.H., 2000, Determining economical maintenance intervals, Int. J. Production Economics; 67:87-94.
Samrout M., Châtelet E., Kouta R., and Chebbo N., 2008, Optimization of maintenance policy using the proportional hazard model, Rel. Engng. Syst. Safety; doi:10.1016/j.ress.2007.12.006.
Wu S., Clements-Croome D., 2005, Preventive maintenance models with random maintenance quality, Rel. Engng. Syst. Safety; 90:99-105.
Zequeira R.I., Bérenguer C., 2006, Periodic imperfect preventive maintenance with two categories of competing failure modes, Rel. Engng. Syst. Safety; 91:460-468.
ABSTRACT: Consider a system subject to two modes of failure: maintainable and non-maintainable. Whenever the system fails, a minimal repair is performed. Preventive maintenance actions are performed at integer multiples of a fixed period, and the system is replaced when a fixed number of preventive maintenances have been completed. The preventive maintenance is imperfect and the two failure modes are dependent. The problem is to determine the optimal length between successive preventive maintenances and the optimal number of preventive maintenances before the system replacement that minimize the expected cost rate. Optimal preventive maintenance schedules are obtained for non-decreasing failure rates, and numerical examples for power-law models are given.
INTRODUCTION
the effect of the wear-out of the system (due to the non-maintainable failures) on the occurrence of the maintainable failures. Practical applications of this model are shown in (Zequeira and Bérenguer 2006), where some examples of dependence between maintainable and non-maintainable failures are explained.

In this work, the preventive maintenance actions are performed at times kT, k = 1, 2, . . ., and the system is replaced whenever it reaches an age of NT after the last renewal. When the system fails, a minimal repair is performed. Costs are associated with the preventive maintenance actions, with the repairs and with the replacements. The objective is to determine the optimal length between preventive maintenances and the optimal number of preventive maintenances between replacements of the system.
2 FORMULATION

We consider a maintenance model in which corrective and preventive maintenance take place according to the following scheme.
1. Before the first preventive maintenance, the maintainable failures arrive according to a non-homogeneous Poisson process (NHPP) {N1(t), t >= 0} with intensity function r1,0(t) and cumulative failure intensity function

H1(t) = INT_0^t r1,0(u) du,   t >= 0.

2. The non-maintainable failures arrive according to an NHPP {N2(t), t >= 0} with intensity function r2(t) and cumulative failure intensity function

H2(t) = INT_0^t r2(u) du,   t >= 0.

A non-maintainable failure is corrected by a minimal repair and the repair time is negligible. We assume that r2(t) is continuous and non-decreasing in t. For a counting process with random cumulative intensity Lambda(t),

P[N(t) = n] = E[(1/n!) Lambda(t)^n e^{-Lambda(t)}],   n = 0, 1, 2, . . . .   (3)

3. The system is preventively maintained at times kT, where k = 1, 2, . . . and T > 0. The preventive maintenance actions only reduce the failure rate of the maintainable failures; the failure rate of the non-maintainable failures remains undisturbed by the successive preventive maintenances.

4. The non-maintainable failures affect the failure rate of the maintainable failures in the following way. Denoting by r1,k, k = 0, 1, 2, . . ., the failure rate of the maintainable failures after the k-th preventive maintenance, the cumulative failure intensity of the maintainable failures between the k-th and the (k + 1)-th preventive maintenance satisfies

INT_{kT}^t r1,k(u) du = a^{N2(kT)} H1(t - kT),   kT <= t < (k + 1)T,   (4)

where the parameter a quantifies the effect of the non-maintainable failures on the maintainable failure mode.
for z = 0, 1, 2, . . . . The expected number of maintainable failures between the k-th and the (k + 1)-th preventive maintenance, k = 0, 1, 2, . . ., is given by

E[N1(kT, (k + 1)T)] = H1(T) exp((a - 1) H2(kT)),   (5)

where the weights

gk(T) = SUM_{z=0}^infinity a^z (H2(kT)^z / z!) exp{-H2(kT)} = exp((a - 1) H2(kT))

appear throughout. The expected cost rate is

C(T, N) = [C1 SUM_{k=0}^{N-1} H1(T) exp((a - 1) H2(kT)) + C2 H2(NT) + Cr + (N - 1) Cm] / (NT).   (6)

3 OPTIMIZATION

The problem is to find the values T and N that minimize the function C(T, N) given in (6). In other words, to find the values Topt and Nopt such that

C(Topt, Nopt) = inf {C(T, N), T > 0, N = 1, 2, 3, . . . }.   (7)

Theorems 1 and 2 treat the optimization problem in each variable. The proofs of these results can be found in (Castro 2008).

Theorem 1.1 Let C(T, N) be the function given by (6). For fixed T > 0, the finite value of N that minimizes C(T, N) is obtained for N = Nopt^T, given by

Nopt^T = min {N >= 1 : A(T, N) > Cr - Cm},   (8)

where A(T, N) is an auxiliary function, given in (9) in terms of H1, H2 and gk(T). If N exists such that A(T, N) = Cr - Cm, then Nopt^T is not unique. Furthermore, if T1 <= T2, then Nopt^{T1} >= Nopt^{T2}.

For fixed N, the optimal maintenance interval is obtained from the roots of the auxiliary function B(T, N) given by (10)-(11), which combines the terms gk(T)[r1,0(T) T - H1(T)] and gk(T) H1(T)(a - 1) kT r2(kT). A sufficient condition, Eqn (12), formulated in terms of Cm and C(Topt^1, 1), guarantees that, if lim_{t->infinity} r2(t) = infinity, then Topt^N < infinity.

Let Nopt^T be the value of N that optimizes C(T, N). Assuming that either r1,0 or r2 is unbounded, the problem of optimization of C(T, N) given in (7) has finite optimal solutions and

C(Topt, Nopt) = min_{1 <= N <= Nopt^T} min_{T > 0} C(T, N).   (13)

Proof. The proof of this result can be found in (Zhang and Jardine 1998). Note that, for the model presented in this paper, from Theorem 1, if T1 <= T2 then Nopt^{T1} >= Nopt^{T2}; this condition is necessary for the proof of the result.

Remark 1.1 Analogously to (Zhang and Jardine 1998), C(Topt^1, 1) in (12) may be replaced by any C(Topt^i, i) for i in {2, 3, . . . }.
4 NUMERICAL EXAMPLES

Power-law intensities ri(t), t >= 0, i = 1, 2, are considered, with costs C2 = 7.5, Cr = 200 and Cm = 75.
Figure: expected cost rate C(T, N) as a function of T for different values of N (N = 1, . . . , 30).
The optimization procedure is as follows.

1. Find the value Topt^12 that verifies

min_{T>0} C(T, 12) = C(Topt^12, 12),

which gives Topt^12 = 0.4869.

2. Evaluate A(0.4869, N) against Cr - Cm for the different values of N; here Cm / C(Topt^12, 12) = 25 / 51.3431 = 0.4869.

3. Find the value Nopt^{0.4869} that verifies

min_N {C(0.4869, N)} = C(0.4869, Nopt^{0.4869}).

Figure 4 shows the values of C(Topt^N, N) for different values of N. The values Topt^N are obtained using a root-search algorithm for the function B(T, N) given by (11), for the different values of N. By inspection, one obtains that the optimal values for the optimization problem are Topt = 3.575 and Nopt = 10, with an expected cost rate of C(3.575, 10) = 51.281.

Figure 4. C(Topt^N, N) for different values of N.
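The cost rate (6) and the two-dimensional search are easy to prototype. The sketch below uses hypothetical power-law ingredients H1, H2, a and C1 (the paper's exact values for these are not reproduced here), together with the example costs C2 = 7.5, Cr = 200 and Cm = 75, so the resulting optimum is illustrative only.

```python
import math

def H1(t):  # hypothetical cumulative intensity of maintainable failures
    return (t / 10.0) ** 2.0

def H2(t):  # hypothetical cumulative intensity of non-maintainable failures
    return (t / 20.0) ** 1.5

a, C1, C2, Cr, Cm = 1.2, 10.0, 7.5, 200.0, 75.0

def cost_rate(T, N):
    # expected cost rate C(T, N) of Eqn (6)
    repairs = C1 * sum(H1(T) * math.exp((a - 1.0) * H2(k * T)) for k in range(N))
    return (repairs + C2 * H2(N * T) + Cr + (N - 1) * Cm) / (N * T)

best_cost, T_opt, N_opt = min((cost_rate(0.1 * i, N), 0.1 * i, N)
                              for i in range(1, 300) for N in range(1, 40))
```

A grid search like this mirrors the finite reduction of the two-variable problem described in the conclusions; a root search on B(T, N), as in the paper, refines T for each N.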
CONCLUSIONS

In a system with two failure modes and successive preventive maintenance actions, we have studied the problem of finding the optimal length T between successive preventive maintenances and the optimal number N - 1 of preventive maintenances before the total replacement of the system. The two failure modes are dependent, and their classification depends on the reduction in the failure rate after the preventive maintenance actions. For fixed T, an optimal finite number of preventive maintenances before the total replacement of the system is obtained. In the same way, when N is fixed, the optimal length between successive preventive maintenances is obtained; if the failure rates are unbounded, this value is finite. To analyze the optimization problem in the two variables, except in some special cases, we have to resort to numerical results, but the search for the optimal values can be reduced to a finite number of cases.
Figure 3.

ACKNOWLEDGEMENTS
REFERENCES

Ben-Daya, M., S. Duffuaa, and A. Raouf (2000). Maintenance, Modeling and Optimization. Kluwer Academic Publishers.
Castro, I.T. (2008). A model of imperfect preventive maintenance with dependent failure modes. European Journal of Operational Research, to appear.
Lin, D., J. Ming, and R. Yam (2001). Sequential imperfect preventive maintenance models with two categories of failure modes. Naval Research Logistics 48, 173-178.
Nakagawa, T. (2005). Maintenance Theory of Reliability. Springer-Verlag, London.
C. Bérenguer
Université de Technologie de Troyes/CNRS, Troyes, France
ABSTRACT: The paper deals with the maintenance optimization of a system subject to a stressful environment, where the behavior of the system deterioration can be modified by the environment. Maintenance strategies based not only on the stationary deterioration mode but also on the stress state are proposed to inspect and replace the system so as to minimize the long-run maintenance cost per unit of time. Numerical experiments are conducted to compare their performance with classical approaches and thus highlight the economic benefits of our strategies.
INTRODUCTION

One way of capturing the effect of a random environment on an item's lifelength is to make its failure rate function (be it predictive or model) a stochastic process. Among the most important approaches are the well-known proportional hazard rate (Cox 1972) and the cumulative hazard rate (Singpurwalla 2006; Singpurwalla 1995), which consist in modeling the effect of the environment through the introduction of covariates in the hazard function. However, the estimation of the different parameters is quite complex for systems, even if efficient methodologies exist at the component level; see Accelerated Life Testing (Bagdonavicius and Nikulin 2000; Lehmann 2006). Other reliability work on non-parametric methods (Singpurwalla 2006) allows the likelihood to be specified by taking into account all the information available. Nevertheless, the direct application of these models to maintenance optimization leads to very complex and inextricable mathematical equations.

In this context, the objective of this paper is to develop a maintenance decision framework for a continuously deteriorating system influenced by its environmental conditions. We consider a system subject to continuous random deterioration such as corrosion, erosion, cracks . . . (van Noortwijk 2007). The deterioration process is influenced by environmental conditions such as, e.g., the humidity rate, temperature or vibration levels. First, a specific deterioration model based on the nominal deterioration characteristics (without any stress), the random evolution of the environment and the impact of the stress on the system is developed. Then, different maintenance strategies
The condition of the system at time t can be summarized by a scalar aging variable Xt (Deloux et al., 2008; Grall et al., 2006; Grall et al., 2002) which increases as the system deteriorates. Xt can be, e.g., the measure of a physical parameter linked to the resistance of a structure (length of a crack, . . . ). The initial state corresponds to a perfect working state, i.e. X0 = 0. The system fails when the aging variable exceeds a predetermined threshold L. The threshold L can be seen as a deterioration level which must not be exceeded for economical or security reasons. The deterioration process after a time t is independent of the deterioration before this time. In this paper, it is assumed that (Xt)t≥0 is a gamma process and that the increment of (Xt)t≥0 over an interval of length Δt, ΔXt, follows a gamma probability distribution with shape parameter αΔt and scale parameter β:
f_{αΔt,β}(x) = (1/Γ(αΔt)) β^{αΔt} x^{αΔt−1} e^{−βx} 1_{x≥0}   (1)
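As a hedged illustration, a gamma-process deterioration path of this kind can be simulated by drawing independent stationary increments; the function names and numerical values below are illustrative assumptions, not taken from the paper.

```python
import random

def gamma_path(alpha, beta, dt, horizon, seed=0):
    """Simulate a gamma deterioration process: each increment over an
    interval dt follows Gamma(shape=alpha*dt, scale=1/beta)."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    path = [(t, x)]
    while t < horizon:
        x += rng.gammavariate(alpha * dt, 1.0 / beta)  # independent increment
        t += dt
        path.append((t, x))
    return path

def failure_time(path, L):
    """First simulated time at which the aging variable exceeds L."""
    for t, x in path:
        if x > L:
            return t
    return None  # no failure on the simulated horizon
```

With α = 0.1 and β = 7 (the values used later in the numerical section), the mean deterioration after t time units is αt/β.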
Figure 1. Evolution of the stress process Y(t) and of the deterioration X(t), with mean deterioration m(t), for a non-stressed and a stressed system.
impacted by this environment. The covariate influence can be reduced to a log-linear regression model: if Yt = y, the shape parameter satisfies α_y = α0 e^{γy} (Bagdonavicius and Nikulin 2000; Lehmann 2006), where γ measures the influence of the covariate on the degradation process. Thus, it is assumed that the system is subject to an increase in the deterioration speed while it is under stress (i.e. while Yt = 1), and the system deteriorates according to its nominal mode while it is not stressed (while Yt = 0). The parameters of the degradation process when the system is non-stressed are α0 Δt (α0 = α) and β, and when the system is under stress α1 Δt and β, with α1 = α0 e^γ. On average, the shape parameter per unit of time is α0 (1 + r(e^γ − 1)), where r is the mean proportion of time spent in the stressed state. α0 and β can be obtained by maximum likelihood estimation; γ can be assimilated to a classical accelerator factor and can be obtained with accelerated life testing methods.
Figure 1 sketches the evolution of the stress process and its influence on the system deterioration.
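The averaged shape parameter α0(1 + r(e^γ − 1)) can be checked by simulation. The sketch below is an illustration under stated assumptions: within each unit interval, a fraction r of the interval is spent under stress, so the increment is the sum of a nominal and a stressed gamma increment (all names and values are hypothetical).

```python
import math, random

def mean_deterioration(alpha0, beta, gam, r, t, n=5000, seed=1):
    """Monte Carlo estimate of E[X_t] when a proportion r of each unit
    interval is spent under stress (shape rate alpha0*e^gam) and 1-r in
    the nominal mode (shape rate alpha0)."""
    rng = random.Random(seed)
    a_stress = alpha0 * math.exp(gam)
    total = 0.0
    for _ in range(n):
        x = 0.0
        for _ in range(t):  # one unit-length interval at a time
            if (1 - r) > 0:
                x += rng.gammavariate(alpha0 * (1 - r), 1.0 / beta)
            if r > 0:
                x += rng.gammavariate(a_stress * r, 1.0 / beta)
        total += x
    return total / n
# Theory: E[X_t] = alpha0 * (1 + r*(e^gam - 1)) * t / beta
```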
3 MAINTENANCE DECISION FRAMEWORK

This section presents the maintenance decision framework. First, the structure of the maintenance policy is presented to define when an inspection or a replacement should be implemented. The mathematical expressions of the associated long-run maintenance cost per unit of time are then developed to optimize the maintenance decision with respect to the system state behavior.
3.1

The maintenance policies are evaluated through the long-run expected maintenance cost per unit of time:

C∞ = lim_{t→∞} C(t)/t = E(C(S))/E(S)   (4)
Figure 2. Deterioration X(t) with the preventive and corrective replacement areas; failure is due to an excessive deterioration level (threshold L).
where E(W) is the expected value of the random variable W and S is the length of a regenerative cycle. The cost C(S) is composed of the inspection, replacement and unavailability costs and can be written:

C∞ = [c_ix E(N_i(S)) + c_p + (c_c − c_p) P(X_S > L) + c_u E(D_u(S))] / E(S)   (5)

where N_i(S) is the number of inspections and D_u(S) the cumulative unavailability duration on a cycle.
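A long-run cost of the form E(C(S))/E(S) can be estimated by averaging over simulated regenerative cycles. The sketch below is a hypothetical illustration: `one_cycle` implements a simplified periodic-inspection cycle (fixed period tau, preventive threshold xi), not the exact policies of the paper, and all parameter values are assumptions.

```python
import random

def cost_rate(simulate_cycle, n_cycles=2000, seed=3):
    """Estimate E[C(S)]/E[S] by averaging cycle costs and cycle lengths
    over independent regenerative cycles."""
    rng = random.Random(seed)
    total_cost = total_len = 0.0
    for _ in range(n_cycles):
        cost, length = simulate_cycle(rng)
        total_cost += cost
        total_len += length
    return total_cost / total_len

def one_cycle(rng, alpha=0.1, beta=7.0, tau=4.0, xi=1.5, L=2.0,
              c_ix=5.0, c_p=30.0, c_c=100.0, c_u=25.0):
    """Hypothetical cycle: inspect every tau; replace preventively if
    X >= xi, correctively if X > L (downtime roughly penalized)."""
    x, t, n_insp = 0.0, 0.0, 0
    while True:
        x += rng.gammavariate(alpha * tau, 1.0 / beta)
        t += tau
        n_insp += 1
        if x >= xi:
            cost = c_ix * n_insp + (c_c if x > L else c_p)
            if x > L:
                cost += c_u * tau / 2.0  # crude mean downtime in the interval
            return cost, t
```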
r(t). If the new inspection period leads to an inspection time lower than the present time, the system is inspected immediately with the decision parameters associated with the current value of r(t).
Even if Policy 1 minimizes the maintenance cost criterion, it is not easy to implement in industry; we therefore propose another policy, Policy 2, which is the discrete case of Policy 1. In order to reduce the number of updates of the decision parameters, we propose to introduce thresholds (l1, . . . , ln) on r(t). The decision framework for Policy 2 is defined as follows:
– if r(t) = 0, the associated decision parameters are (0, 0);
– if r(t) = 1, the associated decision parameters are (1, 1);
– if r(t) ∈ (lk, lk+1), the associated decision parameters are (lk+1, lk+1) if t < k+1, and (t, lk+1) otherwise.
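The threshold rule above can be expressed as a small lookup. This sketch is a deliberate simplification (it selects a parameter set from the number of thresholds exceeded by r(t) and ignores the time-comparison part of the rule); all names are hypothetical.

```python
def policy2_parameters(r, thresholds, params):
    """Select the decision-parameter set for Policy 2.

    thresholds: increasing cut points (l1, ..., ln) on r(t) in (0, 1);
    params: n+1 parameter sets, params[0] used below l1 and params[-1]
    above ln.
    """
    k = sum(1 for l in thresholds if r > l)  # thresholds exceeded by r(t)
    return params[k]
```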
Figure 4 illustrates the updating of the decision parameters depending on r(t) and the number of thresholds. In this example, only one threshold on r(t) is considered: l1. At t = 0, the system is assumed to be in a new state and the system is stressed (Y(t) = 1), thus the decision parameters are (1, 1) (see T1 in Figure 4). These decision parameters remain unchanged as long as r(t) ≤ l1. As soon as r(t) > l1, see T2 in Figure 4, the decision parameters are updated to (l1, l1). At T3, r(t) crosses the threshold l1 again and the new decision parameters (1, 1) lead to an already passed inspection time, thus the deterioration level is inspected immediately (cf. the two arrows from the curve r(t)).
472
Proportion of time
elapsed in the stress
state: r(t)
1
l1
End
Step 2: estimation of the cost criterion: the long
run maintenance cost corresponds to the mean of
the Nh costs.
0
Y(t)
0
0
l1
l1
1
0
T1
T2
T3
Policy 0
Policy 1
0.55
Cos t
0.65
NUMERICAL RESULTS
0.60
1
1
0.0
0.2
0.4
0.6
0.8
1.0
473
Figure 6. Optimized cost variation when the mean proportion of the time elapsed in the stress state varies.
These curves are obtained for arbitrarily fixed maintenance data and operation costs: α0 = 0.1, β = 7, γ = 0.3, L = 2, c_ix = 5, c_c = 100, c_p = 30, c_u = 25.
Figure 5 illustrates the benefit of the non-periodic inspection strategy. In fact, the non-periodic inspection strategy (full line) is always the policy which minimizes the long-run maintenance cost per unit of time. When, on average, the proportion of time elapsed in the stressed state, r, tends to 0 (never stressed) or 1 (always stressed), Policy 1 tends to Policy 0. The economic benefit of Policy 1 compared to Policy 0 varies from 0% (when r = 0 and r = 1) to 5%. Finally, the non-periodic scheme has the advantage of adapting the inspection interval to the real proportion of time elapsed in the stress state, which allows significantly better performance than the periodic scheme.
4.2
CONCLUSION
In this paper, different maintenance decision frameworks for a continuously deteriorating system which evolves in a stressful environment have been proposed. The relationship between the system performance and the associated operating environment has been modeled respectively as an accelerator factor for the degradation and as a binary variable. A cost criterion has been numerically evaluated to highlight the performance of the different maintenance strategies and the benefits of updating the decision according to the history of the system.
Even if the last proposed structure for the maintenance decision framework has shown interesting performance, a lot of research remains to be done. First, the mathematical model of the cost criterion in this case has to be evaluated, and a sensitivity analysis when the maintenance data vary should be performed. Moreover, during an inspection, the maintenance decisions for the next inspection are based only on the time elapsed by the system in the stressed state, but it could be interesting to use the information given by the degradation level as well, and so propose a mixed condition- and stress-based framework for inspections.
The curves in Figure 6 are the respective representations of the optimized cost criterion for Policies 0 and 2. They are obtained when the number of thresholds in the case of Policy 2 varies from 1 to 10. The curve for Policy 2 is the smoothed estimator of the respective costs obtained by simulation. These curves are obtained for arbitrarily fixed maintenance data and operation costs with the following values: α0 = 0.1, β = 7, γ = 0.3, L = 2, c_ix = 5, c_c = 100, c_p = 30, c_u = 25, r = 0.4. Figure 6 illustrates the benefit of the non-periodic strategy compared to the periodic strategy. The economic performance of Policy 2 increases with the number of thresholds and varies between 0.012% and 2.5% compared to Policy 1. When the number of thresholds increases, Policy 2 tends to Policy 1.

APPENDIX A

Let f(x, t) be the probability density function of the deterioration increment at time t of a system subject to aging and stress, i.e. the gamma density with the averaged shape parameter ᾱ = α0(1 + r(e^γ − 1)):

f(x, t) = (β^{ᾱt} / Γ(ᾱt)) x^{ᾱt−1} e^{−βx}   (6)
with

P(X(xΔ) < L | X((x − 1)Δ) < L) = ∫_0^L f(y, (x − 1)Δ) [∫_0^{L−y} f(z, Δ) dz] dy   (10)

P(X_S > L), the probability of a corrective replacement on a cycle S, is given by:

P(X_S > L) = Σ_x [R_m((x − 1)Δ) − R_m(xΔ)]   (11)
REFERENCES

Asmussen, S. (1987). Applied Probability and Queues, Wiley Series in Probability and Mathematical Statistics. Wiley.
Bagdonavicius, V. and M. Nikulin (2000). Estimation in Degradation Models with Explanatory Variables. Lifetime Data Analysis 7, 85–103.
Castanier, B., C. Bérenguer, and A. Grall (2003). A sequential condition-based repair/replacement policy with non-periodic inspections for a system subject to continuous wear. Applied Stochastic Models in Business and Industry 19(4), 327–347.
Cox, D. (1972). Regression models and life tables. Journal of the Royal Statistical Society, Series B 34, 187–202.
Deloux, E., B. Castanier, and C. Bérenguer (2008). Maintenance policy for a non-stationary deteriorating system. In Annual Reliability and Maintainability Symposium Proceedings 2008 (RAMS 2008), Las Vegas, USA, January 28–31.
Gertsbakh, I. (2000). Reliability Theory With Applications to Preventive Maintenance. Springer.
Grall, A., L. Dieulle, C. Bérenguer, and M. Roussignol (2002). Continuous-Time Predictive-Maintenance Scheduling for a Deteriorating System. IEEE Transactions on Reliability 51(2), 141–150.
Grall, A., L. Dieulle, C. Bérenguer, and M. Roussignol (2006). Asymptotic failure rate of a continuously monitored system. Reliability Engineering and System Safety 91(2), 126–130.
Lehmann, A. (2006). Joint modeling of degradation and failure time data. In Degradation, Damage, Fatigue and Accelerated Life Models in Reliability Testing, ALT2006, Angers, France, pp. 26–32.
Rausand, M. and A. Hoyland (2004). System Reliability Theory: Models, Statistical Methods, and Applications (Second ed.). Wiley.
Singpurwalla, N. (1995). Survival in Dynamic Environments. Statistical Science 10(1), 86–103.
Singpurwalla, N. (2006). Reliability and Risk: A Bayesian Perspective. Wiley.
van Noortwijk, J. (2007). A survey of the application of gamma processes in maintenance. Reliability Engineering and System Safety, doi:10.1016/j.ress.2007.03.019.
Wang, H. (2002). A Survey of Maintenance Policies of Deteriorating Systems. European Journal of Operational Research 139, 469–489.
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: This paper presents the application of the particle filtering method, a model-based Monte Carlo
method, for estimating the failure probability of a component subject to degradation based on a set of observations.
The estimation is embedded within a scheme of condition-based maintenance of a component subject to fatigue
crack growth.
INTRODUCTION
When the health state of a component can be monitored, condition-based policies can be devised to
maintain it dynamically on the basis of the observed
conditions (Marseguerra et al., 2002 and Christer et al.,
1997). However, often in practice the degradation
state of a component cannot be observed directly,
but rather must be inferred from some observations,
usually affected by noise and disturbances.
The soundest approaches to the estimation of the
state of a dynamic system or component build a posterior distribution of the unknown degradation states
by combining a prior distribution with the likelihood
of the observations actually collected (Doucet, 1998
and Doucet et al., 2001). In this Bayesian setting,
the estimation method most frequently used in practice is the Kalman filter, which is optimal for linear
state space models and independent, additive Gaussian noises (Anderson & Moore, 1979). In this case,
the posterior distributions are also Gaussian and can
be computed exactly, without approximations.
In practice, however, the degradation process of
many systems and components is non-linear and the
associated noises are non-Gaussian. For these cases,
approximate methods, e.g. analytical approximations
of the extended Kalman (EKF) and Gaussian-sum filters and numerical approximations of the grid-based
filters can be used (Anderson & Moore, 1979).
Alternatively, one may resort to particle filtering
methods, which approximate the continuous distributions of interest by a discrete set of weighed particles
representing random trajectories of system evolution
in the state space and whose weights are estimates of
the probabilities of the trajectories (Kitagawa 1987 and
Djuric et al., 2003 and Doucet et al., 2000).
In this paper, the particle filtering method for
state estimation is embedded within a condition-based
maintenance scheme of a component subject to fatigue
crack growth.

2 PARTICLE FILTERING

The degradation state x_k and the observation z_k collected at time step k are described by a Markovian state-space model:

x_k = f(x_{k−1}, ω_{k−1})   (1)

z_k = h(x_k, ν_k)   (2)

where f and h are possibly non-linear functions and ω_k, ν_k are the process and measurement noises. The Bayesian filter computes the posterior p(x_k | z_{0:k}) recursively through a prediction step

p(x_k | z_{0:k−1}) = ∫ p(x_k | x_{k−1}) p(x_{k−1} | z_{0:k−1}) dx_{k−1}   (3)

and an update step

p(x_k | z_{0:k}) = p(z_k | x_k) p(x_k | z_{0:k−1}) / p(z_k | z_{0:k−1})   (4)

with the normalizing constant

p(z_k | z_{0:k−1}) = ∫ p(x_k | z_{0:k−1}) p(z_k | x_k) dx_k   (5)

In the particle filtering approach, the posterior distribution of the state trajectory is approximated by the empirical distribution of Ns samples:

p̂(x_{0:k} | z_{0:k}) = (1/Ns) Σ_{i=1}^{Ns} δ(x_{0:k} − x_{0:k}^i)   (6)

where x_{0:k}^i, i = 1, 2, . . . , Ns, is a set of independent random samples drawn from p(x_{0:k} | z_{0:k}).

Since, in practice, it is usually not possible to sample efficiently from the true posterior distribution p(x_{0:k} | z_{0:k}), importance sampling is used, i.e. the state sequences x_{0:k}^i are drawn from an arbitrarily chosen distribution π(x_{0:k} | z_{0:k}), called importance function (Kalos and Whitlock 1986). By Bayes' rule, the probability p(x_{0:k} | z_{0:k}) is written as:

p(x_{0:k} | z_{0:k}) = p(z_{0:k} | x_{0:k}) p(x_{0:k}) / p(z_{0:k})   (7)

and the posterior is approximated by the weighted empirical distribution

p̂(x_{0:k} | z_{0:k}) = Σ_{i=1}^{Ns} w̃_k^i δ(x_{0:k} − x_{0:k}^i)   (8)

with the normalized weights

w̃_k^i = w_k^i / Σ_{j=1}^{Ns} w_k^j   (9)

w_k^i = p(z_{0:k} | x_{0:k}^i) p(x_{0:k}^i) / [p(z_{0:k}) π(x_{0:k}^i | z_{0:k})]   (10)

For on-line applications, the estimate of the distribution p(x_{0:k} | z_{0:k}) at the k-th time step can be obtained from the distribution p(x_{0:k−1} | z_{0:k−1}) at the previous time step by the following recursive formula obtained by extension of equation (4) for the Bayesian filter p(x_k | z_{0:k}) (Doucet et al., 2001 and Arulampalam 2002):

p(x_{0:k} | z_{0:k}) = p(x_{0:k−1} | z_{0:k−1}) p(z_k | x_k) p(x_k | x_{k−1}) / p(z_k | z_{0:k−1})   (11)

If the importance function is chosen so as to factorize as

π(x_{0:k} | z_{0:k}) = π(x_k | x_{0:k−1}, z_{0:k}) π(x_{0:k−1} | z_{0:k−1})   (12)

the following recursive formula for the non-normalized weights w_k^i can be obtained:

w_k^i = w_{k−1}^i p(z_k | x_k^i) p(x_k^i | x_{k−1}^i) / [π(x_k^i | x_{0:k−1}^i, z_{0:k}) p(z_k | z_{0:k−1})]   (13)

The choice of the importance function is obviously crucial for the efficiency of the estimation. In this work, the prior distribution of the hidden Markov model is taken as importance function, i.e. π(x_k | x_{0:k−1}^i, z_{0:k}) = p(x_k | x_{k−1}^i), and, absorbing the constant p(z_k | z_{0:k−1}) into the normalization, the non-normalized weights (13) become (Tanizaki 1997 and Tanizaki & Mariano 1998):

w_k^i = w_{k−1}^i p(z_k | x_k^i)   (14)

3 CONDITION-BASED MAINTENANCE AGAINST FATIGUE CRACK DEGRADATION

The evolution of the crack depth x is described by a stochastic version of the Paris-Erdogan law:

dx/dt = e^ω C (ΔK)^n   (18)

ΔK = β √x   (19)

where C, n and β are constants and ω is a random factor accounting for the intrinsic variability of the crack growth process.
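A minimal bootstrap filter corresponding to this scheme can be sketched as follows. The crack-growth constants and the plain additive Gaussian likelihood used here are illustrative assumptions (the paper develops a logit-type measurement model for the observations); the code propagates particles through the prior and reweights them by the likelihood, as in the weight update above.

```python
import math, random

def particle_filter(zs, n_particles=500, C=0.005, n=1.3, beta=1.0,
                    sigma_w=0.3, sigma_v=0.1, x0=1.0, seed=7):
    """Bootstrap particle filter: particles propagated through the prior
    (discretized Paris-Erdogan step) and reweighted by a Gaussian
    measurement likelihood; all parameter values are illustrative."""
    rng = random.Random(seed)
    xs = [x0] * n_particles
    ws = [1.0 / n_particles] * n_particles
    for z in zs:
        # prior propagation: x_k = x_{k-1} + e^w * C * (beta*sqrt(x))^n
        xs = [x + math.exp(rng.gauss(0.0, sigma_w)) * C
              * (beta * math.sqrt(x)) ** n for x in xs]
        # weight update with the likelihood of z given x (assumed Gaussian)
        ws = [w * math.exp(-0.5 * ((z - x) / sigma_v) ** 2)
              for w, x in zip(ws, xs)]
        s = sum(ws)
        ws = [w / s for w in ws]
    est = sum(w * x for w, x in zip(ws, xs))  # posterior mean estimate
    return est, xs, ws
```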
The observation z_k collected at time step k is related to the true crack depth x_k by a logit-type measurement model:

ln(z_k / (d − z_k)) = γ_0 + γ_1 ln(x_k / (d − x_k)) + ε_k   (21)

where ε_k ~ N(0, σ²). Setting

Y_k = ln(z_k / (d − z_k))   (22)

μ_k = γ_0 + γ_1 ln(x_k / (d − x_k))   (23)

then Y_k ~ N(μ_k, σ²) is a Gaussian random variable with conditional cumulative distribution function (cdf):

F_{Y_k}(y_k | x_k) = P(Y_k < y_k | x_k) = Φ((y_k − μ_k)/σ)   (24)

Since |dy_k/dz_k| = d / (z_k (d − z_k))   (25)

the conditional probability density function of the observation Z_k is:

f_{Z_k}(z_k | x_k) = [d / (√(2π) σ z_k (d − z_k))] exp(−(ln(z_k/(d − z_k)) − μ_k)² / (2σ²))   (26)

The maintenance actions considered for the component are replacement upon failure and preventive replacement. For a wide variety of industrial components, preventive replacement costs are lower than failure replacement ones, since unscheduled shut down losses must be included in the latter.

At a generic time step k of the component's life, a decision can be made on whether to replace the component or to further extend its life, albeit assuming the risk of a possible failure. This decision can be informed by the knowledge of the crack propagation stochastic process and a set of available measurements {z}_k related to it, taken at selected times prior to k. The best time to replacement l_min is the one which minimizes the expression (Christer et al., 1997):

E[cost per unit time](k, l) = expected cycle cost(k, l) / expected current life cycle(k, l)   (27)

where:

expected cycle cost(k, l) = c_p (1 − P(k + l)) + c_f P(k + l)   (28)

l denotes the remaining life duration until replacement (either preventive or upon failure),
k + l denotes the preventive replacement time instant scheduled on the basis of the set of observations {z}_k collected up to time k,
d* denotes the critical crack threshold above which failure is assumed to occur (d* < d),
c_p denotes the cost of preventive replacement,
c_f denotes the cost of replacement upon failure,
p(k + i) = P(x_{k+i} > d* | x_{0:k+i−1} < d*, {z}_k) denotes the conditional posterior probability of the crack depth first exceeding d* in the interval (k + i − 1, k + i), knowing the component had not failed up to time k + i − 1 and given the sequence of observations {z}_k available up to time k,
P(k + l) denotes the probability of the crack depth exceeding d* in the interval (k, k + l):

P(k + l) = p(k + 1), l = 1
P(k + l) = p(k + 1) + Σ_{i=2}^{l} [Π_{j=1}^{i−1} (1 − p(k + j))] p(k + i), l > 1   (29)

and

expected current life cycle(k, l) = (k + l)(1 − P(k + l)) + (k + 1) p(k + 1) + Σ_{i=2}^{l} (k + i) [Π_{j=1}^{i−1} (1 − p(k + j))] p(k + i)   (30)

The probabilities of exceeding the critical threshold are estimated from the weighted particles as:

p̂ = Σ_{m: x_k^m > d*} w_k^m / Σ_n w_k^n   (31)
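The replacement-time search can be sketched directly from these cost expressions; the helper names below are assumptions, and the conditional failure probabilities p(k+i) are taken as given inputs (in the paper they come from the particle filter).

```python
def cost_per_unit_time(k, l, p, cp, cf):
    """Expected cost per unit time for replacement scheduled at k+l, given
    conditional failure probabilities p[i-1] = p(k+i), i = 1..l."""
    P, surv = 0.0, 1.0
    terms = []
    for i in range(1, l + 1):
        terms.append(surv * p[i - 1])   # first-failure probability at k+i
        P += surv * p[i - 1]
        surv *= 1.0 - p[i - 1]
    cycle_cost = cp * (1.0 - P) + cf * P
    cycle_len = (k + l) * (1.0 - P) + sum(
        (k + i) * terms[i - 1] for i in range(1, l + 1))
    return cycle_cost / cycle_len

def best_replacement_time(k, ps, cp, cf):
    """Return (l_min, cost) minimizing the expected cost per unit time."""
    return min(((l, cost_per_unit_time(k, l, ps[:l], cp, cf))
                for l in range(1, len(ps) + 1)), key=lambda t: t[1])
```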
Time step (k) | Minimum E[cost per unit time] | k_min | l_min
100 | 31.7 | 527 | 427
200 | 31.7 | 527 | 327
300 | 29.5 | 535 | 235
400 | 28.0 | 568 | 168
500 | 28.5 | 535 | 35
Figure 1.
CONCLUSIONS
In this work, a method for developing a conditionbased maintenance strategy has been propounded. The
method relies on particle filtering for the dynamic
481
estimation of failure probabilities from noisy measurements related to the degradation state.
An example of application of the method has been illustrated with respect to the crack propagation dynamics of a component subject to fatigue cycles, which may be replaced preventively or at failure, with different costs.
The proposed method is shown to represent a
valuable prognostic tool which can be used to drive
effective condition-based maintenance strategies for
improving the availability, safety and cost effectiveness of complex safety-critical systems, structures and
components, such as those employed in the nuclear
industry.
REFERENCES

Anderson, B.D. & Moore, J.B. 1979. Optimal Filtering. Englewood Cliffs, NJ: Prentice Hall.
Arulampalam, M.S., Maskell, S., Gordon, N. & Clapp, T. 2002. A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking. IEEE Trans. on Signal Processing 50(2): 174–188.
Bigerelle, M. & Iost, A. 1999. Bootstrap Analysis of FCGR, Application to the Paris Relationship and to Lifetime Prediction. International Journal of Fatigue 21: 299–307.
Christer, A.H., Wang, W. & Sharp, J.M. 1997. A state space condition monitoring model for furnace erosion prediction and replacement. European Journal of Operational Research 101: 1–14.
Djuric, P.M., Kotecha, J.H., Zhang, J., Huang, Y., Ghirmai, T., Bugallo, M.F. & Miguez, J. 2003. Particle Filtering. IEEE Signal Processing Magazine: 19–37.
Doucet, A. 1998. On Sequential Simulation-Based Methods for Bayesian Filtering. Technical Report CUED-F-ENG-TR310. University of Cambridge, Dept. of Engineering.
Doucet, A., de Freitas, J.F.G. & Gordon, N.J. 2001. An Introduction to Sequential Monte Carlo Methods. In A. Doucet, J.F.G. de Freitas & N.J. Gordon (eds), Sequential Monte Carlo in Practice. New York: Springer-Verlag.
Anatoly Lisnianski
The Israel Electric Corporation Ltd., Haifa, Israel
ABSTRACT: This paper considers corrective maintenance contracts for aging air conditioning systems operating under varying weather conditions. Aging is treated as an increasing failure rate. The system can fall into unacceptable states for two reasons: through performance degradation because of failures, or through an increase in the cold demand. Each residence time in an acceptable state, each repair and each entrance to an unacceptable state is associated with a corresponding cost. A procedure for computing this reliability associated cost is suggested, based on a Markov reward model for a non-homogeneous Poisson process. By using this model, an optimal maintenance contract that minimizes the total expected cost may be found. A numerical example for a real world air conditioning system is presented to illustrate the approach.
INTRODUCTION
Many technical systems are subjected during their lifetime to aging and degradation. Most of these systems
are repairable. For many kinds of industrial systems,
it is very important to avoid failures or reduce their
occurrence and duration in order to improve system
reliability and reduce corresponding cost.
Maintenance and repair problems have been widely
investigated in the literature. Barlow & Proschan
(1975), Gertsbakh (2000), Wang (2002) survey and
summarize theoretical developments and practical
applications of maintenance models.
With the increasing complexity of systems, only
specially trained staff with specialized equipment can
provide system service. In this case, maintenance service is provided by an external agent and the owner
is considered a customer of the agent for maintenance
service. In the literature, different aspects of maintenance service have been investigated (Almeida (2001),
Murthy & Asgharizadeh (1999)).
Reliability theory provides a general approach for
constructing efficient statistical models to study aging
and degradation problems in different areas. Aging is
considered a process which results in an age-related
increase of the failure rate. The most common shapes
of failure rates have been observed by Meeker and
Escobar (1998), Finkelstein (2002).
This paper presents a case study where an aging
air conditioning system with minimal repair is considered. The Markov reward model is built for computing
2

2.1

The total expected reliability associated cost accumulated during the system lifetime is the sum of the operating, repair and penalty costs:

C = OC + RC + PC   (1)

where
OC is the system operating cost accumulated during the system lifetime;
RC is the total repair cost;
PC is the total penalty cost.
These expected costs are computed with a Markov reward model: the total expected reward V_i(t) accumulated up to time t, given initial state i, satisfies the Howard differential equations

dV_i(t)/dt = r_ii + Σ_{j≠i} a_ij r_ij + Σ_{j=1}^{K} a_ij V_j(t),  i, j = 1, 2, . . . , K   (2)

where a_ij are the transition intensities and r_ij the associated rewards.
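The Howard equations can be integrated numerically, for instance with a simple Euler scheme. The function below is a generic sketch for constant intensities (the aging model uses a time-dependent λ(t), which would require re-evaluating the intensity matrix inside the loop); all names are illustrative.

```python
def total_expected_reward(a, r, T, steps=20000):
    """Euler integration of the Howard differential equations
    dV_i/dt = r_ii + sum_{j!=i} a_ij r_ij + sum_j a_ij V_j
    for a constant intensity matrix a and reward matrix r."""
    K = len(a)
    V = [0.0] * K
    dt = T / steps
    for _ in range(steps):
        dV = [r[i][i]
              + sum(a[i][j] * r[i][j] for j in range(K) if j != i)
              + sum(a[i][j] * V[j] for j in range(K))
              for i in range(K)]
        V = [v + dt * d for v, d in zip(V, dV)]
    return V
```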
3 NUMERICAL EXAMPLE

3.1 The system description
Consider an air conditioning system placed in a computer center and used around the clock in varying temperature conditions. The system consists of five identical air conditioners. The work schedule of the system is as follows. For regular temperature conditions, two air conditioners must be on-line and the three others are in hot reserve. For peak temperature conditions, four air conditioners have to be on-line and one is in hot reserve. The number of air conditioners that have to be on-line defines the demand level.
We denote:
c_op is the system operation cost per time unit;
c_rm is the repair cost paid for every order of the maintenance team under contract m;
c_p is a penalty cost, which is paid when the system fails.
The aging process of an air conditioner is described by the Weibull distribution with parameters α = 1.5849 and β = 1.5021; therefore λ(t) = 3t^0.5021. Service agents can suggest 10 different corrective maintenance contracts available in the market. Each contract m is characterized by a repair rate μ and a corresponding repair cost (per repair) c_rm, as presented in Table 1.
The operation cost c_op is equal to $72 per year. The penalty cost c_p, which is paid when the system fails, is equal to $500 per failure.
Table 1. Corrective maintenance contracts.

Contract m | MTTR (days) | Repair cost c_rm ($ per repair)
1 | 3.36 | 36
2 | 1.83 | 40
3 | 1.22 | 46
4 | 0.91 | 52
5 | 0.73 | 58
6 | 0.61 | 66
7 | 0.52 | 74
8 | 0.46 | 84
9 | 0.41 | 94
10 | 0.37 | 106
The demand variation is described by two transition rates:

λ_d = 1/(T_c − t_d) = 1/(24 − 9) = 0.066 hours^−1 = 584 year^−1   (3)

λ_N = 1/t_d = 1/9 = 0.111 hours^−1 = 972 year^−1

where T_c = 24 hours is the daily demand cycle and t_d = 9 hours is the duration of the peak demand period.
The transition intensity matrix a = [a_ij] of the resulting 12-state model has a block structure: states 1–6 correspond to the regular demand level with 0–5 failed air conditioners, and states 7–12 to the peak demand level. The two blocks are coupled by the demand transitions a_{i,i+6} = λ_d (i = 1, . . . , 6) and a_{i,i−6} = λ_N (i = 7, . . . , 12). Within the blocks, the failure transitions a_{i,i+1} are 2λ(t), 2λ(t), 2λ(t), 2λ(t), λ(t) for states 1–5 and 4λ(t), 4λ(t), 3λ(t), 2λ(t), λ(t) for states 7–11; the repair transitions a_{i,i−1} are μ, 2μ, 3μ, 4μ, 5μ within each block; and the diagonal elements are:
a_11 = −(2λ(t) + λ_d)
a_22 = −(2λ(t) + μ + λ_d)
a_33 = −(2λ(t) + 2μ + λ_d)
a_44 = −(2λ(t) + 3μ + λ_d)
a_55 = −(λ(t) + 4μ + λ_d)
a_66 = −(5μ + λ_d)
a_77 = −(4λ(t) + λ_N)
a_88 = −(4λ(t) + μ + λ_N)
a_99 = −(3λ(t) + 2μ + λ_N)
a_10,10 = −(2λ(t) + 3μ + λ_N)
a_11,11 = −(λ(t) + 4μ + λ_N)
a_12,12 = −(5μ + λ_N)
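The block structure can be encoded compactly and sanity-checked: every row of a transition intensity matrix must sum to zero. The construction below is a sketch assuming the state ordering described above, with λ(t) frozen at a given value.

```python
def intensity_matrix(lam, mu, lam_d, lam_N):
    """12-state intensity matrix: indices 0-5 are the regular-demand
    states with 0..5 failed units, indices 6-11 the peak-demand copies."""
    # number of units whose failure rate lam applies in each state
    fail = [2, 2, 2, 2, 1, 0, 4, 4, 3, 2, 1, 0]
    a = [[0.0] * 12 for _ in range(12)]
    for i in range(12):
        failed = i % 6                     # failures accumulated: 0..5
        if failed < 5:
            a[i][i + 1] = fail[i] * lam    # one more unit fails
        if failed > 0:
            a[i][i - 1] = failed * mu      # one repair completes
        if i < 6:
            a[i][i + 6] = lam_d            # demand jumps to peak
        else:
            a[i][i - 6] = lam_N            # demand returns to regular
        a[i][i] = -sum(a[i])               # conservative diagonal
    return a
```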
Figure 1. State-transition diagram of the air conditioning system under the two demand levels (regular demand: two units on-line; peak demand: four units on-line), with demand transitions λ_d and λ_N, failure transitions λ(t) and repair transitions μ.
The reward matrix r = [r_ij] collects the costs: each diagonal element r_ii equals the operation cost per unit time in state i (2c_op in states 1–4, c_op in state 5, 4c_op in states 7–8, 3c_op in state 9, 2c_op in state 10, and 0 in the remaining states), each repair transition is associated with the reward r_ij = c_rm, and each transition into an unacceptable state carries the penalty c_p; all other entries are zero.
4 CALCULATION RESULTS

Figure 2. Total expected cost as a function of the maintenance contract level.
CONCLUSIONS

The case study for the estimation of the expected reliability associated cost accumulated during the system lifetime is considered for an aging system under minimal repair. The approach is based on the application of a special Markov reward model, well formalized and suitable for practical application in reliability engineering. The optimal corrective maintenance contract that minimizes the total expected cost can then be found.
dV1(t)/dt = 2c_op − (2λ(t) + λ_d)V1(t) + 2λ(t)V2(t) + λ_d V7(t)
dV2(t)/dt = 2c_op + μc_rm + μV1(t) − (2λ(t) + μ + λ_d)V2(t) + 2λ(t)V3(t) + λ_d V8(t)
dV3(t)/dt = 2c_op + 2μc_rm + c_p λ_d + 2μV2(t) − (2λ(t) + 2μ + λ_d)V3(t) + 2λ(t)V4(t) + λ_d V9(t)
dV4(t)/dt = 2c_op + 3μc_rm + 2c_p λ(t) + 3μV3(t) − (2λ(t) + 3μ + λ_d)V4(t) + 2λ(t)V5(t) + λ_d V10(t)
dV5(t)/dt = c_op + 4μc_rm + 4μV4(t) − (λ(t) + 4μ + λ_d)V5(t) + λ(t)V6(t) + λ_d V11(t)
dV6(t)/dt = 5μc_rm + 5μV5(t) − (5μ + λ_d)V6(t) + λ_d V12(t)
dV7(t)/dt = 4c_op + λ_N V1(t) − (4λ(t) + λ_N)V7(t) + 4λ(t)V8(t)
dV8(t)/dt = 4c_op + μc_rm + 4c_p λ(t) + λ_N V2(t) + μV7(t) − (4λ(t) + μ + λ_N)V8(t) + 4λ(t)V9(t)
dV9(t)/dt = 3c_op + 2μc_rm + λ_N V3(t) + 2μV8(t) − (3λ(t) + 2μ + λ_N)V9(t) + 3λ(t)V10(t)
dV10(t)/dt = 2c_op + 3μc_rm + λ_N V4(t) + 3μV9(t) − (2λ(t) + 3μ + λ_N)V10(t) + 2λ(t)V11(t)
dV11(t)/dt = 4μc_rm + λ_N V5(t) + 4μV10(t) − (λ(t) + 4μ + λ_N)V11(t) + λ(t)V12(t)
dV12(t)/dt = 5μc_rm + λ_N V6(t) + 5μV11(t) − (5μ + λ_N)V12(t)
(4)
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: The paper presents a new analytical algorithm which is able to carry out exact reliability quantification of highly reliable systems with maintenance (both preventive and corrective). A directed acyclic graph is used as the system representation. The algorithm allows highly reliable and maintained input components to be taken into account. A new model of a repairable component subject to hidden failures, i.e. a model in which a failure is identified only at special deterministically assigned times, is analytically derived within the paper. All considered models are implemented in the new algorithm. The algorithm is based on a special new procedure which permits the exact summation of two or more non-negative numbers that can be very different in magnitude: if the summation of very small positive numbers represented in machine code is performed carefully, no error is committed in the operation. The reliability quantification is demonstrated on a real system from practice.
1
INTRODUCTION
2
2.1
2.2
P(t) = λ/(λ + μ) (1 − e^{−(λ+μ)t}), t > 0   (1)

P(τ) = (1 − P_C)(1 − e^{−λτ}) + P_C (1 + μ/(λ − μ)(e^{−λτ} − e^{−μτ})), τ > 0   (2)
where τ is the time which has passed since the last planned inspection, and P_C is the probability of a non-functional state of the element at the moment of the inspection at the beginning of the interval to the next inspection.
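Formula (2) is straightforward to implement; the sketch below assumes λ ≠ μ, as in the derivation, and uses illustrative parameter values.

```python
import math

def unavailability(tau, lam, mu, PC):
    """P(tau): probability that the element is not in correct function a
    time tau after a planned inspection, given that it was under repair
    with probability PC at the inspection (requires lam != mu)."""
    return ((1.0 - PC) * (1.0 - math.exp(-lam * tau))
            + PC * (1.0 + mu / (lam - mu)
                    * (math.exp(-lam * tau) - math.exp(-mu * tau))))
```

For P_C = 0 the expression reduces to the non-repairable result 1 − e^{−λτ}, and at τ = 0 it equals P_C, both of which serve as quick consistency checks.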
Proof. Let T_P be an assigned period of inspections or examinations of the functional state of an element. Let us further denote:
P_A(t) . . . the probability that the element is at time t in the correct functional state,
P_B(t) . . . the probability that the element is at time t in a failure state,
P_C(t) . . . the probability that the element is at time t in a repair state.
In the first interval, when t ∈ ⟨0, T_P), the situation is qualitatively the same as for an element that cannot be repaired. In the intervals between two inspections the situation is, however, different: if the given element is found failed at an inspection, it converts into the state of repair. Setting the time variable τ as the time which has passed since the moment of the last inspection, this situation can be written as

P_B(τ = 0) = 0

The element can be at the moment of the inspection in the state of repair with a probability P_C, which can be described as

P_C(τ = 0) = P_C

And finally

P_A(τ = 0) = P_A

is the probability that the element at the time of the inspection is in correct function. It is apparent that P_A + P_C = 1.
For a small time increment dτ:

P_A(τ) − λP_A(τ)dτ + μP_C(τ)dτ = P_A(τ + dτ)
P_B(τ) + λP_A(τ)dτ = P_B(τ + dτ)
P_C(τ) − μP_C(τ)dτ = P_C(τ + dτ)

so that

P′_A(τ) + λP_A(τ) − μP_C(τ) = 0
P′_B(τ) − λP_A(τ) = 0
P′_C(τ) + μP_C(τ) = 0

The solution of this set is:

P_A(τ) = (P_A − μP_C/(λ − μ)) e^{−λτ} + (μP_C/(λ − μ)) e^{−μτ}

P_B(τ) = (P_A − μP_C/(λ − μ))(1 − e^{−λτ}) + (λP_C/(λ − μ))(1 − e^{−μτ})
Example 1.1 If we counted hypothetically on a computer with three-digit decimal numbers, then for the value q = 0.00143 we would have, instead of the correct value p = 0.99857, only p = 0.999. In return, for q = 1 − p we would get: q = 1 − p = 0.00100. It is apparent that a great loss of accuracy occurs if we compute p instead of q.
Seeing that the probabilities of a non-function state of a highly reliable system are very small, we have to concentrate on the numerical expression of these probabilities. For these purposes it is necessary to reorganize the computer calculation and set certain rules which do not influence the accuracy of the computation during the numeration process.
P_C(τ) = P_C e^{−μτ}

Then the probability that the element is not in the state of correct function inside the interval at the time τ will be:

P(τ) = P_B(τ) + P_C(τ)
= (1 − P_C)(1 − e^{−λτ}) + P_C (1 + μ/(λ − μ)(e^{−λτ} − e^{−μτ})), τ > 0

which for P_C = 0 reduces to 1 − e^{−λτ}.
P(t) = 1 − e^{−λt},

which was taken as an unavailability coefficient; λ is a failure rate. Similarly, for other models of system elements, the computation of the expression

1 − e^{−x}, for x ≥ 0   (3)
to work with the whole relevant sub-graph. The combinatorial character of the quantification will nevertheless stay unchanged.
3.4 The error-free sum of different non-negative numbers

The first step to the solution of this problem is to find a method for the accurate sum of many non-negative numbers.
The arithmetic unit of a computer (PC) works in the binary scale. A positive real number on today's PCs contains 53 valid binary digits, see Figure 2. The possible order (binary exponent) ranges from approximately −1000 to 1000. The line indicated as "order" means the order of a binary digit.
The algorithm for the accurate quantification of sums of many non-negative numbers consists of a few steps:

1. The whole possible machine range of binary positions (bits) is partitioned into segments of 32 positions, according to the scheme in Figure 3. The number of these segments will be approximately 2000/32 = 63.
2. The total sum is memorized as one real number composed of these 32-bit segments. Each of these segments has an additional 21 bits used for carry transmission.
3. First, a given non-zero number that must be added to the sum is decomposed, according to the previously assigned fixed borders (step 1), into at most three parts, each containing at most 32 binary digits of the number, according to the scheme in Figure 4. The individual segments are indexed by the numbers 1–63.
4. Then the individual parts of this decomposed number are added to the corresponding segments of the sum number, as in Figure 5.
5. Always after the processing of 2^20 numbers (the limit is chosen so that it cannot lead to overflowing of the sum number under any circumstances) a clearance of the carries is performed (Figure 6).

Figure 1. Directed acyclic graph representation of the system.
Figure 2. Machine representation of a positive real number (53 valid binary places, order from approximately −1000 to 1000).
Figure 3. Partition of the order range into 32-bit segments.
Figure 4. Decomposition of an added number into at most three 32-bit parts.
Figure 5. Addition of the parts of a decomposed number to the corresponding segments of the sum number.
Figure 6. Clearance process.
Figure 7. The corresponding part of the number.

The summands have the form of products such as q_1 q_2 (1 − q_k) . . . ; for instance, the probability that at least one of the components 8, 9, 10 is in a non-function state expands as:

q_8 q_9 q_10 + (1 − q_8) q_9 q_10 + q_8 (1 − q_9) q_10 + q_8 q_9 (1 − q_10) + q_8 (1 − q_9)(1 − q_10) + (1 − q_8) q_9 (1 − q_10) + (1 − q_8)(1 − q_9) q_10
Error of the first result reaches to about half numbering mantissa of the result number what can be
considered as relevant error. In spite of the fact that the
example is very hypothetical, it clearly demonstrates
importance of the algorithm.
Figure 8.
The algorithm is based on the summation of many non-negative numbers that can be very different in magnitude. The output of the summation is one number in full machine accuracy, which we call the error-free sum. The importance of the error-free algorithm can be explained with the following, very hypothetical example:
Let us have a system given by an acyclic graph similar to the one in Fig. 1, with 30 input edges. In such a case the sum for the probability of a non-function state of the highest SS node can be composed of about one billion (2^30) summands. If we have to sum (by software on a common PC) one number of value 2^20 with an additional 2^30 numbers, all of them
Figure 9.
CONCLUSION
ACKNOWLEDGEMENT
REFERENCES
Marseguerra, M. & Zio, E. 2001. Principles of Monte Carlo simulation for application to reliability and availability analysis. In: Zio, E., Demichela, M., Piccinini, N., editors. Safety and reliability towards a safer world, Torino, Italy, September 16–20, 2001. Tutorial notes. pp. 37–62.
Tanaka, T., Kumamoto, H. & Inoue, K. 1989. Evaluation of a dynamic reliability problem based on order of component failure. IEEE Trans Reliab 1989;38:573–6.
Baca, A. 1993. Examples of Monte Carlo methods in reliability estimation based on reduction of prior information. IEEE Trans Reliab 1993;42(4):645–9.
Briš, R. 2008. Parallel simulation algorithm for maintenance optimization based on directed acyclic graph. Reliab Eng Syst Saf 2008;93:852–62.
Choi, J.S., Cho, N.Z. 2007. A practical method for accurate quantification of large fault trees. Reliab Eng Syst Saf 2007;92:971–82.
Briš, R. 2007. Stochastic Ageing Models – Extensions of the Classic Renewal Theory. In Proc. of First Summer Safety and Reliability Seminars 2007, 22–29 July, Sopot: 29–38, ISBN 978-83-925436-0-2.
Briš, R. & Drábek, V. 2007. Mathematical Modeling of both Monitored and Dormant Failures. In Lisa Bartlett (ed.), Proc. of the 17th Advances in Risk and Reliability Technology Symposium AR2TS, Loughborough University: 376–393.
Dutuit, Y. & Chatelet, E. 1997. TEST CASE No. 1, Periodically tested parallel system. Test-case activity of European Safety and Reliability Association. ISdF-ESRA 1997. In: Workshop within the European conference on safety and reliability, ESREL 1997, Lisbon, 1997.
496
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
M.C. Sant'Ana
Agência Nacional de Saúde, Rio de Janeiro, Brasil
V.C. Damaso
Centro Tecnológico do Exército, Rio de Janeiro, Brasil
ABSTRACT: Reliability analyses of repairable systems are currently modelled through point stochastic processes, which intend to establish survival measures in a failure × repair scenario. However, these approaches do not always represent the real life-cycle of repairable systems. A better and more coherent modelling of reality is given by the Generalized Renewal Process (GRP). With this approach, reliability is modelled considering the effect of a non-perfect maintenance process, which uses a better-than-old-but-worse-than-new repair assumption. Considering the GRP approach, this paper presents an availability model for operational systems and discusses an optimisation approach based on a simple genetic algorithm (GA). Finally, a case is presented and the obtained results demonstrate the efficacy of combining GRP and GA in this kind of problem.
INTRODUCTION
maximize the system availability. Indeed, the availability of the many components that comprise the entire system should be considered. In this case, a superimposed stochastic process combining the stochastic processes of each component can be used.
Whatever the optimization method chosen, the probabilistic modeling is of crucial importance and should be specified appropriately. The present work aims to base the availability modeling of repairable systems on a generalized renewal process (Kijima & Sumita, 1986).
2 VIRTUAL AGE

Vi = Vi−1 + q · ti    (1)
negligible compared with the mean time between failures (MTBF), point processes are used as probabilistic models of the failure processes. The commonly adopted point processes in PSA are as follows:
(i) homogeneous Poisson process (HPP), (ii) ordinary
renewal processes (ORP) and (iii) non-homogeneous
Poisson process (NHPP). However, these approaches
do not represent the real life-cycle of a repairable
system (Modarres, 2006). Rather, they have some
assumptions that conflict with reality. In HPP and ORP,
the device, after a repair, returns to an as-good-as-new
condition, and in a NHPP the device, after a repair,
returns to an as-bad-as-old condition.
Kijima & Sumita (1986) introduced the concept of
generalized renewal process (GRP) to generalize the
three point processes previously mentioned. With this
approach, reliability is modeled considering the effect
of a non-perfect maintenance process, which uses
a better-than-old-but-worse-than-new repair assumption. Basically, GRP addresses the repair assumption
by introducing the concept of virtual age, which
defines a parameter q that represents the effectiveness
of repair.
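A hedged sketch of the virtual-age idea follows, assuming the Kijima Type I form Vn = q · Σ tj and an underlying Weibull first-failure distribution; every numeric value below is illustrative, not taken from the paper.

```python
import math

def virtual_age(times, q):
    """Kijima Type I virtual age after the observed inter-failure times:
    each repair removes a fraction (1 - q) of the age accumulated since
    the previous repair, so V_n = q * sum(t_1..t_n)."""
    return q * sum(times)

def cond_failure_cdf(t, v, beta, lam):
    """Probability of failing within t time units given virtual age v,
    for a Weibull(lam, beta) underlying distribution (conditional CDF
    in the spirit of the GRP formulation)."""
    return 1.0 - math.exp((lam * v) ** beta - (lam * (t + v)) ** beta)

times = [100.0, 80.0, 60.0]    # hypothetical inter-failure times (days)
beta, lam = 1.5, 1.0 / 200.0   # hypothetical Weibull shape and rate

# q = 0: as good as new (ORP-like); q = 1: as bad as old (NHPP-like).
for q in (0.0, 0.5, 1.0):
    v = virtual_age(times, q)
    print(q, round(cond_failure_cdf(50.0, v, beta, lam), 4))
```

For an increasing hazard (beta > 1), a larger q leaves a larger virtual age and therefore a higher conditional failure probability, which is exactly the better-than-old-but-worse-than-new interpolation.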
Figure 1. [Visualization of virtual age: virtual ages V1, V2, …, Vn against the real age accumulated over t1, t2, …, tn. Adapted from Jakopino, 2005.]

Vn = q Σ_{j=1}^{n} tj    (2)

F(T | y) = [F(T + y) − F(y)] / [1 − F(y)]    (3)

F(ti | β, λ, q) = 1 − exp{ [λ q Σ_{j=1}^{i−1} tj]^β − [λ (ti + q Σ_{j=1}^{i−1} tj)]^β }    (4)

3 AVAILABILITY MODELING OF REPAIRABLE SYSTEMS
At1(t) = { R(t | V0),   t0 ≤ t < t1
         { 0,           t1 ≤ t < t1 + τman    (5)
Ā = (1 / Tmiss) Σ_{i=1}^{n} ∫_{t_{i−1}+τman}^{t_i} A(t) dt    (6)

MAINTENANCE DISTRIBUTION AND GENETIC MODELING
where Ati(t) is the availability between the (i − 1)-th and i-th maintenance stops, R(t | Vi−1) = 1 − F(t | Vi−1), and F(t | Vi−1) is given by equation (4); τman is the maintenance duration.
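The mission-average availability can be approximated numerically: the sketch below integrates a survival function between maintenance stops and counts the component as down during each stop. The exponential survival law and all numbers are illustrative assumptions, not the paper's model or data.

```python
import math

def mean_availability(R, stops, tau, T):
    """Time-average availability over a mission of length T: integrate
    R(t - restart) between maintenance stops and count zero during each
    stop of length tau.  R is the survival function of the (rejuvenated)
    component; this is a simplified sketch of the averaging idea."""
    def seg(a, b, restart, n=1000):
        # trapezoid integral of R(. - restart) over [a, b]
        h = (b - a) / n
        s = 0.5 * (R(a - restart) + R(b - restart))
        for k in range(1, n):
            s += R(a + k * h - restart)
        return s * h

    total, restart, t0 = 0.0, 0.0, 0.0
    for stop in sorted(stops):
        total += seg(t0, stop, restart)   # up period before the stop
        t0 = restart = stop + tau         # down during [stop, stop + tau]
    total += seg(t0, T, restart)          # after the last stop
    return total / T

# Exponential survival with a 200-day mean, stops at days 100 and 250,
# 5-day maintenance windows, 400-day mission (all values hypothetical).
A = mean_availability(lambda u: math.exp(-u / 200.0), [100.0, 250.0], 5.0, 400.0)
print(round(A, 3))
```

The same structure carries over to the GRP case by replacing the exponential with the conditional survival R(t | Vi−1) restarted after each stop.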
Ati(t) = { R(t | Vi−1),   t_{i−1} + τman ≤ t < ti
         { 0,             ti ≤ t < ti + τman    (7)

(8)

0 < Δ < 1,    (9)

(10)
Figure 2. [Simplified system: Pumps 1–3 driven by Motors 1–3, valves V-1 to V-4, and Heat exchangers 1–2.]

Table 1. Component parameters.

Component   τman (days)   β       1/λ (days)   q       V0 (days)
Pump 1      3             1.12    338.5        0.51    75
Pump 2      3             1.11    180          0.3     25
Pump 3      3             1.12    338.5        0.51    75
Motor 1     2             2.026   76           0.146   165
Motor 2     2             1.62    128          0.7     145
Motor 3     2             2.026   76           0.146   165
Valve 1     1             0.73    35.32        0.54    150
Valve 2     1             0.5     105.13       0       200
Valve 3     1             0.5     105.13       0       200
Valve 4     1             0.56    54.5         0.02    45
Heat Ex 1   5             1.47    321          0.35    272
Heat Ex 2   5             1.47    321          0.35    272

t1 = Tf (1 − Δ) / (1 − Δ^{n+1})    (11)
APPLICATION EXAMPLE

Table 2. [Results for components Motor 1, Motor 2, Motor 3, Heat Ex. 1 and Heat Ex. 2.]

CONCLUDING REMARKS
Figures 3–8. [Availability of Motors 1–3 and Pumps 1–2 versus time (days) over the 500-day mission.]
algorithm should find the best proportionality factors, which will distribute the times for preventive maintenance stops along the mission period.
These results are satisfactory and have contributed to demonstrating the efficacy of the proposed approach in this particular problem.
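One plausible reading of the proportionality factor is that successive maintenance intervals shrink geometrically, with the first interval chosen so that the n stops fit the mission period (the form reconstructed in equation (11)). The helper below is a sketch under that assumption; all names and values are hypothetical.

```python
def maintenance_schedule(T_f, delta, n):
    """Times of n preventive stops in (0, T_f), with successive intervals
    shrinking by the proportionality factor delta (0 < delta < 1).
    Choosing t1 = T_f*(1 - delta)/(1 - delta**(n + 1)) makes the n + 1
    geometric intervals sum exactly to T_f."""
    t1 = T_f * (1.0 - delta) / (1.0 - delta ** (n + 1))
    times, t, interval = [], 0.0, t1
    for _ in range(n):
        t += interval
        times.append(t)
        interval *= delta       # each interval is delta times the last
    return times

sched = maintenance_schedule(500.0, 0.8, 4)
print([round(t, 1) for t in sched])
```

Smaller delta values front-load the mission with long intervals and concentrate the stops toward the end, which suits components whose conditional failure probability grows with (virtual) age.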
Figures 9–14. [Availability of Valves 1–4 and Heat Exchangers 1–2 versus time (days) over the 500-day mission.]

REFERENCES
APPENDIX I

Genotype:
00101100 01101011 00110010 10000111 00011110 10011001 00010001 00101001 10010001 00111001 00000010 00110011 00000011 10000100 00010011
11101100 00101100 1101011 10011101 01100100 11011101 00111110 11011100 11101101 1101001 01111110 10101100 1101110 11011101 10111110

Phenotype (per component: Motor 1, Motor 2, Motor 3, Pump 1, Pump 2, …):
n – number of foreseen interventions
Δ – proportionality factor
d – maintenance scheduling displacement

Figure I1. [Genotype-to-phenotype mapping.]
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: Maintenance planning is a subject of concern to many industrial sectors, as plant safety and business depend on it. Traditionally, maintenance planning is formulated in terms of a multi-objective optimization (MOP) problem where reliability, availability, maintainability and cost (RAM+C) act as decision criteria and maintenance strategies (i.e. maintenance task intervals) act as the only decision variables. However, the appropriate development of each maintenance strategy depends not only on the maintenance intervals but also on the resources (human and material) available to implement such strategies. Thus, the effect of the necessary resources on RAM+C needs to be modeled and accounted for in formulating the MOP, affecting the set of objectives and constraints. In Martorell et al. (2007), new RAM+C models were proposed to address explicitly the effect of human resources. This paper proposes an extension of the previous models integrating explicitly the effect of material resources (spare parts) on the RAM+C criteria. This extended model allows accounting explicitly for how the above decision criteria depend on the basic model parameters representing the types of strategies, maintenance intervals, durations, human resources and material resources. Finally, an application case is performed on a motor-driven pump, analyzing how the consideration of human and material resources would affect the decision-making.
INTRODUCTION
Often, RAMS+C (Reliability, Availability, Maintainability, Safety plus Cost) are part of the relevant criteria for decision-making concerning maintenance planning.
RCM (Reliability Centered Maintenance) is a good example of a systematic methodology to establish an efficient maintenance plan to cope with all of the equipment's dominant failure causes (Figure 1). Typically, the main objective in applying the RCM methodology has been to find the best set of maintenance strategies that provides an appropriate balance of equipment reliability and availability and the associated costs.
In pursuing the RCM goal, the decision-maker must face at least two main problems: 1) there is not a unique maintenance plan (i.e. set of maintenance strategies and technical resources) to cope with all the dominant failure causes, as shown in (Martorell et al., 1995); 2) the frequencies suggested for performing specific maintenance tasks are based on deterministic criteria and are normally far from optimal. One challenge for the decision-maker is therefore to select the most suitable set of maintenance strategies with optimal frequencies based on reliability, availability and cost
Figure 1. [RCM view of critical equipment: working conditions and degradation mechanisms lead to dominant failure causes (#1, #2, …, #i); each is covered by maintenance strategies (#1, #2, …, #j) composed of tasks (#1, #2, …, #k), forming the maintenance plan (technical resources).]

RAM+C MODELS

Figure 2. [RAM+C model structure: reliability, availability (scheduled and non-scheduled contributions), maintainability (frequencies fS, fN; durations dS, dN; man-hours HS, HN; w, ur) and cost, driven by human and material resources.]
[1 + (1 − ε)(f · RP − 1)] / (2f)    (1)

(2 − ε) / (2f)    (2)
Maintainability models

(4)

(5)

H = (ρP · NP + ρE · NE) · [NP + NE]    (6)

Figure 3. [Task execution timeline: delay TD before the task starts, composed of the human delay TDPH and the material delay TDM; the task then consumes H man-hours until it ends.]

(7)

Availability models

(8)

uS = fS · dS    (9)

uN = fN · dN · G[AOT]    (10)

(11)

(12)

cN = 8760 · fN · c1N    (13)

(14)

(15)
Neq · NPS · (fA · TPA)    (16)

AP    (17)
Figure 4. Case 1: Original inventory is enough to supply
the demand of spare parts during the period L.
Figure 5. [Case 2: the demand for spares exceeds the stock ST during the period L (sub-periods L1 and L2); R units are reordered at the reorder point.]
Σ_{i=0}^{ST+R} (xi + R) · P(x = xi)    (18)

(20)

(21)

Σ_{i=R}^{∞} (xi − R) · P(xTDM = xi)    (22)

Σ_{i=0}^{ST} ((ST − R) − xi) · P(x = xi)    (23)
where cop is the opportunity cost per spare part, a consequence of the capital invested in its purchase not being available for other uses, and cdp represents the component depreciation cost.
Uij,k    (24)

Cij,k    (25)

Cij = Σ_{k∈j} Cij,k
U = Σ_{ij} uij

C = Σ_{ij} cij    (27)
APPLICATION EXAMPLE

Table 3. Dominant failure causes.

# Cause   Code   Description
c1        IAL    Inadequate lubrication
c2        DEM    Damaged electric or electronic module
c3        MBW    Motor bearing wear
c4        PIW    Pump impeller wear
c5        SPD    Set point drift
c6        MIS    Misalignment
Table 1. Failure causes.

Cause   I (hrs)
c1      26000    Y   N
c2      13000    Y   N
c3      –        N   N
c4      –        N   Y
c5      26000    Y   N
c6      26000    N   N

(26)

Table 4. [Human resources parameters.]

Parameter                              Own personnel    External personnel
Efficiency                             0.9              1
Delay for unscheduled tasks            0 h              3 h
Delay for scheduled tasks              0 h              0 h
Cost                                   20000 €/year     30 €/hour
Persons (N)                            [0, 1, 2, 3]     [0, 1, 2, 3]
Neq                                    100
K (law of decreasing effectiveness)    0.25
Parameter                        Value
RP                               40 years
Spare part cost, csp             1328 €
Emergency cost per order, cu     1992 €
Fixed ordering cost, cfo         132 €
Percentage p                     20%
TDM                              720
Reorder point R                  1
Opportunity cost, cop            66 €
Depreciation cost, cdp           33 €/year
Figure 6. [Unavailability versus cost for candidate configurations (Np, Ne, Stock); cases without stocks, without reorder point, and with reorder point (R) are compared. Unavailability ranges from about 0.0465 to 0.0510 for costs between 2000 and 16000.]
CONCLUDING REMARKS
REFERENCES
Axster, S. 2006. Inventory Control. Springer, United States
of America.
Crespo, A. 2007. The maintenance management frame work:
models and methods for complex systems maintenance.
Springer series in reliability engineering.
Kaufmann, A. 1981. Mtodos y Modelos Investigacin de
Operaciones. CIA. Editoral Continental, S.A. de C.V,
Mexico.
Leland T. Blank, Anthony Tarquin. 2004. Engineering
Economy. McGraw-Hill Professional. United States.
Martorell, S., Muoz, A., Serradell V. 1995. An approach
to integrating surveillance and maintenance tasks to prevent the dominant failure causes of critical components.
Reliability engineering and systema safety, Vol. 50,
179187.
Martorell S., Sanchez A., Serradell V. 1999. Age-dependent
reliability model considering effects of maintenance and
working conditions. Reliability Engineering & System
Safety, 64(1):1931
Martorell, S., Carlos, S., Sanchez, A., Serradell, V.
2001. Simultaneous and multi-criteria optimization of
TS requirements and maintenance at NPPs. Annals of
Nuclear Energy, Vol. 29, 147168.
ABSTRACT: We look at the use of expert judgement to parameterize a model for degradation, maintenance and repair by providing detailed information which is then calibrated at a higher level through coarse plant data. Equipment degradation provides signals by which inferences are made about the system state. These may be used informally through red/yellow/green judgements, or may be based on clear criteria from monitoring. Information from these signals informs the choices made about when opportunities for inspection or repair are taken up. We propose a stochastic decision model that can be used for two purposes: a) to gain an understanding of the data censoring processes, and b) to provide a tool that could be used to assess whether maintenance opportunities should be taken or whether they should be put off to a following opportunity or to scheduled maintenance. The paper discusses competing risk and opportunistic maintenance modeling with expert judgement and the broad features of the model. Numerical examples are given to illustrate how the process works. This work is part of a larger study of power plant coal mill reliability.
INTRODUCTION
In this paper we take a highly subjective, decision-oriented viewpoint in setting up a model to capture the interaction between system degradation and opportunistic maintenance/inspection. This model is primarily a decision model rather than a reliability model, although reliability provides the context. The model considers the uncertainty about the state of the system from the maintainer/operator point of view, and models how information gained about the system (which includes the usage time, but also other pieces of information) will change that uncertainty.
The model we have is a complicated stochastic model that we have to simulate because of the lack of a convenient closed-form solution, but it is designed to fit closely to an expert judgement elicitation process, and hence to be relatively easy to quantify through a combination of expert judgement and plant data. Because of the difficulties of using messy plant data, we give preference to a combination of expert judgement and plant data as the most cost-efficient way of quantifying the model. In other words, the model quantification process uses expert information to give the fine details of the model, while using plant data to calibrate that expert data against actual experience. We have used a strategy of reducing the difficulty of expert judgements by eliciting relative risks, as we believe these to be easier to elicit than absolute risks.
Background
Competing risks
Maintenance opportunities often depend on their duration and on economic dependence through set-up costs, which may require a compromise in some circumstances.
The concept of opportunistic maintenance and competing risk modeling is important within an industrial power plant. The role of expert judgments within the maintenance area may give insight and a better understanding of the interrelationships between events, which could strongly support our modeling later. Expert judgment in maintenance optimization is discussed in Noortwijk et al. (1992). Their elicited information is based on discretized lifetime distributions from different experts. That is, they performed a straightforward elicitation of failure distribution quantiles, rather than the kind of information being sought for the model we build here.
BASE MODEL

exp( − ∫_{S_{i−1}}^{S_i} λi(t) dt ).    (1)
Σ_{i=1}^{j−1} (S_{i+1} − S_i) λi + (t − S_j) λj    (2)

(3)

where λi is the failure rate after shock i, and S_j is the largest event time less than t.
2.2
A simple quantity that measures the degree of censoring is SX (0) = P(X < Z). This is simply the
probability that the next event will be a failure rather
than a maintenance action.
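P(X < Z) is easy to estimate by simulation. The sketch below uses a toy competing-risks setup with exponential failure and opportunity times; all rates are illustrative assumptions, not the paper's shock model. Scaling the failure intensity up raises the probability of observing a failure, the behaviour reported later for the scaling parameter.

```python
import random

def prob_failure_first(rate_scale, trials=20000, seed=1):
    """Monte Carlo estimate of P(X < Z): the probability that the
    failure time X precedes the next maintenance action Z.  Toy setup:
    exponential failure with rate rate_scale * 0.1 versus an
    exponential opportunity with rate 0.2 (values are illustrative)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = rng.expovariate(rate_scale * 0.1)   # failure time
        z = rng.expovariate(0.2)                # next taken opportunity
        hits += x < z
    return hits / trials

# A larger scaling of the failure intensity makes failures more likely
# to be observed before maintenance censors them.
print(prob_failure_first(0.5), prob_failure_first(2.0))
```

For two independent exponentials the exact value is the rate ratio lam_x / (lam_x + lam_z), which makes the sketch easy to sanity-check.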
P(X1 > t, X2 > t | S1^(1), …, S_{m(1)}^(1), S1^(2), …, S_{m(2)}^(2))    (5)

(6)
Hence the conditional survival probability factorizes because of the additivity assumption on λi,j. Under the assumption that the two shock processes are independent, we can then say that the unconditional survival probabilities also factorize.
Hence, when the two failure mode processes are considered to be truly independent, they can be modelled as two different cases of the base model, and the failure rates added together. However, when they are not considered independent, we can capture this in one of two ways:
1. The shock time processes are dependent.
2. The failure intensities are not additive.
The simplest way in which the shock processes could be dependent is for there to be common shocks for both failure modes. For the purposes of this paper we shall not consider more complex forms of dependency between the shock processes. Regarding failure intensities, we would typically expect the failure intensities to be additive for early shocks, and then possibly to become superadditive for late shocks, that is λi,j > λi + λj.
Remark 1.1 When there is more than one failure mode, there is a modelling issue relating to the meaning of the failure intensities. While in the single failure mode model we can simply consider the failure intensity to cover failures arising from any reason, when there are two or more failure modes we have to distinguish the failure intensities arising from the different failure modes. This avoids double counting of any residual failure intensity not ascribable to those two failure modes, but it also ends up not counting it at all. Therefore, if there are significant failure intensities from residual failure causes, it is best to assess these explicitly alongside the main failure modes, so that the failure intensities can be properly added.
ELICITATION
MODEL CALIBRATION
exp( − [ Σ_{i=1}^{j−1} (S_{i+1} − S_i) λi + (t − S_j) λj ] ).
RESULTS

Figure 1. [Failure time, minimum and taken-opportunity distributions versus time.]
Figure 2. [Failure time, minimum and taken-opportunity distributions versus time.]
Figure 3. [Failure time, minimum and taken-opportunity distributions versus time.]
Table 1. P(X < Z) values.

alpha   P(X < Z)
0.4     0.230
0.6     0.256
0.8     0.313
1       0.346
1.2     0.402
1.4     0.470
1.6     0.599

ACKNOWLEDGEMENTS
REFERENCES

Finally, we give a table showing how the probability of observing failure depends on the scaling parameter α. This confirms empirically the theoretical result given above, and shows that the scaling parameter is a first-order model parameter.

6 CONCLUSIONS
INTRODUCTION
NOTATION
= ∫_0^∞ g(yi | u, j) f_{i−1}(u + ti − t_{i−1} | j, V_{i−1}) du
  / Σ_{d=1}^{r} ∫_0^∞ g(yi | u, d) f_{i−1}(u + ti − t_{i−1} | d, V_{i−1}) du    (3)

DEFINITIONS

E[Xi | Xi > 0, ℑi] = ∫ x Pr(x ≤ Xi ≤ x + dx | Xi > 0, ℑi)    (4)

(1)

(5)
We parameterise f0(x | j) and g(y | x, j) using maximum likelihood estimation and the nj CM histories from components that failed according to the j-th failure type. The likelihood is the product of the relevant conditional probabilities for all the histories:

L(θj) = Π_{k=1}^{nj} Π_{i=1}^{bk} (dy_ki)^{−1} Pr(y_ki ≤ Y_ki ≤ y_ki + dy_ki | ·)    (2)

where θj is the unknown parameter set. After inserting the relevant pdfs and re-arranging, the likelihood
fi(x | j, Vi) =

L(θj) = Π_{k=1}^{nj} Π_{i=1}^{bk} ∫_0^∞ g(y_ki | u, j) f_{k,i−1}(u + t_ki − t_{k,i−1} | ·) du    (7)

g^(2)(yi | x) f^(2)_{i−1}(x + ti − t_{i−1} | V_{i−1})
  / ∫_0^∞ g^(2)(yi | u) f^(2)_{i−1}(u + ti − t_{i−1} | V_{i−1}) du    (8)
The comparison is achieved using an average mean square error (AMSE) criterion. At the i-th CM point for the k-th component, the MSE is

MSE_ki =    (9)

(10)

(11)

AMSE = Σ_{k=1}^{nj} Σ_{i=1}^{bk} MSE_ki / N
Figure. [Standardised 1st principal component of the condition monitoring information versus time.]

f0(x | j) = βj λj^{βj} x^{βj−1} e^{−(λj x)^{βj}}    (12)

where x > 0 and λj, βj > 0 for j = 1, 2. For linearly independent principal components, we have the combined conditional pdf

g(yi | x, j) = Π_c g(y_ic | x, j)    (13)

g(y | x, j) ∝ exp{ −(y − μj(x))² / (2 σj²) }    (14)

fi(x | j, Vi) = (x + Σ_{z=1}^{i} tz)^{βj−1} e^{−(λj (x + Σ_{z=1}^{i} tz))^{βj}}
  / ∫_0^∞ (u + Σ_{z=1}^{i} tz)^{βj−1} e^{−(λj (u + Σ_{z=1}^{i} tz))^{βj}} du    (15)

(16)
Table 1. Candidate forms for μ1(x) and their parameter estimates.

            A1 + B1 x    A1 + B1/x    A1 + B1 exp{C1 x}
A1          1.628        0.031        2.326
B1          0.006281     18.882       4.386
C1          –            –            0.002746
σ1          0.393        0.866        0.334
ln L(θ1)    −63.036      −165.753     −41.852
AIC         132.072      337.506      91.704

Table 2. Candidate forms for μ2(x) and their parameter estimates.

            A2 + B2 x    A2 + B2/x    A2 + B2 exp{C2 x}
A2          0.606        0.401        1.228
B2          0.001624     11.761       3.268
C2          –            –            0.003995
σ2          0.618        0.851        0.279
ln L(θ2)    −88.109      −118.21      −13.531
AIC         182.218      242.42       35.062

[General model, μ(x):]

            A + B x      A + B/x      A + B exp{C x}
A           0.735        0.168        1.308
B           0.002052     14.367       3.454
C           –            –            0.004233
σ           0.696        0.894        0.333
ln L(θ)     −236.721     −292.852     −71.27
AIC         479.442      591.704      150.54

(17)
λ1 = 0.001606 and β1 = 5.877 respectively. A number of forms are considered for the function μ1(x) in equation (14) that describes the relationship between the RL and the observed (and transformed) CM information under the influence of failure type 1. The estimated parameters and the AIC results are given in Table 1, where the objective is to minimise the AIC.
The selected function is μ1(x) = A1 + B1 exp(C1 x), where A1 = 2.326, B1 = 4.386, C1 = 0.002746, and the standard deviation parameter is σ1 = 0.334. In addition, the prior probability that failure type 1 will occur for a given component is estimated as p0(1) = 7/10 = 0.7.
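The AIC-based selection can be sketched directly from the reported log-likelihoods; the parameter counts of 3, 3 and 4 below are inferred from the AIC values in Table 1 rather than stated in the text.

```python
def aic(log_lik, n_params):
    """Akaike information criterion: 2k - 2 ln L (smaller is better)."""
    return 2 * n_params - 2 * log_lik

# Log-likelihoods of the three candidate forms of mu1(x) from Table 1
# (linear, reciprocal, exponential); the exponential form carries the
# extra C1 parameter.
candidates = {
    "A1 + B1*x":         aic(-63.036, 3),
    "A1 + B1/x":         aic(-165.753, 3),
    "A1 + B1*exp(C1*x)": aic(-41.852, 4),
}
best = min(candidates, key=candidates.get)
print(best, round(candidates[best], 3))   # → A1 + B1*exp(C1*x) 91.704
```

The reproduced minimum, 91.704, matches the AIC of the exponential form reported in Table 1, which is why it is the selected function.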
exp{ −(y − μ(x))² / (2σ²) }    (18)

(x + Σ ti)^{β−1} e^{−(λ (x + Σ ti))^{β}}
6.2.3 Failure type 2
For failure type 2, the prior Weibull RL pdf is parameterised as λ2 = 0.000582 and β2 = 19.267. The estimated parameters and selection results for the function μ2(x) from equation (14) are given in Table 2.
The selected function is μ2(x) = A2 + B2 exp(C2 x), where A2 = 1.228, B2 = 3.268, C2 = 0.003995 and σ2 = 0.279. The prior probability that failure type 2 will occur is p0(2) = 0.3.
(u + Σ_{z=1}^{i} tz)^{β−1} e^{−(λ (u + Σ_{z=1}^{i} tz))^{β}} du    (19)
6.3 Comparing the models
The models are compared using new component data.
The first component is known (in hindsight) to have
Figure. [Probability of each failure type versus time (first component).]
Figure. [Actual RL, FM model RL estimate and general model RL estimate versus time (first component).]
Figure. [Probability of each failure type versus time (second component).]
Figure. [Actual RL, FM model RL estimate and general model RL estimate versus time (second component).]
DISCUSSION
ACKNOWLEDGEMENT
The research reported here has been supported by the
Engineering and Physical Sciences Research Council
(EPSRC, UK) under grant EP/C54658X/1.
REFERENCES
Makis, V. and Jardine, A.K.S. (1991) Optimal replacement
in the proportional hazards model, INFOR, 30, 172183.
Zhang, S. and Ganesan, R. (1997) Multivariable trend analysis using neural networks for intelligent diagnostics of
rotating machinery, Transactions of the ASME Journal of
Engineering for Gas Turbines and Power, 119, 378384.
Wang, W. and Christer, A.H. (2000) Towards a general condition based maintenance model for a stochastic dynamic
system, Journal of the Operational Research Society, 51,
145155.
Wang, W. (2002) A model to predict the residual life of rolling
element bearings given monitored condition information
to date, IMA Journal of Management Mathematics, 13,
316.
Vlok, P.J., Wnek, M. and Zygmunt, M. (2004) Utilising statistical residual life estimates of bearings to quantify the
influence of preventive maintenance actions, Mechanical
Systems and Signal Processing, 18, 833847.
Banjevic, D. and Jardine, A.K.S. (2006) Calculation of reliability function and remaining useful life for a Markov
failure time process, IMA Journal of Management Mathematics, 286, 429450.
Carr, M.J. and Wang, W. (2008a) A case comparison of
a proportional hazards model and a stochastic filter for
condition based maintenance applications using oil-based
condition monitoring information, Journal of Risk and
Reliability, 222 (1), 4755.
Carr, M.J. and Wang, W. (2008b) Modelling CBM failure
modes using stochastic filtering theory, (under review).
ABSTRACT: The present work proposes a two-stage modelling framework which aims at representing both a complex maintenance policy and the functional and dysfunctional behavior of a complex multi-component system, in order to assess its performance in terms of system availability and maintenance costs. The first stage consists of a generic model of a component, developed to describe the component degradation and maintenance processes. At the second stage, a system of several components is represented and its behavior is simulated when a given operating profile and maintenance strategy are applied, so as to estimate the maintenance costs and the system availability. The proposed approach has been validated on a simplified turbo-pump lubricating system.
1 INTRODUCTION

1.1 Industrial context

1.2 Scientific context
Numerous maintenance performance and cost models have been developed during the past several decades, see e.g. (Valdez-Flores & Feldman 1989), with different interesting objectives reached.
However, they remain difficult to adapt to multi-component systems and to complex maintenance strategies such as those developed and applied in the RCM context, since most of them are devoted to simple maintenance strategies (periodic maintenance, condition-based maintenance, age-based maintenance, …) with a finite number of actions and defined effects (perfect inspections, perfect replacements, minimal repair, …), or are applied to single-unit systems, see e.g. (Dekker 1996, Moustafa et al. 2004). Other approaches, based on stochastic simulation, already allow taking into account more complex maintenance strategies, but they generally aim at developing simulation techniques, such as Monte Carlo simulation, or optimization procedures, see e.g. (Marseguerra and Zio 2000).
Finally, only a few maintenance simulation works pay attention to component degradation
and failure phenomena and to the effects of maintenance actions. All these observations have led to the definition of a general and overall description of all the aspects related to a system and its components' behavior and the different possible maintenance tasks applied.
PERFORMANCES EVALUATIONS

2.1

Figure 1. [The generic component models interact with the system operation model, the system failure behaviour model and the system-level maintenance model to produce the performance evaluations.]

MODEL OF COMPONENT

Overall model
Figure 2. [Component model: the operating profile, environment and system operation act as influencing factors on degradation mechanisms, which lead to failure mode occurrences and symptoms; preventive and corrective maintenance act on the component, with effects on the system and its dysfunction.]

3.2

Figure 3. [A degradation mechanism evolves through levels 0 to n with increasing failure rates; an associated symptom grows from insignificant to significant. Symptom observation triggers maintenance detection, and maintenance repair reduces the degradation level.]
Component behavior is defined by the way degradation mechanisms can evolve and may lead to the occurrence of some failure mode. This evolution is due to the influence of various factors such as operating conditions, the environment, the failure of another component, etc.
As shown in Figure 3, each degradation mechanism's evolution is described through various degradation levels, and at each level an increasing failure probability represents the random appearance of the different possible failure modes. The degradation level can be reduced after maintenance actions are performed.
Such a representation is motivated by the fact that the different degradations that can affect a component can be detected and observed by specific preventive maintenance tasks, which may lead to preventive repair on a conditional basis. Thus, it seems more appropriate to describe each mechanism than to describe the global degradation state of the component.
Given the large number of degradation mechanisms, and in order to obtain a generic model that can be applied in a wide range of situations, it appears convenient to propose various alternative representations of degradation mechanism evolution, so as to choose among them the one most adapted to a given situation, depending on the component, the mechanisms and, obviously, the information data available. Different classical approaches can be considered (statistical, semi-parametric, life-time model, …).
Another important aspect related to component degradation mechanisms is that of the symptom, i.e. a phenomenon that may appear due to one or more mechanism evolutions and whose detection gives information about component degradation without directly observing the degradation itself. If possible, this type of detection can be made by less costly tasks which do not need the component to be stopped, and so presents some real advantages.
As for degradation evolution, symptom appearance and evolution can be modeled through levels and by means of thresholds that represent its significance, i.e. the fact that it testifies to a degradation evolution.
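The level-based degradation idea can be sketched as a toy simulation; every transition probability and threshold below is an illustrative assumption, not data from the paper.

```python
import random

class DegradingComponent:
    """Toy sketch of the level-based idea: a degradation mechanism
    evolves through discrete levels, the per-step failure probability
    grows with the level, and a preventive repair lowers the level.
    All numeric values are illustrative assumptions."""

    def __init__(self, n_levels=5, seed=7):
        self.level = 0
        self.n_levels = n_levels
        self.failed = False
        self.rng = random.Random(seed)

    def step(self):
        if self.failed:
            return
        if self.rng.random() < 0.3 and self.level < self.n_levels - 1:
            self.level += 1                     # degradation evolves
        if self.rng.random() < 0.05 * self.level:
            self.failed = True                  # a failure mode occurs

    def preventive_repair(self, levels_removed=2):
        self.level = max(0, self.level - levels_removed)

c = DegradingComponent()
for t in range(50):
    c.step()
    if t % 10 == 9 and c.level >= 3:            # condition-based repair
        c.preventive_repair()
print(c.level, c.failed)
```

A symptom process can be layered on the same structure by letting an observable variable grow with the level and comparing it against detection and repair thresholds, as the text describes.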
3.3

Table 1. Maintenance tasks.

Task                 Activation                                Effects
Corrective maintenance
Repair               Failure mode occurrence                   Unavailability, failure repair
Test                 Failure observed during stand-by period   Unavailability, failure repair
Preventive maintenance
Symptom detection    Time period elapsed                       No unavailability, symptom detection
Degradation detection  Time period elapsed                     Unavailability, degradation detection
Overhaul             Time period elapsed                       Unavailability, degradation repair
Preventive repair    Symptom > detection threshold;            Unavailability, degradation repair
                     degradation > repair threshold
still in evolution, with an increasing probability of failure mode occurrence). Finally, tests are expensive but efficient tasks performed on stand-by components to detect a possible failure before the component is activated, but they can have adverse effects on it.
4.2
The three system-level models and the component-level models interact so as to represent completely the system behavior, its unavailability and its expenditures, given the behavior of its components and the maintenance tasks that are carried out.
Component-level models give information on component states (failure, unavailability for maintenance) and on maintenance costs to the three system-level models, which evolve according to these input data and possibly send feedback data.
As shown in Figure 1, the system operation model sends information to the component-level models, for instance to activate a stand-by component or to stop an auxiliary component that has become useless after the repair of the main component.
The system maintenance model can send data to the component-level model to force the maintenance of a component coupled with a component already in maintenance.
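A minimal sketch of this exchange of information, with hypothetical class and function names (the paper does not prescribe an implementation): component objects report their state, the operation model activates a stand-by component when a main one is unavailable, and the maintenance model forces the maintenance of coupled components.

```python
# Hypothetical sketch of the model interactions described above.
class Component:
    def __init__(self, name, standby=False):
        self.name, self.standby = name, standby
        self.failed, self.in_maintenance, self.cost = False, False, 0.0

    def report(self):
        """Component-level model output sent to the system-level models."""
        return {"name": self.name, "failed": self.failed,
                "in_maintenance": self.in_maintenance, "cost": self.cost}

def system_operation(components):
    """Activate a stand-by component when some component is unavailable."""
    if any(c.failed or c.in_maintenance for c in components):
        for c in components:
            if c.standby and not c.failed:
                c.standby = False          # feedback: activation order
                return c.name
    return None

def system_maintenance(components, coupled):
    """Force maintenance of components coupled with one already in maintenance."""
    forced = []
    for a, b in coupled:
        if components[a].in_maintenance and not components[b].in_maintenance:
            components[b].in_maintenance = True
            forced.append(b)
    return forced

comps = {"main": Component("main"), "aux": Component("aux", standby=True)}
comps["main"].failed = True
activated = system_operation(list(comps.values()))
```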
4.3
Cost(Strategy) = lim_{TMiss→∞} C(TMiss) / TMiss   (1)
[Figure 6: components Cl1, Po1, Po2, Ca and Cl2; Failure Mode 1 (unscheduled shutdown) and Failure Mode 2 (impossible starting); Degradation Mechanism A (bearing wear) and Degradation Mechanism B (oxidation); Symptom 1 (vibrations) and Symptom 2 (temperature).]
case study was to compare different possible strategies, composed of different types of tasks and differing in their periodicity, in terms of global cost.
Expert opinions and field data were collected to define the characteristics of the components, the system and the maintenance tasks that may be performed, so that the modeling approach could be applied and simulated.
Indeed, for each component, degradation mechanism and maintenance task, basic parameters such as those in Table 2 have to be specified.
Obviously, maintenance tasks are also described in terms of periodicity and decision rule criteria, that is, which degradation levels can be observed and when preventive repairs are decided to be performed. These characteristics define the maintenance strategy applied and simulated.
To model the particular case presented, the main degradation mechanisms of each component have been characterized in terms of degradation levels and relative failure rates for the various possible failure modes, possible symptoms and their probability or delay of appearance and evolution up to some significant thresholds. We also defined the transitions from one degradation level to the next and, finally, the influencing factors that affect the evolution of the mechanisms. In the present case study, mechanism evolution has been modelled using a Weibull life-time distribution, whose parameters depend on the mechanism described and on the information available from the experts, to compute the time of the transition from one degradation level to the next.
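Under the stated assumption of Weibull-distributed transition times, the successive degradation-level dates can be sampled as below. The (scale, shape) pairs are illustrative placeholders; in the paper these parameters come from expert judgement and field data, per mechanism.

```python
import math
import random

def weibull_time(scale, shape, rng):
    """Inverse-transform sample from a Weibull life-time law."""
    return scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)

# Hypothetical (scale, shape) pairs for the transitions level 0->1, 1->2, 2->3.
TRANSITIONS = [(2000.0, 2.0), (1500.0, 2.5), (800.0, 3.0)]

def degradation_dates(rng):
    """Cumulative dates at which the mechanism reaches levels 1, 2 and 3."""
    t, dates = 0.0, []
    for scale, shape in TRANSITIONS:
        t += weibull_time(scale, shape, rng)   # time spent in the current level
        dates.append(t)
    return dates

dates = degradation_dates(random.Random(42))
```

Repeating this sampling inside a Monte Carlo loop yields the distribution of the dates at which inspection or repair thresholds are crossed.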
In particular, the following assumptions, illustrated in Figures 6 and 7, have been made regarding the relationships between the degradation mechanisms, associated symptoms and failure modes considered for each component:
Concerning sensor Ca, only the very rare random occurrence of an electronic failure has been considered, with no symptom and no degradation mechanism.
[Figure 7: Failure Mode 3 (no opening) and Failure Mode 4 (external leaks); Degradation Mechanism C (axis blocking) and Degradation Mechanism D (joint wear); Symptom 3 (deposits).]
Table 2.

Degradation mechanism:
  Levels: minimal and maximal degradation thresholds.
  Basic evolution description: representation chosen and parameter values (e.g. Weibull law parameters).
  Influencing factors impact: modification of the basic parameters depending on the influencing factors.
  Failure: failure modes that can occur; failure rates.
  Symptoms: symptoms that appear; eventual delay or probability of appearance.

Detection tasks:
  Effectiveness: error risk (non-detection and false alarm).
  Parameters: cost, duration, resources.

Repair tasks:
  Effectiveness: repair type (AGAN, ABAO, or partial).
  Parameters: cost, duration, resources.
[Figure: global maintenance costs of the different maintenance strategies as a function of increasing maintenance task periodicity.]
the fact that all the external inspection tasks carry some non-detection and false-alarm error risks, which are even more significant for degradation mechanism C. It is indeed impossible to detect its evolution through symptom detection, and it has been assumed that such detection is performed with very poor efficiency, owing to the distance between the degradation evolution and the task performance.
Thus, it is interesting to assess the advantage of combining the different types of tasks. In this way, the degradation of the system components can be controlled indirectly, where possible, through external inspections devoted to symptom detection, and also in a more direct way through overhauls. The latter are more efficient in terms of degradation detection and, when performed with a higher periodicity, the global maintenance costs can be reduced.
Figure 10 presents the minimal global costs for different strategies composed as follows:
- external inspections supported by overhauls to detect degradation mechanisms A, B and D, i.e. those detectable through symptom observation, the overhauls being performed with a higher periodicity than the external inspections;
- overhauls to detect the evolution of degradation mechanism C, since it is not convenient, in terms of efficiency, to observe it through external inspections.
Again, the strategies differed in their task periodicities and are compared here to the minimal cost of the strategy composed only of overhauls, to show that appropriate combinations make it possible to reduce the global maintenance costs.
6
CONCLUSIONS
REFERENCES
Bérenguer, C., Grall, A., Zille, V., Despujols, A. & Lonchampt, J. 2007. Modeling and simulation of complex maintenance strategies for multi-component sys-
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: Redundancy Allocation Problems (RAPs) are among the most relevant topics in reliable system design and have received considerable attention in recent years. However, proposed models are usually
built based on simplifying assumptions about the system reliability behavior that are hardly met in practice.
Moreover, the optimization of more than one objective is often required, for example to maximize system reliability/availability and minimize system cost. In this context, a set of nondominated solutions (system designs with compromise values for both objectives) is of interest. This paper presents an ACO approach
for multiobjective optimization of availability and cost in RAPs considering repairable systems subjected to
imperfect repairs handled via Generalized Renewal Processes. The dynamic behavior of the system is modeled
through Discrete Event Simulation. The proposed approach is illustrated by means of an application example
involving repairable systems with series-parallel configuration.
INTRODUCTION
Suppose that a system is composed of several subsystems in series and that each one of them may
have a number of redundant components in parallel.
The determination of the quantity of redundant components in order to maximize the system reliability
characterizes a redundancy allocation problem.
Increasing the redundancy level usually improves system reliability, but it also increases the associated system costs. To incorporate cost limitations, RAPs are often modeled as single-objective optimization problems that maximize system reliability subject to cost constraints. In real circumstances, however, one may wish to treat the associated costs not as a constraint but as an additional objective to be attained. In such situations, where multiple objectives are taken into account, a multiobjective optimization approach to modeling the RAP is necessary.
RAPs are essentially combinatorial since the aim
is to find optimal combinations of the components
available to construct a system. The complexity of
such problems may considerably increase as the number of components grows leading to situations where
the application of classical methods such as integer programming (Wolsey 1998) is prohibitive in
light of the time required to provide results. Alternatively, heuristic methods such as Genetic Algorithms (GAs; Goldberg 1989, Michalewicz 1996) and Ant Colony Optimization (ACO; Dorigo & Stützle 2004)
attempts to imitate system behavior by randomly generating discrete events (e.g. failures) over the simulation (mission) time. In addition, the flexibility of DES permits the introduction of many real aspects of the system, such as taking into account the availability of maintenance resources during the mission time.
1.1
Previous works
2 PRELIMINARIES

2.1

of Equation 1 yields:

(1)

V_n = q Σ_{i=1}^{n} X_i   (2)

F(x_i | v_{i-1}) = [F(v_{i-1} + x_i) - F(v_{i-1})] / [1 - F(v_{i-1})]   (3)

(4)
When TTRs are not negligible compared to the component operational time, they may be considered in the evaluation of its failure-repair process. In this situation there are two stochastic processes: one regarding the failure process and the other related to the repair process. Superimposing these two stochastic processes results in an alternating process that characterizes the component state (either operational or unavailable due to a repair action). However, when the system is composed of several components, each with its own alternating process, the analytical handling of the entire system failure-repair process becomes infeasible. In these cases, DES may be used as an alternative to overcome such difficulties.
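The failure instants of a single component under imperfect repair can be simulated directly from the conditional distribution of Equation 3, with the virtual age accumulated as in Equation 2 (Kijima Type I). The sketch below assumes Weibull-distributed TTF; all parameter values are illustrative.

```python
import math
import random

def next_ttf(v, alpha, beta, rng):
    """Sample a time to failure given virtual age v, from the Weibull
    conditional distribution F(x | v) = [F(v+x) - F(v)] / [1 - F(v)]
    (cf. Equation 3), by inverse transform."""
    u = rng.random()
    return alpha * ((v / alpha) ** beta - math.log(1.0 - u)) ** (1.0 / beta) - v

def kijima_type1_failures(alpha, beta, q, t_miss, rng):
    """Failure instants on [0, t_miss] with Kijima Type I virtual age
    V_n = q * sum(X_i) (cf. Equation 2); q=0 is perfect repair, q=1 minimal."""
    t, v, failures = 0.0, 0.0, []
    while True:
        x = next_ttf(v, alpha, beta, rng)
        t += x
        if t > t_miss:
            return failures
        failures.append(t)
        v += q * x          # virtual age left after the imperfect repair

times = kijima_type1_failures(alpha=100.0, beta=1.5, q=0.3,
                              t_miss=1000.0, rng=random.Random(7))
```

With beta > 1 and 0 < q < 1, successive inter-failure times tend to shrink as the virtual age grows, which is the aging behavior the GRP is meant to capture.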
2.2
Multiobjective optimization
Value of q    Repair type
q = 0         Perfect
0 < q < 1     Imperfect
q = 1         Minimal

(5)

Subject to

g_i(x) = 0,  i = 1, . . ., p   (6)

h_i(x) ≤ 0,  i = p + 1, . . ., m   (7)

and

(8)
MULTIOBJECTIVE ACO
based on the single objective Ant System for the Traveling Salesman Problem (TSP) proposed by .Dorigo
et al., (1996). The following subsections are dedicated
to the discussion of the proposed multiACO.
3.1 Input data
Initially it is necessary to identify the number of subsystems in series (s) and the maximum (n_i,max) and minimum (n_i,min) numbers of redundant components in the ith subsystem, i = 1, . . ., s. Each subsystem can be composed of different technologies, which may have different reliability and cost features. Hence, the quantity of available component types (ct_i, i = 1, . . ., s) that can be allocated in each subsystem is also required. With this information, the ants' environment is constructed.
Moreover, the components' TTF and TTR distributions and the component-related costs need to be specified. The ACO-specific parameters nAnts, nCycles, α, β, ρ and Q are also required; they are discussed in Subsection 3.3.
3.2 Environment modeling
The environment to be explored by the ants is modeled as a directed graph D = (V, A), where V is the vertex set and A is the arc set. D has an initial vertex (IV) and a final vertex (FV) and is divided into phases separated by intermediate vertices. An intermediate vertex indicates the end of a phase and the beginning of the subsequent one, and is therefore common to adjacent phases. In this work, a phase represents a subsystem; the vertices within a phase represent either its extremities or the possible components to be allocated in parallel in that subsystem. The quantity of vertices of the ith phase is equal to n_i,max · ct_i plus the two vertices indicating its beginning and its end.
Vertices are connected by arcs. First, consider a problem with a single subsystem. There is then only one phase, with IV and FV but no intermediate vertices. IV is linked to all vertices within the phase (except FV), which in turn are connected with each other and also with FV. Now suppose a problem involving two subsystems. An intermediate vertex then plays the role of FV for the first phase and of IV for the vertices within the second phase. All vertices in the second phase (except the vertex indicating its beginning) are linked to FV. The same construction generalizes to s subsystems. Arcs linking vertices within the ith phase belong to that phase.
For the sake of illustration, Figure 1 shows an
example of a directed graph representing an ants environment, where s = 2, n1,min = 1, n1,max = 2, ct1 = 2,
n2,min = 1, n2,max = 1, ct2 = 4.
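The construction just described can be sketched as below; vertex names and the function signature are arbitrary choices, not from the paper.

```python
# Hypothetical construction of the ants' environment D = (V, A): each phase i
# contributes n_max[i] * ct[i] component vertices plus its boundary vertices;
# arcs run from the phase's entry vertex to its component vertices, between
# component vertices, and from component vertices to the phase's exit vertex.
def build_environment(n_max, ct):
    vertices, arcs = ["IV"], set()
    entry = "IV"
    for i, (n, c) in enumerate(zip(n_max, ct)):
        comps = [f"p{i}_v{j}" for j in range(n * c)]
        exit_v = "FV" if i == len(n_max) - 1 else f"mid{i}"
        vertices += comps + [exit_v]
        for v in comps:
            arcs.add((entry, v))          # entry vertex to each component vertex
            arcs.add((v, exit_v))         # each component vertex to exit vertex
            for w in comps:
                if v != w:
                    arcs.add((v, w))      # component vertices interconnected
        entry = exit_v                    # intermediate vertex shared by phases
    return vertices, arcs

# Figure 1's example: s = 2, n1,max = 2, ct1 = 2, n2,max = 1, ct2 = 4.
V, A = build_environment(n_max=[2, 1], ct=[2, 4])
```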
3.3

p^k_vw = [τ_vw(t)]^α [η_vw]^β / Σ_{u ∈ W_k} [τ_vu(t)]^α [η_vu]^β   (9)

where τ_vw(t) is the pheromone quantity on the arc linking vertices v and w at time t, η_vw is the visibility of the same arc, α is the relative importance of the pheromone quantity, and β is the relative importance of η_vw.
In this work, the amount of pheromone is initially set to 1/(n_i,max · ct_i) for each arc within the ith phase. A cycle is finished when all ants reach FV. When that occurs, the pheromone quantity on the arcs is updated (i.e., multiACO is an ant-cycle algorithm; see Dorigo et al. (1996) and Dorigo & Stützle (2004)). Let m be the highest number of visited vertices in a cycle. Then, at time t + m, the pheromone quantity on each arc is updated in accordance with the rule:

τ_vw(t + m) = (1 - ρ) τ_vw(t) + Δτ_vw   (10)

Δτ_vw = Σ_{k=1}^{nAnts} Δτ_vw,k   (11)

SC = C_max · A_k / C_k   (12)

η_vw = [MTTF_w / (MTTF_w + MTTR_w)] · [ca_i,max / ca_w]   (13)
3.4
Dominance evaluation
The desired result of multiACO is a set of nondominated solutions (N ), which may contain all compromise system designs found during the algorithm run.
Therefore each ant is evaluated for each objective and
has a number of associated objective values equal to
the quantity of objectives (each objective is treated
separately).
The set N is updated at the end of each cycle.
Firstly, however, a set of candidate solutions (CS) to
be inserted in N is obtained by the assessment of the
dominance relation among ants within the cycle under
consideration. If ant k is dominated by other ants in the current cycle, then it is not introduced in CS. Otherwise, if ant k is nondominated in relation to all ants
necessary to evaluate the dominance relation of each
element in CS in relation to solutions already stored
in N . Suppose that ant k is in CS, then: (i) if ant k is
dominated by elements in N , then ant k is ignored; (ii)
if ant k dominates solutions in N , then all solutions
dominated by ant k are eliminated from N and a copy
of ant k is inserted in N .
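A sketch of this archive update follows; solutions are tuples written so that larger is better in every coordinate (e.g. (availability, -cost)), which is one convenient convention, not the paper's notation.

```python
def dominates(a, b):
    """Pareto dominance for maximization tuples: a is at least as good in
    every objective and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def update_archive(archive, cycle_solutions):
    # Candidate set CS: solutions nondominated within the current cycle.
    cs = [s for s in cycle_solutions
          if not any(dominates(o, s) for o in cycle_solutions if o != s)]
    for s in cs:
        if any(dominates(n, s) for n in archive):
            continue                       # (i) dominated by the archive: ignore
        archive = [n for n in archive if not dominates(s, n)]   # (ii) prune
        archive.append(s)
    return archive

# Objectives encoded as (availability, -cost) so both are maximized.
archive = update_archive([], [(0.90, -100.0), (0.95, -120.0), (0.80, -130.0)])
```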
3.5
The algorithm
Figure 2 shows the pseudocode of the proposed multiACO. The required input data was discussed in
Subsection 3.1.
Figure 2.
EXAMPLE APPLICATION
Σ_{i=1}^{s} Σ_{j=1}^{m_i} ca_ij x_ij   (14)

Figure 3. Pseudocode for the system availability estimation.
Let z_ik, z_k [= 1 (available); = 0 (unavailable)] and t_k be the component state, the system state and the time at the kth step, respectively. Moreover, let c_k be a counter of the number of times the system is available by the end of the kth step, h_i(·) and m_i(·) be the time-to-failure and repair-time probability density functions of component i, and A(t_k) be the system availability at time t_k. If nC is the number of system components, a DES iteration can be written in pseudocode as shown in Figure 3.
In a nutshell, the algorithm described above may be understood as follows. While the process time t_i is lower than the mission time t_k, the following steps are accomplished: (i) the time to failure τ_i of component i is sampled from h_i(·); (ii) t_i is increased by τ_i; (iii) the condition (t_i ≥ t_k) means component i ends the kth step in an available condition (z_ik = 1); otherwise, component i failed before the kth step; (iv) in the latter case, the repair time x_i is sampled from m_i(·) and t_i is increased by it; if (t_i ≥ t_k) component i ends the kth step under a repair condition and is therefore unavailable (z_ik = 0); (v) once the states of the nC components at the kth step are assessed, the system state z_k is obtained via the corresponding system BDD; (vi) finally, the counter c_k is increased by z_k. The aforementioned procedure is repeated M times, a sufficiently large number of iterations, and the availability measure A(t_k) at the kth step is then estimated by dividing c_k by M.
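A condensed version of this DES iteration is sketched below, simplified to exponential TTF/TTR distributions and to a structure function passed in place of the system BDD; the function names and parameter values are illustrative.

```python
import random

def component_state(t_k, mttf, mttr, rng):
    """State of one component at time t_k under an alternating failure/repair
    process: advance through failure and repair times until t_k is passed."""
    t = 0.0
    while True:
        t += rng.expovariate(1.0 / mttf)      # time to failure from h_i(.)
        if t >= t_k:
            return 1                          # ends the step operating
        t += rng.expovariate(1.0 / mttr)      # repair time from m_i(.)
        if t >= t_k:
            return 0                          # ends the step under repair

def availability(t_k, comps, system_up, M, rng):
    """Monte Carlo estimate A(t_k) = c_k / M for a system of nC components;
    system_up stands in for the system BDD evaluation."""
    c_k = 0
    for _ in range(M):
        states = [component_state(t_k, mttf, mttr, rng) for mttf, mttr in comps]
        c_k += system_up(states)
    return c_k / M

# Example: two redundant components in parallel (system up if any is up).
A = availability(t_k=10.0, comps=[(100.0, 5.0), (100.0, 5.0)],
                 system_up=lambda s: 1 if any(s) else 0,
                 M=2000, rng=random.Random(0))
```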
A number of random variables are obtained via DES
and fed back to multiACO with the aim of calculating
the objectives (that is, system mean availability and
system total cost): system operating/unavailable time
Σ_{i=1}^{s} Σ_{j=1}^{m_i} Σ_{k=1}^{x_ij} co_ij to_ijk   (15)
where coij is the operating cost per unit time for the jth
component type of the ith subsystem and toijk is the
operating time of the kth copy of that component.
CCM = Σ_{i=1}^{s} Σ_{j=1}^{m_i} Σ_{k=1}^{x_ij} ccm_ij n_ijk   (16)

(17)
Figure 4.

Table 2.

      n_i,min   n_i,max   ct_i
S1    1         3         5
S2    1         6         5
S3    1         4         3
Components are supposed to have their failure processes modeled according to a GRP. More specifically, a Kijima Type I model is assumed, with TTF given by Weibull distributions having different scale (α, in time units), shape (β) and rejuvenation (q) parameters. TTR, on the other hand, are exponentially distributed with a different rate (λ) per component type. In addition, components are subjected to imperfect repairs. When a component fails, the repair does not start immediately: it is first necessary to check the availability of the required maintenance resources. If resources are available, a random time representing the logistic time for resource acquisition is generated according to an Exponential distribution with parameter equal to 1, and the failed component must wait that long before going under repair. Otherwise, the failed component waits in a queue for the required maintenance resources.
All system designs in the set of nondominated solutions are optimal according to the multiobjective approach. However, the decision maker may select only one system design to be implemented. To guide this selection, he can perform a return-on-investment analysis, that is, observe the gain in system mean availability relative to the investment required for the corresponding system design. Mathematically, the return on investment (ROI) is:
ROI = (A_k - A_{k-1}) / (C_k - C_{k-1})   (18)
Table 3.

Subsystem  Type  fTTF  fTTR      ca    co   ccm
S1         1           Exp(1.5)  9900  198  990
S1         2           Exp(1.2)  9400  188  940
S1         3           Exp(0.9)  8500  170  850
S1         4           Exp(1.0)  8800  176  880
S1         5           Exp(1.1)  8200  164  820
S2         1           Exp(0.9)  7000  140  700
S2         2           Exp(1.2)  8700  174  870
S2         3           Exp(0.8)  7800  156  780
S2         4           Exp(1.1)  9100  182  910
S2         5           Exp(1.1)  7500  150  750
S3         1           Exp(0.5)  5500  110  550
S3         2           Exp(0.6)  5800  116  580
S3         3           Exp(0.6)  5200  104  520
Figure 5.

Figure 6.

Table 4.

Solution  Mean availability  Cost    ROI
B         0.763128           348357
C         0.816275           355391  7.555·10^-6
D         0.990152           773650
E         0.991839           906926  1.200·10^-8
CONCLUDING REMARKS

Although there are some works in the literature applying ACO to RAPs, they often make simplifications concerning system reliability behavior that are usually not satisfied in practical situations. Therefore, this paper presented an attempt to tackle RAPs considering the more realistic repairable system behavior of imperfect repairs. This was achieved by coupling multiobjective ACO and DES. In this context, the dynamic behavior of potential system designs was evaluated by means of DES, providing the decision maker with a better comprehension of the costs incurred during the mission time for the different mean availability values.
The proposed multiobjective ACO algorithm can certainly be improved by applying more sophisticated pheromone updating rules and by coupling it with a local search algorithm, with the aim of obtaining more accurate solutions. Moreover, some improvements can be made in the reliability modeling. For example, the amount of maintenance resources could itself be a decision variable, so that the decision maker could obtain compromise system designs that take into consideration the required quantity of maintenance resources over the mission time.
REFERENCES
Banks, J., Carson, J.S., Nelson, B.L. & Nicol, D.M. 2001. Discrete event system simulation. Upper Saddle River: Prentice Hall.
Bowles, J.B. 2002. Commentary: caution, constant failure-rate models may be hazardous to your design. IEEE Transactions on Reliability 51(3): 375–377.
Busacca, P.G., Marseguerra, M. & Zio, E. 2001. Multiobjective optimization by genetic algorithms: application to safety systems. Reliability Engineering & System Safety 72: 59–74.
Cantoni, M., Marseguerra, M. & Zio, E. 2000. Genetic algorithms and Monte Carlo simulation for optimal plant design. Reliability Engineering & System Safety 68: 29–38.
Chiang, C.-H. & Chen, L.-H. 2007. Availability allocation and multiobjective optimization for parallel-series systems. European Journal of Operational Research 180: 1231–1244.
Coello, C.A.C., Veldhuizen, D.A.V. & Lamont, G.B. 2002. Evolutionary algorithms for solving multiobjective problems. New York: Kluwer Academic.
Deb, K. 1999. Evolutionary algorithms for multicriterion optimization in engineering design. In: Proceedings of Evolutionary Algorithms in Engineering and Computer Science (EUROGEN99).
Dorigo, M., Maniezzo, V. & Colorni, A. 1996. Ant system: optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man and Cybernetics, Part B 26(1): 29–41.
Dorigo, M. & Stützle, T. 2004. Ant colony optimization. Massachusetts: MIT Press.
Doyen, L. & Gaudoin, O. 2004. Classes of imperfect repair models based on reduction of failure intensity or virtual age. Reliability Engineering & System Safety 84: 45–56.
Goldberg, D.E. 1989. Genetic algorithms in search, optimization, and machine learning. Reading: Addison-Wesley.
Ippolito, M.G., Sanseverino, E.R. & Vuinovich, F. 2004. Multiobjective ant colony search algorithm for optimal electrical distribution system strategical planning. In: Proceedings of the 2004 IEEE Congress on Evolutionary Computation. Piscataway, NJ.
Juang, Y.-S., Lin, S.-S. & Kao, H.-P. 2008. A knowledge management system for series-parallel availability optimization and design. Expert Systems with Applications 34: 181–193.
Kijima, M. & Sumita, N. 1986. A useful generalization of renewal theory: counting process governed by
I. Frenkel
Center for Reliability and Risk Management, Industrial Engineering and Management Department,
Sami Shamoon College of Engineering, Beer Sheva, Israel
ABSTRACT: This paper considers reliability measures for aging multi-state systems, where the system and its components can have different performance levels ranging from perfect functioning to complete failure. Aging is treated as an increase in failure rate during the system's life span. The suggested approach uses a non-homogeneous Markov reward model for the computation of commonly used reliability measures, such as mean accumulated performance deficiency, mean number of failures and average availability, for an aging multi-state system. Corresponding procedures for reward matrix definition are suggested for the different reliability measures. A numerical example is presented in order to illustrate the approach.
INTRODUCTION
MODEL DESCRIPTION
If, for example, the state K with highest performance level is defined as the initial state, the value
VK (t) should be found as a solution of the system (1).
It was shown in Lisnianski (2007) and Lisnianski et al. (2007) that many important reliability measures can be found by defining the rewards in a corresponding reward matrix.
dV_i(t)/dt = r_ii + Σ_{j=1, j≠i}^{K} a_ij r_ij + Σ_{j=1}^{K} a_ij V_j(t),   i = 1, 2, . . ., K   (1)
In the most common case, MSS begins to accumulate rewards after time instant t = 0, therefore, the
initial conditions are:
Vi (0) = 0,
i = 1, 2, . . ., K
(2)
dV_i(t)/dt = r_ii + Σ_{j=1, j≠i}^{K} a_ij(t) r_ij + Σ_{j=1}^{K} a_ij(t) V_j(t),   i = 1, 2, . . ., K   (3)

(4)
(3)
I (t) =
1, if G(t) g0 ,
0 otherwise.
(5)
of acceptable states:

A(t) = Pr{I(t) = 1} = Σ_{i: g_i ≥ g_0} P_i(t)   (6)

r_jj = w - g_j, if w - g_j > 0;  r_jj = 0, if w - g_j ≤ 0   (7)

Â(T) = (1/T) ∫_0^T A(t) dt   (8)

(9)

(10)

V_i(T) = E[ ∫_0^T (W(t) - G(t)) dt ]   (11)
(12)
NUMERICAL EXAMPLE
a = [ -λ14   0        0      λ14
      0      -λ24     0      λ24
      0      0        -λ34   λ34
      λ41    λ42(t)   λ43    -(λ41 + λ42(t) + λ43) ]   (13)
In order to find the MSS average availability Â(T) according to the introduced approach, we should present the reward matrix r in the following form:
r = [ 0  0  0  0
      0  0  0  0
      0  0  1  0
      0  0  0  1 ]   (14)
Figure 1. [State space diagram: performance levels g1 = 0, g2 = 215, g3 = 325, g4 = 360; demand w = 300; transition intensities λ14, λ24, λ34, λ41, λ42(t), λ43.]
dV1(t)/dt = -λ14 V1(t) + λ14 V4(t)
dV2(t)/dt = -λ24 V2(t) + λ24 V4(t)
dV3(t)/dt = 1 - λ34 V3(t) + λ34 V4(t)
dV4(t)/dt = 1 + λ41 V1(t) + λ42(t) V2(t) + λ43 V3(t) - (λ41 + λ42(t) + λ43) V4(t)
(15)
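A system like (15) can be integrated numerically, e.g. with a fixed-step Runge-Kutta scheme starting from the initial conditions (2). The intensities below, including the linear aging law assumed for λ42(t), are illustrative placeholders; the paper's numerical values are not reproduced here. The MSS average availability over [0, T], starting from the best state 4, is then V4(T)/T.

```python
# RK4 integration of a reward system of the form (15); "42a" and "42b" define
# the assumed aging law lam42(t) = 42a + 42b * t. All values are illustrative.
def rhs(t, V, lam):
    l14, l24, l34 = lam["14"], lam["24"], lam["34"]
    l41, l43 = lam["41"], lam["43"]
    l42 = lam["42a"] + lam["42b"] * t       # hypothetical aging intensity
    V1, V2, V3, V4 = V
    return [
        -l14 * V1 + l14 * V4,
        -l24 * V2 + l24 * V4,
        1.0 - l34 * V3 + l34 * V4,
        1.0 + l41 * V1 + l42 * V2 + l43 * V3 - (l41 + l42 + l43) * V4,
    ]

def integrate(T, steps, lam):
    h, t, V = T / steps, 0.0, [0.0, 0.0, 0.0, 0.0]   # V_i(0) = 0
    for _ in range(steps):
        k1 = rhs(t, V, lam)
        k2 = rhs(t + h / 2, [v + h / 2 * k for v, k in zip(V, k1)], lam)
        k3 = rhs(t + h / 2, [v + h / 2 * k for v, k in zip(V, k2)], lam)
        k4 = rhs(t + h, [v + h * k for v, k in zip(V, k3)], lam)
        V = [v + h / 6 * (a + 2 * b + 2 * c + d)
             for v, a, b, c, d in zip(V, k1, k2, k3, k4)]
        t += h
    return V

# Illustrative intensities per year: fast repairs, slow (but aging) failures.
lam = {"14": 100.0, "24": 150.0, "34": 200.0,
       "41": 0.5, "42a": 1.0, "42b": 0.3, "43": 2.0}
T = 3.0
V = integrate(T, steps=3000, lam=lam)
avg_availability = V[3] / T
```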
r = [ 0  0  0  0
      0  0  0  0
      0  0  0  0
      1  1  0  0 ]   (16)
[Figure: MSS average availability as a function of time (years).]

dV1(t)/dt = -λ14 V1(t) + λ14 V4(t)
dV2(t)/dt = -λ24 V2(t) + λ24 V4(t)
dV3(t)/dt = -λ34 V3(t) + λ34 V4(t)
dV4(t)/dt = λ41 + λ42(t) + λ41 V1(t) + λ42(t) V2(t) + λ43 V3(t) - (λ41 + λ42(t) + λ43) V4(t)
(17)

r = [ 300  0   0  0
      0    85  0  0
      0    0   0  0
      0    0   0  0 ]   (18)

(19)

[Figure: mean number of MSS failures as a function of time (years).]

[Figure: mean accumulated performance deficiency as a function of time (years).]
unacceptable states should be forbidden, and all unacceptable states should be treated as one absorbing state. The state space diagram may be presented as follows. According to the state space diagram in Figure 5, the transition intensity matrix a can be presented as follows:
a = [ 0              0     0
      0              -λ34  λ34
      λ41 + λ42(t)   λ43   -(λ41 + λ42(t) + λ43) ]   (20)

r = [ 0  0  0
      0  1  0
      0  0  1 ]   (21)

dV0(t)/dt = 0
dV3(t)/dt = 1 - λ34 V3(t) + λ34 V4(t)
dV4(t)/dt = 1 + (λ41 + λ42(t)) V0(t) + λ43 V3(t) - (λ41 + λ42(t) + λ43) V4(t)
(22)
dV0(t)/dt = 0
dV3(t)/dt = -λ34 V3(t) + λ34 V4(t)
dV4(t)/dt = λ41 + λ42(t) + (λ41 + λ42(t)) V0(t) + λ43 V3(t) - (λ41 + λ42(t) + λ43) V4(t)
(24)
The system of differential equations must be solved under the initial conditions Vi(0) = 0, i = 0, 3, 4.
Figure 5. State space diagram of the generated system with absorbing state (performance levels g3 = 325, g4 = 360; demand w = 300).

Figure 6. [Plot as a function of time (years); vertical axis from 0 to 0.12.]
Figure 7. [MSS reliability function over the interval [0, T]; time axis from 0 to 0.5 years.]
The results of calculating the MSS reliability function according to formula (12) are presented in Figure 7.
All graphs show the reliability of the aging unit decreasing relative to the non-aging unit. In the last two figures, the curves of mean time to failure and of the reliability function for the aging and non-aging units are almost the same, because the first unit failure usually occurs within a short time (less than 0.5 year according to Figure 7), over which the impact of aging is negligibly small.
4
CONCLUSION
REFERENCES

Bagdonavičius, V. & Nikulin, M. 2002. Accelerated life models. Boca Raton: Chapman & Hall/CRC.
Barlow, R.E. & Proschan, F. 1975. Statistical Theory of Reliability and Life Testing. New York: Holt, Rinehart and Winston.
Carrasco, J. 2003. Markovian Dependability/Performability Modeling of Fault-tolerant Systems. In Hoang Pham (ed.), Handbook of Reliability Engineering: 613–642. London, NJ, Berlin: Springer.
Finkelstein, M.S. 2002. On the shape of the mean residual lifetime function. Applied Stochastic Models in Business and Industry 18: 135–146.
Gertsbakh, I. 2000. Reliability Theory with Applications to Preventive Maintenance. Berlin: Springer-Verlag.
Gertsbakh, I. & Kordonsky, Kh. 1969. Models of Failures. Berlin-Heidelberg-New York: Springer.
Hillier, F. & Lieberman, G. 1995. Introduction to Operations Research. NY, London, Madrid: McGraw-Hill, Inc.
Howard, R. 1960. Dynamic Programming and Markov Processes. Cambridge, Massachusetts: MIT Press.
Lisnianski, A. 2007. The Markov Reward Model for a Multi-state System Reliability Assessment with Variable Demand. Quality Technology & Quantitative Management 4(2): 265–278.
Lisnianski, A., Frenkel, I., Khvatskin, L. & Ding Yi. 2007. Markov Reward Model for Multi-State System Reliability Assessment. In F. Vonta, M. Nikulin, N. Limnios & C. Huber-Carol (eds), Statistical Models and Methods for Biomedical and Technical Systems: 153–168. Boston: Birkhäuser.
Lisnianski, A. & Levitin, G. 2003. Multi-state System Reliability. Assessment, Optimization and Applications. NJ, London, Singapore: World Scientific.
Meeker, W. & Escobar, L. 1998. Statistical Methods for Reliability Data. New York: Wiley.
Trivedi, K. 2002. Probability and Statistics with Reliability, Queuing and Computer Science Applications. New York: John Wiley & Sons, Inc.
Valdez-Flores, C. & Feldman, R.M. 1989. A survey of preventive maintenance models for stochastically deteriorating single-unit systems. Naval Research Logistics 36: 419–446.
Wang, H. 2002. A survey of maintenance policies of deteriorating systems. European Journal of Operational Research 139: 469–489.
Wendt, H. & Kahle, W. 2006. Statistical Analysis of Some Parametric Degradation Models. In M. Nikulin, D. Commenges & C. Huber (eds), Probability, Statistics and Modelling in Public Health: 266–279. Berlin: Springer Science + Business Media.
Zhang, F. & Jardine, A.K.S. 1998. Optimal maintenance models with minimal repair, periodic overhaul and complete renewal. IIE Transactions 30: 1109–1119.
ABSTRACT: Weibull models are very flexible and are widely used in the maintenance field to model ageing. The aim of this contribution is to systematically study the ability of the classically used one-mode Weibull models to approximate the bathtub reliability model. To this end, we analyze lifetime data simulated from different reference cases of the well-known bathtub curve model, described by a bi-Weibull distribution (infant mortality is skipped, given the objective of modeling ageing). The Maximum Likelihood
Estimation (MLE) method is then used to estimate the corresponding parameters of a 2-parameter Weibull
distribution, commonly used in maintenance modeling, and the same operation is performed for a 3-parameter
Weibull distribution, with either a positive or negative shift parameter. Several numerical studies are presented,
based first on large and complete samples of failure data, then on a censored data set, the failure data being
limited to the useful life region and to the start of the ageing part of the bathtub curve. Results, in terms of
quality of parameter estimation and of maintenance policy predictions, are presented and discussed.
1
INTRODUCTION
Symbol λ0 ≥ 0 denotes the failure rate associated with random failures. Symbols β > 0, η > 0 and
f_a(t) = (β_a/η_a) (t/η_a)^{β_a-1} exp[-(t/η_a)^{β_a}]   (2)

λ_a(t) = (β_a/η_a) (t/η_a)^{β_a-1}   (3)

F_a(η_a) = 1 - e^{-1} ≈ 1 - 0.3679 = 0.6321   (4)
f_b(t) = (β_b/η_b) ((t - γ_b)/η_b)^{β_b-1} exp[-((t - γ_b)/η_b)^{β_b}] H(t - γ_b)   (5)

f_c(t) = (β_c/η_c) ((t - γ_c)/η_c)^{β_c-1} exp[-((t - γ_c)/η_c)^{β_c}] H(t - γ_c)   (6)

4
r = C_c / C_p   (7)

C(T_p) = [C_p + C_c E(N(T_p))] / T_p   (8)

Finally, the total expected cost per unit time and per preventive cost is given by
C(T_p) / C_p = 1/T_p + (r/T_p) E(N(T_p))   (10)
4.2
The AGAN maintenance strategy consists of replacing the component with a new one after it has failed or when it has been in operation for Tp time units, whichever comes first. The total expected cost per unit time and per preventive cost of a component is the sum of the preventive and corrective cost contributions:
C(T_p) / C_p = [R(T_p) + r (1 - R(T_p))] / ∫_0^{T_p} R(t) dt   (11)
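Equation 11 can be explored numerically. The sketch below evaluates C(Tp)/Cp for a 2-parameter Weibull reliability function R(t) = exp(-(t/eta)^beta) and locates the cost-minimizing Tp by a simple grid search; all parameter values are illustrative.

```python
import math

def cost_per_cp(Tp, eta, beta, r, n=2000):
    """Evaluate [R(Tp) + r*(1 - R(Tp))] / integral_0^Tp R(t) dt for a
    2-parameter Weibull reliability, with a left Riemann sum for the integral."""
    R = lambda t: math.exp(-((t / eta) ** beta))
    h = Tp / n
    integral = sum(R(i * h) * h for i in range(n))
    return (R(Tp) + r * (1.0 - R(Tp))) / integral

def best_tp(eta, beta, r, grid):
    """Grid search for the Tp minimizing the cost criterion."""
    return min(grid, key=lambda tp: cost_per_cp(tp, eta, beta, r))

grid = [0.1 * k for k in range(1, 51)]     # candidate Tp values
tp_star = best_tp(eta=1.0, beta=3.0, r=10.0, grid=grid)
```

Note that for beta <= 1 the criterion has no interior minimum, which is exactly the situation discussed later for the poorly estimated shape parameters.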
4.3
(12)
(13)
parameter 1/λ0 represents the Mean Time To Failure (MTTF) of the random failure mode taken alone,
(14)
(15)
R(γ) is chosen between 0.9 and 0.4: for values over 0.9 the useful life would be too short, and values under 0.4 are not worth analysing. This is because, for a constant failure rate, the residual reliability at t = MTTF equals 0.3679; R(γ) = 0.4 is therefore a good lower limit, as in practice a piece of equipment will not be operated longer.
To express which reality is being considered in each case, it can be specified by the 3-uple (λ0, β, R(γ)). For example, the reality (λ0 = 1, β = 3, R(γ) = 0.8) is expressed as (λ0, β, R(γ)) = (1, 3, 0.8).
In total, 5 × 5 × 6 = 150 samples of data have been analysed, in each of the following two situations:
1. In the first situation, we used a complete set of
N = 1000 failure data taken from the Model of
Reality, see eq. 1, with the purpose of estimating
the quality of the approximation obtained with the
single-mode Weibull laws.
2. In the second situation we used a set of N = 100 observation data; in this case suspended data are also considered. The MTTF point is chosen as the predetermined maintenance interval Tp, and all values greater than this period are treated as suspended data. This approximates the industrial reality, where failure data are usually limited.
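The first sampling situation can be sketched as follows: failure times are drawn from an assumed competing-risks "model of reality" (a constant random-failure rate in competition with a Weibull wear-out mode; the parameter values below are illustrative, not the paper's), and a 2-parameter Weibull is then fitted by MLE using the standard profile-likelihood equation for the shape.

```python
import math, random

random.seed(1)
LAM0, BETA, ETA = 1e-3, 3.0, 100.0  # illustrative "model of reality" parameters

def draw_failure_time():
    # Competing risks: random failure (rate LAM0) vs. wear-out Weibull(BETA, ETA).
    t_random = random.expovariate(LAM0)
    t_wear = ETA * (-math.log(random.random())) ** (1.0 / BETA)
    return min(t_random, t_wear)

data = [draw_failure_time() for _ in range(1000)]

def weibull_mle(ts):
    # Two-parameter Weibull MLE: solve the profile equation for the shape k
    # by bisection (g is increasing in k), then recover the scale in closed form.
    logs = [math.log(t) for t in ts]
    mean_log = sum(logs) / len(logs)

    def g(k):
        s = sum(t ** k for t in ts)
        s_log = sum((t ** k) * math.log(t) for t in ts)
        return s_log / s - 1.0 / k - mean_log

    lo, hi = 0.05, 20.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    scale = (sum(t ** k for t in ts) / len(ts)) ** (1.0 / k)
    return k, scale

shape_hat, scale_hat = weibull_mle(data)
```

Because a fraction of the sampled times comes from the exponential mode, the fitted shape is typically pulled below the true wear-out shape, which is exactly the estimation drawback discussed in the results.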
6
The values of Tp estimated with the 2-parameter Weibull model correspond to 35–82% of the values of Tp obtained with the reference models. When the 3-parameter Weibull model with a negative location parameter is assumed, the situation is better: the estimated values of Tp reached 43–100% of the reference Tp. In 7 cases (i.e. 4.66% of all assumed cases), the value of Tp was correctly estimated.
In the second situation (i.e. with censored data), the results estimated from the 2- and 3-parameter Weibull models are also unsatisfactory: in 36.66% of the estimations we obtain a value of the shape parameter < 1 (a rejuvenation!). This means that the expected cost per unit time and per preventive cost has no minimum, so it is not worthwhile to schedule systematic preventive maintenance; this is completely wrong in comparison with the reference model. In 43.34% of the estimations, the estimated value of the shape parameter lies in the interval [1, 1.2]. These values do not correspond to the reference model either, and the expected cost per unit time and per preventive cost becomes almost constant. Finally, in 20% of the analyses, the estimated value of the shape parameter is between 1.2 and 1.62. This does not enable us to model correctly the wear-out effects given our reference model.
As noted in Section 5.2, the reality considered in each case can be expressed by the 3-tuple (λ0, β, R(η)). Let us analyse the behavior of two selected realities more deeply.
6.1
CONCLUSIONS
were analyzed. We presented various numerical experiments showing that a random sampling from the
well-known bathtub reliability model and subsequent
MLE of Weibull models with 2 or 3 parameters lead
to potentially dangerous shortcuts.
For large, complete sets of failure data, the estimated shape parameters lie almost always inside the interval [1.17, 1.9] for the Weibull model with a positive location parameter, or close to 6 when the Weibull model with a negative location parameter is assumed. These estimations do not correspond at all to the wear-out failures represented in our reference model by shape parameters within the interval [2, 4]. This paradox in the Weibull estimation leads to a conservative maintenance policy, which involves more cost and can even be risky: too short maintenance periods can reduce the reliability of the system, because not all of these interventions finish successfully. Moreover, this estimation drawback is observed even with a large set of failure data (N = 1000).
For limited sets with censored data, the estimated values of the shape parameter are often close to one or even less than one, and the estimated predetermined maintenance intervals are not connected to the reference model. It would be interesting to repeat the tests contained in this contribution and to provide a sensitivity analysis of the estimated parameters with goodness-of-fit tests (Meeker and Escobar 1995).
The message of our contribution should be kept in mind during the Weibull parameter estimation process, in which real failure data are also analyzed: although Weibull models are flexible and widely used, we would like to point out the limitations of one-mode Weibull models with 2 or 3 parameters
REFERENCES
Kececioglu, D. (1995). Maintainability, Availability and
Operational Readiness Engineering Handbook. Prentice
Hall.
Meeker, W. Q. and L. A. Escobar (1995). Statistical Methods
for Reliability Data. John Wiley & Sons.
Murthy, D. N. P., M. Xie, and R. Jiang (2004). Weibull Models.
John Wiley & Sons.
Vansnick, M. (2006). Optimization of the maintenance of reciprocating compressors based on the study of their performance deterioration. Ph.D. thesis, Université Libre de Bruxelles.
ABSTRACT: In this paper, a maintenance model for a deteriorating system with several modes of degradation
is proposed. The time of change of mode and parameters after the change are unknown. A detection procedure
based on an on-line change detection/isolation algorithm is used to deal with unknown change time and unknown
parameters. The aim of this paper is to propose an optimal maintenance versus detection policy in order to
minimize the global maintenance cost.
INTRODUCTION
SYSTEM DESCRIPTION
The system studied in this paper is an observable system subject to accumulation of damage. In degradation mode Mi, the deterioration increment over an interval (s, t) follows the gamma density

\[ f_{\alpha_i(t-s),\,\beta_i}(y) = \frac{\beta_i^{\alpha_i(t-s)}}{\Gamma(\alpha_i(t-s))}\; y^{\alpha_i(t-s)-1}\, e^{-\beta_i y}\, \mathbf{1}_{\{y \ge 0\}}. \tag{1} \]

The parameters of the accelerated mode belong to a known set

\[ (\alpha_2, \beta_2) \in \{(\alpha_2^l, \beta_2^l),\; l = 1, \ldots, K\}. \tag{2} \]
MAINTENANCE POLICY
In the framework of this paper, the considered deteriorating systems cannot be continuously monitored. Hence, the deterioration level can only be known at inspection times. We denote by (t_k)_{k∈N} the sequence of inspection times, defined by t_{k+1} − t_k = Δt (for k ∈ N), where Δt is a fixed parameter. If the deterioration level exceeds the threshold L between two inspections, the system continues to deteriorate until the next inspection and can undergo a breakdown. Since the failure of such a system may have disastrous consequences, a preventive maintenance action has to take place beforehand, and the inter-inspection time Δt has to be chosen carefully so that the system can be replaced before failure. A preventive replacement restores the system to an as-good-as-new state. The maintenance decision scheme provides possible preventive replacements according to the measured amount of deterioration.
There are two possible maintenance actions at each inspection time t_k:
– the system is preventively replaced if the deterioration level is close to L;
– if a failure has occurred (X_{t_k} > L), the system is correctively replaced.
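A minimal simulation of this inspection/replacement scheme, using the nominal-mode values from the numerical section of this paper (α1 = β1 = 1, Δt = 4, Ci = 5, Cp = 50, Cc = 100, preventive threshold 90.2); the failure threshold L = 100 is an assumption, as its value is not given in this excerpt.

```python
import random

random.seed(2)
ALPHA, BETA = 1.0, 1.0          # nominal-mode gamma parameters (from the paper)
DT = 4.0                        # inter-inspection time
L, A = 100.0, 90.2              # L is assumed; A = 90.2 is the paper's Anom
CI, CP, CC = 5.0, 50.0, 100.0   # inspection / preventive / corrective costs

def run_cycle():
    # One renewal cycle: inspect every DT, accumulate gamma-distributed
    # deterioration increments, replace preventively once the level is in
    # [A, L], correctively once the level has exceeded L.
    level, time, cost = 0.0, 0.0, 0.0
    while True:
        level += random.gammavariate(ALPHA * DT, 1.0 / BETA)
        time += DT
        cost += CI
        if level > L:
            return time, cost + CC   # corrective replacement
        if level >= A:
            return time, cost + CP   # preventive replacement

cycles = [run_cycle() for _ in range(2000)]
cost_rate = sum(c for _, c in cycles) / sum(t for t, _ in cycles)
```

The resulting cost per unit time is of the same order as the maintenance costs reported in the tables of the numerical section, which makes the sketch a useful sanity check rather than a reproduction of the paper's exact policy.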
A detection method can be used to identify the unknown time T0 of the change of degradation mode. The on-line change detection algorithm presented in Section 4 is used to detect the change time T0 when the parameters after the change belong to the set presented in (2). The maintenance decision is based on a parametric decision rule according to the
in complex systems. A first attempt to use an optimal on-line abrupt change detection in the framework of a maintenance policy is presented in (Fouladirad, Dieulle, and Grall 2006) and (Fouladirad, Dieulle, and Grall 2007). On-line change detection algorithms make it possible to use the information available on the deterioration rate to detect the abrupt change time. These algorithms take into account the information collected through inspections, so they deal with on-line discrete observations (i.e. the system state at times (t_k)_{k∈N}). In (Fouladirad, Dieulle, and Grall 2007) it is supposed that prior information on the change time is available; in this case, an adequate on-line detection algorithm which takes this prior information into account is proposed.
The authors in (Fouladirad, Dieulle, and Grall 2006) and (Fouladirad, Dieulle, and Grall 2007) considered the case of two deteriorating modes (one change time) and known parameters after the change. In this paper, the aim is to propose an adequate detection/isolation method when the accelerated-mode parameters can take unknown values belonging to the known set defined in (2).
We collect observations (X_k)_{k∈N} at the inspection times t_k, k ∈ N. Let Y_k, k ∈ N, be the increments of the degradation process. Then Y_k follows a gamma law with density f_i = f_{α_i Δt, β_i} according to the degradation mode M_i, i = 1, 2. We denote by f^l = f_{α_2^l Δt, β_2^l}, l = 1, …, K, the density function associated to the accelerated mode when (α_2, β_2) = (α_2^l, β_2^l).
We denote by N the alarm time at which a ν-type change is detected/isolated, and ν, ν = 1, …, K, is the final decision. A change detection/isolation algorithm should compute the couple (N, ν) based on Y_1, Y_2, …. We denote by Pr_0 the probability given that no change of mode has occurred, and Pr^l_{T_0} the probability given that the change of mode has occurred at T_0. Under Pr^l_{T_0} the increments Y_1, …, Y_{T_0−1} each have density f_1, a change occurs at T_0, and Y_{T_0} is the first observation with distribution f^l, l = 1, …, K. E_0 (resp. E^l_{T_0}) is the expectation corresponding to the probability Pr_0 (resp. Pr^l_{T_0}).
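A sketch of a CUSUM-type detection/isolation scheme in this spirit: one log-likelihood-ratio statistic per candidate accelerated mode, an alarm when some statistic exceeds a threshold H, and isolation by the largest statistic. This is a simplified stand-in for the Nikiforov-type algorithm recalled below, not the paper's exact procedure; the threshold H and the change scenario are illustrative, while the candidate modes are those of the paper's numerical section.

```python
import math, random

random.seed(3)
DT = 4.0
NOMINAL = (1.0, 1.0)                                       # (alpha_1, beta_1)
ACCEL = [(2.0, 1.0), (1.0, 3.0), (2.0, 2.0), (1.0, 7.0)]   # candidate modes, cf. (2)
H = 5.0                                                    # illustrative threshold

def gamma_logpdf(y, alpha, beta):
    # Log-density of a gamma increment over one inspection interval:
    # shape alpha*DT, rate beta.
    k = alpha * DT
    return k * math.log(beta) + (k - 1) * math.log(y) - beta * y - math.lgamma(k)

def detect(increments):
    # One CUSUM statistic per candidate accelerated mode; alarm at the first
    # inspection where some statistic exceeds H, isolate by the argmax.
    stats = [0.0] * len(ACCEL)
    for n, y in enumerate(increments, start=1):
        for l, (a2, b2) in enumerate(ACCEL):
            llr = gamma_logpdf(y, a2, b2) - gamma_logpdf(y, *NOMINAL)
            stats[l] = max(0.0, stats[l] + llr)
        if max(stats) > H:
            return n, max(range(len(ACCEL)), key=stats.__getitem__)
    return None, None

# Change at observation 20: nominal increments first, then mode (2, 1).
ys = [random.gammavariate(NOMINAL[0] * DT, 1.0 / NOMINAL[1]) for _ in range(20)]
ys += [random.gammavariate(2.0 * DT, 1.0) for _ in range(40)]
alarm, mode = detect(ys)
```

Under the true accelerated mode the per-observation drift of the corresponding statistic equals the Kullback–Leibler divergence between the two gamma densities, so the alarm fires shortly after the change with high probability.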
The mean time before the first false alarm of type j is defined as follows:

[Equations (3)–(6) are garbled in the source. The recoverable fragments indicate a worst-case mean detection/isolation delay criterion \(\bar\tau = \max_{1\le l\le K} \bar\tau_l\), with \(\bar\tau_l = \sup_{T_0\ge 1}\operatorname{ess\,sup}(\cdot)\), subject to a false-alarm/isolation constraint of the form \(\min E^i_{T_0}(\cdot;\,\nu = j) \ge a\), where \(E^0_{T_0} = E_0\).]
Let us recall the detection/isolation algorithm initially proposed by (Nikiforov 1995). We define the stopping time N^l in the following manner:

\[ N^l = \inf_{k \ge 1} N^l(k), \tag{7} \]

[Equations (8)–(10), defining N^l(k) through cumulative log-likelihood ratios and the final decision \(\nu = \operatorname{argmin}\{N^1, \ldots, N^K\}\), are garbled in the source.] Asymptotically, the worst-case delay satisfies

\[ \bar\tau_l \sim \frac{\ln(a)}{\rho_l^*} \quad \text{as } a \to \infty, \tag{11} \]

where

\[ \rho_l^* = \min_{0 \le j \le K,\; j \ne l} \rho_{lj}, \tag{12} \qquad \rho_{lj} = \int f^l \ln\frac{f^l}{f^j}\,d\mu \;\ge\; 0. \tag{13} \]
The long-run expected maintenance cost per unit time is given by the renewal-reward identity

\[ \lim_{t\to\infty} \frac{E(C(t))}{t} = \frac{E(C(T))}{E(T)}. \tag{14} \]
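A toy simulation illustrating the renewal-reward identity (14); the cycle-length and cost distributions below are purely illustrative.

```python
import random

random.seed(4)

def cycle():
    # Toy renewal cycle: uniform length on [5, 15] (mean 10) and cost
    # 20 + 2 * length (mean 40); any i.i.d. cycle model behaves the same way.
    length = random.uniform(5.0, 15.0)
    return length, 20.0 + 2.0 * length

total_t = total_c = 0.0
while total_t < 1e6:                 # run the renewal process over a long horizon
    t, c = cycle()
    total_t += t
    total_c += c

long_run_rate = total_c / total_t    # empirical E(C(t))/t for large t
expected_rate = 40.0 / 10.0          # E(C(tau)) / E(tau), as in eq. (14)
```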
NUMERICAL IMPLEMENTATION
In this section we apply the maintenance policy presented in this paper to the case of a system with two degradation modes and four possible accelerated modes (K = 4).
The proposed maintenance policies are analyzed by numerical implementation. Throughout this section, the maintenance costs are respectively Ci = 5, Cp = 50, Cc = 100 and Cu = 250. For the numerical calculations it is supposed that in the nominal mode M1, α1 = 1 and β1 = 1. Hence, the maintenance threshold Anom is equal to 90.2; the couple (Anom, Δt) = (90.2, 4) is the optimal couple, obtained by Monte Carlo simulation, which minimizes the long-run maintenance cost for a single-mode deteriorating system in mode M1. T0 is simulated from a uniform law by the Monte Carlo method. To evaluate each maintenance policy, four different accelerated modes are considered, so the parameters of the accelerated mode belong to the following set:
(α2, β2) ∈ {(2, 1), (1, 3), (2, 2), (1, 7)}.
6.1 Parameter optimization
Table 1. Optimal maintenance threshold A_ac for each accelerated mode.

α2   β2   A_ac
2    1    85.6
1    3    74.6
2    2    73.7
1    7    51.6
Table 2. Properties of the maintenance versus detection/isolation policy (columns: accelerated modes (α2, β2) = (2,1), (1,3), (2,2), (1,7)).

Maintenance cost         1.98   1.99   1.99   2.00
Detection threshold      1      1      1      1
False alarm rate         0.9    0.89   0.89   0.88
Correct isolation rate   0.87   0.03   0.04   0.05

Table 3. Properties for the value of h giving the lowest false alarm rate.

Maintenance cost         1.99    2.22   2.37    2.67
Detection threshold      5       7      6       15
False alarm rate         0.016   0.02   0.014   0.24
Correct isolation rate   1       0      0       0

Table 4. Properties for the value of h giving the highest correct isolation rate.

Maintenance cost         1.98    2.09   2.17   2.34
Detection threshold      12      2      2      2
False alarm rate         0.018   0.3    0.31   0.27
Correct isolation rate   1       0.19   0.2    0.3

Costs of the maintenance policy without detection procedure: 1.97, 2.21, 2.36, 2.66.
Table 3 shows the properties of the maintenance versus detection/isolation algorithm for the value of h which leads to the lowest false alarm rate. It can be noticed that the maintenance costs are very close to the costs obtained when only one threshold is used (without detection procedure). Using the detection algorithms when a low false alarm rate is requested does not improve the quality of the maintenance policy; in this configuration, a maintenance policy without detection procedure seems adequate. Table 4 reports the properties of the maintenance versus detection/isolation algorithm for the value of h which leads to the highest correct isolation rate. In this table, except for case 1 (α2 = 2 and β2 = 1), the highest correct isolation rate is not very high, but the corresponding false alarm rate and maintenance costs are acceptable, and the maintenance cost is still lower than that of the maintenance policy without detection procedure. In the first case (α2 = 2 and β2 = 1) the correct isolation rate is always very high. This should be due to the global optimization of the detection threshold h, which is more sensitive to the properties of the first case, where the two modes are very close. If, in the optimization procedure, a separate detection threshold h_l in equation (7) were used for each second mode l = 1, …, K, the correct isolation results could be different; but this method requires a complex optimization procedure and its feasibility is arguable.
If the only criterion is the maintenance result (a low maintenance cost), we can neglect the false alarm and false isolation rates. But if the properties of the detection algorithms are of great importance in the maintenance procedure, we cannot base our choice only on the maintenance cost, and we should take the properties of the detection algorithm into account.
Figure 1 depicts the maintenance properties corresponding to the accelerated mode (α2 = 1, β2 = 3). To illustrate the results, the threshold h varies in [0, 15]. The maintenance cost is stable around 2.2 and reaches its minimum value 1.99 for h = 0. The probability of corrective maintenance is very low and the probability of preventive maintenance is very high; the policy is thus mostly preventive.
Figure 2 depicts the detection algorithm properties corresponding to the accelerated mode (α2 = 2, β2 = 2). The threshold h again varies in [0, 15]. The false alarm rate is very high for small values of h and decreases as h grows. For
[Figure 1. Maintenance cost versus detection threshold h.]
CONCLUSION

In this paper we have proposed a maintenance policy combined with an on-line detection method. This policy leads to a low maintenance cost. We took into account the possibility that the system may switch to different accelerated modes; by considering this possibility, the proposed detection algorithm is also an isolation algorithm. We have noticed that the lowest false alarm and false isolation rates do not always correspond to the lowest maintenance cost. The proposed algorithm generally has a low correct isolation rate and sometimes a high false alarm rate. The aim of future work is to improve these properties in order to obtain a low-cost maintenance versus detection policy which can easily isolate the real accelerated mode.

[Figure 2. Detection algorithm properties versus threshold h for the accelerated mode α2 = 2, β2 = 2.]

[Figure 3. Mean detection delay versus threshold h.]

REFERENCES
ABSTRACT: This paper is concerned with an opportunity-based age replacement policy for a system subject to two types of failures, minor and catastrophic. We consider a general distribution for the time to the first opportunity, dropping the usual assumption of exponentially distributed times between opportunities. Under this model the system undergoes a minimal repair whenever a minor failure occurs, whereas a perfect restoration follows any catastrophic failure and the Nth minor failure. The system is preventively replaced at maintenance opportunities arising after instant S and also at the moment its age reaches T. We take into account the costs due to minimal repairs, perfect repairs, opportunity-based replacements and preventive maintenances. We focus on the optimum policy (T, N) that minimizes the long-term cost per unit of time, providing conditions under which such an optimum policy exists.
INTRODUCTION
Preventing systems from failing constitutes a significant task for manufacturers because of the costs and dangers incurred due to failures. This importance has made the design of maintenance policies an essential research area in reliability theory. The preventive activities carried out to reduce the breakdown risk are combined in practice with corrective actions that restore failed systems to the operating condition as soon as possible. Depending on its quality, a repair can be classified as perfect or imperfect. The former brings the system back to an as-good-as-new condition, whereas the latter restores the system somewhere between as-good-as-new and as-bad-as-old. The as-bad-as-old restoration is known as minimal repair.
Many maintenance policies assume that preventive replacements can be carried out at any moment. However, the work interruption time is likely to produce high costs. If so, taking advantage of inactivity periods is recommended, delaying the maintenance until the moment when the mechanism is not required for service. In general such opportunities are events occurring at random, e.g., when another unit in the same system undergoes a failure. Repairs and replacements carried out outside these so-called maintenance opportunities can result in cost-ineffective maintenance and may even be responsible for high economic losses.
The following references are based on the principle that a certain type of event can be considered as an opportunity. (Dekker and Smeitink 1991) consider a block replacement model in which a component can be replaced preventively at maintenance opportunities only; opportunities occur randomly and are modeled through a renewal process. (Dekker and Dijkstra 1992) establish conditions for the existence of a unique average control limit policy when the opportunities arise according to a Poisson process. (Jhang and Sheu 1999) propose an opportunity-based age replacement with minimal repair for a system under two types of failure. (Iskandar and Sandoh 2000) present a maintenance model taking advantage (or not) of opportunities in a random way. (Satow and Osaki 2003) consider that the intensity rate of opportunities changes at a specific age. (Coolen-Schrijner et al. 2006) describe an opportunity-based age replacement using nonparametric predictive inference for the time to failure of a future unit, providing an alternative to the classical approach where the probability distribution of a unit's time to failure is assumed to be known. (Dohi et al. 2007) analyze this type of replacement policy, which turns out to be less expensive than policies carried out at any time without taking opportunities into account.
In this work we present an opportunity-based maintenance model for a system that may undergo two types of failures, minor and catastrophic. Minor failures are followed by a minimal repair to bring the system back into use, whereas catastrophic failures are removed with a perfect repair. A perfect repair is also carried out after the Nth minor failure. From time S on, maintenance opportunities occur which are independent of the system time to failure. In addition the maintenance
THE MODEL
A failure happening at instant x belongs to the catastrophic class with probability q(x) = 1 − p(x), and to the minor class with probability p(x); r(·) denotes the failure intensity. Let

\[ D_k(x) = \frac{1}{k!}\left(\int_0^x p(u)r(u)\,du\right)^{k}, \qquad k = 0, 1, \ldots \]

The distributions corresponding to the first catastrophic failure and the Nth minor failure are independent. The reliability function of the time to the first catastrophic failure is

\[ \exp\left(-\int_0^x q(u)r(u)\,du\right), \]

and the reliability function corresponding to whichever of the two occurs first is

\[ H(x, N) = e^{-\int_0^x r(u)\,du} \sum_{k=0}^{N-1} D_k(x). \]

[The full expressions for the cycle-ending probabilities p1 and p2 are garbled in the source; the recoverable fragments include p2 = H(S, N) − H(T, N)G(T − S) − … .]
Renewal opportunities independent of the system arise from time S on, with 0 < S < T. Let G(x) denote the reliability function of the time elapsed from S to the first maintenance opportunity. We assume that whenever the system is renewed, G(x) remains the same.
A cycle, that is, the interval between two total renewals of the system, is completed after one of four events [their enumeration is garbled in the source]. The probability that a cycle ends with a preventive replacement at age T is

\[ p_3 = H(T, N)\,G(T - S). \]

The following formula provides the mean length of a cycle:
[The full expression for E(τ) is garbled in the source; its four terms correspond respectively to the mean length of a cycle ending after each of the four renewal events. After rewriting the third term, it reduces to

\[ E(\tau) = \int_0^S H(x, N)\,dx + \int_S^T H(x, N)\,G(x - S)\,dx. \]

The derivation of the expected number of minimal repairs per cycle, MR = MR1 + MR2 + MR3, is likewise garbled in the source.]
The expected cost of a cycle combines the four contributions:

\[ E[C(\tau)] = c_1 p_1 + c_2 p_2 + c_3 p_3 + c_4\, MR. \]

[Its expanded form, involving the terms \((c_1 - c_2)\int_S^T G(x - S)\,dH(x, N)\), \((c_3 - c_2)H(T, N)G(T - S)\) and a \(c_4\) term, is garbled in the source.] The long-run cost per unit of time is

\[ Q(T, N) = \frac{E[C(\tau)]}{E[\tau]}, \]

where L(T, N) = E[τ] and C(T, N) = E[C(τ)]. Define also

\[ Z(T, N) = \frac{D_{N-1}(T)}{\sum_{k=0}^{N-1} D_k(T)}. \]
Z(T, N) satisfies 0 ≤ Z(T, N) ≤ 1 and is increasing with T provided that its derivative dZ(T, N)/dT is nonnegative [the explicit derivative is garbled in the source]. Given

\[ b(x) = e^{-\int_0^x (r(u) + l(u - S))\,du}, \]

define, for k = 0, 1, …,

\[ B(T, k) = \int_0^T D_k(x)\,b(x)\,dx, \qquad d(T, k) = \frac{D_k(T)\,F(T)\,G(T - S)}{B(T, k)}, \qquad F(T, k) = \frac{\int_0^T r(x)\,D_k(x)\,b(x)\,dx}{B(T, k)}. \]

[The remaining intermediate expressions, including e(T, k), equation (1) and the differences W(T, N + 1) − W(T, N), are garbled in the source.]
CONCLUSIONS
The high cost incurred by some preventive maintenances motivates carrying out opportunity-based policies. This paper provides conditions under which an optimum opportunity-based policy exists in two cases: the optimum T for a given N, and the optimum N when T is fixed. These conditions involve an increasing failure rate of the time to failure and a decreasing failure rate of the time to the first opportunity, apart from cost-related conditions. Concerning the simultaneous optimization of both T and N, we consider the use of the following algorithm proposed by (Zequeira and Berenguer 2006) and (Nakagawa 1986):
1. Set N = 1.
2. If Q(T_{N+1}, N + 1) < Q(T_N, N), then go to step 3; otherwise go to step 4.
3. Set N = N + 1 and return to step 2.
4. Set N* = N.
The optimal policy turns out to be (T*, N*) = (T_{N*}, N*). Note that the foregoing algorithm does not ensure a global optimum but just a local one. Moreover obtaining
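The algorithm above can be sketched as follows. The cost-rate function Q(T, N) below is a toy stand-in with an interior minimum in both arguments (not the paper's cost model), and the inner optimization over T is a simple grid search.

```python
def optimal_policy(Q, t_grid, n_max=50):
    # For each N, find the best T on the grid; then increase N while
    # Q(T_{N+1}, N+1) < Q(T_N, N), stopping at the first local optimum.
    def best_t(n):
        return min(t_grid, key=lambda t: Q(t, n))

    n = 1
    while n < n_max:
        t_n, t_next = best_t(n), best_t(n + 1)
        if Q(t_next, n + 1) < Q(t_n, n):
            n += 1          # step 3: keep increasing N
        else:
            break           # step 4: stop, (T_N*, N*) reached
    return best_t(n), n

# Toy cost rate: replacement-frequency term, repair term growing with T and N,
# and a perfect-repair term decreasing with N (purely illustrative).
toy_q = lambda t, n: 100.0 / t + 0.05 * t * n + 25.0 / n
t_star, n_star = optimal_policy(toy_q, [float(k) for k in range(5, 200, 5)])
```

For this toy function the iteration stops at N* = 5 with T* = 20, which is indeed the global minimum on the grid; in general, as the paper notes, only a local optimum is guaranteed.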
REFERENCES
Coolen-Schrijner, P., F. Coolen, and S. Shaw (2006). Nonparametric adaptive opportunity-based age replacement strategies. Journal of the Operational Research Society.
Dekker, R. and M. Dijkstra (1992). Opportunity-based age replacement: exponentially distributed times between opportunities. Naval Research Logistics (39), 175–190.
Dekker, R. and E. Smeitink (1991). Opportunity-based block replacement. European Journal of Operational Research (53), 46–63.
Dohi, T., N. Kaio, and S. Osaki (2007). Discrete time opportunistic replacement policies and their application. Recent advances in stochastic operations research. World Scientific.
Iskandar, B. and H. Sandoh (2000). An extended opportunity-based age replacement policy. RAIRO Operations Research (34), 145–154.
Jhang, J. and S. Sheu (1999). Opportunity-based age replacement policy with minimal repair. Reliability Engineering and System Safety (64), 339–344.
Nakagawa, T. (1986). Periodic and sequential preventive maintenance policies. Journal of Applied Probability (23), 536–542.
Nakagawa, T. (2005). Maintenance Theory of Reliability. Springer.
Satow, T. and S. Osaki (2003). Opportunity-based age replacement with different intensity rates. Mathematical and Computer Modelling (38), 1419–1426.
Zequeira, R. and C. Berenguer (2006). Optimal scheduling of non-perfect inspections. IMA Journal of Management Mathematics (2), 187–207.
ABSTRACT: Maintainable technical systems are considered whose failures can be revealed only by special inspections. It is assumed that these inspections may be imperfect, i.e. their results may suggest wrong decisions. Two optimization models are considered: one in which the coefficient of availability is maximized and a second in which the related costs are minimized. For both models approximately optimal solutions have been found. The presented examples show that these solutions are very close to the exact solutions when the time to failure is exponentially distributed. The paper is illustrated with two numerical examples.
INTRODUCTION
For some technical systems the reliability state (operation or failure) cannot be directly observed. For example, when a failure of a system is not of a catastrophic nature, it can be recognized either by making some measurements or by observing the system's performance. The technical equipment of a production line serves as a good example: failures result in worse performance (measured by the observed percentage of produced nonconforming items) rather than in a complete stoppage of the production process. One-shot systems such as back-up batteries are another example; their reliability state is not directly observable until the moment they are needed. In all such cases inspections must be performed in order to reveal the actual reliability state of the considered system. Inspection policies may vary from very simple ones, with constant times between consecutive inspections (inspection intervals), to very complicated ones computed using additional information about the system.
The problem of finding optimal inspection intervals has attracted many researchers since the works of Savage (1956), Barlow et al. (1960) and other authors published in the late 1950s and early 1960s. The first general model was presented by Barlow, Hunter and Proschan (1963), who assumed perfect inspections. Later generalizations of this model were proposed, inter alia, by Menipaz (1978) and von Collani (1981). Among relatively recent papers devoted to this problem, presenting simple models applicable in practice, we may list Baker (1990), Chung (1993), Vaurio (1994), and Hariga (1996). More complex inspection models are presented in recent papers by Fung and Makis (1997), Chelbi and Ait-Kadi (1999), Wang and Christer (2003), and Berger et al. (2007).
MATHEMATICAL MODEL
[Equations (1)–(3), which define A1(h) through infinite sums of the reliability function R(ih) over the inspection epochs ih, are garbled in the source.] The expected length of a renewal cycle is given by

\[ T_r = [A_1(h) + A_2]\,h + A_1(h)\,(\tau_0 + \tau_a) + A_2\,\tau_0 + \tau_r. \tag{4} \]

[Equations (5)–(9), defining the availability objective K(h) with denominator T_r(h) and the cost components, of which only \(C_2(h) = A_1(h)\,c_a\) (7) is recoverable, are garbled in the source.]
When the time to failure is exponentially distributed with rate λ,

\[ A_1(h) = \frac{1}{e^{\lambda h} - 1}, \tag{13} \]

and the cost function takes the form

\[ C(h) = \frac{h c_l + (c_0 + c_a)}{e^{\lambda h} - 1} + c_l A_2 h + c_r + c_f - c_l. \tag{17} \]

[Equations (10)–(12), (14)–(16), (18) and (19), including the exponential-case availability expression and the optimality conditions, are garbled in the source.]
A distribution-free approximation is

\[ A_1(h) \approx \frac{\mu}{h} - 0.5, \tag{20} \]

where μ denotes the expected time to failure. This approximation is valid when h is significantly smaller than μ and, more importantly, it holds for any probability distribution of the time to failure.
When we apply this approximation to the objective function given by (5) we obtain

\[ K(h) = \frac{h}{W_1 h^2 + W_2 h + W_3}, \tag{21} \]
Table 1. Approximately optimal (h_K) and optimal (h_{K,opt}) inspection intervals for exponentially distributed time to failure, μ = 1000.

Z_K    h_K       h_{K,opt}
1      31.622    31.485
2      63.245    62.701
3      94.868    93.645
5      158.114   154.728
10     316.228   302.804
where

\[ W_1 = A_2 - 0.5, \tag{22} \qquad W_3 = \mu\,(\tau_a + \tau_0), \tag{24} \]

[the expression for W_2, eq. (23), is garbled in the source]. Hence, the approximately optimal inspection interval is given by the following simple expression:

\[ h_K = \sqrt{\frac{\mu\,(\tau_a + \tau_0)}{A_2 - 0.5}} = Z_K \sqrt{\mu}, \qquad Z_K = \sqrt{\frac{\tau_a + \tau_0}{A_2 - 0.5}}. \tag{26} \]

[Equations (25) and (27) are garbled in the source.]
Applying the same approximation to the cost model, the objective function is in this case given by

\[ C(h) = \left(\frac{\mu}{h} - 0.5\right)\bigl[h c_l + (c_0 + c_a)\bigr] + c_l A_2 h + c_r + c_f - c_l. \tag{29} \]

[Equation (28) is garbled in the source.] Minimizing (29) yields the approximately optimal inspection interval

\[ h_C = \sqrt{\frac{\mu\,(c_0 + c_a)}{c_l\,(A_2 - 0.5)}} = Z_C \sqrt{\mu}. \tag{30} \]
Table 2. Approximately optimal (h_C) and optimal (h_{C,opt}) inspection intervals, μ = 1000.

Z_C    h_C       h_{C,opt}
1      31.622    31.620
2      63.245    62.868
3      94.868    93.803
5      158.114   154.861
10     316.228   302.879
We may also note the similarity between the approximately optimal solutions of the two models, suggesting a certain equivalence between the two approaches.
In order to evaluate the accuracy of the approximate solution of this optimization problem, we compare the approximately optimal inspection intervals calculated from (30) with the optimal values calculated from (18) for the case of an exponential distribution of the time to failure. As in the previous case, we fix the expected time to failure at 1000 time units and vary the value of

\[ Z_C = \sqrt{\frac{c_0 + c_a}{c_l\,(A_2 - 0.5)}}, \tag{33} \]

which determines the relation between h_C and μ. The results of this comparison are presented in Table 2.
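The comparison described above can be reproduced for one row of Table 2. The cost constants below are chosen so that Z_C = 1 (an illustrative choice, not values from the paper), and the exact optimum is found numerically from the exponential-case cost function (17), whose additive constant c_r + c_f − c_l is omitted since it does not affect the minimizer.

```python
import math

MU = 1000.0            # expected time to failure, as in the paper's examples
CL, A2 = 1.0, 1.5      # illustrative c_l and A_2
C0_CA = 1.0            # c_0 + c_a, chosen so that Z_C = 1

def cost_exact(h):
    # Exponential-case cost, eq. (17), up to the constant c_r + c_f - c_l.
    lam = 1.0 / MU
    return (h * CL + C0_CA) / (math.exp(lam * h) - 1.0) + CL * A2 * h

# Approximately optimal interval, eq. (30): h_C = Z_C * sqrt(MU).
z_c = math.sqrt(C0_CA / (CL * (A2 - 0.5)))
h_approx = z_c * math.sqrt(MU)

# Exact optimum by a fine grid search over h.
grid = [0.1 * k for k in range(1, 10000)]
h_exact = min(grid, key=cost_exact)
```

The approximate interval evaluates to about 31.62, and the numerically found exact optimum lies very close to it, in line with the first row of Table 2.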
The results presented in Table 2 show exactly the same properties of the optimal inspection intervals as Table 1 did for the previously considered model. More interestingly, the accuracy of the approximate solutions, measured in terms of the differences between optimal and approximately optimal results, is very similar for both models. This observation supports our suggestion that the equality of Z_K and Z_C means that the consequences of making
NUMERICAL EXAMPLES
Badia, F.G., Berrade, M.D., Campos, C.A. 2002. Optimal inspection and preventive maintenance of units with revealed and unrevealed failures. Reliability Engineering & System Safety, 78: 157–163.
Baker, M.J.C. 1990. How often should a machine be inspected? International Journal of Quality and Reliability Management, 4(4): 14–18.
Barlow, R.E., Hunter, L.C., Proschan, F. 1960. Optimum checking procedures. In: Proc. of the Seventh National Symposium on Reliability and Quality Control, 9: 485–495.
Barlow, R.E., Hunter, L.C., Proschan, F. 1963. Optimum checking procedures. Journal of SIAM, 11: 1078–1095.
Berger, K., Bar-Gera, K., Rabinowitz, G. 2007. Analytical model for optimal inspection frequency with consideration of setup inspections. Proc. of IEEE Conference on Automation Science and Engineering: 1081–1086.
Chelbi, A., Ait-Kadi, D. 1999. Reliability Engineering & System Safety, 63: 127–131.
Chung, K.-J. 1993. A note on the inspection interval of a machine. International Journal of Quality and Reliability Management, 10(3): 71–73.
Collani von, E. 1981. On the choice of optimal sampling intervals to maintain current control of a process. In: Lenz, H.-J., et al. (Eds.) Frontiers in Statistical Quality Control: 38–44, Wuerzburg, Physica Verlag.
Fung, J., Makis, V. 1997. An inspection model with generally distributed restoration and repair times. Microelectronics and Reliability, 37: 381–389.
Hariga, M.A. 1996. A maintenance inspection model for a single machine with general failure distribution. Microelectronics and Reliability, 36: 353–358.
Hryniewicz, O. 1992. Approximately optimal economic process control for a general class of control procedures. In: H.J. Lenz et al. (Eds.) Frontiers in Statistical Quality Control IV: 201–215, Heidelberg, Physica Verlag.
ISO 2859-1: 1989(E): Sampling procedures for inspection by attributes. Part 1. Sampling schemes indexed by acceptable quality level (AQL) for lot-by-lot inspection.
Khan, F.I., Haddara, M., Krishnasamy, L. 2008. A new methodology for Risk-Based Availability Analysis. IEEE Transactions on Reliability, 57: 103–112.
Menipaz, E. 1978. On economically based quality control decisions. European Journal of Operational Research, 2: 246–256.
Savage, I.R. 1956. Cycling. Naval Research Logistic Quarterly, 3: 163–175.
Vaurio, J.K. 1994. A note on optimal inspection intervals. International Journal of Quality and Reliability Management, 11(6): 65–68.
Vaurio, J.K. 1999. Availability and cost functions for periodically inspected preventively maintained units. Reliability Engineering & System Safety, 63: 133–140.
Wang, W., Christer, A.H. 2003. Solution algorithms for a nonhomogeneous multi-component inspection model. Computers & Operations Research, 30: 19–34.
586
Safety, Reliability and Risk Analysis: Theory, Methods and Applications, Martorell et al. (eds)
© 2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: Maintenance is one of the main tools used to assure the satisfactory functioning of components and equipment and the reliability of technological systems. The literature on maintenance policies and models is enormous, and a great number of contexts have been described in which a required maintenance policy is selected to satisfy technical and financial restrictions. However, by assuming very simplified conditions, many studies have limited applicability in practice. Considering a maintenance policy based on periodic inspections, this article presents a model that determines the time interval between inspections which minimizes the global maintenance cost per unit of time. It is assumed that the system consists of n components in series. The model accounts both for failures that are immediately revealed and for failures that are only revealed at the first inspection after their occurrence. It also incorporates component repair times, but the durations of inspections and of preventive maintenance are neglected. The analytical development of the model yields a closed-form function for determining the optimal time period between inspections. This function is discussed and a numerical example is presented.
INTRODUCTION
THE MODEL
NOTATION

C         Total cost of maintenance per cycle
C1        Cost of maintenance of revealed failures per cycle
C2        Cost of maintenance of unrevealed failures per cycle
I1        Number of inspections until the occurrence of a revealed failure, per cycle
I2        Number of inspections until the detection of an unrevealed failure, per cycle
D         Time of undetected malfunction per cycle
U         Down-time per cycle
X         Lifetime of the system; E[X] = MTBF
n         Number of components in the system
T         Inspection period
T*        Optimal inspection period
Pr(CMk)   Probability of performing corrective maintenance on k components (k = 1, ..., n) after an inspection
CI        Cost of each inspection plus preventive maintenance
CD        Cost of undetected malfunction (per unit of time)
CU        Cost per down-time unit
CR        Cost of each corrective maintenance of a component
RS(t)     Reliability of the system for a mission time t
Rk(t)     Reliability of component k for a mission time t
τ1        Functioning cycle ending with a revealed failure
τ2        Functioning cycle ending with an unrevealed failure
μR        Mean repair time of a component (MTTR)
O[T] = E[C(τ)] / E[τ]   (1)
In the previous expression, τ represents the functioning cycle, which is defined as the time interval between two consecutive renewals of the system. The length of the cycle depends on the type of failure that occurred. The occurrence of a revealed failure determines the end of one functioning cycle and the commencement of a new one (after repair). In this case, the length of the ending cycle, τ1, is estimated from the lifetime of the system and the down-time associated with repairing the failure. Thus, the average cycle of
(2)
(3)
(4)
Concerning failures, it is assumed that two possible scenarios can occur in the system: (1) the failure is immediately detected at its occurrence (revealed failure), which implies the execution of an immediate corrective maintenance, and (2) the failure occurs but is only detected at the first inspection after its occurrence (unrevealed failure). In the first case, the maintenance cost per functioning cycle must include the cost of the expected number of inspections until the failure occurs (plus the cost of their implicit preventive maintenance), the down-time cost associated with the (unproductive) time that is necessary to complete the maintenance, and the cost of the corrective maintenance itself that is necessary to repair the damaged component. In this scenario, the expected cost per functioning cycle is then given by:

E[C(τ1)] = CI E[I1] + CU μR + CR   (5)

In scenario 2, the global cost must incorporate an additional term representing the cost associated with running the system under malfunctioning conditions due to one or more unrevealed failures. In this case, the expected cost per cycle is given by:

E[C(τ2)] = CI E[I2] + CD E[D] + CU E[U] + CR E[CM]   (6)

where the expected duration of undetected malfunction per cycle is

E[D] = T E[I2] − E[X]   (7)

and

E[U] = μR E[CM]   (8)

where E[CM] represents the average number of corrective maintenance actions per functioning cycle.

2.2 Probabilities of inspections (and preventive maintenances) and corrective maintenances per functioning cycle

We consider that down-times are considerably shorter than the time interval between two inspections. In scenario 1, where an occurred failure is immediately detected and repaired, if the lifetime of the system is less than the time interval until the first inspection, T, then no inspection per cycle is performed. If the lifetime extends beyond the instant of the first inspection, T, but not beyond the instant of the second inspection, 2T, then one inspection per cycle is counted. In general, if the lifetime of the system lies between the ith and the (i+1)th inspection, then i inspections per cycle must be counted. Thus, the expected number of inspections per cycle, E[I1], for i = 1, 2, ..., comes as:

E[I1] = 0 · Pr(0 ≤ X < T) + 1 · Pr(T ≤ X < 2T) + ··· + i · Pr[iT ≤ X < (i+1)T] + ···   (9)

That is,

E[I1] = Σ_{i=1}^{+∞} RS(iT)   (10)

The same reasoning applied to scenario 2 of unrevealed failures implies that the average number of inspections per cycle, E[I2], with i = 1, 2, ..., is given by:

E[I2] = 1 · Pr(0 < X ≤ T) + 2 · Pr(T < X ≤ 2T) + ··· + i · Pr[(i−1)T < X ≤ iT] + ···   (11)

That is,

E[I2] = Σ_{i=1}^{+∞} RS((i−1)T)   (12)

Considering again scenario 1, the average number of corrective maintenance actions per functioning cycle, E[CM], is given by:

E[CM] = Σ_{k=1}^{n} k · Pr(CMk)   (13)
where Pr(CMk) represents the probability of k corrective maintenances occurring in a cycle, and is given by:

Pr(CMk) = Σ_{i1=1}^{n} Σ_{i2=i1+1}^{n} ··· Σ_{ik=ik−1+1}^{n} [ Π_{j ∈ {i1,...,ik}} (1 − Rj(T)) ] [ Π_{t ∉ {i1,...,ik}} Rt(T) ]   (14)

and, for the series system,

RS(T) = Π_{k=1}^{n} Rk(T).   (15)
2.3

Combining the two scenarios, with p denoting the probability that a failure is revealed, the expected cost per cycle becomes

E[C(τ)] = p [ CI Σ_{i=1}^{+∞} RS(iT) + CU μR + CR ] + (1 − p) [ (CI + CD T) Σ_{i=0}^{+∞} RS(iT) + (CU μR + CR) n (1 − Π_{k=1}^{n} Rk(T)) − CD E[X] ]   (16)

so that

O[T] = E[C(τ)] / { p (E[X] + μR) + (1 − p) [ T Σ_{i=0}^{+∞} RS(iT) + μR n (1 − Π_{k=1}^{n} Rk(T)) ] }   (17)

which can be rewritten as

O[T] = CD + a(T) / b(T)   (18)

where

a(T) = CI + CI Σ_{i=1}^{+∞} RS(iT) + p (CU μR + CR − CI − CD μR) + (1 − p) n (1 − Π_{k=1}^{n} Rk(T)) (CU μR + CR − CD μR) − CD E[X]   (19)

and

b(T) = p (E[X] + μR) + (1 − p) [ T Σ_{i=0}^{+∞} RS(iT) + μR n (1 − Π_{k=1}^{n} Rk(T)) ]   (20)
lim_{T→0} a(T) > 0

and

lim_{T→+∞} a(T) = CI + p (CU μR + CR − CI − CD μR) + (1 − p) n (CU μR + CR − CD μR) − CD E[X]
The expression defined by equation (20) shows that b(T) > 0 for all T ∈ R+. Taking into account equation (19), if lim_{T→+∞} a(T) < 0, that is, if

E[X] > [ CI + p (CU μR + CR − CI − CD μR) + (1 − p) n (CU μR + CR − CD μR) ] / CD   (21)

then there exists T0 ∈ R+ such that a(T0) < 0. Applying equation (18), we have that:

O[T0] = CD + a(T0)/b(T0) < CD = lim_{T→+∞} O[T]
NUMERICAL EXAMPLES

For a system of n = 10 identical components with Weibull lifetimes:

Rk(T) = e^{−(λT)^β},   λ > 0, β > 0, T ≥ 0

RS(T) = e^{−10(λT)^β},   λ > 0, β > 0, T ≥ 0

E[X] = Γ(1 + 1/β) / (10^{1/β} λ),   λ > 0, β > 0

Assuming λ = 1, the values of the optimal inspection period and the corresponding minimal cost were calculated for each case. The unit costs considered in the examples were CI = 10, CD = 100, CR = 25 and
Table. Optimum inspection time T* and optimum cost O[T*] when the time to failure is Weibull distributed. Rows: E[X]; for each of p = 0.1, 0.5 and 0.9, the columns list T* and O[T*].
0.2
0.5
0.8
1
1.1
1.2
1.3
1.4
1.5
2
2.1
2.2
2.3
2.4
2.5
3
4
5
6
10
20
0.0012
0.02
0.0637134
0.1
0.118959
0.138069
0.157124
0.175968
0.194491
0.28025
0.295865
0.31096
0.325544
0.339628
0.353226
0.414484
0.509708
0.579325
0.632048
0.755685
0.867637
0.196586
0.205087
0.213363
0.221413
0.229242
0.265415
0.327678
0.369116
0.394981
0.444197
0.477394
100
100
100
100
100
100
100
100
100
100
97.7365
93.1971
89.2353
85.7534
82.6741
71.4919
59.7013
53.2903
49.0747
39.9996
31.8144
0.250725
0.259736
0.268439
0.276828
0.313843
0.362972
0.390392
0.408787
0.449002
0.478998
100
100
100
100
100
100
100
100
100
100
100
98.4482
94.3313
90.6934
87.4593
75.5591
62.7978
56.017
51.6696
42.7585
35.5176
0.414277
0.428473
0.442472
0.456246
0.469762
0.532297
0.628862
0.695681
0.743316
0.843921
0.921921
100
100
100
100
100
100
100
100
100
100
98.0627
93.6608
89.7803
86.3365
83.2616
71.8094
59.224
52.4872
48.299
40.5767
35.3
[Figures: the cost function O[T] plotted against T for p = 0.1, p = 0.5 and p = 0.9.]
CONCLUSIONS
REFERENCES
Badía, F.G., Berrade, M.D. & Campos, C.A. 2002. Optimal inspection and preventive maintenance of units with revealed and unrevealed failures. Reliability Engineering & System Safety 78: 157–163.
Barros, A., Bérenguer, C. & Grall, A. 2006. A maintenance policy for two-unit parallel systems based on imperfect monitoring information. Reliability Engineering & System Safety 91: 131–136.
Bris, R., Châtelet, E. & Yalaoui, F. 2003. New method to minimize the preventive maintenance cost of series-parallel systems. Reliability Engineering & System Safety 82: 247–255.
Chiang, J.H. & Yuan, J. 2001. Optimal maintenance policy for a Markovian system under periodic inspection. Reliability Engineering & System Safety 71: 165–172.
Kallen, M.J. & van Noortwijk, J.M. 2006. Optimal periodic inspection of a deterioration process with sequential condition states. International Journal of Pressure Vessels and Piping 83: 249–255.
Wang, G.J. & Zhang, Y.L. 2006. Optimal periodic preventive repair and replacement policy assuming geometric process repair. IEEE Transactions on Reliability 55(1): 118–122.
Zequeira, R.I. & Bérenguer, C. 2005. On the inspection policy of a two-component parallel system with failure interaction. Reliability Engineering & System Safety 71: 165–172.
Laurent Bordes
Laboratoire de Mathématiques Appliquées
Université de Pau et des Pays de l'Adour, Pau Cedex, France
ABSTRACT: This paper discusses the optimization of a condition-based maintenance policy for a stochastically deteriorating system in the presence of covariates. The deterioration is modelled by a non-monotone stochastic process. The process of covariates is assumed to be a finite-state Markov chain. A model similar to the proportional hazards model is used to represent the influence of the covariates. In the framework of a non-monotone system, we derive the optimal maintenance threshold, the optimal inspection period and the optimal delay ratio that minimize the expected average maintenance cost. A comparison of the expected average costs under different covariate conditions and different maintenance policies is given by numerical results of Monte Carlo simulation.

Keywords: condition-based maintenance, covariates, Markov chain, proportional hazards model, non-monotone system, maintenance, expected average cost
INTRODUCTION
In this section, we consider a single-unit replaceable system in which an item is replaced with a new one, either at failure or at a planned replacement. The degradation of the system is represented by a continuous-state univariate stochastic process D(t) with initial degradation level D(0) = 0. In this paper, without loss of generality, we suppose that the degradation has an upward trend, though it is not necessarily monotonically increasing.
Dn = max(Dn−1 + X+_{n−1} − X−_{n−1}, 0),   (1)

where X+_n and X−_n are independent exponentially distributed random variables with respective means μ+_n and μ−_n. The distribution function and the density function of the variable X+_n − X−_n are given by

Fn(x) = [μ−_n / (μ+_n + μ−_n)] exp(x/μ−_n) 1(x≤0) + [1 − (μ+_n / (μ+_n + μ−_n)) exp(−x/μ+_n)] 1(x≥0),

fn(x) = [1 / (μ+_n + μ−_n)] [exp(x/μ−_n) 1(x≤0) + exp(−x/μ+_n) 1(x≥0)].
Conditionally on Dk = dk (k = 1, 2, ...), for x > 0, the r.v. Dk+1 has the distribution

P(Dk+1 ≤ x | Dk = dk) = P(Ak + dk ≤ x | Dk = dk) = P(Ak ≤ x − dk)

on (0, +∞), and, for x = 0,

P(Dk+1 = 0 | Dk = dk) = P(Ak + dk ≤ 0 | Dk = dk) = P(Ak ≤ −dk) = [μ−_k / (μ+_k + μ−_k)] exp(−dk/μ−_k).   (3)

So Dk+1 has a density on (0, +∞) and a mass [μ−_n / (μ+_n + μ−_n)] exp(−dn/μ−_n) at x = 0, and the distribution of Dn can be derived using the method of convolution and the total probability formula.

Let

pij = P(Zn+1 = j | Zn = i),   1 ≤ i, j ≤ K,   (4)

be the transition probabilities of the process Z. The filtration Ft = σ{Zs : s ≤ t} denotes the history of the covariates.

We assume that the variation of the degradation at time tn depends only on the covariates at time tn. Let the stochastic process D(t | Ft) be the degradation level of the system given Ft. This process is observed at discrete times t = tn (n ∈ N). We shall denote by Dn the observed process at time t = tn, defined as:

Dn = max(Dn−1 + X+_{n−1}(Zn−1) − X−_{n−1}(Zn−1), 0),   (5)

for n ≥ 1, where X+_n(Zn), X−_n(Zn) are conditionally independent random variables (given Zn), with exponential distributions of mean parameters μ+_n(Zn) and μ−_n(Zn) (without loss of generality, we assume that μ+_n(Zn) ≥ μ−_n(Zn)).

The distribution function and the density function of the variation An+1(Zn) = X+_n(Zn) − X−_n(Zn) are given by

Fn(x, Zn) = [μ−_n(Zn) / (μ+_n(Zn) + μ−_n(Zn))] exp(x/μ−_n(Zn)) 1(x<0) + [1 − (μ+_n(Zn) / (μ+_n(Zn) + μ−_n(Zn))) exp(−x/μ+_n(Zn))] 1(x≥0),

fn(x, Zn) = [1 / (μ+_n(Zn) + μ−_n(Zn))] [exp(x/μ−_n(Zn)) 1(x<0) + exp(−x/μ+_n(Zn)) 1(x≥0)].
To describe precisely the influence of the covariates Zn = zn on An, similarly to the proportional hazards model proposed by Cox (1972), we suppose that the parameters μ+_n and μ−_n depend on zn as follows:

μ+_n(Zn) = μ+_n exp(γ+_1 1(Zn=1) + ··· + γ+_K 1(Zn=K)) = μ+_n exp(γ+ · Zn),   (6)

μ−_n(Zn) = μ−_n exp(γ−_1 1(Zn=1) + ··· + γ−_K 1(Zn=K)) = μ−_n exp(γ− · Zn),   (7)

where μ+_n (μ−_n) denotes the degradation rate (improvement rate) of the system when no covariates are considered, γ+ = (γ+_1, γ+_2, ..., γ+_K) and γ− = (γ−_1, γ−_2, ..., γ−_K). From (6) and (7), these parameters account for the influence of the covariates on the degradation rate.

Considering the symmetry of the γ_i, without loss of generality, in what follows we assume that γ+_1 ≤ γ+_2 ≤ ··· ≤ γ+_K and γ−_1 ≤ γ−_2 ≤ ··· ≤ γ−_K.
Example 2.1. Let Z be a 3-state Markov chain with transition matrix

P = | 0.95  0.05  0.00 |
    | 0.03  0.95  0.02 |
    | 0.00  0.05  0.95 |

μ+_n = 0.5 and μ−_n = 0.3. Notice that the stationary distribution is π = (π1, π2, π3) = (0.3, 0.5, 0.2).

For simplification, in what follows, we assume that μ±_n does not depend on n, and Z ∈ {1, 2, 3} is the 3-state Markov chain described in Example 2.1 above. For covariates with initial state Z0 = 1, denote by πn = (π1n, π2n, π3n), with πin = P(Zn = i | Z0 = 1) (i = 1, 2, 3), the distribution of Zn; then:

(π1n, π2n, π3n) = (P(Zn = 1 | Z0 = 1), P(Zn = 2 | Z0 = 1), P(Zn = 3 | Z0 = 1)) = (1, 0, 0) P^n,
and

lim_{n→+∞} πin = πi.   (8)

[Figure: a simulated degradation path (a) and the corresponding covariate process (b).]
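Assuming the transition matrix of Example 2.1 takes the tridiagonal form below (a reconstruction consistent with the stated stationary distribution π = (0.3, 0.5, 0.2)), the limit in (8) can be checked numerically:

```python
import numpy as np

# Transition matrix of Example 2.1 (values as reconstructed from the text)
P = np.array([[0.95, 0.05, 0.00],
              [0.03, 0.95, 0.02],
              [0.00, 0.05, 0.95]])
pi = np.array([0.3, 0.5, 0.2])          # stated stationary distribution

assert np.allclose(P.sum(axis=1), 1.0)   # rows are probability vectors
assert np.allclose(pi @ P, pi)           # pi is stationary: pi P = pi

# distribution of Z_n for Z_0 = 1:  pi_n = (1, 0, 0) P^n  ->  pi  as n grows
pi_n = np.array([1.0, 0.0, 0.0]) @ np.linalg.matrix_power(P, 2000)
assert np.allclose(pi_n, pi, atol=1e-6)
```

The convergence of (1, 0, 0)Pⁿ to π is exactly the statement of equation (8) for the initial state Z0 = 1.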
CONDITION-BASED PERIODIC MAINTENANCE MODEL
In this section, we study the optimal periodic maintenance policy for the deteriorating system presented in Section 2. The system is assumed to be a non-monotone stochastically deteriorating system with initial state D0 = 0, whose state can exclusively be monitored by inspection at periodic times tk = kτ (k = 1, 2, ...). We now give some assumptions under which the model is studied.
1. Inspections are perfect in the sense that they reveal
the true state of the system and the explanatory
variables.
2. The system states are only known at inspection
times and all the maintenance actions take place
only at inspection times and they are instantaneous.
3. Two maintenance operations are available only at
the inspection time: preventive replacement and
corrective replacement.
4. The maintenance actions have no influence on the
covariates process.
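Under assumptions 1–4, one policy evaluation can be sketched by Monte Carlo: simulate the observed degradation Dn at inspection times kτ, replace preventively when Dn crosses a threshold Lp and correctively when it reaches the failure level L, and estimate the long-run cost per unit time over many regeneration cycles. All numeric values below (means, thresholds, costs) are illustrative rather than the paper's, and for brevity the covariate process is frozen (no Z-dependence):

```python
import random

random.seed(1)

mu_plus, mu_minus = 0.5, 0.3      # illustrative degradation/improvement means
tau, Lp, L = 2.0, 15.0, 20.0      # inspection period and thresholds (illustrative)
Ci, Cp, Cf = 1.0, 10.0, 30.0      # inspection / preventive / corrective costs

def one_cycle():
    """Simulate one replacement cycle; return (cost, length)."""
    d, t, cost = 0.0, 0.0, 0.0
    while True:
        t += tau
        d = max(d + random.expovariate(1.0 / mu_plus)
                  - random.expovariate(1.0 / mu_minus), 0.0)
        cost += Ci                       # perfect, instantaneous inspection
        if d >= L:                       # failure detected: corrective replacement
            return cost + Cf, t
        if d >= Lp:                      # preventive replacement
            return cost + Cp, t

costs, lengths = zip(*(one_cycle() for _ in range(5_000)))
ec_infinity = sum(costs) / sum(lengths)  # renewal-reward estimate, cf. eq. (12)
assert 0.0 < ec_infinity < 100.0
```

Repeating this estimate over a grid of (Lp, τ) values gives the kind of iso-level cost surface discussed later in the paper.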
The system is considered to be failed, and is correctively replaced, at the first hitting time

GL = inf{t ∈ R+ : Dt ≥ L | D0 = 0}.   (9)
(10)

When the stochastic process (D, Z) forms a regenerative process, we can calculate the expected cost per time unit as (Rausand and Høyland 2004)

EC∞ = lim_{t→+∞} E(C(t)) / t,   (11)

EC∞(Z) = E(v(Z)) / E(l(Z)),   (12)

where v(Z) and l(Z) denote the cost and the length of a regeneration cycle.

[Figure: a simulated maintenance scenario, with the degradation path (a) and the covariate process (b); markers indicate inspections, preventive replacements and corrective replacements.]
Vk = Σ_{s=1}^{R+1} (CF + Ci s + Cd((k+s)τ − GL)) 1(E1s) + Σ_{s=1}^{R+1} (Ci s + Vk+s) 1(E3s) + ···,

Lk = (R+1)τ 1(E2) + Σ_{s=1}^{R+1} sτ 1(E1s) + Σ_{s=1}^{R+1} (sτ + Lk+s) 1(E3s),

with expectations

E(Vk) = Σ_{s=1}^{R+1} [ CF P(E1s) + E((Ci s + Cd((k+s)τ − GL)) 1(E1s)) ] + ···,

E(Lk) = (R+1)τ P(E2) + Σ_{s=1}^{R+1} E(sτ 1(E1s)) + Σ_{s=1}^{R+1} E((sτ + Lk+s) 1(E3s)).

The optimization problem is

min_{(Lp, τ, R) ∈ (0, L) × R+ × N} EC∞(Z),   (13)

where

EC∞ = Σ_k EC∞(k) πk.   (14)

3.2 Numerical simulation

At the optimum, R takes its optimal value R* = 0 and EC∞ = 2.8650. Figure 3 gives the iso-level curves of EC∞ as a function of (Lp, τ) when R takes its optimal value R* = 0.
35
Covariates
25
Z general
Z =1
Z =2
Z =3
C
20
Covariates
Z general
Z =1
Z =2
Z =3
C
Case1
Case2
Case3
Case4
EC
30
15
10
10
15
20
25
30
(a)
60
Case1
Case2
Case3
Case4
50
18
16
14
12
10
40
30
20
8
10
6
0
10
15
20
25
30
35
40
(b)
2
35
0
0.5
1.5
2.5
3.5
4.5
Case1
Case2
Case3
Case4
5
30
25
EC
20
15
10
10
(c)
599
CONCLUSION

In this paper we deal with a non-monotone deteriorating system with covariates; we use a method similar to the proportional hazards model to account for the influence of dynamic covariates, defined by a 3-state Markov chain.

The expected average cost is calculated, and optimal periodic inspection/replacement policies are derived for different maintenance costs per unit, as a function of the preventive level Lp, the inspection interval τ and the delay ratio R. The results show that:

1. The optimal average cost is an increasing function of the parameters γ.
2. The optimal inspection interval includes the information on R; in order to optimize the average cost, we can therefore restrict attention to the parameters (Lp, τ).
3. The expected average maintenance cost for a system with a covariate Markov chain is always greater than the weighted mean of the optimal costs for the three static cases.
REFERENCES
Bagdonavičius, V. and Nikulin, M. 2000. Estimation in degradation models with explanatory variables. Lifetime Data Analysis 7(1): 85–103.
Barker, C.T. and Newby, M. 2008. Optimal non-periodic inspection for multivariate degradation model. Reliability Engineering and System Safety (in press).
Bérenguer, C., Grall, A., Dieulle, L. and Roussignol, M. 2003. Maintenance policy for a continuously monitored deteriorating system. Probability in the Engineering and Informational Sciences 17(2): 235–250.
Cox, D.R. 1972. Regression models and life-tables. Journal of the Royal Statistical Society, Series B 34(2): 187–220.
Dieulle, L., Bérenguer, C., Grall, A. and Roussignol, M. 2006. Asymptotic failure rate of a continuously monitored system. Reliability Engineering and System Safety 91(2): 126–130.
Gouno, E., Sen, A. and Balakrishnan, N. 2004. Optimal step-stress test under progressive type-I censoring. IEEE Transactions on Reliability 53(3): 388–393.
Grall, A., Bérenguer, C. and Dieulle, L. 2002. A condition-based maintenance policy for stochastically deteriorating systems. Reliability Engineering and System Safety 76(2): 167–180.
Grall, A., Dieulle, L., Bérenguer, C. and Roussignol, M. 2002. Continuous-time predictive-maintenance scheduling for a deteriorating system. IEEE Transactions on Reliability 51(2): 141–150.
Jia, X. and Christer, A.H. 2002. A prototype cost model of functional check decisions in reliability-centred maintenance. Journal of the Operational Research Society 53(12): 1380–1384.
Kharoufeh, J.P. and Cox, S.M. 2005. Stochastic models for degradation-based reliability. IIE Transactions 37(6): 533–542.
Kong, M.B. and Park, K.S. 1997. Optimal replacement of an item subject to cumulative damage under periodic inspections. Microelectronics Reliability 37(3): 467–472.
Lawless, J. and Crowder, M. 2004. Covariates and random effects in a gamma process model with application to degradation and failure. Lifetime Data Analysis 10(3): 213–227.
Makis, V. and Jardine, A. 1992. Optimal replacement in the proportional hazards model. INFOR 30: 172–183.
Newby, M. 1994. Perspective on Weibull proportional hazards model. IEEE Transactions on Reliability 43(2): 217–223.
Newby, M. and Dagg, R. 2003. Optimal inspection and maintenance for stochastically deteriorating systems II: discounted cost criterion. Journal of the Indian Statistical Association 41(1): 9–27.
Singpurwalla, N.D. 1995. Survival in dynamic environments. Statistical Science 1(10): 86–103.
van Noortwijk, J.M. 2008. A survey of the application of gamma processes in maintenance. Reliability Engineering and System Safety (in press).
Wang, H. 2002. A survey of maintenance policies of deteriorating systems. European Journal of Operational Research 139(3): 469–489.
ABSTRACT: Identical components are considered which become obsolete once new-type components, more reliable and less energy-consuming, are available. We envision different possible strategies for replacing the old-type components with new-type ones: purely preventive, purely corrective, and different mixtures of both types of strategy. To evaluate the respective value of each possible strategy, a cost function is considered which takes into account replacement costs, with economic dependence between simultaneous replacements, and energy consumption (and/or production) costs, with a constant rate per unit time. A full analytical expression is provided for the cost function induced by each possible replacement strategy. The optimal strategy is derived in the long-time run. Numerical experiments close the paper.
INTRODUCTION

strategy. More generally, some mixture of both strategies, preventive and corrective, may also be envisioned (details below) and may lead to lower costs, as will be seen later. The point of the present paper is to look for the optimal replacement strategy with respect to a cost function which represents the mean total cost on some finite time interval [0, t]. This function takes into account replacement costs, with economic dependence between simultaneous replacements (Dekker, Wildeman, and van der Duyn Schouten 1997), and also energy consumption (and/or production) costs, with a constant rate per unit time.

A model similar to the present one has already been studied in (Elmakis, Levitin, and Lisnianski 2002) and (Mercier and Labeau 2004) in the case of constant failure rates for both old-type and new-type components. In those papers, all costs were discounted and summed at time 0, contrary to the present paper. In such a context, it had been proved in (Mercier and Labeau 2004) that, in the case of constant failure rates, the only possible optimal strategies were either purely corrective or nearly purely preventive (details further on), leading to a simple dichotomous decision rule.

A first attempt to see whether such a dichotomy remains valid in the case of general failure rates was made in (Michel, Labeau, and Mercier 2004) by Monte Carlo (MC) simulations. However, the length of the MC simulations did not allow a sufficient range of the different parameters to be covered, making the answer difficult. Similarly, recent works such as (Clavareau and Labeau 2006a) and (Clavareau and Labeau 2006b) proposed complex models, including the present one, which are
THE MODEL
Figure 1. [Timeline showing the ordered replacement times 0 ≤ U1:n ≤ U2:n ≤ ··· ≤ Ui:n ≤ Ui+1:n ≤ ··· ≤ UK−1:n ≤ UK:n of the old-type components, followed by corrective replacements of new components.]
604
gK(t) := (1/cp) (CK+1([0, t]) − CK([0, t]))   (1)

For K = 0, g0(t) involves the terms (r/cp) F̄U1:n(t), n b E(U1:n ∧ t) and a E[V(t) − V((t − U1:n)+)], and, for 1 ≤ K ≤ n − 1, we have:

gK(t) = (a − 1) F̄UK+1:n(t) + (n − K) b E(UK+1:n ∧ t − UK:n ∧ t) + [F̄UK:n(t) − F̄UK+1:n(t)] − a E[V((t − UK:n)+) − V((t − UK+1:n)+)]
In order to find the optimal strategy according to the mission time t and to the data of the model, as in the case of constant failure rates (see (Mercier and Labeau 2004)), the point should now be to find out the sign of gK(t) for 0 ≤ K ≤ n − 1. This actually seems to be impossible in the most general case. However, we are able to give some results in the long-time run, which is done in the next subsection.

Setting

a = (r + cf) / cp ≥ 1   and   b = Δ / cp ≥ 0,
THEORETICAL RESULTS
3.2
gK(∞) = a − 1 + (b − a/E(V)) (n − K) E(UK+1:n − UK:n),   1 ≤ K ≤ n − 1,

g0(∞) = cf/cp − 1 + (b − a/E(V)) n E(U1:n − U0:n),

where we set U0:n := 0.

If b − a/E(V) ≥ 0, we then have gK(∞) ≥ 0 for all 0 ≤ K ≤ n − 1 (we recall that a ≥ 1 and cf ≥ cp). Consequently, if Δ ≥ (r + cf)/E(V), the best strategy among 0, ..., n in the long-time run is strategy 0. Such a result conforms to intuition: indeed, let us recall that Δ stands for the additional energy consumption rate of the old-type units compared to the new-type ones; also, observe that (r + cf)/E(V) is the cost rate per unit time for replacements due to failures among new-type components in the long-time run. The result then means that if replacements of new-type components due to failures are less costly per unit time than the benefit due to a lower consumption rate, it is better to replace old-type components by new-type ones as soon as possible.
Now, we have to look at the case b − a/E(V) < 0 and, for that, we have to know something about the monotonicity of

DK := (n − K)(UK+1:n − UK:n),

with respect to K, where DK is the K-th normalized spacing of the order statistics (U1:n, ..., Un:n); see e.g. (Barlow and Proschan 1966) or (Ebrahimi and Spizzichino 1997). With that aim, we have to put some assumption on the distributions of the residual lifetimes of the old-type components at time t = 0 (Ui for 1 ≤ i ≤ n): following (Barlow and Proschan 1966), we assume that U1, ..., Un are i.i.d. IFR (Increasing Failure Rate), which implies that (DK)0≤K≤n−1 is stochastically decreasing. A first way to meet this assumption is to assume that all old-type components were put into activity simultaneously (before time 0), so that the residual lifetimes are i.i.d. (moreover assumed IFR). Another possibility is to assume that all units have already been replaced a large number of times. Assuming the replacement times of the i-th unit to form a renewal process with inter-arrival times distributed as some U(0) (independent of i), the residual life at time 0 for the i-th unit may then be considered as the waiting time until the next arrival of a stationary renewal process with inter-arrivals distributed as U(0). Such a waiting time is known to admit as p.d.f. the function fU(t) such that:
fU(t) = [F̄U(0)(t) / E(U(0))] 1R+(t),   (2)

assuming 0 < E(U(0)) < +∞. Also, it is proved in (Mercier 2008) that if U(0) is IFR, then U is IFR too. The r.v. U1, ..., Un are then i.i.d. IFR, consequently meeting the required assumptions from (Barlow and Proschan 1966).

We are now ready to state our main result:
Theorem 4. If b − a/E(V) ≥ 0, the optimal strategy among 0, ..., n in the long-time run is strategy 0.

In case b − a/E(V) < 0, assume that U1, ..., Un are i.i.d. IFR r.v. (which may be realized by assuming that Ui stands for the waiting time until the next arrival of a stationary renewal process with inter-arrival times distributed as U(0), where U(0) is a non-negative IFR r.v. with 0 < E(U(0)) < +∞). Assume too that the Ui are not exponentially distributed. The sequence (E(DK))0≤K≤n−1 is then strictly decreasing, and, setting

c := (a − 1) / (a/E(V) − b)   and   d := (cf/cp − 1) / (a/E(V) − b) ≤ c,
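Equation (2) defines a proper density: since the integral of F̄U(0) over [0, +∞) equals E(U(0)), fU integrates to one. A numerical sketch with a Weibull U(0) (parameters as in the numerical experiments, equations (3)–(4)):

```python
import math

alpha, beta = 1e-3, 2.8                        # Weibull parameters of U(0), cf. (3)-(4)
surv = lambda t: math.exp(-alpha * t ** beta)  # survival function of U(0)
EU0 = alpha ** (-1.0 / beta) * math.gamma(1.0 + 1.0 / beta)  # E(U(0)) ~ 10.5

f_U = lambda t: surv(t) / EU0                  # stationary residual density, eq. (2)

# trapezoidal check that f_U integrates to 1 (the tail beyond 60 is negligible)
h = 0.01
grid = [h * k for k in range(6001)]
integral = h * (sum(f_U(t) for t in grid) - 0.5 * (f_U(grid[0]) + f_U(grid[-1])))
assert abs(integral - 1.0) < 1e-3
```

The same construction gives the common density of the residual lifetimes Ui used in Theorem 4.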
NUMERICAL EXPERIMENTS

We here assume that the Ui are i.i.d. IFR random variables with known distribution. Examples are provided in (Mercier 2008) for the case where the data is the distribution of some U(0) and the common p.d.f. fU of the Ui is given by (2) (see Theorem 4). All the computations are made with Matlab.

All Ui and Vi are Weibull distributed according to W(αU, βU) and W(αV, βV), respectively (all independent), with survival functions:

F̄U(x) = exp(−αU x^{βU})   and   F̄V(x) = exp(−αV x^{βV})

for all x ≥ 0. We take:

αU = 1/10³;   αV = 1/(2.25 × 10³)   (3)

βU = βV = 2.8 > 1   (4)

which provides

E(U) ≈ 10.5,   σ(U) ≈ 4.1,   E(V) ≈ 14,   σ(V) ≈ 5.4.

We also take:

We finally compute E[V((t − UK:n)+)] with:

E[V((t − UK:n)+)] = n (n−1 choose K−1) ∫_0^t V(t − u) FU^{K−1}(u) F̄U^{n−K}(u) fU(u) du,

where n (n−1 choose K−1) = n! / ((K−1)! (n−K)!).
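The quoted moment values follow from the Weibull survival function F̄(x) = exp(−αx^β), for which E(X) = α^{−1/β} Γ(1 + 1/β) and E(X²) = α^{−2/β} Γ(1 + 2/β). A quick check:

```python
import math

def weibull_mean_std(alpha, beta):
    """Mean and standard deviation for survival function exp(-alpha * x**beta)."""
    m1 = alpha ** (-1.0 / beta) * math.gamma(1.0 + 1.0 / beta)
    m2 = alpha ** (-2.0 / beta) * math.gamma(1.0 + 2.0 / beta)
    return m1, math.sqrt(m2 - m1 ** 2)

EU, sU = weibull_mean_std(1e-3, 2.8)           # U ~ W(alpha_U, beta_U)
EV, sV = weibull_mean_std(1.0 / 2.25e3, 2.8)   # V ~ W(alpha_V, beta_V)

assert abs(EU - 10.5) < 0.1 and abs(sU - 4.1) < 0.1
assert abs(EV - 14.0) < 0.1 and abs(sV - 5.4) < 0.1
```

This reproduces the values E(U) ≈ 10.5, σ(U) ≈ 4.1, E(V) ≈ 14 and σ(V) ≈ 5.4 given above.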
Table 1. Optimal strategy as a function of the mission time t (rows) and of 1/αV × 10³ (columns: 1.2, 1.5, 1.75, 2.25, 2.5, 3.5).
5
10
15
20
25
30
35
40
45
50
75
100
1.2
1.5
1.75
2.25
2.5
3.5
10
10
10
7
9
10
9
10
10
10
10
10
10
10
10
10
7
9
10
9
9
9
9
9
9
9
10
10
10
6
8
9
7
8
8
8
8
8
8
10
10
2
5
8
9
6
7
7
7
7
7
7
10
10
2
5
7
8
5
6
6
6
6
6
6
10
10
1
4
7
7
4
5
6
5
5
5
5
10
10
1
4
6
6
4
4
5
4
4
4
4
10
10
1
3
5
4
2
3
3
3
3
3
3
10
10
0
2
4
3
1
2
2
2
2
2
2
10
10
0
1
3
1
0
1
1
1
1
1
1
10
10
0
1
1
0
0
0
0
0
0
0
0
[Figures 2–6. Optimal strategy Kopt plotted against various model parameters, including the mission time t (Figure 2) and the cost ratio cf/cp (Figure 3).]
CONCLUSIONS
ABSTRACT: The main objective of this work is to optimize the maintenance function at the foundry of the BCR company in Algeria. For this, we use a comprehensive approach involving two global areas: the organizational aspect and the technical aspect. As a first step, we analyse the reliability of selected equipment through a Pareto analysis. After that, we present the influence of repair times on the unavailability of this equipment. In order to calculate the optimal renewal times of some spare parts of an equipment item (as they account for an important share of unavailability time), we first make an economic evaluation, which leads to expressing the direct and indirect costs of maintenance. Finally, in order not to replace an item that is still serviceable (in good condition), we give an overview of implementing non-destructive testing.
INTRODUCTION
SELECTION OF EQUIPMENT
3
3.1
Figure 1.
The results show that the two-parameter Weibull model is accepted at a significance level α = 0.05 for the facilities DECRT, ROVT021, ROVT041, ROVT042, GREN011, GREN021, NOYT020 and BASP, but for the NOYT018 machine the Weibull model is rejected.

In addition to the fit of these facilities to the Weibull law (DECRT, ROVT041, ROVT042, GREN011, GREN021, NOYT020), the exponentiality of their lifetimes is validated as well. For the equipment BASP and ROVT021, the exponential model is rejected.
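The acceptance decisions above follow the Kolmogorov–Smirnov test: the statistic Dks = sup |Fn(x) − F(x)| is compared with the critical value d(n, α). A standard-library sketch of the test against a fitted exponential model (the sample is simulated for illustration; it is not the plant's data):

```python
import math
import random

def ks_statistic(data, cdf):
    """Two-sided Kolmogorov-Smirnov statistic sup |F_n(x) - F(x)|."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        fx = cdf(x)
        d = max(d, abs((i + 1) / n - fx), abs(fx - i / n))
    return d

random.seed(2)
lam = 0.013                           # fitted exponential rate (GREN011, Table 2)
sample = [random.expovariate(lam) for _ in range(197)]
cdf = lambda x: 1.0 - math.exp(-lam * x)

D = ks_statistic(sample, cdf)
d_crit = 1.36 / math.sqrt(197)        # asymptotic 5% critical value, ~0.097
assert 0.0 < D < 0.15                 # loose sanity bound for a well-specified model
```

In practice D would be compared directly with d_crit: the model is rejected at the 5% level when D exceeds it.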
Table 1.
Equipment chosen.
Code
Designation
ROVT021
BASP
DECRT
NOYT018
GREN021
GREN011
NOYT020
ROVT042
ROVT041
Table 2. Adjusted lifetime models per equipment: sample size n, model adjusted, fitted parameters, KS statistic Dks and critical value d(n, 5%).
GREN011
197
244
NOYT018
203
NOYT020
195
ROVT021
110
ROVT041
165
ROVT042
112
DECRT
268
BASP
388
β = 1.08, η = 79.12
λ = 0.013
β = 1.07, η = 66.68
λ = 0.087
β = 1.05, η = 127.28
λ = 0.0080
β = 1.09, η = 138
λ = 0.0075
β = 1.25, η = 182.04
λ = 0.0059
β = 1.15, η = 176.77
λ = 0.0059
β = 1.14, η = 196.45
λ = 0.0053
β = 0.97, η = 150.98
λ = 0.0065
β = 1.32, η = 55.66
λ = 0.0195
0.068
0.057
0.043
0.053
0.097
0.088
0.070
0.057
0.055
0.135
0.092
0.075
0.059
0.063
0.050
0.005
0.034
0.139
0.097
GREN021
Weibull
Exponential
Weibull
Exponential
Weibull
Exponential
Weibull
Exponential
Weibull
Exponential
Weibull
Exponential
Weibull
Exponential
Weibull
Exponential
Weibull
Exponential
612
0.087
0.095
0.097
0.13
0.108
0.128
0.083
0.069
Table 3. Graphic test per equipment: curve of tendency, model and failure-rate behaviour.
Exponential Constant
Exponential Constant
Exponential Constant
Exponential Constant
IFR
Growing
Exponential Constant
Exponential Constant
Exponential Constant
IFR
Growing
The exponential model was retained for this equipment; that is to say, the failure rate of such equipment is constant, which is confirmed by the graphical test that validated the exponential model of their lifetimes.
AVAILABILITY OF THE EQUIPMENT

3.3

Dopr = MUT / (MUT + MDT) = MUT / MTBF   (1)
According to the preceding results, one notices agreement between the results obtained by the parametric modeling and those of the non-parametric approach, since they coincide in the description of the type of failure.
For BASP and ROVT021, the shape parameter of the Weibull model is greater than 1. Their failure rates are therefore increasing with age, which corresponds to the IFR law found by the graphical test. It means that their breakdowns are due to ageing.
For the rest of the equipment, parametric modeling of their lifetimes gave a Weibull model with a shape parameter close to 1, consistent with the
Dops = MUT / (MUT + MTTR)   (2)
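Equations (1)-(2) can be exercised directly on the measured mean times; a small sketch using the GREN011 values reported in Table 6 (MUT = 76.75 h, MTTR = 3.07 h, MDT = 5.00 h):

```python
def availabilities(mut, mttr, mdt):
    """Equation (1): Dopr = MUT/(MUT + MDT); equation (2): Dops = MUT/(MUT + MTTR)."""
    dopr = mut / (mut + mdt)
    dops = mut / (mut + mttr)
    return dopr, dops

dopr, dops = availabilities(76.75, 3.07, 5.00)   # GREN011
```

Since MDT includes delays beyond the pure repair time, Dops is always at least as large as Dopr, which is what the Ddif and Dres columns of Table 6 quantify.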
Table 4. Log-normal models adjusted to the repair times (MTTR).

Equip     n    Law adjusted   Parameters           Dks    d(n, 0.05)   MTTR
ROVT021   111  Log-normal     m = 0.93, σ = 1.27   0.11   0.13         5.67
ROVT042   113  Log-normal     m = 1.06, σ = 1.13   0.08   0.12         5.48
DECRT     269  Log-normal     m = 0.74, σ = 1.18   0.05   0.08         4.23
ROVT041   166  Log-normal     m = 0.75, σ = 1.14   0.07   0.10         4.12
BASP      389  Log-normal     m = 0.71, σ = 1.07   0.06   0.06         3.62
GREN021   245  Log-normal     m = 0.78, σ = 0.96   0.07   0.08         3.49
GREN011   198  Log-normal     m = 0.66, σ = 0.96   0.03   0.09         3.07
NOYT020   197  Log-normal     m = 0.59, σ = 0.88   0.06   0.09         2.68
NOYT018   204  Log-normal     m = 0.55, σ = 0.83   0.04   0.09         2.49

Table 5. Log-normal models adjusted to the down times (MDT).

Equip     n    Law adjusted   Parameters           Dks    d(n, 0.05)   MDT
GREN011   198  Log-normal     m = 1.17, σ = 0.93   0.05   0.09         5.00
GREN021   245  Log-normal     m = 1.46, σ = 1.04   0.07   0.08         7.38
NOYT018   204  Log-normal     m = 1.21, σ = 0.92   0.08   0.09         5.14
NOYT020   197  Log-normal     m = 1.20, σ = 1.01   0.05   0.09         5.49
ROVT021   111  Log-normal     m = 1.61, σ = 1.54   0.12   0.13         16.37
ROVT041   166  Log-normal     m = 1.51, σ = 1.33   0.08   0.11         11.05
ROVT042   113  Log-normal     m = 1.86, σ = 1.36   0.13   0.13         16.35
DECRT     269  Log-normal     m = 1.31, σ = 1.49   0.08   0.08         11.24
BASP      389  Log-normal     m = 1.47, σ = 1.10   0.06   0.07         7.96
Table 6. Results of the modeling of the availability, where Ddif = Dops − Dopr and Dres = (Dops − Dopr)/(1 − Dopr).

Equip     MUT (h)   MTTR (h)   MDT (h)   Dopr   Dops   Ddif   Dres
GREN011   76.75     3.07       5.00      0.93   0.96   0.03   0.43
GREN021   64.68     3.49       7.38      0.89   0.95   0.06   0.54
NOYT018   125.17    2.49       5.14      0.96   0.98   0.02   0.50
NOYT020   133.17    2.68       5.49      0.96   0.98   0.02   0.50
ROVT021   169.2     5.67       16.37     0.91   0.97   0.06   0.66
ROVT041   167.93    4.12       11.05     0.94   0.98   0.04   0.66
ROVT042   186.62    5.48       16.35     0.92   0.97   0.05   0.62
DECRT     153       4.23       11.24     0.93   0.97   0.04   0.57
BASP      51.20     3.62       7.96      0.86   0.93   0.07   0.50
5.3

Table 7. Components chosen.

Code: POT300, THER, DETP, AXCL, VISB.

Table 8. Costs of maintenance.

Code     Cp (DA)     Cd (DA)
POT300   47 036.96   24 000
THER     3 136.46    28 800
DETP     3 772.05    19 200
AXCL     6 813.46    48 000
VISB     291.99      24 000

The average cost per unit time of good operation under an age replacement policy at age T is

γ(T) = [Cp R(T) + Cd F(T)] / ∫0^T R(t) dt,   (4)

where MUT = ∫0^∞ R(t) dt.
The economic justification of preventive maintenance comes from the calculation of the gain obtained by using a preventive maintenance policy, and makes it possible to put adequate means in place for a better handling of the repairs.

5.1 Selection of components

5.2
The optimal replacement age satisfies

λ(T) ∫0^T R(t) dt + R(T) = Cd / (Cd − Cp),   (5)

and the condition for preventive replacement to be profitable can be written as

∫0^T R(t) dt / ∫0^∞ R(t) dt ≥ k + (1 − k) F(T),   k = Cp / (Cp + Cd).   (6)
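The optimum in (5) can also be found by direct numerical minimization of the cost rate (4). The sketch below uses the THER values reported in Tables 8-10 (Weibull β = 2.21, η = 1002.25, Cp = 3136.46, Cd = 28 800) as assumed inputs; the grid search is illustrative, not the paper's method:

```python
import math

def weib_R(t, beta, eta):
    """Weibull survival function R(t)."""
    return math.exp(-((t / eta) ** beta))

def cost_rate(T, beta, eta, cp, cd, steps=400):
    """Equation (4): [Cp R(T) + Cd F(T)] / integral_0^T R(t) dt (midpoint rule)."""
    dt = T / steps
    integral = sum(weib_R((i + 0.5) * dt, beta, eta) for i in range(steps)) * dt
    r = weib_R(T, beta, eta)
    return (cp * r + cd * (1.0 - r)) / integral

beta, eta, cp, cd = 2.21, 1002.25, 3136.46, 28800.0
T0 = min(range(50, 2001, 5), key=lambda T: cost_rate(T, beta, eta, cp, cd))
```

The minimum lands near the paper's T0 ≈ 361 for THER. Note that no finite optimum exists when Cd ≤ Cp, which is the POT300 case (r = Cd/Cp = 0.51 in Table 10), since the right-hand side of (5) is then not attainable.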
Table 9. Weibull models adjusted to the component lifetimes.

Equip    n    Law       Parameters             Dks     d(n, 5%)
POT300   16   Weibull   β = 3.86, η = 3284.45  0.143   0.328
THER     26   Weibull   β = 2.21, η = 1002.25  0.267   0.267
DETP     55   Weibull   β = 4.23, η = 2460.00  0.114   0.183
AXCL     17   Weibull   β = 3.39, η = 5156.40  0.168   0.318
VISB     28   Weibull   β = 2.33, η = 1968.67  0.075   0.257
Table 10. Optimal replacement ages, with r = Cd/Cp and T0 = x η.

Compo    Parameters of Weibull law   r = Cd/Cp   x      T0
POT300   β = 3.86, η = 3284.45       0.51        -      -
THER     β = 2.21, η = 1002.25       9.18        0.36   360.81
DETP     β = 4.23, η = 2460          5.18        0.52   1279.2
AXCL     β = 3.39, η = 5156.40       7.05        0.45   2320.38
VISB     β = 2.33, η = 1968.67       82.19       0.17   334.67
This relationship has an interesting graphical interpretation given by Kay (1976) in the case of several criteria. Considering the first member of this relationship, it is a strictly increasing function of t = F(T), mapping the interval [0, 1] into [0, 1]. With T = F^(-1)(t), it is written in the form:

φ(t) = ∫0^{F^(-1)(t)} R(u) du / ∫0^∞ R(u) du   (7)
Table 11. Ordered operating times Ti, spacings, ti = i/r and the scaled statistic S(Ti)/S(T16) for the graphical test (r = 16 failures).

i    Ti     Ti − Ti−1   ti = i/r   S(Ti)    S(Ti)/S(T16)
0    0      0           0          0        0
1    1000   1000        0.062      24000    0.307
2    2400   1400        0.125      45000    0.575
3    2500   100         0.187      46400    0.593
4    3200   700         0.25       55500    0.709
5    3200   0           0.312      55500    0.709
6    4500   1300        0.375      69800    0.892
7    4500   0           0.437      69800    0.892
8    4588   88          0.5        70592    0.903
9    4800   212         0.562      72288    0.924
10   4840   40          0.625      72568    0.928
11   4900   60          0.687      72928    0.932
12   5240   340         0.75       74628    0.954
13   5290   50          0.812      74828    0.957
14   5900   610         0.875      76658    0.98
15   6400   500         0.937      77658    0.993
16   6912   512         1          78170    1
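The transform (7) has an empirical counterpart, the scaled total-time-on-test (TTT) statistic, which underlies plots like Table 11. A minimal sketch for a complete (uncensored) ordered sample; note that the paper's S(Ti) column may also reflect weighting or censoring that is not recoverable here:

```python
def scaled_ttt(times):
    """Scaled total-time-on-test transform of a complete ordered sample."""
    xs = sorted(times)
    n = len(xs)
    prev, s, out = 0.0, 0.0, [0.0]
    for i, x in enumerate(xs):
        s += (n - i) * (x - prev)   # n - i units are still on test
        prev = x
        out.append(s)
    return [v / out[-1] for v in out]

phi = scaled_ttt([1.0, 2.0, 4.0, 8.0])
```

For an exponential sample the plotted curve hugs the diagonal, while an IFR sample bends above it, which is how the graphical test of Table 3 reads the curve.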
Table 12. Results of the graphical test.

Code     tk      tk'     k       T0
POT300   1.96    2.04    -       -
THER     0.11    0.12    0.038   350
DETP     0.19    0.23    0.163   1282
AXCL     0.14    0.16    0.125   2400
VISB     0.012   0.012   0.036   345
CONCLUSION
REFERENCES
Bunea, C. & Bedford, T. (2002). The effect of model uncertainty on maintenance optimization. IEEE, 486-493.
Canfield, R. (1983). Cost optimisation of periodic preventive maintenance. IEEE, 78-81.
Cocozza-Thivent, C. (1997). Processus stochastiques et fiabilité des systèmes. Springer.
Gasmi, S., Love, C. & Kahle, W. (2003). A general repair, proportional-hazards framework to model complex repairable systems. IEEE, 26-32.
Lyonnet, P. (2000). La maintenance: mathématiques et méthodes. Tec & Doc.
Pellegrin, C. (1997). Fondements de la Décision de Maintenance. Economica.
Pongpech, J. & Murthy, D. Optimal periodic preventive maintenance policy for leased equipment. Science Direct.
Priel, V. (1976). La maintenance: Techniques Modernes de Gestion. EME.
Yves, G., Richet, D. & Gabriel, A. (1999). Pratique de la Maintenance Industrielle. Dunod.
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: The inspection and maintenance policy is determined by the crossing of a critical threshold
by an aggregate performance measure. Rather than examining the first hitting time of the level, we base our
decisions on the probability that the system will never return to the critical level. The inspection policy is state
dependent and we use a "scheduling function" to determine the time to the next inspection given the system
state. Inspection reveals the true state of the system and allows the determination of the appropriate action, do
nothing or repair, and the time of the next inspection. The approach is illustrated using a multivariate system
model whose aggregate measure of performance is a Bessel process.
INTRODUCTION
The models derived in this paper are a natural extension of models which use the first hitting time of a
critical level as a definition of failure (Barker and
Newby 2006). Here we develop a model in which
the system is repaired if the probability of returning to the critical level is small (a last exit time) and
it has not crossed a second level which corresponds
to catastrophic failure. The intention is to maintain
a minimum level of performance. The approach is
appropriate when the system is subject to relatively
minor repairs until it begins to degrade faster and
requires major repair. This is typically the behaviour of
infrastructure and large capital items. It also captures
the behaviour of systems which eventually become
economically obsolete and not worth repairing. The
system is complex in the sense that it consists of a
number of components whose states evolve in time.
The system state is summarized using a Bessel process Rt taking values in [0, ∞), which is transient and thus tends to increase. Transience ensures that the process will eventually escape to infinity. There are two critical levels: a repair level and a failure level F above it. The system is repaired if, on inspection, it has a small probability of returning to the repair level, and
suffers a catastrophic failure if it reaches F. The time
to inspection and repair is determined by a scheduling
function (Grall et al. 2002) which gives the time until
the next action as a function of the current state.
The repair threshold defines the repair actions and is incorporated in a maintenance function r. The actions are determined by the probability that the process has escaped from the region below the threshold, while F defines the failure of the system and hence its replacement. The sequence of failure and replacement times G_F^0 constitutes a renewal process. This embedded renewal process is used to derive the expected cost per unit time over an infinite time horizon (for the periodic inspection policy) and the total expected cost (for the non-periodic inspection policy). The costs are optimized with respect to the system parameters.
W_t = μ t + B_t,   W_0 = 0,

with μ = [μ_1, . . . , μ_N]^T, B_t = (B_t^(1), . . . , B_t^(N))^T, N ≥ 1, and μ² = Σ_{i=1}^{N} μ_i².
Pitman 1980) we handle repair by adjusting the thresholds. The difficulty is resolved by calculating the
distance remaining between the observed state at repair
and the threshold and represent repair by restarting
the process from the origin (in RN ) and lowering the
threshold to that remaining distance. Extensive treatments of the Bessel process with drift and the radial
Brownian motion are given in (Revuz and Yor 1991;
Pitman and Yor 1981; Rogers and Pitman 1980).
2 PERIODIC INSPECTIONS

2.1 Features of the model
which is a stopping time. The density of G_F^x is g_F^x. Inspection at time t = τ (immediately before any maintenance) reveals the system's performance measure R_τ. The level of maintenance (replacement or imperfect maintenance) is decided according to whether the system has failed (G_F^0 ≤ τ) or is still working (G_F^0 > τ). Replacement is determined by the first hitting time of threshold F, and the repair decision distinguishes the cases P[H_0^x ≤ τ] ≤ 1 − α and P[H_0^x ≤ τ] > 1 − α.
The one-cycle cost is

C_x = [C_f + V_{R_0^τ}] 1{G_{F−r(x)}^0 ≤ τ} + [C_i + C_r(x) + V_{R_0^τ}] 1{G_{F−r(x)}^0 > τ}   (1)

A = E[C_f 1{G_{F−r(x)}^0 ≤ τ}] = C_f ∫0^τ g_{F−r(x)}^0(y) dy

B = E[(C_i + C_r(x) + V_{R_0^τ}) 1{G_{F−r(x)}^0 > τ}] = {C_i + C_r(x)} (1 − ∫0^τ g_{F−r(x)}^0(y) dy) + ∫0^{F−r(x)} v_y f_τ^0(y) dy
v_x = Q(x) + Φ(x) ∫0^{F−r(x)} v_y f_τ^0(y) dy   (2)

E[e^{−(θ²/2) G_F^0}] = I_ν(μF) / I_ν(F √(μ² + θ²))

Solutions to (2) were obtained by performing numerical inversions of the Laplace transform using the EULER method (Abate & Whitt 1995). The Volterra equations (2) and (4) are reformulated as Fredholm equations

v_x = P(x) + Φ(x) ∫0^{F−r(x)} K{x, y} v_y dy,   l_x = P(x) + Φ(x) ∫0^{F−r(x)} K{x, y} l_y dy,   (3)

where the expected cycle length l_x satisfies the analogous Volterra equation

l_x = … + Φ(x) ∫0^{F−r(x)} l_y f_τ^0(y) dy.   (4)
3 NON-PERIODIC INSPECTIONS
m2[x | a, b] = ((x − b)² / b²)(a − 1) + 1 for 0 ≤ x ≤ b, and 1 for x > b.

m3[x | a, b] = a − ((a − 1)/b) x for 0 ≤ x ≤ b, and 1 for x > b.
All the functions decrease from a to 1 on the interval [0, b] and then remain constant at 1. The scheduling function m1 decays linearly; the function m2 is concave, initially steeply declining and then slowing; the function m3 is convex, initially slowly declining and then more rapidly. The different shapes reflect different attitudes to repair: m3 tends to give longer intervals initially, and m2 gives shorter intervals more rapidly.
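Two of the scheduling shapes can be sketched directly from the reconstructed definitions (m3's exact form is uncertain in the source, so only a linear function and the concave m2 are shown; a and b are free parameters):

```python
def m1(x, a, b):
    """Linear scheduling function: decays from a at x = 0 to 1 at x = b."""
    return 1.0 if x > b else a - (a - 1.0) * x / b

def m2(x, a, b):
    """Concave scheduling function: steep initial decline, then slowing."""
    return 1.0 if x > b else ((x - b) ** 2 / b ** 2) * (a - 1.0) + 1.0

pairs = [(m1(x, 3.0, 5.0), m2(x, 3.0, 5.0)) for x in (0.0, 2.5, 5.0, 7.0)]
```

At the midpoint m2 already sits below the linear function, reflecting its initially steeper decline toward shorter inspection intervals.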
To avoid a circular definition, the maintenance function r employs the performance measure x just before repair:

r(x) = x if P[H_0^x ≤ m(x)] ≤ 1 − α, and r(x) = kx if P[H_0^x ≤ m(x)] > 1 − α,

with 0 < α < 1, k ∈ (0, 1]. The next inspection is scheduled at m(r(x)) units of time with cost Cr(r(x)), where

Cr(r(x)) = 0 if P[H_0^{r(x)} ≤ m(r(x))] ≤ 1 − α, and Cr(r(x)) = Crep if P[H_0^{r(x)} ≤ m(r(x))] > 1 − α.
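The repair rule reduces to a branch on the return probability. A sketch with the hitting probability supplied as a function argument, since its computation is model-specific (the probability function used below is hypothetical, for illustration only):

```python
def repair(x, p_hit, m, alpha=0.1, k=0.5):
    """r(x): keep the state if P[H0_x <= m(x)] <= 1 - alpha,
    otherwise apply imperfect repair to k * x."""
    if p_hit(x, m(x)) <= 1.0 - alpha:
        return x
    return k * x

state_kept = repair(2.0, lambda x, t: 0.5, lambda x: 1.0)     # no repair
state_repaired = repair(2.0, lambda x, t: 0.95, lambda x: 1.0)  # repaired to k*x
```

The same branch, with Crep in place of k*x, gives the repair cost Cr(r(x)) defined above.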
3.2

V_x = [C_f + V_0] 1{G_{F−r(x)}^0 ≤ m(r(x))} + [C_i + C_r(x) + V_{R_0^{m(r(x))}}] 1{G_{F−r(x)}^0 > m(r(x))}

v_x = E[V_x] = A + B + C

A = [C_f + v_0] ∫0^{m(r(x))} g_{F−r(x)}^0(y) dy

B = {C_i + C_r(x)} (1 − ∫0^{m(r(x))} g_{F−r(x)}^0(y) dy)

C = (1 − ∫0^{m(r(x))} g_{F−r(x)}^0(y) dy) ∫0^F 1{y ≤ F − r(x)} v_y f_{m(r(x))}(y) dy

with

Φ(x) = 1 − ∫0^{m(r(x))} g_{F−r(x)}^0(y) dy,

v_x = … + Φ(x) ∫ K{x, y} v_y dy   (6)

and we solve for v_x as for the periodic model, as in (Barker & Newby 2007). The solution is then obtained by setting x = 0.
4 NUMERICAL RESULTS
Table 1. Optimal thresholds and costs for varying (Ci, Crep, Cf).

(·, ·)       v0       l0       C0
(1.2, 9)     9255.1   20781    0.45
(1.6, 9.5)   6972     208.81   33.39
(max, any)   500      5.83     85.89
(1.8, 7.5)   2640.2   81.77    32.29
(1.6, 9.5)   6972     208.81   33.39
(1.6, 10.5)  5542.3   163.69   33.86
(max, any)   5        5.83     0.86
(1.6, 9.5)   6972     208.81   33.39
(1.4, 8.5)   90295    2350.5   38.42

Figure. Expected cost per unit time as a function of the repair threshold.
Table 2.

Repair threshold   C0
3.4                15.72
3.2                17.04
3.0                18.80
2.8                20.37
2.4                23.08
2.4                25.18
2.0                28.59
2.0                30.02
1.6                32.78
1.6                33.39
1.4                36.21
Table 3.

α     F     Repair threshold   C0
0.1   8     1.6                33.39
0.5   9.5   1.6                33.39
0.9   11    1.6                33.39
Table 4. Optimal repair thresholds and expected costs for b = 1, . . . , 5.

b   Policy   Repair threshold   a     Expected cost
1   m1       2.2                1.5   1171.7
1   m2       2.1                4.2   1169.5
1   m3       2.1                0.9   1170.4
2   m1       2.2                1.7   1189.1
2   m2       2.2                2.9   1194
2   m3       2.1                1     1189.9
3   m1       2.4                2.5   1546.3
3   m2       2.5                2.8   1572.1
3   m3       2.4                1     1547.8
4   m1       5.2                3.7   2.3283 x 10^5
4   m2       5.2                3.8   2.34 x 10^5
4   m3       5.2                1.9   2.3264 x 10^5
5   m1       6.5                0.5   3.8437 x 10^6
5   m2       6.5                0.7   3.8437 x 10^6
5   m3       6.5                0.5   3.8437 x 10^6
unit time remains constant in the three cases studied. Figure 2 clearly shows that this is not the case for inspection periods τ in the interval up to τ_{α=0.1}, where τ_{α=0.1} satisfies

for all t > τ_{α=0.1}: P[H_0 < t] > 1 − 0.1.

For most values in this interval, the expected cost per unit time increases with τ: the model penalizes a costly strategy that favors too many repairs. For a period of inspection greater than τ_{α=0.1}, the expected costs per unit time are identical, since in all three cases the approach towards repair is similar: the system will be repaired with certainty ({P[H_0 < t] > 0.9} ⊂ {P[H_0 < t] > 0.5} ⊂ {P[H_0 < t] > 0.1}).
4.2
Repair thresholds, a values and expected costs for three cost settings (values as extracted; the middle setting is the base case):

Policy   Repair threshold      a               Expected cost
m1       2.4 / 2.4 / 2.5       2.4 / 2.5 / 2.6   1467.6 / 1546.3 / 2296.7
m2       2.4 / 2.4 / 2.6       2.3 / 2.5 / 2.5   1377.3 / 1546.3 / 2971.9
m3       3.2 / 2.4 / 2.4       2.9 / 2.5 / 2.2   203.67 / 1546.3 / 13134

Results for the three inspection strategies at different values of α:

Inspection policy   Repair threshold   a     Expected cost
α = 0.1, m1         2.3                2.4   1535
α = 0.1, m2         2.3                3.1   1622.7
α = 0.1, m3         2.3                2.1   1560.3
α = 0.5, m1         2.4                2.5   1546.3
α = 0.5, m2         2.5                2.8   1572.1
α = 0.5, m3         2.4                1     1547.8
α = 0.9, m1         2.1                2.2   1169.1
α = 0.9, m2         2.1                4.1   1169.1
α = 0.9, m3         2.1                0.9   1169.9

Figure. Expected cost for the inspection strategies m1, m2 and m3.

SUMMARY

REFERENCES
ABSTRACT: Preventive maintenance planning is one of the most common and significant problems faced by
the industry. It consists of a set of technical, administrative and management actions to decrease the component
ages in order to improve the system's availability. There has been a great amount of research in this area using different methods for finding an optimal maintenance schedule. This paper proposes a decision model, integrating a Bayesian approach with a multicriteria decision method (MCDM) based on PROMETHEE. This
model determines the best solution for the preventive maintenance policy taking both costs and availability as
objectives and considering prior expert knowledge regarding the reliability of the systems. Several building
considerations regarding the model are presented in order to justify the procedure and model proposed. Finally,
a numerical application to illustrate the use of the model was carried out.
1
INTRODUCTION
Maintenance seeks to optimize the use of assets during their life cycle, which means either preserving
them or preserving their ability to produce something safely and economically. In recent decades,
maintenance management has evolved and, nowadays,
it is not just a simple activity, but a complex and
important function. Its importance is due to the great
number of operational and support costs incurred on
systems and equipment. The United States, for example, used to spend approximately 300 billion dollars
every year on the maintenance of plant and operations
(Levitt, 1997).
Since maintenance planning and development are
crucial, their best practice is important for any industry. Maintenance planning can be defined according
to three types of maintenance. One type is corrective maintenance, which is performed after a failure occurs. Other types are preventive maintenance
and predictive maintenance which are considered to
be proactive strategies, because they are performed
before any failure has occurred and act in order to
reduce the probability of a failure.
Preventive maintenance (PM) is deemed as actions
that occur before the quality or quantity of equipment deteriorates. These actions include repair or
replacement of items and are performed to extend the equipment's life, thus maximizing its availability.
Predictive maintenance, on the other hand, concerns
planned actions that verify the current state of pieces of
equipment in order to detect failures and to avoid system breakdown, thus also maximizing the availability
of equipment. However, in order to create a PM program, it is necessary to determine how often and when
to perform PM activities (Tsai et al., 2001).
In this paper, we apply a multicriteria method
to solve the problem of preventive maintenance and
take into consideration costs, downtime and reliability. The following section, Section 2, contains an
overview of the PM problem and discusses previous
PM scheduling work. Section 3 presents explanations
of the PROMETHEE method and describes its procedures. In Section 4, the model applied is presented
along with the results and discussions. Finally, the conclusions are summarized and possible future studies
are suggested in Section 5.
2
2.1
THE PROBLEM
Preventive maintenance problem
This kind of maintenance is not known to be the dominant practice, because people resist spending money
on efforts to forestall failures that might not happen. There are two situations where PM is important:
when it reduces the probability or the risk of death,
injuries and environmental damage or when the task
costs are smaller than the cost of the consequences
(Levitt, 2003).
Of the many reasons to implement a PM policy in an
industry, the greatest is the improvement of a system's
availability. But that is not the only one. There are
many others such as: increasing automation, minimizing administrative losses due to delays in production,
3.2
The PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation) method, conceived by Brans, consists of two phases: the construction of an outranking relation and then the exploitation of the outranking relations (Almeida et al., 2003).
These French school methods, which include the
PROMETHEE family, like the American school, are
used in multi-criteria problems of the type:
max {f1(x), f2(x), . . . , fk(x) | x ∈ A},   (1)

π(a, b) = (1/W) Σ_{j=1}^{n} wj Pj(a, b),   (2)

where

W = Σ_{j=1}^{n} wj.   (3)

Φ+(a) = (1/(n − 1)) Σ_{b∈A} π(a, b),   (4)

Φ−(a) = (1/(n − 1)) Σ_{b∈A} π(b, a).   (5)

For each action a,

x_a = φ̄(a) − α σ_a,   (6)

y_a = φ̄(a) + α σ_a,   (7)

where n is the number of actions,

φ̄(a) = (1/n) Σ_{b∈A} (π(a, b) − π(b, a)),   (8)

σ_a² = (1/n) Σ_{b∈A} [π(a, b) − π(b, a) − φ̄(a)]²,   α > 0.   (9)
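Equations (2)-(5) can be exercised with a small sketch. The preference function and the toy data below are assumptions for illustration, not the case study's inputs:

```python
def promethee_flows(scores, weights, maximize, pref):
    """Net outranking flows: pi(a,b) from eqs. (2)-(3), Phi+ and Phi- from (4)-(5)."""
    n = len(scores)
    w_total = sum(weights)

    def pi(a, b):
        total = 0.0
        for j, w in enumerate(weights):
            d = scores[a][j] - scores[b][j]
            if not maximize[j]:
                d = -d                      # minimization criterion
            total += w * pref(d)
        return total / w_total

    flows = []
    for a in range(n):
        plus = sum(pi(a, b) for b in range(n) if b != a) / (n - 1)
        minus = sum(pi(b, a) for b in range(n) if b != a) / (n - 1)
        flows.append(plus - minus)          # net flow Phi = Phi+ - Phi-
    return flows

def vshape(d, q=0.02, p=0.05):
    """Type-V (linear) preference with indifference q and preference p."""
    if d <= q:
        return 0.0
    if d >= p:
        return 1.0
    return (d - q) / (p - q)

net = promethee_flows([[0.9], [0.1]], [1.0], [True], vshape)
```

With two alternatives and one criterion the dominating alternative gets net flow +1 and the dominated one −1, which is the ordering principle behind Table "Ranking of alternatives".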
For the problem presented, the use of a non-compensatory method was indicated as the most suitable solution. Therefore this method favors the alternatives with the best average performance. The PROMETHEE method was chosen because of its
3.3.3 Criteria
A Weibull distribution was assumed for equipment which deteriorates with time. Besides being widely used, it is also flexible and has proven to be a good fit in many cases.
The Weibull distribution has two parameters in this model: β, the shape parameter, and η, the scale parameter. The probability density function (10) and the reliability function (11) are:

f(t) = (β/η) (t/η)^(β−1) e^(−(t/η)^β)   (10)

R(t) = e^(−(t/η)^β)   (11)

(12)
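A quick consistency check of (10)-(11): the density is the negative derivative of the reliability. The sketch uses β = 2.8 as in Table 1; since η is treated as unknown in the model, a placeholder value is assumed here:

```python
import math

def weibull_pdf(t, beta, eta):
    """Equation (10): Weibull probability density."""
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))

def weibull_rel(t, beta, eta):
    """Equation (11): Weibull reliability."""
    return math.exp(-((t / eta) ** beta))

beta, eta = 2.8, 5.0          # eta is a placeholder, not from the paper
t, h = 2.0, 1e-6
numeric_pdf = (weibull_rel(t - h, beta, eta) - weibull_rel(t + h, beta, eta)) / (2 * h)
```

The check f(t) ≈ −dR/dt holds at machine precision, confirming that (10) and (11) describe the same distribution.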
NUMERICAL APPLICATION
The criteria have already been described and the uncertainties have been discussed. The simulation presented in this section was based on data taken from the context of electrical energy devices and was found to be highly suited to preventive replacement.
For the replacement policy it was necessary to determine some parameters, as shown in Table 1. After the determination of the parameters or equipment characteristics, the alternatives for the simulation are defined (see Table 2). The alternatives are not chosen randomly, but according to opportune times that allow preventive maintenance.
Table 1. Input data.

Distribution   Parameters   Value
Weibull        β            2.8
Weibull        η            Unknown
-              ca           10
-              cb           2

Table 4. Criteria values for the alternatives.

Alternatives   R(t)    Cm(t)
A1             0.986   2.117
A2             0.941   1.244
A3             0.877   1.025
A4             0.8     0.958
A5             0.717   0.945
A6             0.631   0.954
A7             0.548   0.971
A8             0.469   0.992
A9             0.398   1.013
A10            0.334   1.033
Table 2. Set of actions.

Alternatives   Time (years)
A1             1
A2             2
A3             3
A4             4
A5             5
A6             6
A7             7
A8             8
A9             9
A10            10

Table 3. Criteria parameters.

                      R      Cm
Max/min               Max    Min
Weights               0.55   0.45
Preference function   V      V
Indifference limit    0.02   0.08
Preference limit      0.05   0.5
Table 5. Ranking of alternatives.

Alternatives   Time (years)   Φ+     Φ−     Φ
A1             1              0.54   0.45   0.09
A2             2              0.54   0.22   0.31
A3             3              0.49   0.12   0.37
A4             4              0.44   0.18   0.26
A5             5              0.38   0.24   0.14
A6             6              0.32   0.31   0.01
A7             7              0.26   0.37   −0.11
A8             8              0.19   0.43   −0.24
A9             9              0.13   0.49   −0.36
A10            10             0.07   0.55   −0.48
Figure 1. Net flow of the alternatives A1 to A10.
ACKNOWLEDGEMENTS
This work has received financial support from CNPq
(the Brazilian Research Council).
CONCLUSIONS
REFERENCES
Almeida, A.T. de & Costa, A.P.C.S. 2003. Aplicações com Métodos Multicritério de Apoio a Decisão. Recife: Ed. Universitária.
Apeland, S. & Scarf, P.A. 2003. A fully subjective approach to modelling inspection maintenance. European Journal of Operational Research: p. 410-425.
Bevilacqua, M. & Braglia, M. 2000. The analytical hierarchy process applied to maintenance strategy selection. Reliability Engineering & System Safety: p. 71-83.
Brans, J.P. & Vincke, P. 1985. A Preference Ranking Organization Method: (The PROMETHEE Method for Multiple Criteria Decision-Making). Management Science Vol. 31, No. 6: p. 647-656.
Brans, J.P. & Mareschal, B. 2002. Promethee-Gaia, une Méthodologie d'Aide à la Décision en Présence de Critères Multiples. Bruxelles: Éditions Ellipses.
ABSTRACT: Most companies now have their maintenance plans and methods of conducting them in place. The maintenance plan is in agreement with the producer's recommendations or is changed by the maintenance technician on the basis of his experience. The producer's recommendations for the warranty period needn't be the best, of course; they are best for the producer and its warranty. These maintenance strategies could be optimal for commonly available equipment, but big rotary machines are often unique. Expensive maintenance done by the producer could be more profitable in the long term because of the producer's better experience. This paper is about the total cost assessment of a big rotary machine (e.g. a hydrogen compressor used in the refinery industry) in relation to the selected maintenance. Reliability data about MTBF and MTTR, an economic evaluation of conducting maintenance and, last but not least, production lost from scheduled shutdowns or equipment failures are all considered, as all of these affect the cost. Companies that have one or more big rotary machines should do a study of operational costs and a profitability assessment of outsourcing maintenance from the producer. An economic model can also uncover other problems, for example a wrong list of spare parts. The model proposed in this paper could help companies to do such studies for their big rotary machines.
1 INTRODUCTION
Effective control of production risks consists of understanding the risk character in operating practice and in
finding suitable tools for evaluating and optimizing
the risk. One of the most important risks is associated
with failures of expensive and unique equipment.
The producer's experience and construction knowledge can decrease the failure risk by changing the mean time between failures and the mean time to repair, or through a shorter delivery period of spares. Better maintenance can also affect the useful lifetime.
The paper is about the profitability evaluation of a maintenance contract with the producer and finding the profitability boundary values of the new MTBF, MTTR and delivery period. The warranty, warranty period and additional warranty cost are not included, because the paper is related to important equipment whose costs and losses come mainly from production losses. In case of failure during the warranty period the producer pays for the spares and the repair, but production losses are on the user's side.
This is a practical application of known procedures, and the model can be easily used for other suitable equipment.
2 NOTATIONS
The equipment researched for this study, which measures the profitability of outsourcing maintenance, is a compressor. Other suitable equipment types would be big pumps, turbines, etc. For this application it is necessary to highlight the following presumptions.
Equipment is unique:
The user does not have the same or similar equipment which can perform the same function.
The delivery period of spare parts for repairs and scheduled maintenance is in weeks or months.
Equipment or spare parts are expensive.
Equipment performs an important function:
Equipment downtime causes stopping or reduction of production. It generates large production losses.
REQUIRED INPUTS
The compressor in a refinery was chosen as an example of this study. Its function is compressed hydrogen
production. A great deal of company production
depends on this equipment because the compressed
hydrogen is necessary to the technological process.
The user and the producer will not specify the input
data because of private data protection, therefore this
has been changed, but it has not impacted on the
method.
Equipment basic information:
input: 3MW
purchase cost (including installation and liquidation): 3 million EUR
condition monitoring is necessary (vibrodiagnostic,
tribology)
recommended period of inspections and overhaul:
4 years
recommended period of general overhaul: 12 years
supposed useful lifetime: 30 years
Remark: The recommended periods of inspections
and overhaul proceed from scheduled outage of the
refinery.
5.1

Table 1. Purchase and spare parts cost.

Item                        Cost [EUR]   Period [year]   [EUR/year]
Purchase cost               3 000 000    30              100 000
Spare parts: bearings       50 000       4               12 500
Spare parts: rotary         30 000       12              2 500
Spare parts: seals system   30 000       12              2 500
Spare parts: seals          6 000        4               1 500
Spare parts total           116 000      -               19 000

Table 2. Current cost.

Item                                     [EUR/year]
Storage and maintenance of spare parts   1 500
Labour cost                              10 000
Monitoring                               3 600 (3 000 + 600)
Operating cost (energy)                  1 000 000
Production losses.

Situation          PL [EUR/hour]
Scheduled outage   0
Failure            15 000
Table 3. MTBF, MTTR.

Reliability quantities   MTBF [year]   MTTR [day]
Parts in storage         15            10
Parts not in storage     50            10 + 300
Variety 2
The maintenance contract includes these points:
Producer services
online vibrodiagnostic
tribology
technology parameters monitoring
annual reports
Training course
Condition monitoring
Delivery, storage and maintenance of spare parts
Technical support
The annual contract price is 60 000 EUR. The estimated benefits for the user are an increase of MTBF and useful lifetime to 105% and a decrease of repairing times to 95%. Because of the contract, ordering and administration will cause no delay, so the delivery period could be about 8 months.
Variety 3
The maintenance contract includes these points:
Training course
Condition monitoring
Delivery, storage and maintenance of spare parts
Technical support
Inspection and overhaul conducting (every 4 years)
General overhaul conducting (every 12 years)
The annual contract price is 75 000 EUR. The estimated benefits for the user are an increase of MTBF and useful lifetime to 110% and a decrease of repairing times to 90%. Because of the contract, ordering and administration will cause no delay, so the delivery period could be about 8 months.
The benefits of the varieties are summarized in Table 4.
5.5

The varieties differ in annual purchase cost (including installation and liquidation), monitoring cost and production lost. The purchase cost is the same, but the annual purchase cost depends on the useful lifetime (equation 1):

Annual_purchase_cost = Purchase_cost / Useful_lifetime   [EUR/year]   (1)

The offered services for outsourcing can be summarized into two packages (variety 2 and variety 3). The current method of maintenance is described in variety 1.

Variety 1
The first possibility is to continue with the current maintenance strategy without using the standard
Table 4. Benefits of the varieties.

Variety   Increase [%]   Lifetime [year]   MTBF, SP in storage [year]   MTBF, SP not in storage [year]   Time to repair [day]   Delivery time [month]
1         0              30                15                           50                               10                     10
2         5              31.5              15.75                        52.5                             9.5                    8
3         10             33                16.5                         55                               9                      8
Production_lost = ( MTTR_s / MTBF_s + MTTR_o / MTBF_o ) · 24 · PL   [EUR/year]   (2)

5.6

A contract is profitable when

TC1 − TCx ≥ CONx.   (4)

The boundary delivery period follows from ((DP − C_DP)/MTBF_o) · 24 · PL = CONx:

C_DP = DP − (CONx · MTBF_o) / (24 · PL)   [day]   (5)
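Equations (1)-(2) and the inputs above are enough to re-derive the annual totals and savings reported in Table 5. A sketch, with 8 months taken as 240 days (an assumption of this example):

```python
PL = 15000.0                               # production loss during failure [EUR/h]
FIXED = 19000 + 1500 + 10000 + 1000000    # spares, storage, labour, operating

def production_lost(mttr_s, mtbf_s, mttr_o, mtbf_o):
    """Equation (2): expected production loss [EUR/year]; MTTR in days, MTBF in years."""
    return (mttr_s / mtbf_s + mttr_o / mtbf_o) * 24 * PL

# Variety 1: current strategy (monitoring 3600 EUR/y, delivery period 300 days).
tc1 = 3000000 / 30 + FIXED + 3600 + production_lost(10, 15, 10 + 300, 50)
# Variety 2: contract benefits (105% MTBF and lifetime, 95% repair time, 240-day delivery).
tc2 = 3000000 / 31.5 + FIXED + production_lost(9.5, 15.75, 9.5 + 240, 52.5)
savings2 = tc1 - tc2 - 60000               # equation (4) margin for variety 2
```

The computed totals match Table 5 (variety 1: 3 606 100 EUR/year; variety 2 savings: 492 362 EUR/year), confirming that production losses dominate the comparison.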
The boundary repair-time coefficient C_RT (delivery period excluded) solves

CONx = (1 − C_RT) · ( MTTR_s / MTBF_s + (MTTR_o − DP) / MTBF_o ) · 24 · PL,   (6)

the boundary MTBF coefficient C_BF solves

CONx = (1 − 1 / C_BF) · ( MTTR_s / MTBF_s + MTTR_o / MTBF_o ) · 24 · PL,   (7)

and the boundary lifetime coefficient C_LT solves

CONx = PUR / LT − PUR / (LT · C_LT) = (PUR / LT) · (1 − 1 / C_LT).   (8)
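The profitability boundaries of Table 6 follow from (5)-(8); a sketch for variety 2 (contract 60 000 EUR/year, delivery period 300 days):

```python
PL24 = 24 * 15000.0               # production loss per day of outage [EUR/day]

con, dp = 60000.0, 300.0          # variety 2: contract price and delivery period
mtbf_s, mtbf_o = 15.0, 50.0       # MTBF [year], parts in / not in storage
mttr_s, mttr_rep = 10.0, 10.0     # repair days (delivery period excluded)

c_dp = dp - con * mtbf_o / PL24                                    # eq. (5)
c_rt = 1 - con / ((mttr_s / mtbf_s + mttr_rep / mtbf_o) * PL24)    # eq. (6)
base_loss = (mttr_s / mtbf_s + (mttr_rep + dp) / mtbf_o) * PL24
c_bf = 1 / (1 - con / base_loss)                                   # eq. (7)
c_lt = 1 / (1 - con / (3000000 / 30))                              # eq. (8)
```

The results (C_DP ≈ 292 days, C_RT ≈ 81%, C_BF ≈ 102.5%, C_LT = 250%) reproduce the variety 2 column of Table 6.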
Table 5. Annual costs of the varieties [EUR/year].

Item                                        Variety 1   Variety 2   Variety 3
Purchase cost, installation, liquidation    100 000     95 238      90 909
Spare parts cost                            19 000      19 000      19 000
Storage and maintenance of spare parts      1 500       1 500       1 500
Labour cost                                 10 000      10 000      10 000
Monitoring                                  3 600       0           0
Operating cost                              1 000 000   1 000 000   1 000 000
Production lost: tasks during stop period   0           0           0
Production lost: failure, SP in storage     240 000     217 143     196 364
Production lost: failure, SP not in storage 2 232 000   1 710 857   1 629 818
Total                                       3 606 100   3 053 738   2 947 591
Contract                                    0           60 000      75 000
Savings                                     0           492 362     583 509

Table 6. Profitability boundary.

Quantity       Variety 2   Variety 3
C_DP [days]    292         290
C_RT [%]       81          76
C_BF [%]       102.5       103.1
C_LT [%]       250         400
CONCLUSION
A. Artiba
Institut Suprieur de Mcanique de Paris (Supmeca), France
ABSTRACT: This paper addresses the selective maintenance optimization problem for a multi-mission series-parallel system. Such a system performs several missions with breaks between successive missions. The reliability of the system is given by the conditional probability that the system survives the next mission given that it has survived the previous mission. The reliability of each system component is characterized by its hazard function. To maintain the reliability of the system, preventive maintenance actions are performed during breaks. Each preventive maintenance action is characterized by its age reduction coefficient. The selective maintenance problem consists in finding an optimal sequence of maintenance actions, to be performed within breaks, so as to minimize the total maintenance cost while providing a given required system reliability level for each mission. To solve such a combinatorial optimization problem, an optimization method is proposed on the basis of the simulated annealing algorithm. In the literature, this method has been shown to be suitable for solving such problems. An application example with numerical results is given for illustration.
INTRODUCTION
For repairable systems, several mathematical models have been developed in the literature for the optimal design of maintenance policies (for a survey, see for example (Cho and Parlar 1991; Dekker 1996)). Nevertheless, most of these models do not take into account the limitations on the resources required to perform maintenance actions. This drawback has motivated the development of a relatively new concept called selective maintenance. The objective of selective maintenance consists in finding, among all available maintenance actions, the set of appropriate actions to be performed, under some operating constraints, so as to maximize the system reliability, or to minimize the total maintenance cost or the total maintenance time. Selective maintenance, as a maintenance policy, is relevant to systems that are required to operate a sequence of missions such that, at the end of a mission, the system is put down for a finite length of time, providing an opportunity for equipment maintenance. Such systems may include, for example, manufacturing systems, computer systems and transportation systems.
MULTI-MISSION SERIES-PARALLEL SYSTEM DESCRIPTION
(1)

R_ij(m) = R(B_ij(m)) / R(A_ij(m)),   (2)

R_ij(m) = exp( − ∫_{A_ij(m)}^{B_ij(m)} h_ij(t) dt ) = exp( H_ij(A_ij(m)) − H_ij(B_ij(m)) ),   (3)

where H_ij(t) = ∫_0^t h_ij(x) dx is the cumulated hazard function of component C_ij.
From the above equation, it follows that the reliability of subsystem S_i and that of the system S, respectively denoted by R_i(m) and R(m), are given as:

R_i(m) = 1 − Π_{j=1}^{N_i} (1 − R_ij(m)),   (4)

R(m) = Π_{i=1}^{n} R_i(m),   (m = 1, . . . , M − 1).   (5)

χ_{ap}(C_ij, m) = 1 if C_ij undergoes PM action a_p at the end of mission m, and 0 otherwise.
In this paper two types of maintenance are considered, namely corrective maintenance (CM) and preventive maintenance (PM). CM, by means of minimal repair, is carried out upon component failures during a given mission, while PM is a planned activity conducted at the end of missions (i.e. within breaks) to improve the overall system mission reliability. It is assumed that component failures are operation-dependent and that the time during which a given component undergoes minimal repair is negligible compared to the mission duration.
Each component Cij of the system is characterized by its hazard function hij(t) and its minimal repair cost cmrij. The preventive maintenance model is built on the age reduction concept initially introduced by Nakagawa (Nakagawa 1988). According to this concept, the age of a given component is reduced whenever a PM action is performed on it. In this paper, the vector VPM = [a1, ..., ap, ..., aP] represents the P PM actions available for a given multi-mission system. To each PM action ap (p = 1, ..., P) are assigned its cost cpm(ap), the duration dpm(ap) of its implementation, its age reduction coefficient α(ap) ∈ [0, 1], and the set Comp(ap) of components that may undergo PM action ap. Regarding the values taken by a given age reduction coefficient α(ap), two particular cases may be distinguished. The first case corresponds to α(ap) = 1, which means that the PM action ap has no effect on the component age (the component status becomes "as bad as old"), while the second case, α(ap) = 0, corresponds to the component age being reset to zero (i.e. the component status becomes "as good as new").
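To make the age-reduction mechanics concrete, here is a small sketch assuming a Weibull hazard hij(t) = (β/η)(t/η)^(β-1); the paper only requires a general hij, and the parameter values below are hypothetical:

```python
import math

def cumulative_hazard(t, beta, eta):
    # Weibull cumulative hazard H(t) = (t/eta)**beta
    return (t / eta) ** beta

def mission_reliability(a, b, beta, eta):
    # R = exp(-(H(B) - H(A))): survival over a mission entered at age a, left at age b
    return math.exp(-(cumulative_hazard(b, beta, eta) - cumulative_hazard(a, beta, eta)))

def age_after_pm(b, alpha):
    # age reduction: alpha = 1 leaves the component "as bad as old",
    # alpha = 0 resets it "as good as new"
    return alpha * b

# a component ages from 0 to 10 during a mission, then receives a PM with alpha = 0.4
r = mission_reliability(0.0, 10.0, beta=2.0, eta=20.0)
next_age = age_after_pm(10.0, alpha=0.4)  # the next mission starts at age 4.0
```

The next mission's reliability is then evaluated with the reduced age as its starting point.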
The selective maintenance model attempts to specify which PM action should be performed, on which component, and at the end of which mission; this is encoded by the binary variables ap(Cij, m) introduced above.
Here, Aij(m) and Bij(m) represent the ages of component Cij, respectively, at the beginning and at the end of a given mission m (m = 1, ..., M), with Aij(1) = 0 by definition. If component Cij undergoes preventive maintenance action ap (p = 1, ..., P) at the end of mission m, then the age Bij(m) reached at the end of the mission is reduced by the age reduction coefficient α(ap). In this case, the minimal repair cost CMRij(m) assigned to Cij during mission m becomes:

CMRij(m) = cmrij ∫ from Aij(m) to Bij(m) of hij(x) dx.   (9)
Summing over the planning horizon, the total minimal repair cost assigned to component Cij is:

CMRij = cmrij [ ΔHij(M) + Σ[p=1..P] Σ[m=1..M-1] ΔHij(m, p) ],   (12)

where ΔHij(M) = Hij(Bij(M)) - Hij(Aij(M)) and ΔHij(m, p) = Hij(g(α(ap)) Bij(m)) - Hij(Aij(m)).
From Equation (12), it follows that the total cost CMR of minimal repair, induced by all components during missions, is given by:

CMR = Σ[i=1..n] Σ[j=1..Ni] CMRij.   (13)

Similarly, the total PM cost assigned to component Cij and the total duration of PM actions performed at the end of mission m are given by:

CPMij = Σ[p=1..P] Σ[m=1..M-1] cpm(ap) ap(Cij, m),   (14)

DPM(m) = Σ[i=1..n] Σ[j=1..Ni] Σ[p=1..P] dpm(ap) ap(Cij, m).   (17)
The total PM cost over all components is:

CPM = Σ[i=1..n] Σ[j=1..Ni] CPMij.   (15)

Finally, the total maintenance cost Ctotal to be minimized is given from Equations (13) and (15) such that:

Ctotal = CMR + CPM.   (16)

The selective maintenance optimization problem is then:

Minimize Ctotal   (18)

Subject to:

R(m + 1) ≥ R0,   (19)

DPM(m) ≤ D0(m),   (20)

Σ[p=s1..sK] ap(Cij, m) ≤ 1,   (21)

ap(Cij, m) ∈ {0, 1},   (22)

i = 1, ..., n; j = 1, ..., Ni; m = 1, ..., M - 1; p = s1, ..., sK, and K ≤ P,   (23)

where constraint (20) states that PM actions undertaken at the end of a given mission should be completed within the allotted break duration D0(m), constraint (21) imposes that each component may receive at most one PM action at the end of each mission, while constraint (22) is a {0, 1}-integrality constraint.
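The abstract states that this problem is solved with a simulated annealing algorithm. The paper's exact implementation is not reproduced here; the following is a generic sketch in which `cost`, `feasible` and `neighbor` are placeholders for the objective (16), the constraints (19)-(22) and a move that flips one decision variable ap(Cij, m):

```python
import math
import random

def flip_one(x):
    # neighbourhood move: toggle one binary decision variable
    y = list(x)
    i = random.randrange(len(y))
    y[i] = 1 - y[i]
    return y

def simulated_annealing(x0, cost, feasible, neighbor,
                        t0=100.0, cooling=0.95, iters_per_temp=50, t_min=1e-3):
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            y = neighbor(x)
            if not feasible(y):
                continue  # reject candidate plans violating the constraints
            fy = cost(y)
            # always accept improvements; accept worse moves with Boltzmann probability
            if fy < fx or random.random() < math.exp((fx - fy) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# toy usage: select exactly two of four maintenance actions (hypothetical cost)
random.seed(1)
plan, value = simulated_annealing([0, 0, 0, 0],
                                  cost=lambda x: (sum(x) - 2) ** 2,
                                  feasible=lambda x: True,
                                  neighbor=flip_one)
```

The geometric cooling schedule and the bit-flip neighbourhood are common defaults; any cooling law and move set respecting the constraint structure would do.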
OPTIMIZATION METHOD

Figure 1. Flowchart of the proposed simulated annealing algorithm.
APPLICATION EXAMPLE
The test problem used in this paper is based on a series-parallel system composed of n = 4 subsystems Si (i = 1, ..., 4), as shown in Figure 2. Subsystems S1,
Figure 2. Series-parallel system of the application example.
Table 1. Component data: hazard function parameters and minimal repair costs.

Component Cij    λij      βij     cmrij
C11              0.006    1.36    1
C12              0.01     2.19    2.3
C13              0.006    1.66    1.5
C14              0.002    1.93    1
C15              0.008    1.41    1.9
C21              0.002    1.58    2.6
C22              0.01     2.57    1.6
C23              0.011    2.84    1.3
C31              0.002    2.01    1.2
C32              0.007    1.34    2.4
C33              0.009    1.13    1.9
C34              0.006    1.7     0.7
C41              0.002    1.27    0.7
C42              0.002    2.09    0.4
Table 2. PM actions: component, age reduction coefficient, cost and duration.

PM action p   Comp(p)   α(p)    cpm(p)   dpm(p)
1             C11       0.51    10.01    0.66
2             C11       0.41    14.33    1.02
3             C12       0.00    15.08    1.50
4             C12       0.48    3.3      0.22
5             C13       0.27    6.28     0.49
6             C13       1.00    5.93     0.30
7             C14       1.00    7.38     0.37
8             C14       0.12    18.19    1.62
9             C15       0.15    16.4     1.43
10            C15       0.00    20.83    2.08
11            C21       0.00    19.6     1.96
12            C21       1.00    7.73     0.39
13            C22       0.37    6.92     0.51
14            C22       0.18    16.67    1.41
15            C22       0.00    16.99    1.70
16            C23       0.42    10.54    0.74
17            C23       0.38    13.22    0.96
18            C23       0.68    5.04     0.30
19            C31       0.48    11.83    0.80
20            C31       0.34    13.96    1.04
21            C32       0.42    12.25    0.86
22            C32       0.71    4.49     0.26
23            C32       0.34    13.33    0.99
24            C33       0.13    18.55    1.64
25            C33       1.00    9.70     0.49
26            C33       0.43    1.68     0.12
27            C34       0.63    5.45     0.33
28            C34       1.00    1.44     0.07
29            C34       0.75    8.71     0.50
30            C41       0.78    3.68     0.21
31            C41       0.52    6.54     0.43
32            C41       0.00    17.66    1.77
33            C42       1.00    2.53     0.13
34            C42       0.58    9.51     0.6
The planning horizon consists of M = 10 missions; the mission durations and the durations of the intervening breaks are:

Mission m:          1    2    3    4    5    6    7    8    9    10
Mission duration:   84   57   59   54   78   68   70   76   84   49
Break duration:     9    9    9    11   7    9    9    7    11   -
The total maintenance cost Ctotal induced by the obtained selective maintenance plan and the minimal repairs is Ctotal = 400.64, while the execution time is about 76 seconds.
Table 3. Selective maintenance plan obtained.

Mission m   PM actions p         Components Cij ∈ Comp(p)
2           11                   C21
4           15, 30, 26, 20, 8    C22, C41, C33, C31, C14
5           11, 34               C21, C42
6           15, 26, 34, 30, 20   C22, C33, C42, C41, C31
7           20, 30, 15, 8        C31, C41, C22, C14
8           11, 20, 8, 26, 15    C21, C31, C14, C33, C22
Figure 3. Evolution of the system reliability over the planning horizon.
CONCLUSION
ABSTRACT: This paper considers a k-out-of-N system with identical, repairable components under a so-called (m, NG) maintenance policy. Under this policy, maintenance is initiated when the number of failed components exceeds some critical level identified by m. After a possible set-up time for spares replacement, at least NG components should be good in the k-out-of-N system when it is sent back to the user. A multi-server repair shop repairs the failed components. The operational availability of this kind of system depends not only on the spare part stock level and the repair capacity, but also on the two parameters m and NG of the maintenance policy. This paper presents a mathematical model of operational availability for a repairable k-out-of-N system with limited spares under the (m, NG) maintenance policy. Using this model, we can make trade-offs between the spare part stock level, the number of repairmen and the two parameters of the maintenance policy. From the analysis of an example, we draw some valuable conclusions.
INTRODUCTION
DESCRIPTION OF PROBLEM
RESOLVING PROCESS
The operational availability is:

Ao = E(To) / (E(To) + E(Td) + E(Tr) + E(Ts)),   (1)

where To is the system operating time, Td the time the system is down and in transit to the depot, Tr the lingering time spent waiting for demanded spare parts, and Ts the time in transit back (see Figure 1).
server. Guests leave when they finish their services, so Pr(a, b, c, t) equals the probability that the number of guests that are being served or are waiting for service is b after time t. It can be computed by conditioning on the instant of the first service completion. While more than c guests remain, completions occur at rate cμ, and:

Pr(a, b, c, t) = cμ e^(-cμt) ∫[0..t] e^(cμτ) Pr(a-1, b, c, τ) dτ   (2)

= (cμ)^2 e^(-cμt) ∫[0..t] (t - τ) e^(cμτ) Pr(a-2, b, c, τ) dτ   (3)

= ... = ((cμ)^(a-c) e^(-cμt) / (a-c-1)!) ∫[0..t] (t - τ)^(a-c-1) e^(cμτ) Pr(c, b, c, τ) dτ.   (4)

For a > b ≥ c, the iteration can be carried down to b:

Pr(a, b, c, t) = ((cμ)^(a-b) e^(-cμt) / (a-b-1)!) ∫[0..t] (t - τ)^(a-b-1) e^(cμτ) Pr(b, b, c, τ) dτ,   (5)

where Pr(b, b, c, τ) = e^(-cμτ). Carrying out the last integration gives:

Pr(a, b, c, t) = ((cμt)^(a-b) / (a-b)!) e^(-cμt).   (6)

Synthesizing all the above conditions, we get the following computing equation for Pr(a, b, c, t):
Pr(a, b, c, t) =

  e^(-min{a,c}μt),   a = b ≥ 0,

  Σ[i=0..c-b-1] (-1)^i C(c-b-1, i) C(c-1, b) (c/(c-b-i))^(a-c+1) [ e^(-(b+i)μt) - e^(-cμt) Σ[j=0..a-c] ((c-b-i)μt)^j / j! ],   a > c > b ≥ 0,

  ((cμt)^(a-b) / (a-b)!) e^(-cμt),   a > b ≥ c ≥ 0.   (7)

Starting an operation period with n good components, each failing at rate λ, the expected operating time until only NM good components remain is:

To(n) = Σ[i=0..n-NM-1] 1 / ((n - i)λ),   (8)

and hence:

E(To) = Σ[n=NG..N] Pa(n) Σ[i=0..n-NM-1] 1 / ((n - i)λ),   (9)

where Pa(n) is the probability that the system starts an operation period with n good components. The duration of the first phase (operation followed by transit to the depot) is then:

t1 = To(ns) + Td = Td + Σ[i=0..ns-NM-1] 1 / ((ns - i)λ).   (10)
The transition probability from state (ns, ss) to state (ne, se) is:

p(ns,ss),(ne,se) = Σ[sm=ss..X+N-ns] p1(ns, ss, NM, sm) p2(NM, sm, ne, se),   (11)

where p1(ns, ss, NM, sm) is the transition probability when the state of the system changes from (ns, ss) to (NM, sm) in the first phase, and p2(NM, sm, ne, se) is the transition probability when the state of the system changes from (NM, sm) to (ne, se) in the second phase.

p1(ns, ss, NM, sm) equals the probability that the number of components repaired within t1 is sm - ss, so:

p1(ns, ss, NM, sm) = Pr(L1, L2, c, t1),   (12)

where L1 = N + X - ns - ss and L2 = N + X - ns - sm.

According to the (m, NG) maintenance policy, the second phase goes through the replacement of failed components and, possibly, waiting for spare parts; it can be discussed according to the following conditions:

p2(NM, sm, ne, se) =

  Pr(N + X - NM - sm, X - se, c, ts),   ne = N and sm ≥ N - NM,   (13)

  Pr(N + X - ne, N + X - ne - se, c, ts),   NG < ne < N and sm = ne - NM,   (14)

  Pr(Z, Z - se, c, ts),   ne = NG and sm ≤ NG - NM,   (15)

with Z = N + X - NG. Combinations of ne and sm that do not satisfy the above conditions are impossible, so in those cases p2(NM, sm, ne, se) = 0. Synthesizing Equations (13), (14) and (15) gives the complete expression of p2(NM, sm, ne, se).
The steady-state probabilities Pb(n, s) at the beginning of an operation period satisfy the balance equations:

Pb(n, s) = Σ[i=NG..N] Σ[j=0..X+N-i] Pb(i, j) p(i,j),(n,s).

Order them in the vector:

Π = [Pb(NG, 0) ... Pb(NG, Z) ... Pb(NG+i, 0) ... Pb(NG+i, Z-i) ... Pb(N, 0) ... Pb(N, X)]^T.
The matrix Q collects the one-step transition probabilities between all the states, ordered as in Π:

Q =
[ p(NG,0),(NG,0)     ...   p(NG,0),(N,X)    ]
[       ...                     ...         ]
[ p(NG,Z),(NG,0)     ...   p(NG,Z),(N,X)    ]
[       ...                     ...         ]
[ p(N,X),(NG,0)      ...   p(N,X),(N,X)     ]   (16)
With Π and Q so defined, we can get:

Π = QΠ.   (17)

The probability that an operation period starts with n good components is:

Pa(n) = Σ[s=0..Z] Pb(n, s).   (18)

Finally, the expected waiting time for spares is:

E(Tr) = Σ[sm=0..NG-NM-1] E[Tc(NG - NM - sm, N + X - NM - sm, c)] Ps(sm).   (19)
process of replacement and repair of system components, and Tr can be treated as 0. When sm does not satisfy the least demand (NG - NM) of repairs, the system must wait for the missing components to be repaired before the replacement of system components can be finished. Tr is thus the lingering time produced by waiting for the demanded spare parts, and it relates only to sm. Assume that Tc(s′, s″, c) is the lingering time produced by waiting for the demanded spare parts when, at the start of the component replacement period, the number of demanded replacement spares is s′, the number of components waiting for repair is s″ and the number of repair teams is c. We can then derive a computing formula for E(Tr).
When s″ ≤ c, the number of components waiting for repair does not exceed the number of repair teams, and the expected interval until the next component is repaired is 1/(s″μ). When s″ > c, the number of waiting components is larger than the number of repair teams, and the expected interval until the next repaired component is 1/(cμ). E[Tc(s′, s″, c)] therefore equals the sum of this interval and the expected remaining waiting time:

E[Tc(s′, s″, c)] = 1/(min{s″, c}μ) + E[Tc(s′ - 1, s″ - 1, c)].   (20)
Table 1. Operational availability for m = 3.

Initial number of     Parameter of policy
spare parts           (3,5)    (3,6)    (3,7)
X = 1                 0.8797   0.8797   0.7726
X = 2                 0.8974   0.8974   0.8359
X = 3                 0.9106   0.9106   0.8907
X = 4                 0.9106   0.9106   0.9106
Solving the recursion yields the closed form:

E[Tc(s′, s″, c)] = Σ[h=0..s′-1] 1/(min{s″ - h, c}μ) for s′ > 0, and E[Tc(0, s″, c)] = 0.   (21)
For Ps(sm): because the number of available components must be NM at the start of the component replacement period, Ps(sm) equals the steady-state probability that the system is in state (NM, sm) at that moment. Since 0 ≤ ss ≤ sm, we get:

Ps(sm) = Σ[ns=NG..N] Σ[ss=0..sm] Pa(ns, ss) p1(ns, ss, NM, sm).   (22)

Finally, by Equations (19), (21) and (22) we can compute E(Tr).
ANALYSIS OF AN EXAMPLE
Assume that there is a 2-out-of-7 hot standby redundancy system. The time to failure of the components has the exponential distribution with rate λ = 0.005. There is only one repairman (c = 1) to repair the failed components that are replaced, and the repair time also has the exponential distribution with rate μ = 0.1. The time of system down and in transit to the depot is Td = 5, and the time of system in transit back is Ts = 5. Setting the initial number X of spare parts from 1 to 5, with the parameters of the (m, NG) maintenance policy corresponding to (3,6), (4,6), (3,7) and (4,7), we want to know the resulting operational availability. Following the above method, Table 1 and Figure 2 show the data table and curves of operational availability when we set m = 3 and change NG in the (m, NG) maintenance policy, and Table 2 and Figure 3 show the corresponding results when we set m = 4.
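The building blocks of the model can be sketched for parameters of this kind; the fleet size N = 7 and the trigger level NM = 4 below are illustrative placeholders, and E(Tr) is set to 0 purely for the sake of the example:

```python
def mean_operating_time(n, n_m, lam):
    # Equation (8): expected time from n good components down to N_M,
    # with each surviving component failing at rate lam
    return sum(1.0 / ((n - i) * lam) for i in range(n - n_m))

def operational_availability(e_to, e_td, e_tr, e_ts):
    # Equation (1)
    return e_to / (e_to + e_td + e_tr + e_ts)

lam = 0.005
e_to = mean_operating_time(7, 4, lam)               # 1/(7*lam) + 1/(6*lam) + 1/(5*lam)
ao = operational_availability(e_to, 5.0, 0.0, 5.0)  # Td = Ts = 5, E(Tr) = 0
```

With these numbers the sketch gives Ao ≈ 0.91, the same order as the tabulated availabilities.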
Figure 2. Operational availability versus the policy parameter NG (m = 3) for initial numbers of spare parts X = 1, ..., 4.

Table 2. Operational availability for m = 4.

Initial number of     Parameter of policy
spare parts           (4,5)    (4,6)    (4,7)
X = 1                 0.9142   0.8605   0.7916
X = 2                 0.9249   0.9250   0.8351
X = 3                 0.9323   0.9322   0.8837
X = 4                 0.9382   0.9382   0.9268

Figure 3. Operational availability versus the policy parameter NG (m = 4) for X = 1, ..., 4.

Table 3. Operational availability for NG = 6 and varying m.

Initial number of     Parameter of policy
spare parts           (2,6)    (3,6)    (4,6)    (5,6)
X = 1                 0.8238   0.8797   0.8605   0.8636
X = 2                 0.8605   0.8974   0.9250   0.9048
X = 3                 0.8609   0.9106   0.9322   0.9500
X = 4                 0.8609   0.9106   0.9382   0.9533

Figure 4. Comparison of the (4,6) and (3,6) (i.e. m = 4 and m = 3) maintenance policies as a function of the initial number of spare parts.
CONCLUSIONS
J.H. Saleh
School of Aerospace Engineering, Georgia Institute of Technology, USA
ABSTRACT: Maintenance planning and activities have grown dramatically in importance and are increasingly recognized as drivers of competitiveness. While maintenance models in the literature all deal with the cost of maintenance (as an objective function or a constraint), only a handful address the notion of the value of maintenance, and seldom in an analytical or quantitative way. We propose that maintenance has intrinsic value and argue that existing cost-centric models ignore an important dimension of maintenance, its value, and in so doing can lead to sub-optimal maintenance strategies. We develop a framework for capturing and quantifying the value of maintenance activities. The framework presented here offers rich possibilities for future work in benchmarking existing maintenance strategies based on their value implications, and in deriving new maintenance strategies that are value-optimized.
INTRODUCTION
BACKGROUND
Each type of maintenance can be further classified according to the degree to which it restores the
system [Pham and Wang, 1996]. At one end of the
spectrum, perfect maintenance restores the system to
its initial operating condition or renders it as good
as new. At the other end of the spectrum, minimal
repair returns the system to the condition it was in
immediately prior to failing (in the case of corrective
maintenance), or as bad as old. In between these
extremes lies imperfect maintenance, which returns
the system to a condition somewhere in between as
good as new and as bad as old. Finally, there is also
the possibility that maintenance leaves the system in
a worse condition than before the failure, through, for
example, erroneous actions such as damaging adjacent
parts while replacing a faulty unit.
2.2 Maintenance models
The present work builds on the premise that engineering systems are value-delivery artifacts that provide a flow of services (or products) to stakeholders. When this flow of services is priced in a market, the pricing or rent of these systems' services allows the assessment of the systems' value, as will be discussed shortly. In other words, the value of an engineering system is determined by the market assessment of the flow of services the system provides over its lifetime. We have developed this perspective in a number of previous publications; for further details, the interested reader is referred to, for example, Saleh et al. (2003), Saleh and Marais (2006), or Saleh (2008).
In this paper, we extend our value-centric perspective on design to the case of maintenance. Our
argument is based on four key components:
1. First, we consider systems that deteriorate stochastically and exhibit multi-state failures, and we
Figure 1. Three-state model of a system with no maintenance (states New, Deteriorated and Failed, with time-dependent transition probabilities pmn(i)).
We consider a k-state discrete-time Markov deteriorating system with time-dependent transition probabilities as shown in Figure 1, for the no-maintenance
case with three states. The states are numbered from
1 through k in ascending order of deterioration where
state 1 is the new state and state k is the failed state.
The time-dependence allows us to take account of the
fact that a new (or deteriorated) system will become
more likely to transition to the deteriorated (or failed)
state as it ages (time-dependence implies dependence
on the virtual age of the system). With no maintenance the failed state is an absorbing state whence
it is not possible to transition to either of the other
states. Further, it is not possible to transition from the
deteriorated state to the new state without performing maintenance. In other words, the system can only
transition in one direction, from new to failed, perhaps via the deteriorated state (but the system has no
self-healing properties).
The corresponding one-step transition matrix for period i is upper triangular:

P(i) =
[ p11(i)   p12(i)   ...   p1k(i) ]
[   0      p22(i)   ...   p2k(i) ]
[  ...       ...    ...     ...  ]
[   0        0      ...     1    ]   (1)

with pnm(i) = 0 for all 1 ≤ m < n ≤ k.   (2)

If we define π0 to be the initial probability distribution of the system, the probability distribution after j state transitions is:

πj = Pj ... P2 P1 π0.   (3)
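Using the three-state numerical matrix of the example below (rows (0.95, 0.04, 0.01), (0, 0.90, 0.10) and (0, 0, 1)), the propagation of Equation (3) can be sketched as follows; the matrix is taken time-homogeneous for simplicity, whereas the paper allows period-dependent P(i):

```python
# states: 1 = new, 2 = deteriorated, 3 = failed (row-stochastic, upper triangular)
P = [[0.95, 0.04, 0.01],
     [0.00, 0.90, 0.10],
     [0.00, 0.00, 1.00]]

def step(pi, P):
    # one transition: pi_next[n] = sum over m of pi[m] * P[m][n]
    return [sum(pi[m] * P[m][n] for m in range(len(P))) for n in range(len(P))]

def distribution_after(j, pi0, P):
    pi = list(pi0)
    for _ in range(j):
        pi = step(pi, P)
    return pi

pi4 = distribution_after(4, [1.0, 0.0, 0.0], P)  # state distribution after four periods
```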
To simplify the indexing, we consider that the revenues the system can generate between (i - 1)ΔT and iΔT are equal to um(i)ΔT.
Each branch of the graph represents a particular value trajectory of the system, as discussed below. Each time step, the system can remain fully operational, or it can transition to a degraded or failed state. A branch in the graph is characterized by the set of states the system can visit for all time periods considered. For example, the branch B1 = {1, 1, 1, 1, ..., 1} represents the system remaining in State 1 throughout the N periods considered, whereas Bj = {1, 1, 1, 2, 2, ..., 2} represents a system starting in State 1 and remaining in this state for the first two periods, then transitioning to a degraded state (here State 2) at the third period and remaining in this particular degraded state. Notice that the branch Bworst = {1, k, k, ..., k} represents a new system transitioning to the failed state at the first transition.
Since each state has an associated utility um(i), a Present Value can be calculated for each branch over N periods as follows:

PV(N, Bj) = Σ[i=1..N] uBj(i)ΔT / (1 + rΔT)^i.   (5)

The probability of a given branch is the product of the one-step transition probabilities along it:

p(Bj) = Π[i=1..N] pi(Bj).   (6)

The expected Present Value is obtained by summing over all branches:

E[PV(N)] = Σ[all branches] p(Bj) PV(N, Bj)   (7)

= Σ[i=1..N] Σ[m=1..k] um(i) πi(m) ΔT / (1 + rΔT)^i.   (8)

In the numerical example that follows, the transition matrix is taken constant and equal to:

P = [ 0.95  0.04  0.01 ]
    [ 0     0.90  0.10 ]
    [ 0     0     1    ]
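A branch present value per Equation (5) can be sketched as follows; the per-state utilities are invented for illustration (the paper leaves um(i) general):

```python
def branch_pv(branch_states, utilities, r, dT):
    # PV(N, Bj) = sum over i = 1..N of u_Bj(i) * dT / (1 + r*dT)**i
    return sum(utilities[s] * dT / (1 + r * dT) ** (i + 1)
               for i, s in enumerate(branch_states))

u = {1: 100_000.0, 2: 60_000.0, 3: 0.0}    # hypothetical utilities; failed state earns 0
pv_b1 = branch_pv([1, 1, 1, 1], u, r=0.10, dT=1.0)  # stays new for four periods
pv_b4 = branch_pv([1, 1, 2, 2], u, r=0.10, dT=1.0)  # degrades after two periods
```

Weighting each branch PV by its probability p(Bj) and summing reproduces Equation (7).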
Table 1. Example branches.

Branch   Transitions        Comment
B1       {1, 1, 1, 1, 1}    The system starts in State 1 and remains in this state throughout the four periods.
B4       {1, 1, 1, 2, 2}    The system starts in State 1; it remains in State 1 for two periods, then transitions to the degraded State 2 in the third period and remains in this State 2.
B8       {1, 1, 2, 2, 3}    The system starts in State 1; it remains in State 1 for the first period, then transitions to the degraded State 2 in the second period; it remains in this degraded state for the third period, then transitions to the failed State 3 in the fourth period.
Figure 3. Present values of branches B1, B4 and B8 over the four periods, with p(B1) = 81.4%, p(B4) = 3.2% and p(B8) = 0.3%.
Figure 4. Performing perfect maintenance returns the system to the NEW state.
Figure 5. Perfect maintenance shifts the reliability curve to the right.
With maintenance, the system can transition back from the deteriorated (or failed) state toward the new state, replacing the one-directional structure of Figure 1.
Performing perfect maintenance at the end of a period resets the state probability distribution to the initial distribution π0. If perfect maintenance is performed at time tm > t0, at the end of period m, we can model the transitions as:

πm-1 = Pm-1 ... P2 P1 π0,
πm = π0,   (11)
πm+k = Pk ... P2 P1 π0.

This result is easily extended for perfect maintenance occurring at the end of any period j:

πj = π0,   πj+k = Pk ... P2 P1 π0   (k ≥ 1).   (14)
In short, maintenance acts on two levers of value: (1) it lifts the system to a higher value trajectory, and (2) it shifts the system's reliability curve to the right (Figure 5).
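Combining the reset of Equation (11) with the expected present value gives a direct way to quantify the first lever; a sketch with a time-homogeneous matrix and flat, hypothetical utilities (the cost of the maintenance action itself is not netted out here):

```python
P = [[0.95, 0.04, 0.01],
     [0.00, 0.90, 0.10],
     [0.00, 0.00, 1.00]]
u = [100_000.0, 60_000.0, 0.0]  # per-period utility by state (illustrative)

def expected_pv(n_periods, r, pi0, m_pm=None):
    pi, pv = list(pi0), 0.0
    for i in range(1, n_periods + 1):
        pi = [sum(pi[m] * P[m][n] for m in range(3)) for n in range(3)]
        pv += sum(pi[n] * u[n] for n in range(3)) / (1 + r) ** i
        if m_pm is not None and i == m_pm:
            pi = list(pi0)  # perfect maintenance: distribution reset, "as good as new"
    return pv

gain = (expected_pv(8, 0.10, [1.0, 0.0, 0.0], m_pm=4)
        - expected_pv(8, 0.10, [1.0, 0.0, 0.0]))  # PV provided by maintenance
```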
Figure 8. Branch 1 without maintenance, and Branch 1 with perfect maintenance occurring at the end of the second period; in this example the present value provided by maintenance amounts to $76,900.

The value provided by maintenance can be quantified as:

PVmaintenance_1 - PVno_maintenance_1 = PV provided by maintenance.   (15)

Figure 9. Maintenance lifts a degraded system to higher value branches.
CONCLUSIONS
REFERENCES
Block, H.W., Borges, W.S. and Savits, T.H. 1985. Age-Dependent Minimal Repair. Journal of Applied Probability 22(2): 370-385.
Brealy, R. and Myers, C. 2000. Fundamentals of Corporate Finance. 6th Ed. New York: Irwin/McGraw-Hill.
Dekker, R. 1996. Applications of maintenance optimization models: a review and analysis. Reliability Engineering and System Safety 51: 229-240.
Doyen, L. and Gaudoin, O. 2004. Classes of imperfect repair models based on reduction of failure intensity or virtual age. Reliability Engineering and System Safety 84: 45-56.
Hilber, P., Miranda, V., Matos, M.A. and Bertling, L. 2007. Multiobjective Optimization Applied to Maintenance Policy for Electrical Networks. IEEE Transactions on Power Systems 22(4): 1675-1682.
Kijima, M., Morimura, H. and Suzuki, Y. 1988. Periodical replacement problem without assuming minimal repair. European Journal of Operational Research 37(2): 194-203.
Malik, M.A.K. 1979. Reliable Preventive Maintenance Scheduling. AIIE Transactions 11(3): 221-228.
Marais, K., Lukachko, S., Jun, M., Mahashabde, A. and Waitz, I.A. 2008. Assessing the Impact of Aviation on Climate. Meteorologische Zeitschrift 17(2): 157-172.
Nakagawa, T. 1979a. Optimum policies when preventive maintenance is imperfect. IEEE Transactions on Reliability 28(4): 331-332.
Nakagawa, T. 1979b. Imperfect preventive-maintenance. IEEE Transactions on Reliability 28(5): 402.
Nakagawa, T. 1984. Optimal policy of continuous and discrete replacement with minimal repair at failure. Naval Research Logistics Quarterly 31(4): 543-550.
ABSTRACT: The objective of this paper is to define a process for maintenance management and to classify maintenance engineering techniques within that process. Regarding the maintenance management process, we present a generic model proposed for maintenance management which integrates other models found in the literature for built and in-use assets, and consists of eight sequential management building blocks. The different maintenance engineering techniques play a crucial role within each one of those eight management building blocks. Following this path we characterize the maintenance management framework, i.e. the supporting structure of the management process.
We offer a practical vision of the set of activities composing each management block, and the result of the paper is a classification of the different maintenance engineering tools. The discussion also classifies the tools as qualitative or quantitative. At the same time, some tools are highly analytical while others are highly empirical. The paper also discusses the proper use of each tool or technique according to the volume of data/information available.
Figure 1. The maintenance management process:

- Phase 1: Definition of the maintenance objectives and KPIs
- Phase 2: Assets priority and maintenance strategy definition
- Phase 3: Immediate intervention on high impact weak points
- Phase 4: Design of the preventive maintenance plans and resources
- Phase 5: Preventive plan, schedule and resources optimization
- Phase 6: Maintenance execution assessment and control
- Phase 7: Asset life cycle analysis and replacement optimization
- Phase 8: Continuous improvement and new techniques utilization

The phases are grouped into effectiveness (Phases 1-3), efficiency (Phases 4-5), assessment (Phase 6) and improvement (Phases 7-8).

Figure 2. Techniques supporting each phase of the process:

- Phase 1: Balance Score Card (BSC)
- Phase 2: Criticality Analysis (CA)
- Phase 3: Failure Root Cause Analysis (FRCA)
- Phase 4: Reliability-Centred Maintenance (RCM)
- Phase 5: Optimization (RCO)
- Phase 6: Reliability Analysis (RA) & Critical Path Method (CPM)
- Phase 7: Life Cycle Cost Analysis (LCCA)
- Phase 8: Total Productive Maintenance (TPM), e-maintenance
MAINTENANCE MANAGEMENT
FRAMEWORK
Figure 3. Example of maintenance KPIs in a scorecard: maintenance cost effectiveness, maintenance planning and scheduling quality, learning, PM compliance (98%), accomplishment of criticality analysis (every 6 months) and data integrity (95%).
qualitative techniques which attempt to provide a systematic basis for deciding what assets should have priority within a maintenance management process (Phase 2), a decision that should be taken in accordance with the existing maintenance strategy. Most of the quantitative techniques use a variation of a concept known as the probability/risk number (PRN) [11]. Assets with the highest PRN will be analysed first. Often, the number of assets potentially at risk outweighs the resources available to manage them. It is therefore extremely important to know where to apply
Figure 4. Criticality matrix: frequency versus consequence (scale 10-50), with critical, semi-critical and non-critical regions.

Figure 5. Consequence assessment criteria: safety, environment, quality, delivery, working time, reliability (R) and maintainability (M), with categories A, B and C.
assets assessment, as a way to start building maintenance operations effectiveness, may be obtained. Once there is a certain definition of assets priority, we have to set up the strategy to be followed with each category of assets. Of course, this strategy will be adjusted over time, but an initial starting point must be stated.
As mentioned above, once there is a certain ranking of assets priority, we have to set up the strategy to follow with each category of assets. This strategy will be adjusted over time, and will consist of a course of action to address specific issues for the emerging critical items under the new business conditions (see Figure 6).
Once the assets have been prioritized and the maintenance strategy to follow defined, the next step is to develop the corresponding maintenance actions associated with each category of assets. Before doing so, we may focus on certain repetitive or chronic failures that take place in high priority items (Phase 3).
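A minimal sketch of a PRN-style ranking follows; the scoring scheme and asset data are invented for illustration, while real criticality analyses use frequency and consequence criteria such as those of Figures 4 and 5:

```python
# PRN = frequency score * consequence score; highest-PRN assets are analysed first
assets = {
    "pump_A":     {"frequency": 4, "consequence": 8},
    "conveyor_B": {"frequency": 2, "consequence": 9},
    "valve_C":    {"frequency": 5, "consequence": 3},
}

def prn_ranking(assets):
    scored = {name: a["frequency"] * a["consequence"] for name, a in assets.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

ranking = prn_ranking(assets)  # [('pump_A', 32), ('conveyor_B', 18), ('valve_C', 15)]
```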
Figure 6. Maintenance strategy by asset category: e.g. ensure certain equipment availability levels for some categories; sustain or improve the current situation for others.

Figure 7. RCM implementation process. Initial phase: operational context definition and asset selection, criticality analysis (at the appropriate level), RCM team conformation. Implementation phase: functions, functional failures, FMEA (Failure Mode and Effects Analysis) covering failure modes and their effects, and application of the RCM logic. Final phase: maintenance plan documentation.

Figure 8. Monte Carlo model for preventive maintenance scheduling: inputs are the optimality criteria, equipment status & functional dependencies, failure dynamics, system constraints and work in process; outputs are the PM schedule and the preventive maintenance plan.
and the maintenance/replacement interval determination problems, mid-term models may address, for instance, the scheduling of the maintenance activities in a long plant shutdown, while short-term models focus on resources allocation and control [13]. Modelling approaches, analytical and empirical, are very diverse. The complexity of the problem is often very high and forces the consideration of certain assumptions in order to simplify the analytical resolution of the models, or sometimes to reduce the computational needs.
For example, the use of Monte Carlo simulation modelling can improve preventive maintenance scheduling, allowing the assessment of alternative scheduling policies that could be implemented dynamically on the plant/shop floor (see Figure 8). Using a simulation model, we can compare and discuss the benefits of different scheduling policies on the status of current manufacturing equipment and several operating conditions of the production materials flow. To do so, we estimate measures of performance by treating simulation results as a series of realistic
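The policy comparison described above can be sketched with a deliberately simple single-machine model: Weibull wear-out failures and renewal at each PM or repair. None of these modelling choices come from the paper; they only illustrate the simulation-based comparison of scheduling policies:

```python
import random

def mean_downtime(pm_interval, horizon=2000.0, pm_dur=2.0, cm_dur=10.0,
                  runs=100, seed=7):
    # estimate mean downtime over the horizon under periodic PM every pm_interval
    random.seed(seed)
    total = 0.0
    for _ in range(runs):
        t = down = 0.0
        while t < horizon:
            ttf = random.weibullvariate(150.0, 2.0)  # illustrative wear-out failure law
            if ttf < pm_interval:        # failure occurs before the next planned PM
                t += ttf + cm_dur
                down += cm_dur
            else:                        # PM reached first; machine renewed
                t += pm_interval + pm_dur
                down += pm_dur
        total += down
    return total / runs

d_short = mean_downtime(pm_interval=50.0)
d_long = mean_downtime(pm_interval=200.0)  # compare two candidate schedules
```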
Figure 9. Life cycle costs over time (years): CAPEX (capital costs: development costs and investment costs, incurred through acquisition, design and construction) and OPEX (operational costs: operation costs, incurred through operation until removal).

[Figure: conventional maintenance versus e-maintenance information flows, from the assets/information source through the maintenance department, middle management and top management: paper reports and investigation on one side, versus login to iScada with precise & concise information on inspections/complaints on the other.]
CONCLUSIONS
REFERENCES
[1] EN 13306:2001, (2001) Maintenance Terminology.
European Standard. CEN (European Committee for
Standardization), Brussels.
[2] Crespo Marquez, A, (2007) The maintenance management Framework. Models and methods for complex
systems maintenance. London: Springer Verlag.
[3] Vagliasindi F, (1989) Gestire la manutenzione. Perche
e come. Milano: Franco Angeli.
[4] Wireman T, (1998) Developing performance indicators for managing maintenance. New York: Industrial
Press.
[5] Palmer RD, (1999) Maintenance Planning and
Scheduling. New York: McGraw-Hill.
[6] Pintelon LM, Gelders LF, (1992) Maintenance management decision making. European Journal of Operational Research, 58: 301317.
[7] Vanneste SG, Van Wassenhove LN, (1995) An integrated and structured approach to improve maintenance. European Journal of Operational Research, 82:
241257.
[8] Gelders L, Mannaerts P, Maes J, (1994) Manufacturing strategy, performance indicators and
674
A.F. Leitão
Polytechnic Institute of Bragança, Bragança, Portugal
G.A.B. Pereira
School of Engineering, University of Minho, Guimarães, Portugal
ABSTRACT: In industry, spare equipment is often shared by many workplaces with identical equipment to assure the production rate required to fulfill delivery dates. These types of systems are called Maintenance Float Systems. The main objective of managers that deal with these types of systems is to assure the required capacity to deliver orders on time and at minimum cost. Not delivering on time often has important consequences: it can cause loss of customer goodwill, loss of sales, and can damage the organization's image. Maintenance cost is the indicator most frequently used to configure maintenance float systems and to decide on investment in maintenance workers or spare equipment. Once the system is configured, other performance indicators must be used to characterize and measure the efficiency of the system. Different improvement initiatives can be performed to enhance the performance of maintenance float systems: performing preventive maintenance actions, implementation of autonomous maintenance, improvement of equipment maintainability, increase of maintenance crews' efficiency, etc. Carrying out improvement based on facts is a principle of Total Quality Management (TQM) on the way to business excellence, and it requires monitoring processes through performance measures. This work aims to characterize and highlight the differences and relationships between three types of performance measures: equipment availability, equipment utilization and workplace occupation, in the context of maintenance float systems. Definitions and expressions of these three indicators are developed for maintenance float systems. The relationship between maintenance float systems' efficiency and the referred indicators is shown. Other indicators are also proposed and compared with the first ones (number of standby equipments, queue length, etc.).
INTRODUCTION

Figure 1. Maintenance float system: units operating in the workstation, waiting in the queue to be attended by the maintenance servers, and in standby.
undesirable effects in production caused by downtimes. Spares can be efficiently managed when identical pieces of equipment operate in parallel in the workstation. This type of system is called a Maintenance Float System. Float designates equipment in standby and equipment waiting for maintenance actions in the maintenance center. A unit involved in a Maintenance Float System (MFS) switches among different states: operating in the workstation, waiting in the queue to be repaired, being repaired in the maintenance center, and waiting in standby until required by the workstation (see fig. 1).
Some studies present mathematical and simulation models to configure MFS. One of the first attempts to determine the number of float units was proposed by Levine (1965), who used an analytical method based on traditional reliability theory. The author introduced a reliability factor based on the ratio MTTR/MTBF. Gross et al. (1983), Madu (1988) and Madu & Kuei (1996) use Buzen's algorithm. Zeng & Zhang (1997) consider a system where a key unit keeps the workstation functioning and a set of identical units is kept in a buffer to replace units sent for repair. The system is modeled as a closed queue (an M/M/S/F queue), and the idle probability of the system is obtained. The optimal values of the capacity of the inventory buffer (F), the size of the repair crew (S) and the mean repair rate are determined by minimizing the total cost. Shankar & Sahani (2003) consider a float system whose failures are classified as sudden and wear-out. Units subject to wear-out failures are replaced and submitted to preventive maintenance actions after a specific time period. Based on the reliability function of the system, the authors find the number of floating units needed to support the active units such that the number of active units does not change. Most of the studies based on
EQUIPMENT AVAILABILITY

Figure 2. Unit states considered in the availability calculation: active in the workstation, in standby, waiting in the queue, being attended in the maintenance center.

Equipment availability = Tup / (Tup + Tdown) (1)

Equipment availability = MTBF / (MTBF + MTTR) (2)

Figure 3. Unit states considered in the utilization calculation: active in the workstation, waiting in the queue, being attended in the maintenance center, in standby.
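Availability can be computed either from accumulated up and down times or from MTBF and MTTR; the two expressions agree in steady state. A minimal numerical check (the figures used are illustrative, not taken from the paper):

```python
def availability_from_times(t_up: float, t_down: float) -> float:
    """Availability as the fraction of up time over the observation window."""
    return t_up / (t_up + t_down)

def availability_from_mtbf(mtbf: float, mttr: float) -> float:
    """Steady-state availability from mean time between failures and mean time to repair."""
    return mtbf / (mtbf + mttr)

# A unit with MTBF = 450 h and MTTR = 50 h is available 90% of the time,
# matching 900 h up / 100 h down observed over a 1000 h window.
a1 = availability_from_mtbf(450.0, 50.0)
a2 = availability_from_times(900.0, 100.0)
```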
EQUIPMENT UTILIZATION
WORKPLACE OCCUPATION
Workplace occupation depends on the time between failures and the time until replacement of the failed unit (fig. 4).

Workplace occupation = time in workplace / operating cycle

This operating cycle (Dw), however, does not correspond to the cycle defined for equipment availability and utilization calculation purposes. This operating cycle (fig. 5) begins when a unit starts operating in the workstation and ends when another unit takes its place. For an MFS, the new unit is different from the failed one and the replacement is performed before the conclusion of the initial unit's repair.

Dw = time in workstation + time until replacement of the failed unit occurs

If the replacement of failed equipment is always immediate, then the workplace occupation will be 1, meaning that the workplace is always available when required (the time to exchange equipment is neglected). The time until a new unit starts operating in the workplace depends on the time to repair and on the number of spare equipments. Workplace occupation is related to equipment availability and utilization. Equipment utilization can be low due to time to repair or due to standby time. In the first case, workplace occupation is
Figure 4. Workstation with M workplaces (workplace 1, workplace 2, ..., workplace M).

5.2 Operating cycle
Considering that time interval between two successive conclusions of maintenance actions follows the
Exponential distribution:
Figure 5. Operating cycle of a workplace: a new unit starts operating, the operating unit fails, and a new unit starts operating again.

g(t) = λ e^(−λt), t ≥ 0 (3)
The mean waiting-plus-repair time b2 of a failed unit that arrives at the maintenance center when r units are ahead of it is the mean of an Erlang distribution with r + 1 stages:

b2 = ∫0∞ t · [λ^(r+1) t^r e^(−λt) / Γ(r + 1)] dt = (r + 1)/λ (4)

Figure 6. Possible evolutions of a unit during a cycle: (a) active machine; (b) failed machine in the queue, with the associated times t1, t2, z1, z2, a1, a2 and b2 and the maintenance center states (AL AR)NF and (AL AR)F.

The mean up time Tup and the mean cycle length D (equations 5 to 7) are obtained by conditioning on whether the unit survives to the scheduled instant T (survival probability F(T), failure density f(t)), on whether it subsequently fails (probability PF) or not (PNF), and on the congestion state of the maintenance center through the state probabilities

P(AL AR | i + j + 1 ≤ L) = Σ_{i+j+1 ≤ L} Pi,j, P(AL AR | L < i + j + 1 ≤ R) = Σ_{L < i+j+1 ≤ R} Pi,j, P(AL AR | i + j + 1 > R) = Σ_{i+j+1 > R} Pi,j

After simplification:

Tup = F(T)·[T + P(AL AR) PNF a1 + P(AL AR) PF t1 + P(AL AR) PNF a2 + P(AL AR) PF t2] + ∫0T f(t) t dt

D = F(T)·[T + P(AL AR) PNF a1 + P(AL AR) PF (t1 + z1) + P(AL AR) PNF a2 + P(AL AR) PF (t2 + z2)] + ∫0T f(t)·[t + P(AL AR) b2] dt

with, for the state (i, j),

a1 = a2 = [(i + j − L) + 1]/μ (8)
680
5.6
(9)
r+1
0
(10)
r+1
=
0
t r et
r!
t
f ef t2 dt2 dt
=
t r et
f ef t2 dt2 dt
r!
r+1
t r et
1 ef t dt
r!
h+ 1 r t
t e
dt t r e(+f )t dt
=
r!
=
r!
r+1
r!
r!
r+1
( + f )r+1
r+1
0 0
f ef d dt
(11)
PF
tv =
r+1
r+1 (+
)r+2
f
1
f
1
(+f )r+1
r+1
PF
(12)
rf =
PNF = 1 PF
t r et
r!
i+jL
Pi,j
i+jL
(13)
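A unit whose failure time is exponential (rate λf) fails before the r + 1 exponential maintenance completions (rate λ) ahead of it with probability PF = 1 − (λ/(λ + λf))^(r+1); this competing-risk reading of PF can be cross-checked by simulation. A sketch with illustrative parameter values:

```python
import random

def pf_closed_form(lam: float, lam_f: float, r: int) -> float:
    # PF = 1 - lam^(r+1) / (lam + lam_f)^(r+1)
    return 1.0 - (lam / (lam + lam_f)) ** (r + 1)

def pf_monte_carlo(lam: float, lam_f: float, r: int,
                   n: int = 200_000, seed: int = 7) -> float:
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # time for r+1 maintenance completions: Erlang(r+1, lam)
        erlang = sum(rng.expovariate(lam) for _ in range(r + 1))
        hits += rng.expovariate(lam_f) < erlang
    return hits / n
```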
The mean time until failure tv of an active equipment waiting for an overhaul is obtained based on the Erlang distribution and on the Exponential distribution, where max[0; (i + j − L) − (R − L)] represents the average number of lacking equipments in the workstation for the state (i, j). Using the same logic used for determining b2 (presented above), z1 and z2 are given by equation 15 below:

z1 = z2 = (rf + 1)/λ (15)
6.1

The average number of equipments in the maintenance center is

Σ_{(i,j)} Pi,j (i + j) (16)

6.2

The average number of equipments in the queue (equation 17) allows the identification of the need for maintenance center improvement:

Σ_{i+j ≥ L} Pi,j (i + j − L) (17)

6.3

The average number of lacking equipments in the workstation determined by Lopes et al. (2007) is directly related to the average number of equipments in standby (equation 18) and could also be used to assess MFS efficiency:

Σ_{(i,j)} Pi,j [R − (i + j)] (18)
CONCLUSIONS

The three indicators addressed in this work, equipment availability, equipment utilization and workplace occupation, are important and need to be used in order to prioritize improvement initiatives and monitor the efficiency of MFS. Analytically, it seems complex to determine equipment availability and equipment utilization, since it involves quantifying the time in standby. However, as shown, some other indicators provide similar information and are easier to determine. Decision makers use several kinds of indicators to define and identify improvement initiatives. However, one of the most important indicators for production processes in the context of MFS is the workplace

REFERENCES
Madu, C.N. & Kuei, C.-H. 1996. Analysis of multi-echelon maintenance network characteristics using implicit enumeration algorithm. Mathematical and Computer Modelling 24(3): 79–92.
Madu, I.E. 1999. Robust regression metamodel for a maintenance float policy. International Journal of Quality & Reliability Management 16(3): 433–456.
Shankar, G. & Sahani, V. 2003. Reliability analysis of a maintenance network with repair and preventive maintenance. International Journal of Quality & Reliability Management 20(2): 268–280.
Zeng, A.Z. & Zhang, T. 1997. A queuing model for designing an optimal three-dimensional maintenance float system. Computers & Operations Research 24(1): 85–95.
ABSTRACT: This paper presents a practical view of the behaviour of an industrial assembly in order to assess its availability and reliability. For this purpose a complex system, a Bioethanol Plant, is used. A computerized model helps create a realistic scenario of the Bioethanol Plant life cycle, providing estimates of the most important performance measures through real data and statistical inference. In this way it is possible to compare and discuss the benefits of different plant configurations using the model and following the initial technical specifications. For these purposes the Bioethanol Plant is divided into functional blocks, defining their tasks and features, as well as their dependencies according to the plant configuration. Additionally, maintenance information and databases are required for the defined functional blocks. Once these data have been compiled, and using commercial software, it is possible to build a model of the plant and to simulate scenarios and experiments for each considered configuration. Availability and reliability parameters are obtained for the most important functions under different plant configurations. From their interpretation, it is interesting to consider actions that improve the availability and reliability of the system under different plant functional requirements. Among other important aspects, a sensitivity analysis can also be investigated, i.e., exploring how parameter modifications influence the result or final goal.
INTRODUCTION

2 PROCEDURE

2.1 Definition of the system configuration

2.2 Data compiling
2.3 Model construction

Figure 1. Bioethanol production process: grain (starch) and water are milled and hydrolysed, followed by saccharification and fermentation (releasing CO2); distillation, dehydration, centrifugation, evaporation and drying yield ethanol, DDG and syrup.
2.4 Simulation

Description: using the model and the above-mentioned data, we simulate scenarios and experiments for each considered configuration.
Result: list of scenarios; replication of real and hypothetical events.
2.5

Conversion of starch into bioethanol involves several process steps. It starts by making a beer from the milled grain, then distilling off the alcohol, followed by recovery of the residual solids and recycling of water.
It is necessary to take care with each step of the process to assure an efficient conversion, particularly due to the fact that it is a biological process where unsuitable reactions can happen, causing loss of yield, and also because different grains have different process requirements. Some of the most important matters to take into account in the sequence of process steps are mentioned below.
Together with this description, Block Diagrams of the main systems in a Bioethanol plant are also included. These diagrams are only an approximation of an ideal plant. They try to show briefly the main equipment and devices, as well as the material flows inside the whole plant.
3.1 Slurry preparation
The feed grain is milled to a sufficient fineness to allow water access to all the starch inside each grain. The meal is then mixed with warm water to a specific concentration, without generating excessive viscosities downstream.
3.2 Hydrolysis
The slurry temperature is raised in order to accelerate the hydrolysis of the grain's starch into solution. Again there is an optimum depending on the grain type: if the slurry is too hot, the viscosity is excessive, and if too cool, the required residence time for effective hydrolysis is too long.
3.3
Saccharification
With enzymes, the dissolved starch is converted to sugars by saccharification, at a reduced temperature which again is selected to achieve a balance between a satisfactory reaction rate and avoiding the promotion of unsuitable side reactions and a subsequent loss in yield.
3.4 Fermentation

3.5 Distillation
The beer contains about 8–12% ethanol. It is continuously pumped to the distillation unit, which produces an overhead stream of about 90% ethanol and water. Ethanol and water form a 95% azeotrope, so it is not possible to reach 100% by simple distillation.
3.6 Dehydration
SUMMARY OF FORMULAS AND CONCEPTS REGARDING RELIABILITY AND AVAILABILITY
For two components in series, the failure rates add (λseries = λ1 + λ2), so:

MTBFseries = MTBF1·MTBF2 / (MTBF1 + MTBF2)

For a single component:

Availability = MTBF / (MTBF + MDT) (1)

Unavailability = MDT / (MTBF + MDT)

For two components in parallel:

Aparallel = A1 + A2 − A1·A2

UAparallel = UA1·UA2 = [MDT1/(MTBF1 + MDT1)] · [MDT2/(MTBF2 + MDT2)]

Assuming that MDT ≪ MTBF:

UAparallel ≈ MDT1·MDT2 / (MTBF1·MTBF2)

MDTparallel = MDT1·MDT2 / (MDT1 + MDT2)

Consequently:

MTBFparallel = 1/λparallel = MDTparallel / UAparallel = MTBF1·MTBF2 / (MDT1 + MDT2)

For an m-out-of-n configuration of identical components (the system fails once m of its n units are down):

λm_out_of_n = [n! / ((n − m)!(m − 1)!)] · MDT^(m−1) / MTBF^m (2)

Am_out_of_n = Σ_{i=n−m+1}^{n} [n!/((n − i)! i!)] A^i (1 − A)^(n−i) = 1 − Σ_{i=0}^{n−m} [n!/((n − i)! i!)] A^i (1 − A)^(n−i)

UAm_out_of_n ≈ [n!/(m!(n − m)!)] (1 − A)^m

MDTm_out_of_n = MDT/m

and consequently:

MTBFm_out_of_n = MDTm_out_of_n / UAm_out_of_n

Applying the series formulas to the functional blocks of the plant (here the Hydrolysis and Saccharification have been included in the same process area as the Fermentation):

MTBFCO2 = MTBFMilling·MTBFFermentation / (MTBFMilling + MTBFFermentation)

MTBFEthanol = MTBFCO2·MTBFDis+Deh / (MTBFCO2 + MTBFDis+Deh), where
MTBFDis+Deh = MTBFDistillation·MTBFDehydration / (MTBFDistillation + MTBFDehydration)

MTBFDDGS = MTBFCO2·MTBFDis+Cen+Dry / (MTBFCO2 + MTBFDis+Cen+Dry), where
MTBFDis+Cen+Dry = MTBFDis+Cen·MTBFDrying / (MTBFDis+Cen + MTBFDrying) and
MTBFDis+Cen = MTBFDistillation·MTBFCentrifugation / (MTBFDistillation + MTBFCentrifugation)

MTBFSyrup = MTBFCO2·MTBFDis+Cen+Eva / (MTBFCO2 + MTBFDis+Cen+Eva), where
MTBFDis+Cen+Eva = MTBFDis+Cen·MTBFEvaporation / (MTBFDis+Cen + MTBFEvaporation)

5.4

The same series logic, with the mean down time weighted by the failure rates, can be applied to analyze the reliability characteristics of the whole complex system, in this case a Bioethanol Plant:

MDTCO2 = (MTBFMilling·MDTFermentation + MTBFFermentation·MDTMilling) / (MTBFMilling + MTBFFermentation)

MDTEthanol = (MTBFCO2·MDTDis+Deh + MTBFDis+Deh·MDTCO2) / (MTBFCO2 + MTBFDis+Deh), where
MDTDis+Deh = (MTBFDistillation·MDTDehydration + MTBFDehydration·MDTDistillation) / (MTBFDistillation + MTBFDehydration)

MDTDDGS = (MTBFCO2·MDTDis+Cen+Dry + MTBFDis+Cen+Dry·MDTCO2) / (MTBFCO2 + MTBFDis+Cen+Dry), where
MDTDis+Cen+Dry = (MTBFDis+Cen·MDTDrying + MTBFDrying·MDTDis+Cen) / (MTBFDis+Cen + MTBFDrying) and
MDTDis+Cen = (MTBFDistillation·MDTCentrifugation + MTBFCentrifugation·MDTDistillation) / (MTBFDistillation + MTBFCentrifugation)

MDTSyrup = (MTBFCO2·MDTDis+Cen+Eva + MTBFDis+Cen+Eva·MDTCO2) / (MTBFCO2 + MTBFDis+Cen+Eva), where
MDTDis+Cen+Eva = (MTBFDis+Cen·MDTEvaporation + MTBFEvaporation·MDTDis+Cen) / (MTBFDis+Cen + MTBFEvaporation)
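These series combinations chain naturally from block to block, so a small helper can propagate MTBF and MDT through the plant. The sketch below implements the standard series rule (failure rates add, MDT weighted by failure rates) and the m-out-of-n availability sum; the block values in hours are illustrative, not data from the paper:

```python
from math import comb

def series(block1, block2):
    """Series combination of two blocks given as (MTBF, MDT)."""
    (f1, d1), (f2, d2) = block1, block2
    mtbf = f1 * f2 / (f1 + f2)                   # failure rates add
    mdt = (f1 * d2 + f2 * d1) / (f1 + f2)        # failure-rate-weighted MDT
    return mtbf, mdt

def availability_m_out_of_n(a: float, n: int, m: int) -> float:
    """n identical units, system fails once m are down:
    at least n - m + 1 units must be up."""
    return sum(comb(n, i) * a**i * (1 - a)**(n - i)
               for i in range(n - m + 1, n + 1))

# Illustrative (MTBF, MDT) pairs in hours for some functional blocks:
blocks = {"Milling": (2000.0, 24.0), "Fermentation": (1500.0, 48.0),
          "Distillation": (1800.0, 36.0), "Dehydration": (2500.0, 30.0)}

co2 = series(blocks["Milling"], blocks["Fermentation"])
ethanol = series(co2, series(blocks["Distillation"], blocks["Dehydration"]))
mtbf_eth, mdt_eth = ethanol
a_eth = mtbf_eth / (mtbf_eth + mdt_eth)          # availability of the output
```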
CONCLUSION
With this research we intend to improve the estimations, demonstrating as well how requirements expressed in initial technical specifications can be incompatible or even impossible to accomplish for certain plant configurations. That is, availability expectations for proposed configurations of the whole plant could be lower, even with higher reliability or maintainability in each functional block, following the technical requirements in effect.

Additionally, reasonable estimations will be provided for the production availability, which can be delivered to the final customer in a more realistic engineering project. These estimations will be based on validated calculations of the functional blocks considered for the model simulation, showing moreover the importance and opportunity of a sensitivity analysis. It can also be decisive for the final selection of the plant technical configuration.

At the same time, this study can also be used to adjust some initial requirements in the plant technical specification. Once the data have been introduced in the model, they can be adjusted according to the real equipment included in the offer. In this way, it is possible to study logistical aspects like the amount of spare parts in stock.

Finally, not only the availability and reliability are important, but also the cost estimation is a key factor. Therefore, an extension of this study could be to transfer the information provided in this research to a life cycle cost model, with the intention of assessing the plant globally.
ACKNOWLEDGEMENTS
The author would like to thank the reviewers of the
paper for their contribution to the quality of this work.
ABSTRACT: Two problems often encountered in uncertainty processing (and especially in safety studies)
are the following: modeling uncertainty when information is scarce or not fully reliable, and taking account
of dependencies between variables when propagating uncertainties. To solve the first problem, one can model
uncertainty by sets of probabilities rather than by single probabilities, resorting to imprecise probabilistic models.
The Iman and Conover method is an efficient and practical means to solve the second problem when uncertainty is modeled by single probabilities and when dependencies are monotonic. In this paper, we propose to combine these two solutions, by studying how the Iman and Conover method can be used with imprecise probabilistic models.
1 INTRODUCTION

2 PRELIMINARIES
Figure 1. Illustration of a p-box (temperature from 500 K to 1000 K).

Figure 2. Illustration of a possibility distribution (temperature from 500 K to 1000 K, levels 0.1, 0.5, 0.9).
with P a probability distribution. For a given possibility distribution π and for a given value α ∈ [0, 1], the (strict) α-cut of π is defined as the set

A_α = {x ∈ R | π(x) > α}.
P-boxes are appropriate models when experts provide a set of (imprecise) percentiles, when considering the error associated with sensor data, when we have only few experimental data, or when we have only information about some characteristics of a distribution (Ferson, Ginzburg, Kreinovich, Myers, and Sentz 2003). Consider the following expert opinion about the temperature of a fuel rod in a nuclear reactor core during an accidental scenario:

Temperature is between 500 and 1000 K
The probability to be below 600 K is between 10 and 20%
The probability to be below 800 K is between 40 and 60%
The probability to be below 900 K is between 70 and 100%

Figure 1 illustrates the p-box resulting from this expert opinion.
Possibility distributions correspond to information given in terms of confidence intervals, and thus correspond to a very intuitive notion. A possibility distribution is a mapping π : R → [0, 1] such that there is at least one value x for which π(x) = 1. Given a possibility distribution π, the possibility Π and necessity N measures of an event A are respectively defined as:

Π(A) = sup_{x∈A} π(x), N(A) = 1 − Π(A^c)

Note that α-cuts are nested (i.e. for two values α < β, we have A_β ⊆ A_α). An α-cut can then be interpreted as an interval to which we give confidence 1 − α (the higher α, the lower the confidence). α-cuts and the set of probabilities P_π are related in the following way:

P_π = {P | ∀α ∈ [0, 1], P(A_α) ≥ 1 − α}.
Possibility distributions are appropriate when experts express their opinion in terms of nested confidence intervals or, more generally, when information is modeled by nested confidence intervals (Baudrit, Guyonnet, and Dubois 2006). As an example, consider an expert opinion, still about the temperature of a fuel rod in a nuclear reactor core, but this time expressed by nested confidence intervals:

Probability to be between 750 and 850 K is at least 10%
Probability to be between 650 and 900 K is at least 50%
Probability to be between 600 and 950 K is at least 90%
Temperature is between 500 and 1000 K (100% confidence)

Figure 2 illustrates the possibility distribution resulting from this opinion.
Methods presented in Section 2 constitute very practical solutions to two different problems often encountered in applications. As both problems can be encountered in the same application, it would be interesting to blend these two tools. Such a blending is proposed in this section.
3.1
See Figure 3.B for an illustration. Thus, given a p-box [F, F̄], to a sampled value α ∈ [0, 1] we associate the interval Γ_α such that

Γ_α := [F̄^(−1)(α), F^(−1)(α)]
Possibility distributions: in the case of a possibility distribution, it is natural to associate to each sampled value α the corresponding α-cut (see Figure 3.C for an illustration). Anew, this α-cut is, in general, not a single value but an interval.

We can see that, by admitting imprecision in our uncertainty representation, usual sampling methods no longer provide precise values but intervals (which are effectively the imprecise counterpart of single values). With such models, elements of the matrix S can be intervals, and propagating them through a model T will require interval analysis techniques (Moore 1979). Although achieving such a propagation is more difficult than single-point propagation when the model T is complex, it can still remain tractable, even for high-dimensional problems (see (Oberguggenberger, King, and Schmelzer 2007) for example). Nevertheless, propagation is not our main concern here, and the sampling scheme can be considered independently of the subsequent problem of propagation.
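Both associations can be sketched compactly on the two expert opinions of Section 2. The step-function quantile inverses and the finite set of stored α-cuts below are simplifications of the continuous picture (an outer, hence conservative, approximation of the true α-cut is returned):

```python
import bisect

# P-box from the percentile bounds quoted in Section 2 (temperatures in K):
temps     = [500, 600, 800, 900, 1000]
lower_cdf = [0.0, 0.1, 0.4, 0.7, 1.0]   # lower CDF F at each grid point
upper_cdf = [0.0, 0.2, 0.6, 1.0, 1.0]   # upper CDF F_bar at each grid point

def quantile(cdf, alpha):
    """Generalized inverse of a step CDF defined on the 'temps' grid."""
    return temps[bisect.bisect_left(cdf, alpha)]

def pbox_interval(alpha):
    """Interval associated to a sampled alpha: [F_bar^-1(alpha), F^-1(alpha)]."""
    return (quantile(upper_cdf, alpha), quantile(lower_cdf, alpha))

# Possibility distribution given by the nested expert intervals
# (level alpha = 1 - confidence of the interval):
cuts = {0.0: (500, 1000), 0.1: (600, 950), 0.5: (650, 900), 0.9: (750, 850)}

def alpha_cut(alpha):
    """Tightest stored cut whose level does not exceed alpha: a superset
    (outer approximation) of the true alpha-cut."""
    level = max(l for l in cuts if l <= alpha)
    return cuts[level]
```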
Figure 3. Sampling an interval: (B) from a p-box [F, F̄], the interval [F̄^(−1)(α), F^(−1)(α)]; (C) from a possibility distribution, the α-cut at level α.
CONCLUSIONS
REFERENCES
Alvarez, D.A. (2006). On the calculation of the bounds of probability of events using infinite random sets. I. J. of Approximate Reasoning 43, 241–267.
Baudrit, C. and D. Dubois (2006). Practical representations of incomplete probabilistic knowledge. Computational Statistics and Data Analysis 51(1), 86–108.
Baudrit, C., D. Guyonnet, and D. Dubois (2006). Joint propagation and exploitation of probabilistic and possibilistic information in risk assessment. IEEE Trans. Fuzzy Systems 14, 593–608.
Clemen, R., G. Fischer, and R. Winkler (2000, August). Assessing dependence: some experimental results. Management Science 46(8), 1100–1115.
Destercke, S., D. Dubois, and E. Chojnacki (2007). Relating practical representations of imprecise probabilities. In Proc. 5th Int. Symp. on Imprecise Probabilities: Theories and Applications.
Dubois, D. and H. Prade (1992). On the relevance of non-standard theories of uncertainty in modeling and pooling expert opinions. Reliability Engineering and System Safety 36, 95–107.
Ferson, S., L. Ginzburg, V. Kreinovich, D. Myers, and K. Sentz (2003). Constructing probability boxes and Dempster-Shafer structures. Technical report, Sandia National Laboratories.
Ferson, S. and L.R. Ginzburg (1996). Different methods are needed to propagate ignorance and variability. Reliability Engineering and System Safety 54, 133–144.
Helton, J. and F. Davis (2002). Illustration of sampling-based methods for uncertainty and sensitivity analysis. Risk Analysis 22(3), 591–622.
Iman, R. and W. Conover (1982). A distribution-free approach to inducing rank correlation among input variables. Communications in Statistics 11(3), 311–334.
Moore, R. (1979). Methods and Applications of Interval Analysis. SIAM Studies in Applied Mathematics. Philadelphia: SIAM.
Nelsen, R. (2005). Copulas and quasi-copulas: An introduction to their properties and applications. In E. Klement and R. Mesiar (Eds.), Logical, Algebraic, Analytic, and Probabilistic Aspects of Triangular Norms, Chapter 14. Elsevier.
Oberguggenberger, M., J. King, and B. Schmelzer (2007). Imprecise probability methods for sensitivity analysis in engineering. In Proc. of the 5th Int. Symp. on Imprecise Probabilities: Theories and Applications, pp. 317–326.
Sallaberry, C., J. Helton, and S. Hora (2006). Extension of latin hypercube samples with correlated variables. Tech. rep. SAND2006-6135, Sandia National Laboratories, Albuquerque. http://www.prod.sandia.gov/cgibin/techlib/accesscontrol.pl/2006/066135.pdf.
Walley, P. (1991). Statistical Reasoning with Imprecise Probabilities. New York: Chapman and Hall.
Enrico Zio
Energy Department, Politecnico di Milano, Milan, Italy
ABSTRACT: Monte Carlo simulation is used to investigate the impact of the maintenance strategy on the
production availability of offshore oil and gas plants. Various realistic preventive maintenance strategies and
operational scenarios are considered. The reason for resorting to Monte Carlo simulation is that it provides
the necessary flexibility to describe realistically the system behavior, which is not easily captured by analytical
models. A prototypical offshore production process is taken as the pilot model for the production availability
assessment by Monte Carlo simulation. The system consists of a separator, compressors, power generators,
pumps and dehydration units. A tailor-made computer program has been developed for the study, which enables accounting for the operational transitions of the system components as well as the preventive and corrective maintenance strategies for both the power generators and the compressor systems.
INTRODUCTION
To satisfy these requirements, stochastic simulation models, such as Monte Carlo simulation, are
increasingly being used to estimate the production
availabilities of offshore installations. They allow
accounting for the realistic maintenance strategies and
operational scenarios [E.Zio et al., 2006].
The purpose of the present study is to develop a
Monte Carlo simulation method for the evaluation of
the production availability of offshore facilities while
accounting for the realistic aspects of system behavior.
A Monte Carlo simulation model has been developed
to demonstrate the effect of maintenance strategies
on the production availability, e.g., by comparing
the system performance without and with preventive
maintenance, and of delays of spare parts for critical
items.
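The idea can be sketched compactly: simulate the stochastic up/down behaviour of the components and time-average the resulting production level. The toy model below, three parallel export compressors with illustrative failure/repair rates and a production level proportional to the number of compressors up, is a stand-in for the full plant logic described next, not the paper's model:

```python
import random

def production_availability(lam=0.001, mu=0.05, n_comp=3,
                            horizon=1e6, seed=3):
    """Monte Carlo sketch of production availability: n_comp parallel
    compressors, exponential failures (rate lam) and repairs (rate mu,
    each failed unit repaired independently). Rates and the capacity
    rule are illustrative assumptions."""
    rng = random.Random(seed)
    down = 0
    t = produced = 0.0
    while t < horizon:
        up = n_comp - down
        rate = up * lam + down * mu
        dt = min(rng.expovariate(rate), horizon - t)
        produced += (up / n_comp) * dt   # time-weighted production level
        t += dt
        if t >= horizon:
            break
        if rng.random() < up * lam / rate:
            down += 1                    # a compressor failed
        else:
            down -= 1                    # a repair was completed
    return produced / horizon
```

With independent repairs, each unit's availability is mu/(mu + lam), so the estimate should settle near that value.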
2 SYSTEM DESCRIPTION

2.1 Functional description
Figure 1. Offshore plant functional scheme: production well, three-phase separation (gas, oil, water), dehydration and gas export, oil export, and power generation (electricity).
Table 1. Component failure and repair rates.

Component | Failure rate | Repair rate
Dehydration | 3.49 × 10−4 | 8.33 × 10−2
Lift gas compressor | 6.57 × 10−4 | 6.98 × 10−2
Export oil pump | 7.06 × 10−4 | 3.66 × 10−2
Injection water pump | 2.27 × 10−4 | 1.33 × 10−2
Three-phase separator | 4.25 × 10−4 | 19.6 × 10−2
Export gas compressor | 6.69 × 10−4 | 4.29 × 10−2
Power generation | 1.70 × 10−3 | 3.24 × 10−2
Table 2. Production levels (system capacity) with example failure events and the corresponding oil (km3/d), gas (MMscm/d) and water injection (km3/d) rates. Levels range from 100% (no failure) through 70%, 50% and 30% (e.g. loss of a lift gas compressor, a water injection pump, two export gas compressors, one power generator, or combinations of these) down to 0% (e.g. loss of the dehydration unit, all three export gas compressors, or both power generators).

2.3 Production re-configuration

Figure 2. Component state-transition diagram.

MAINTENANCE STRATEGIES
Table 3. Scheduled maintenance intervals and downtimes.

Period, month (year) | Maintenance action | Downtime, hr (day)
2 (1/6) | Detergent wash | 6 (0.25)
4 (1/3) | Service/Cleaning | 24 (1.0)
12 (1) | Boroscopic inspection/Generator check | 72 (3.0)
60 (5) | Overhaul or replacement | 120 (5.0)

Figure 3. Component state-transition diagram with preventive maintenance.

Preventive maintenance
The following is assumed for the preventive maintenance tasks considered in the study:

Scheduled preventive maintenance is only implemented for the compressor system for the gas export and for the power generation system.
Scheduled maintenance tasks of the compressors and the power generation system are carried out at the same time, to minimize downtime.
The well is shut down during preventive maintenance.
The scheduled maintenance intervals for both systems are given in Table 3.
Table 4. Transition times from the initial state (normal operation, including partial load) to the corrective and preventive maintenance states.

Figure 5. Spare parts review logic. Step 1: will the stock-out have a direct effect on the offshore production? If yes, hold parts on the offshore platforms; if no, review/revise the maintenance strategies.
NUMERICAL RESULTS
Table 5. Comparison of the production availability under the spare parts delay scenarios, with and without spares holding: the (average) production availability takes the values 8.86 × 10−1, 8.36 × 10−1, 7.78 × 10−1 and 8.37 × 10−1.
CONCLUSIONS
As a future study, it is of interest to formalize the preventive maintenance interval optimization and the spare parts optimization process with the Monte Carlo simulation. To this aim, it will be necessary to combine the results of the availability assessment based on the Monte Carlo simulation with the cost information. The optimization of preventive maintenance intervals should be determined through an iterative process in which the overall availability acceptance criteria and costs fall within the optimal region; the spare parts optimization will consider the cost of holding different numbers of spare parts and that of not holding any.
REFERENCES
ABS. 2004. Guidance notes on reliability-centered maintenance. ABS.
E. Zio, P. Baraldi, and E. Patelli. 2006. Assessment of availability of an offshore installation by Monte Carlo simulation. International Journal of Pressure Vessel and Piping, 83: 312–320.
Marseguerra M. & Zio E. 2002. Basics of the Monte Carlo method with application to system reliability. Hagen, Germany: LiLoLe-Verlag GmbH.
NORSOK Standard (Z-016). 1998. Regularity management & reliability technology. Oslo, Norway: Norwegian technology standards institution.
ABSTRACT: In this paper, the recently developed Subset Simulation method is considered for improving the
efficiency of Monte Carlo simulation. The method, originally developed to solve structural reliability problems,
is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure
probabilities for some intermediate failure events: with a proper choice of the conditional events, the conditional
failure probabilities can be made sufficiently large to allow accurate estimation with a small number of samples.
The method is here applied on a system of discrete multi-state components in a series-parallel configuration.
INTRODUCTION

P(F) = ∫ IF(x) q(x) dx (1)

where x = {x1, x2, . . . , xj, . . . , xn} ∈ R^n is the vector of the random states of the components, i.e. the random configuration of the system, with multidimensional probability density function (PDF) q : R^n → [0, ∞), F ⊆ R^n is the failure region and IF : R^n → {0, 1} is an indicator function such that IF(x) = 1 if x ∈ F and IF(x) = 0 otherwise.
In practical cases, the multi-dimensional integral (1) cannot be easily evaluated by analytical methods nor by numerical schemes. On the other hand, Monte Carlo Simulation (MCS) offers an effective means for estimating the integral, because the method does not suffer from the complexity and dimension of the domain of integration, albeit it implies the nontrivial task of sampling from the multidimensional PDF. Indeed, the MCS solution to (1) entails that a large number of samples of the values of the component states vector be drawn from q(·); an unbiased and consistent estimate of the failure probability is then simply computed as the fraction of the number of samples that lead to failure. However, a large number of samples
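The plain MCS estimator just described can be sketched on a toy series-parallel system of discrete multi-state components. The states, probabilities and system structure below are illustrative, not the paper's case study:

```python
import random

def mcs_failure_probability(n_samples=100_000, seed=11):
    """Plain Monte Carlo estimate of P(F) = E[I_F(x)]: draw component
    state vectors from q(.) and count the fraction leading to failure.
    Toy structure: two parallel branches in series with a third
    component; states 0/1/2 = failed/degraded/nominal. The system fails
    if both branches are failed or the third component is failed."""
    rng = random.Random(seed)
    states, probs = [0, 1, 2], [0.05, 0.15, 0.80]
    def draw():
        return rng.choices(states, probs)[0]
    failures = 0
    for _ in range(n_samples):
        b1, b2, c3 = draw(), draw(), draw()
        if (b1 == 0 and b2 == 0) or c3 == 0:
            failures += 1
    return failures / n_samples
```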
the series-parallel, discrete multi-state system is illustrated. Finally, some conclusions are proposed in the
last Section.
2 SUBSET SIMULATION

2.1 Basics of the method

P(F) = P(F1) ∏_{i=1}^{m−1} P(F_{i+1}|F_i) (2)
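The product decomposition in equation (2) can be illustrated on a scalar toy problem: write the normal tail event {X > 3} as three nested events and multiply estimated conditional probabilities, each of which is large and cheap to estimate. For transparency, the conditional sampling here uses plain rejection instead of the MCMC machinery introduced next; thresholds and sample sizes are illustrative:

```python
import random

def subset_product_demo(thresholds=(1.0, 2.0, 3.0), n=20_000, seed=5):
    """Estimate P(X > 3) for X ~ N(0, 1) as
    P(X > 1) * P(X > 2 | X > 1) * P(X > 3 | X > 2).
    Conditional samples are drawn by rejection (a stand-in for the
    Markov chain step of actual Subset Simulation)."""
    rng = random.Random(seed)
    p = 1.0
    level = None                # current conditioning level (None = none)
    for b in thresholds:
        hits = total = 0
        while total < n:
            x = rng.gauss(0.0, 1.0)
            if level is not None and x <= level:
                continue        # reject samples outside the current event
            total += 1
            hits += (x > b)
        p *= hits / total       # conditional probability estimate
        level = b
    return p
```

The exact value is the standard normal tail probability at 3, about 1.35e-3, and each conditional factor stays well above it, which is the efficiency argument behind the method.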
Markov Chain Monte Carlo (MCMC) simulation comprises a number of powerful simulation techniques for
generating samples according to any given probability
distribution (Metropolis et al., 1953).
In the context of the reliability assessment of interest in the present work, MCMC simulation provides an
efficient way for generating samples from the multidimensional conditional PDF q(x|F). The distribution
of the samples thereby generated tends to the multidimensional conditional PDF q(x|F) as the length of the
Markov chain increases. In the particular case of the
initial sample x 1 being distributed exactly as the multidimensional conditional PDF q(x|F), then so are the
subsequent samples and the Markov chain is always
stationary (Au & Beck 2001).
Furthermore, since in practical applications dependent random variables may often be generated by some transformation of independent random variables, in the following it is assumed without loss of generality that the components of x are independent, that is, q(x) = ∏_{j=1}^{n} q_j(x_j), where q_j(x_j) denotes the one-dimensional PDF of x_j, j = 1, 2, ..., n (Au & Beck 2001).
To illustrate the MCMC simulation algorithm with reference to a generic failure region F_i, let x^u = {x_1^u, x_2^u, ..., x_j^u, ..., x_n^u} be the u-th Markov chain sample drawn and let p_j(ξ_j | x_j^u), j = 1, 2, ..., n, be a one-dimensional proposal PDF for ξ_j, centered at the value x_j^u and satisfying the symmetry property p_j(ξ_j | x_j^u) = p_j(x_j^u | ξ_j). Such a distribution, arbitrarily chosen for each element x_j of x, allows generating a precandidate value ξ_j based on the current sample value x_j^u. The following algorithm is then applied to generate the next Markov chain sample x^{u+1} = {x_1^{u+1}, x_2^{u+1}, ..., x_j^{u+1}, ..., x_n^{u+1}}, u = 1, 2, ..., N_s − 1 (Au & Beck 2001):

1. Generate a candidate sample x̃^{u+1} = {x̃_1^{u+1}, x̃_2^{u+1}, ..., x̃_j^{u+1}, ..., x̃_n^{u+1}}: for each parameter x_j, j = 1, 2, ..., n, sample a precandidate value ξ_j^{u+1} from p_j(· | x_j^u); compute the acceptance ratio r_j^{u+1} = q_j(ξ_j^{u+1}) / q_j(x_j^u); set x̃_j^{u+1} = ξ_j^{u+1} with probability min(1, r_j^{u+1}) and x̃_j^{u+1} = x_j^u with probability 1 − min(1, r_j^{u+1}).
2. Accept/reject the candidate sample vector x̃^{u+1}: if x̃^{u+1} = x^u (i.e., no precandidate values have been accepted), set x^{u+1} = x^u. Otherwise, check whether x̃^{u+1} is a system failure configuration, i.e. x̃^{u+1} ∈ F_i: if it is, accept the candidate as the next state, i.e. set x^{u+1} = x̃^{u+1}; otherwise, reject it and take the current sample as the next one, i.e. set x^{u+1} = x^u.
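A minimal sketch of this two-stage accept/reject step follows (a hypothetical continuous example with standard normal marginals q_j and a symmetric uniform proposal; the paper's components are discrete, so this only illustrates the mechanics):

```python
import numpy as np

rng = np.random.default_rng(1)

def mmh_step(x, q_pdf, in_failure, half_width=1.0):
    """One component-wise modified Metropolis step, as in steps 1-2 above."""
    cand = np.array(x, dtype=float)
    # Step 1: per-component precandidates from a symmetric uniform proposal
    for j in range(len(x)):
        xi = x[j] + rng.uniform(-half_width, half_width)   # precandidate
        r = q_pdf(xi) / q_pdf(x[j])                        # acceptance ratio
        if rng.random() < min(1.0, r):
            cand[j] = xi
    # Step 2: accept the whole vector only if it is a failure configuration
    return cand if in_failure(cand) else np.array(x, dtype=float)

# Chain conditioned on F_i = {x_1 + x_2 < -3}, standard normal marginals
q_pdf = lambda v: np.exp(-0.5 * v * v)          # normalization cancels in r
in_failure = lambda x: x[0] + x[1] < -3.0
chain = [np.array([-2.0, -2.0])]                # seed already lies in F_i
for _ in range(500):
    chain.append(mmh_step(chain[-1], q_pdf, in_failure))
```

Because a rejected candidate simply repeats the current state, every sample of the chain remains inside F_i, which is what makes the chain stationary when the seed is distributed as q(·|F_i).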
The proposal PDFs {pj : j = 1, 2, . . ., n} affect
the deviation of the candidate sample from the current one, thus controlling the efficiency of the Markov
chain samples in populating the failure region. In particular, the spreads of the proposal PDFs affect the
size of the region covered by the Markov chain samples. Small spreads tend to increase the correlation
between successive samples due to their proximity
to the conditioning central value, thus slowing down
the convergence of the failure probability estimators.
Indeed, it can be shown that the coefficient of variation
(c.o.v.) of the failure probability estimates, defined as
In the actual SS implementation, with no loss of generality it is assumed that the failure event of interest can
be defined in terms of the value of a critical response
variable Y of the system under analysis (e.g., its output
performance) being lower than a specified threshold
level y, i.e., F = {Y < y}. The sequence of intermediate failure events {Fi : i = 1, 2, . . . , m} can then
be correspondingly defined as Fi = {Y < yi }, i =
1, 2, . . . , m, where y1 > y2 > . . . > yi > . . . > ym =
y > 0 is a decreasing sequence of intermediate threshold values (Au & Beck 2001). Notice that since these
intermediate threshold values (i.e., failure regions) are
introduced purely for computational reasons in SS,
they may not have a strict physical interpretation and
may not be connected to known degradation processes.
The choice of the sequence {y_i : i = 1, 2, ..., m} affects the values of the conditional probabilities {P(F_{i+1} | F_i) : i = 1, 2, ..., m − 1} in (2) and hence the efficiency of the SS procedure. In particular, choosing the sequence {y_i : i = 1, 2, ..., m} arbitrarily a priori makes it difficult to control the values of the conditional probabilities {P(F_{i+1} | F_i) : i = 1, 2, ..., m − 1} in the application to real systems. For this reason, in
this work, the intermediate threshold values are chosen
adaptively in such a way that the estimated conditional
failure probabilities are equal to a fixed value p0 (Au
& Beck 2001).
The SS algorithm proceeds as follows (Figure 1). First, N vectors {x_0^k : k = 1, 2, ..., N} are sampled by standard MCS, i.e., from the original probability density function q(·). The subscript 0 denotes the fact that these samples correspond to Conditional Level 0. The corresponding values of the response variable {Y(x_0^k) : k = 1, 2, ..., N} are then computed (Figure 1a) and the first intermediate threshold value y_1 is chosen as the (1 − p0)N-th value in the decreasing list of values {Y(x_0^k) : k = 1, 2, ..., N}. By so doing, the sample estimate of P(F_1) = P(Y < y_1) is equal to p0 (note that it has been implicitly assumed that p0·N is an integer value) (Figure 1b). With this choice of y_1, there are now p0·N samples among {x_0^k : k = 1, 2, ..., N} whose response Y lies in F_1 = {Y < y_1}. These samples are at Conditional Level 1 and distributed as q(·|F_1). Starting from each one of these samples, MCMC simulation is used to generate (1 − p0)N additional conditional samples distributed as q(·|F_1), so that there are a total of N conditional samples {x_1^k : k = 1, 2, ..., N} ∈ F_1 at Conditional Level 1 (Figure 1c). Then, the intermediate threshold value y_2 is chosen as the (1 − p0)N-th value in the descending list of {Y(x_1^k) : k = 1, 2, ..., N} to define F_2 = {Y < y_2} so that, again, the sample estimate of P(F_2 | F_1) = P(Y < y_2 | Y < y_1) is equal to p0 (Figure 1d). The p0·N samples lying in F_2 are conditional values from q(·|F_2) and function as seeds for sampling (1 − p0)N additional conditional samples distributed as q(·|F_2), making up a
3 APPLICATION TO A SERIES-PARALLEL DISCRETE MULTI-STATE SYSTEM

In this Section, SS is applied to perform the reliability analysis of a series-parallel discrete multi-state system taken from the literature (Zio & Podofillini 2003).
Let us consider a system made up of a series of 2 macro-components (nodes), each one performing a given function, e.g. the transmission of a given amount of gas, water or oil flow. Node 1 is constituted by n_1 = 2 components in parallel logic, whereas node 2 is constituted by a single component (n_2 = 1), so that the overall number of components in the system is n = ∑_{b=1}^{2} n_b = 3.

For each component j = 1, 2, 3 there are z_j possible states, each one corresponding to a different hypothetical level of performance v_{j,o}, o = 0, 1, ..., z_j − 1. Each component can randomly occupy the discrete states, according to properly defined probabilities q_{j,o}, j = 1, 2, 3, o = 0, 1, ..., z_j − 1.
In all generality, the output performance W_o associated with the system state o = {o_1, o_2, ..., o_j, ..., o_n} is obtained on the basis of the performances v_{j,o} of the components j = 1, 2, ..., n constituting the system. More precisely, we assume that the performance of each node b, constituted by n_b elements in parallel logic, is the sum of the individual performances of the components, and that the performance of the node series system is that of the node with the lowest performance, which constitutes the bottleneck of the system (Levitin & Lisnianski 1999).

The system is assumed to fail when its performance W falls below some specified threshold value w, so that its probability of failure P(F) can be expressed as P(W < w). During the simulation, the intermediate failure events {F_i : i = 1, 2, ..., m} are adaptively generated as F_i = {W < w_i}, where w_1 > w_2 > ... > w_i > ... > w_m = w are the intermediate threshold values (see Section 2.3).
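Under these assumptions the system performance function is a one-liner (a sketch; the component performance values below are hypothetical, not the paper's data):

```python
def system_performance(v, nodes):
    """Node performance = sum of its parallel components; the series-system
    performance is the minimum over the nodes (the bottleneck)."""
    return min(sum(v[j] for j in node) for node in nodes)

# Node 1: components 1 and 2 in parallel; node 2: component 3 alone
nodes = [[1, 2], [3]]
v = {1: 40.0, 2: 60.0, 3: 80.0}          # hypothetical performance levels
w_sys = system_performance(v, nodes)     # min(40 + 60, 80) = 80.0
failed = w_sys < 90.0                    # failure event F = {W < w}, here w = 90
```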
Table 1. Means and standard deviations of the performances of the components j = 1, 2, 3 (Case 1).

Component, j    Mean     Standard deviation
1               56.48    25.17
2               58.97    23.11
3               92.24    11.15

Table 2. Means and standard deviations of the performances of the components j = 1, 2, 3 (Case 2).

Component, j    Mean     Standard deviation
1               58.17    24.35
2               60.66    22.32
3               93.55    10.02
Figure. Failure probability P(F) versus failure threshold w: analytical values compared with the SS estimates obtained with NT = 840 (top) and NT = 1110 (bottom).
Tables 3 and 4. Mean relative absolute errors of the failure probability estimates produced by SS and standard MCS at the different probability levels, over three batches of simulations.

P(F)            Method         Batch 1   Batch 2   Batch 3
1.364 × 10⁻³    SS             0.4327    0.4611    0.4821
1.364 × 10⁻³    Standard MCS   0.7265    0.7530    0.6656
1.94 × 10⁻³     SS             0.4181    0.4425    0.4960
1.67 × 10⁻⁴     SS             0.6409    0.5611    0.6826
1.94 × 10⁻³     Standard MCS   0.6983    0.7793    0.6112
1.67 × 10⁻⁴     Standard MCS   1.6670    1.8915    1.6190
in the estimation of P(F) = 1.671 × 10⁻⁴, SS provides mean relative absolute errors which are even four times lower than those produced by standard MCS (for instance, see Batch 2 of Table 4). This result is quite reasonable: in fact, the estimation of failure probabilities near 10⁻⁴ by means of standard MCS with 1110 samples is not efficient, since on average only 1110 × 10⁻⁴ ≈ 0.1 failure samples are available in the failure region of interest. In contrast, due to successive conditioning, SS guarantees that there are 840, 570, 300 and 30 conditional failure samples at probability levels P(F) = 10⁻¹, 10⁻², 10⁻³ and 10⁻⁴, thus providing sufficient information for efficiently estimating the corresponding failure probabilities.
Finally, the computational efficiency of SS can be compared with that of a standard MCS in terms of the coefficient of variation (c.o.v.) of the failure probability estimates computed from the same number of samples.

The sample c.o.v. of the failure probability estimates obtained by SS in S = 200 independent runs are plotted versus different failure probability levels P(F) (solid line) in Figure 3, for both Case 1 (top) and Case 2 (bottom). Recall that the numbers of samples required by SS at the probability levels P(F) = 10⁻¹, 10⁻², 10⁻³ and 10⁻⁴ are NT = 300, 570, 840 and 1110, respectively, as explained in Section 3.3. The exact c.o.v. of the Monte Carlo estimator using the same number of samples at probability levels P(F) = 10⁻¹, 10⁻², 10⁻³ and 10⁻⁴ are computed using √[(1 − P(F)) / (P(F) · NT)], which holds for NT independent and identically distributed (i.i.d.) samples: the results are shown as squares in Figure 3, for Case 1 (top) and Case 2 (bottom). It can be seen that while the c.o.v. of the standard MCS grows exponentially with decreasing failure probability, the c.o.v. of the SS estimate grows approximately in a logarithmic manner: this empirically proves that SS can lead to a substantial improvement in efficiency over standard MCS when estimating small failure probabilities.
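The i.i.d. c.o.v. formula used for the MCS squares in Figure 3 can be checked directly (a sketch; the probability levels and sample sizes are those quoted in the text):

```python
from math import sqrt

def mcs_cov(p, n_samples):
    """Exact c.o.v. of the standard MCS estimator of a failure probability p
    from n_samples i.i.d. samples: sqrt((1 - p) / (p * n_samples))."""
    return sqrt((1.0 - p) / (p * n_samples))

cov_small = mcs_cov(1e-4, 1110)   # about 3.0: MCS is hopeless at this level
cov_large = mcs_cov(1e-1, 300)    # about 0.17: MCS is adequate at this level
```

The rapid growth of this quantity as p decreases at fixed sample size is the "exponential" MCS trend visible in Figure 3.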
Figure. Failure probability P(F) versus failure threshold w: analytical values and SS estimates averaged over 200 runs (NT = 840, Case 1; NT = 1110, Case 2).

Figure 3. Sample c.o.v. of the failure probability estimates versus failure probability level P(F): SS (average of 200 runs) compared with standard MCS using the same number of samples (NT = 300, 570, 840, 1110), together with the uncorrelated (lower limit) and fully correlated (upper limit) bounds, for Case 1 (top) and Case 2 (bottom).
Table 5. Sample means P̄(F) of the failure probability estimates over 200 SS runs and the corresponding biases produced by SS in the estimation of P(F) = 1.364 × 10⁻³ (Case 1); these values have been computed for three batches of S = 200 simulations each.
Subset simulation (Case 1, P(F) = 1.364 × 10⁻³):

Batch    Sample mean      Bias
1        1.136 × 10⁻³     −0.1672
2        1.145 × 10⁻³     −0.1606
3        1.065 × 10⁻³     −0.2192

Table 6. Sample means P̄(F) and corresponding biases of the failure probability estimates over 200 SS runs (Case 2).

         P(F) = 1.942 × 10⁻³          P(F) = 1.671 × 10⁻⁴
Batch    Sample mean     Bias         Sample mean     Bias
1        1.714 × 10⁻³    −0.1170      1.374 × 10⁻⁴    −0.1769
2        1.579 × 10⁻³    −0.1865      1.181 × 10⁻⁴    −0.2928
3        1.715 × 10⁻³    −0.1164      1.347 × 10⁻⁴    −0.1934
that the bias due to the correlation between the conditional probability estimators at different levels is not
negligible (Au & Beck 2001).
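The bias values reported in Tables 5 and 6 are consistent with the relative deviation of the sample means from the true failure probabilities; a quick arithmetic check (the estimator underestimates, so the relative deviations are negative):

```python
def relative_bias(sample_mean, true_p):
    """Relative bias of a failure probability estimate: (mean - true) / true."""
    return (sample_mean - true_p) / true_p

b1 = relative_bias(1.136e-3, 1.364e-3)   # Batch 1, Table 5: about -0.167
b2 = relative_bias(1.714e-3, 1.942e-3)   # Batch 1, Table 6: about -0.117
```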
This finding is also confirmed by the analysis of
the sample c.o.v. of the failure probability estimates
which are plotted versus different failure probability
levels P(F) (solid line) in Figure 3, for both Case 1
(top) and Case 2 (bottom). In these Figures, the dashed
lines show a lower bound on the c.o.v. which would
be obtained if the conditional probability estimates
at different simulation levels were uncorrelated; on
the contrary, the dot-dashed lines provide an upper
bound on the c.o.v. which would be obtained in case
of full correlation among the conditional probability
estimates. From these Figures, it can be seen that the trend of the actual c.o.v. estimated from 200 runs follows the upper bound more closely, confirming that the conditional failure probability estimates are almost completely correlated in both Case 1 and Case 2. The
high correlation between conditional probability estimates may be explained as follows: differently from
continuous-state systems whose stochastic evolution
is modeled in terms of an infinite set of continuous
REFERENCES
Au, S. K. & Beck, J. L. 2001. Estimation of small failure probabilities in high dimensions by subset simulation. Probabilist. Eng. Mech. 16(4): 263–277.
Au, S. K. & Beck, J. L. 2003. Subset Simulation and its application to seismic risk based on dynamic analysis. J. Eng. Mech.-ASCE 129(8): 1–17.
Au, S. K., Wang, Z. & Lo, S. 2007. Compartment fire analysis by advanced Monte Carlo simulation. Eng. Struct., in press (doi: 10.1016/j.engstruct.2006.11.024).
Levitin, G. & Lisnianski, A. 1999. Importance and sensitivity analysis of multi-state systems using the universal generating function method. Reliab. Eng. Syst. Safe. 65: 271–282.
Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N. & Teller, A. H. 1953. Equation of state calculations by fast computing machines. J. Chem. Phys. 21(6): 1087–1092.
Schueller, G. I. 2007. On the treatment of uncertainties in structural mechanics and analysis. Comput. Struct. 85: 235–243.
Zio, E. & Podofillini, L. 2003. Monte Carlo simulation analysis of the effects of different system performance levels on the importance of multi-state components. Reliab. Eng. Syst. Safe. 82: 63–73.
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes (like Finite Elements), a Bayesian interpolation method is coupled with the Monte Carlo technique. It is, therefore, possible to reduce the number of realizations in MC by interpolation. Besides, it becomes possible to incorporate prior knowledge. In other words, this study tries to speed up the Monte Carlo process by taking into account prior knowledge about the problem and thus reducing the number of simulations. Moreover, the information from previous simulations helps to judge the accuracy of the prediction at every step. As a result, a narrower confidence interval comes with a higher number of simulations. This paper shows the general methodology, algorithm, and result of the suggested approach in the form of a numerical example.
INTRODUCTION

The so-called Monte Carlo (MC) technique helps engineers to model different phenomena by simulation. However, these simulations are sometimes expensive and time-consuming. This is because the more accurate models, usually defined by finite elements (FE), are time-consuming processes themselves. To overcome this problem, cheaper methods are generally used in the simulation of complicated problems and, consequently, less accurate results are obtained. In other words, implementing more accurate models in the Monte Carlo simulation technique provides more accurate and reliable results; by reducing the calculation cost to a reasonable level, more accurate plans for risk management become possible.

To reduce the cost of Monte Carlo simulations for a time-consuming process (like FE), numerous research projects have been carried out, primarily in structural reliability, to obtain the benefits not only of a probabilistic approach but also of accurate models. For instance, importance sampling and directional sampling are among the approaches implemented to reduce the cost of calculations. But this coupling is still a time-consuming process for practical purposes and needs further modification. This research tries to speed up the Monte Carlo process by considering the assumption that the information of every point (pixel) can give an estimation of its neighboring pixels. Taking advantage of this property, the Bayesian interpolation technique (Bretthorst 1992) is applied, subject to our requirement of randomness of the generated data. In this study, we try to present a brief review of the method and important formulas. The application of the Bayesian interpolation into the MC for estimation
GENERAL OUTLINES
BAYESIAN INTERPOLATION
The value of a pixel u_i is modeled as the average of its two neighbors,

f(u_{i−1}, u_{i+1}) = (u_{i−1} + u_{i+1}) / 2    (1)

so that the error of the model at pixel i is

e_i = u_i − f(u_{i−1}, u_{i+1}) = u_i − (u_{i−1} + u_{i+1}) / 2    (2)

A Gaussian PDF is assigned to this error:

P(e_i | σ) = 1 / (√(2π) σ) · exp(−e_i² / (2σ²))    (3)

The joint posterior of all the pixels given the data is P(u | d, I)    (4)

and the marginal posterior of a single pixel u_j follows by integrating out all the other pixels:

P(u_j | d, I) = ∫ P(u | d, I) ∏_{i≠j} du_i    (5)

THE PRIOR

We expect some logical dependence between neighboring pixels, and this expectation is translated into the conditional PDF of a pixel given its neighbors:

P(u_i | u_{i−1}, u_{i+1}, σ)    (6)
= 1 / (√(2π) σ) · exp(−[u_i − (u_{i−1} + u_{i+1}) / 2]² / (2σ²))    (7)
Assuming that there is no logical dependence
between the errors e1 , . . . , ev , the multivariate PDF
of all the errors is a product of the univariate PDFs.
Then, by making the change of variable from ei to ui
we find the following multivariate PDF for the pixels
u1 , . . . , uv .
P(u_1, ..., u_v | u_0, u_{v+1}, σ)
= 1 / ((2π)^{v/2} σ^v) · exp(−(1/(2σ²)) ∑_{i=1}^{v} [u_i − (u_{i−1} + u_{i+1}) / 2]²)    (8)
The boundary pixels are treated separately. In fact, these two pixels are assigned to the first and last positions and presented as u_0 = v_1 and u_{v+1} = v_{v+2}. As a result of using the principle of Maximum Entropy, the PDF of the boundary pixel u_0 is obtained in Equation 9, and a similar equation can be established for u_{v+1}.
P(u_0 | u_1, σ) = 1 / (√(2π) σ) · exp(−(u_0 − u_1)² / (2σ²))    (9)
Combining Equations 8 and 9 using Bayes' theorem, the next equation is obtained. This equation is written in a matrix form, where u is the vector of pixel positions:

P(u_0, u_1, ..., u_{v+1} | σ) = 1 / ((2π)^{(v+2)/2} σ^{v+2}) · exp(−Q / (2σ²))    (10)
where

Q = u^T R u

and R is the symmetric, banded (v + 2) × (v + 2) matrix obtained by collecting the quadratic coefficients of the pixels u_0, u_1, ..., u_{v+1} in the exponents of Equations 8 and 9 (its visible entries include values 1.5 at the corners, 2 and 3 on the interior diagonal, and 0.5 on the outer bands).
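The quadratic form Q can be evaluated without writing R out explicitly, by summing the squared interpolation errors of Equation 8 and the two boundary terms of Equation 9 (a sketch; the unit weight on the boundary terms is an assumption of this illustration):

```python
import numpy as np

def prior_exponent_q(u):
    """Q for the pixel vector u = (u_0, u_1, ..., u_{v+1}): squared
    interpolation errors (Equation 8) plus the two boundary terms
    (Equation 9), here assumed to enter with unit weight."""
    e = u[1:-1] - 0.5 * (u[:-2] + u[2:])     # e_i = u_i - (u_{i-1}+u_{i+1})/2
    return float(np.sum(e * e) + (u[0] - u[1]) ** 2 + (u[-1] - u[-2]) ** 2)

# A linear ramp has zero interpolation error; only the boundary terms remain
q_ramp = prior_exponent_q(np.arange(5.0))    # (0-1)^2 + (4-3)^2 = 2.0
```

Smooth (locally linear) pixel vectors thus get a small Q and a high prior probability, which is exactly the dependence between neighboring pixels the prior is meant to express.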
THE LIKELIHOOD

Apart from our model and prior, we also have n + 2 non-overlapping data points, n ≤ v. These data points can be assigned arbitrarily to any pixel u_c, where c is an element of the vector c described in Section 2. The value of c corresponds to the location of the observed data in terms of the pixel numbers (see Figure 2). The error of the model at the location of any observed data point is defined as

e_c = u_c − d_c    (11)

and, as for the prior, a Gaussian PDF with standard deviation δ is assigned to this error:

P(e_c | δ) = 1 / (√(2π) δ) · exp(−e_c² / (2δ²))    (12)
Figure 2. An illustration of the pixels to which the data points are assigned. The '–' symbols represent the evaluated values in the pixels.
Substituting 11 into 12 and making a change of variable from the error ec to the data dc , the likelihood
function can be obtained according to Equation 13.
P(d_c | u_c, δ) = 1 / (√(2π) δ) · exp(−(d_c − u_c)² / (2δ²))    (13)

Assuming the errors at the different data locations to be independent, the likelihood of all the data is the product of the individual terms:

P(d_1, ..., d_n | u_1, ..., u_n, δ) = 1 / ((2π)^{n/2} δ^n) · exp(−(1/(2δ²)) ∑_c (d_c − u_c)²)    (14)
Equation 19 is conditional on the unknown parameters σ and δ, but since we don't know these parameters we will eventually want to integrate them out as nuisance parameters. We first assign Jeffreys priors to these unknown parameters:

P(σ) ∝ 1/σ,   P(δ) ∝ 1/δ    (18)
In matrix form, the likelihood reads

P(d_1, ..., d_n | u_1, ..., u_n, δ) = 1 / ((2π)^{n/2} δ^n) · exp(−(1/(2δ²)) (d − Su)^T (d − Su))    (15)

and the joint posterior of the pixels and the two scale parameters becomes

P(u_0, ..., u_{v+1}, σ, δ | d, I) ∝ 1 / (σ^{v+3} δ^n) · exp(−(1/(2δ²)) (d − Su)^T (d − Su) − Q / (2σ²))    (19)
where d is a padded vector of length v + 2 in which the data points have been coupled with their corresponding pixels, and S is a diagonal matrix with entry 1 for the pixels to which data points are assigned and 0 everywhere else.

For example, for the grid in Figure 1, with d = (0, d_1, 0, ..., 0, d_2, 0, d_3), the S matrix becomes:
S = diag(0, 1, 0, ..., 0, 1, 0, 1)

i.e. a diagonal matrix with ones exactly at the positions of the data points d_1, d_2 and d_3, and zeros everywhere else.
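The padded vector d and the selection matrix S can be built as follows (a sketch; the grid size, pixel indices and data values are hypothetical):

```python
import numpy as np

def build_s_and_d(v, data):
    """Return the (v+2) x (v+2) diagonal selection matrix S (entry 1 where a
    pixel carries a data point, 0 elsewhere) and the padded data vector d."""
    d = np.zeros(v + 2)
    s_diag = np.zeros(v + 2)
    for c, d_c in data.items():   # data: pixel index -> observed value
        d[c] = d_c
        s_diag[c] = 1.0
    return np.diag(s_diag), d

# Hypothetical grid with v = 6 interior pixels and data at pixels 1, 5 and 7,
# mirroring the pattern d = (0, d1, 0, ..., 0, d2, 0, d3) above
S, d = build_s_and_d(6, {1: 2.5, 5: 1.0, 7: 3.0})
```

Because S is a 0/1 diagonal matrix, the residual (d − Su) in Equations 15 and 19 is nonzero only at the pixels that actually carry observations.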
P(u_j | d, I) ∝ ∫∫ ∫···∫ 1 / (σ^{v+3} δ^n) · exp(−(1/(2δ²)) (d − Su)^T (d − Su) − (1/(2σ²)) u^T R u) dσ dδ du_0 ··· du_{v+1} (except du_j)    (20)
For actual evaluation of Equation 20, we refer the
interested reader to ( Bretthorst 1992).
THE POSTERIOR

Combining the prior in Equation 10 with the likelihood presented in Equation 15, we get a function which is proportional to the posterior PDF of all the pixels.

7 ALGORITHM
NUMERICAL EXAMPLE
One of the important research topics in hydraulic engineering focuses on the impact of water waves on walls and other coastal structures, which creates velocities and pressures with magnitudes much larger than those associated with the propagation of ordinary waves under gravity. The impact of a breaking wave can generate pressures of up to 1000 kN/m², which is equal to 100 meters of water head. Although many coastal structures are damaged by breaking waves, very little is known about the mechanism of impacts. Insight into the wave impacts has been gained by investigating the role of entrained and trapped air in wave impacts. In this case, a simplified model of the maximum pressure of ocean waves on coastal structures is presented by Equation 21.
P_max = C p k u² / d    (21)

(22)

0.5897 × 10¹⁰
0.3126 × 10⁹
+ 0.5265 × 10¹⁰ u_j + 0.1339 × 10¹⁰ u_j²    (23)
points, we get a more precise PDF. Since this interval is small enough, we can assume that we have obtained enough accuracy. Therefore, the simulation effort has been reduced by 67% for the presented numerical example.

In fact, the number of simulations in the Monte Carlo technique depends on several factors. The most important ones are the tolerance and the distance between pixels defined for the analysis. In other words, to get a more precise result we need to implement more data points. Meanwhile, a higher number of pixels leads to a higher accuracy.
DISCUSSION

Figure 7. Probability distribution function of the variable assigned to pixel j = 210. In panel (a) only the information of 2 data points is considered, while in panels (b) and (c) the information of 20 and 90 pixels is considered, respectively.
10 CONCLUSION
Occupational safety
Diego Sagasti
División de Realidad Virtual, EUVE – European Virtual Engineering, Spain
Lucía Jordá
Unidad de Ingeniería de Producto, AIMME – Instituto Tecnológico Metalmecánico, Spain
ABSTRACT: Virtual Reality (VR) is emerging as an important tool in the industry sector to simulate human-machine interaction, providing significant findings to improve occupational and industrial safety. In this paper, several VR applications and tools developed for industries in the manufacturing and chemical sectors are presented. These developments combine VR simulations, immersive 3D dynamic simulation and motion capture, addressing risk assessment in the design process, in normal operation and in training.
INTRODUCTION

Virtual Reality is commonly understood as a computer simulation that uses 3D graphics and devices to provide an interactive experience. It is nowadays being used in multiple sectors, such as entertainment (video games), industry, marketing, the military and medicine. In the industry sector, this technology is emerging as an important tool to simulate and evaluate human-machine interaction, especially when this interaction may have important consequences for the safety of people, processes and facilities, or for the environment.

VR training applications are probably the most broadly used. The military, aeronautic and nuclear industries develop virtual scenarios in order to train operators and so avoid risky and costly operations in the real world. Also, simulators of fork-lift trucks and gantry cranes have been developed to help workers reduce labour risks (Helin K. 2005, Iriarte et al., 2005).

Beyond the training applications, some research has been done to study the capabilities of VR as a technology for studying different safety aspects (risk identification, machinery design, layout and training of operators) in a manufacturing factory (Määttä T., 2003). Probably the sector that has shown the greatest interest in these technologies in recent years has been the automotive industry, where VR is being used not only as a technology oriented to the final user
RESULTS

Figure 1. Schematic illustration of the UDS-Lab capabilities.
Figure 7.

developed in an industrial process where training was identified as a key factor for occupational health and safety improvement. The tool was prepared to be used directly by industry trainers, and it was based on a mixture of real video records, 3D simulation (CATIA/DELMIA) and recognised ergonomic evaluation methodologies. With these software tools, workers are shown the incorrect body positions they adopt in the execution of their tasks, the ergonomic risks derived from them, and the correct body positions they should adopt, together with the resulting improvement (see Figure 7).
Figure 5 & 6.
workplace.
FUTURE PERSPECTIVES

Virtual Reality is currently being used as a promising technology to improve industrial and occupational health and safety. In this paper, some applications of this technology developed by this research group have been presented.

In the near future, our research group plans three main activities. Firstly, to perform more experiences with immersive VR, analysing different alternatives to overcome the technical difficulties outlined above, and developing and testing more work-environment scenarios. Secondly, to export the gained background to other industrial sectors, mainly in education and training applications. Finally, it should be mentioned that the experience obtained from these developments is being used to initiate experiences in using VR technology to improve the labour integration of people with physical and cognitive disabilities (López de Ipiña, J. 2007).
ACKNOWLEDGMENTS
REFERENCES
Helin, K. et al., Exploiting Virtual Environment Simulator for the Mobile Working Machine User Interface Design. VIRTSAFE Seminar, 04–06.07.2005, CIOP-PIB, Warsaw.
Iriarte Goñi, X. et al., Simulador de carretillas elevadoras para la formación y disminución en riesgos laborales: motor gráfico y simulación dinámica. 7º Congreso Iberoamericano de Ingeniería Mecánica, México D.F., 12 al 14 de Octubre de 2005.
Monacelli, G., Elasis S.C.P., VR Applications for reducing time and cost of Vehicle Development Process. 8th Conference ATA on Vehicle Architecture: Products, Processes and Future Developments, Florence, 2003.05.16.
Applying the resilience concept in practice: A case study from the oil
and gas industry
Lisbeth Hansson & Ivonne Andrade Herrera
SINTEF, The Foundation for Scientific and Industrial Research at the University of Trondheim
Trond Kongsvik
NTNU Social Studies LTD, Norway
Gaute Solberg
StatoilHydro
ABSTRACT: This paper demonstrates how the resilience concept (Hollnagel et al., 2006) can be used as a perspective for reducing occupational injuries. The empirical background for the paper is a case study on an oil and gas installation in the North Sea that had a negative trend in LTI (Lost Time Injury) rates. The HSE (Health, Safety, Environment) administration initiated a broad process that included the crew on the installation, the onshore administration and a group of researchers to improve the situation. Instead of focusing the analysis on incident reports, we applied a proactive view. Thus, we adapted a model for resilience that was used in a development process. In the context of occupational accidents, we focused on the following factors: sufficient time, knowledge and competence, resources, and an inclusive working environment. These factors have been identified as important for handling complexity and as necessary for the organization to be able to anticipate, perceive and respond to different constellations of conditions. This paper illustrates to what extent the concept of resilience was fruitful analytically and as a reflection tool in the development of new HSE measures that are now being implemented. The links between the resulting HSE measures and the qualities of the resilience concept are discussed.
1 INTRODUCTION

1.1 Background
that make the organization prepared for the unexpected. This can be regarded as a proactive approach to safety. In a proactive view, individuals and organizations must adjust to cope with current conditions. These adjustments handle different constellations of conditions that can produce accidents, and also successes. Thus a resilient organization (or system) can adjust its functioning prior to or following changes and disturbances, so as to continue working in the face of continuous stresses or major mishaps. Here, variability is regarded as potentially positive for safety, in line with what Hollnagel (2008) labels Theory Z.

The study is limited to one installation, and can be regarded as a case study. This implies the exploration of a 'bounded system' over time, involving several data sources rich in context (Creswell 1998).
1.2
Figure 1. The resilience model: anticipation (knowing what to expect), attention (knowing what to look for) and response (knowing what to do), supported by knowledge, competence, time, resources and the working environment, and embedded in a continuous process of updating and learning under dynamic developments.
Based on the increasing number of occupational accidents, StatoilHydro initiated a project in cooperation with SINTEF. The scope was to turn the negative trend by identifying safety measures dedicated to the pilot installation, Heidrun TLP.

The action research approach that was used on Heidrun TLP is part of an organizational development (OD) process (Greenwood & Levin 1998). The goal of this process was to increase the quality of adjustments and to improve occupational safety in the organization. In the OD process we focused on working conditions that influence the ability to make proper adjustments: sufficient time, knowledge, competence and resources (Hollnagel and Woods, 2005). In our early information gathering we saw that the psychosocial work environment on Heidrun TLP could also have a significant influence on the ability to make proper adjustments, and we therefore added this as a condition.

With this background we developed a working model that was used to structure the interviews and as a starting point for the discussions in the search conference.

Knowledge and Competence were merged in the model for pedagogical reasons (although the difference was explained orally) and the factor psychosocial work environment was added, resulting in four main factors as indicated in Figure 2 below.
Based on initial discussions with the safety staff, some external factors were identified that were anticipated to influence the safety level on Heidrun TLP. These external factors are: the safe behaviour programme (a large safety campaign), open safety talk, cost & contracts, organisational changes, onshore support and, last but not least, management.

Figure 2.

All factors in the resilience model were discussed through approximately 40 semi-structured interviews, mostly with offshore workers. All kinds of positions were covered, including both internal StatoilHydro and contractor employees. The main findings were extracted from the interviews and sorted by the factors in the resilience model.

The findings from the interviews were the main input to the creative search conference, which also gathered around 40 persons. The participants represented both onshore and offshore personnel, and both internal and external StatoilHydro personnel. The two-day conference was arranged as a mix of plenary and group sessions to discuss and suggest safety measures. The second day of the conference was dedicated to the identification of measures, i.e. how Heidrun could become a more resilient organization.
4 RESULTS
The safety measures identified in the search conference were sorted and defined as HSE activities. These were presented and prioritized in a management meeting in the Heidrun organization, and the end result of the project was nine HSE activities:

– Safety conversations
– Buddy system
– Collaboration in practice
– The supervisor role
– Consistent management
– Clarification of the concept 'visible management'
– Meeting/session for chiefs of operation
– Risk comprehension course
– Visualisation of events
5 DISCUSSION
Three main qualities are required for a resilient organization: anticipation, attention and response.
These qualities are described theoretically in the
theory section, but as an introduction to the discussion
we will give a practical example related to occupational
accidents.
If a group of people onboard an oil installation
is to install a heavy valve together, they need to be
well coordinated. They need knowledge about how
to carry out the operation, including who is responsible for what. Competence regarding the risky situations
they may encounter in this operation is also essential.
This knowledge represents anticipation: knowing
what to expect. As the operation proceeds they also
need competence in how to interpret the situation and what to look for to be aware of risky
situations; attention is needed. When a risky situation is observed, it is crucial that they respond
to it, and respond in a correct way. It is not unusual
that an employee does not respond when he sees that an
employee in a higher position does not follow safety procedures. Trust is essential to secure response. Time and
resources are also important, so that critical situations are not left without a response because people want to get the
job done in due time.
How can the identified HSE activities potentially
influence attention, anticipation and response? Table 1 shows how we interpret this.
The activity Safety conversations covers all conversations where safety is an issue, and the purpose is
to enhance the quality of these conversations. When
[Table 1: matrix indicating, with x and (x) marks, which of the nine HSE activities influence anticipation, attention and response; the cell layout is not recoverable.]
6 CONCLUSION
REFERENCES
Creswell, J.W. 1994. Research design: Qualitative & quantitative approaches. Thousand Oaks, California: Sage Publications.
Greenwood, D.J. & Levin, M. 1998. Introduction to action research: social research for social change. Thousand Oaks, California: Sage Publications.
Heinrich, H.W. 1931. Industrial accident prevention. New York: McGraw-Hill.
Hollnagel, E. & Woods, D. 2005. Joint Cognitive Systems: Foundations of Cognitive Systems Engineering. Taylor and Francis, USA.
Hollnagel, E., Leveson, N. & Woods, D. 2006. Resilience Engineering: Concepts and Precepts. Aldershot: Ashgate.
Hollnagel, E. 2007a. Resilience Engineering: Why, What and How. Viewgraphs presented at the Resilient Risk Management Course, Juan les Pins, France.
Hollnagel, E. 2007b. Principles of Safety Management Systems: The Nature and Representation of Risk. Viewgraphs presented at the Resilient Risk Management Course, Juan les Pins, France.
Hollnagel, E. 2008. Why we need Resilience Engineering. Ecole des Mines de Paris, Sophia Antipolis, France.
Reason, J. & Hobbs, A. 2003. Managing Maintenance Error. Aldershot: Ashgate.
Weick, K. & Sutcliffe, K.M. 2001. Managing the Unexpected: Assuring High Performance in an Age of Complexity. University of Michigan Business School Management Series. John Wiley & Sons, Inc., USA.
Westrum, R. 1993. Cultures with Requisite Imagination. In Wise, J., Hopkin, D. & Stager, P. (eds), Verification and Validation of Complex Systems: Human Factors Issues. New York: Springer-Verlag, pp. 401-416.
ABSTRACT: A model of OHS management was developed using the safe place, safe person and safe systems
framework. This model treats OHS as a collective responsibility, and incorporates three different
perspectives: an operational level (safe place), an individual level (safe person) and a managerial level (safe
systems). This paper describes the qualitative methodology used in the development of the assessment tool,
including the lessons learnt from the pilot study and preliminary results. This research also promotes the use of a
new style of reporting that identifies areas of strength as well as vulnerabilities, and uses discreet, non-emotive,
neutral language to encourage an objective, constructive approach to the way forward. The preliminary results
from the pilot study and peer review using the Nominal Group Technique were very encouraging, suggesting
that this technique would be useful in directing a targeted approach to systematic OHS management, and that
the safe place, safe person, safe system framework was suitable to be taken to the next stage of wider case study
application.
INTRODUCTION
current control measures fail (the raw hazard profile); secondly, an assessment was to be made of the
risk remaining once existing prevention and control
strategies had been applied (the residual risk profile).
This was to give an indication of the amount of risk
reduction that had been achieved and to help identify
opportunities for improvement. This was performed
using a risk ranking matrix factoring in a combination of both severity and likelihood, with a resulting
allocation of either high, medium-high, medium or
low. It should be noted that non-emotive language was deliberately selected for providing feedback
about hazard profiles, as this was considered an important step in breaking down barriers to the improvement
process and avoiding blame. For example, words such
as 'catastrophic' or 'extreme' were not used in risk
ranking labels. Where elements were handled with
expertise this was recognised and fed back to the organisation by giving it a risk ranking of zero, or 'well done'.
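The severity-likelihood combination described above can be sketched as a small function. The 1-4 scales and the cut-off points below are illustrative assumptions, not the matrix actually used in the study:

```python
def risk_rank(severity, likelihood):
    """Combine severity and likelihood (each 1 = lowest, 4 = highest) into one
    of the four non-emotive risk ranking labels used in the feedback reports."""
    if not (1 <= severity <= 4 and 1 <= likelihood <= 4):
        raise ValueError("severity and likelihood must be integers from 1 to 4")
    score = severity * likelihood  # 1..16
    if score >= 12:
        return "high"
    if score >= 8:
        return "medium-high"
    if score >= 4:
        return "medium"
    return "low"
```

A well-handled element would bypass the matrix altogether and be reported with a risk ranking of zero.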
Also, an assessment was made of the level of formality applied to the systems invoked, and of whether
or not all the elements proposed by the safe place,
safe person, safe system framework had in fact been
addressed by the organisation. The level of formality
was also assessed to recognize where informal systems
were used to manage risks, but did not contain a high
Electrical
All electrical equipment should be handled appropriately by those who are suitably qualified and kept in good working order. Other electrical hazards include
electric shock; static electricity; stored electrical energy, the increased dangers of high voltage equipment and the potential for sparks in flammable/explosive
atmospheres. Where live testing is necessary, only appropriately trained and qualified personnel should do so in compliance with relevant legislation and codes.
The risk is that someone may be injured or fatally electrocuted or cause a fire/explosion by creating sparks in a flammable or explosive atmosphere.
Possible Risk Factors
Worn Cables
Isolation Procedures
Use of Authorised/ Qualified Repairers
Lead/Cable Checks
Circuit Breakers
Use of Residual Current Detectors
High Voltage Procedures
Static Electricity: Use of Earthing Devices or Non-Conducting Materials
Lightning Protection
Stress Awareness
Personal skills, personality, family arrangements, coverage of critical absences, resourcing levels and opportunities for employees to have some control over
work load are factored into ongoing work arrangements so as not to induce conditions that may be considered by that particular employee as stressful. Plans are
available for dealing with excessive emails and unwelcome contacts.
The risk is that the employee becomes overwhelmed by the particular work arrangements, and is unable to perform competently or safely due to the particular
circumstances.
Possible Risk Factors
Shift Work
High Level of Responsibility
Frequent Critical Deadlines
No Control Over Work Load
Lack of Necessary Skills
Figure 1.
Examples of supporting material and prompts for elements in the revised assessment tool.
METHOD
Table 1. Elements of the safe place, safe person, safe systems framework (a sixty-element matrix; the recoverable elements are listed here without their column positions): OHS Policy; Ergonomic Assessments; Access/Egress; Equal Opportunity/Anti-Harassment; Training Needs Analysis; Inductions - Contractors/Visitors; Plant/Equipment; Skill acquisition; Work Organisation; Amenities/Environment; Accommodating Diversity; Electrical; Noise; Hazardous Substances; Biohazards; Job Descriptions; Training; Behaviour Modification; Health Promotion; Networking, Mentoring, Further Education; Conflict Resolution; Employee Assistance Programs; Radiation; Installations/Demolition; Preventive Maintenance; Modifications - Peer Review/Commissioning; Security - Site/Personal; Emergency Preparedness; Housekeeping; Plant Inspections/Monitoring; Risk Review; Consultation; Legislative Updates; Procedural Updates; Record Keeping/Archives; Customer Service - Recall/Hotlines; Incident Management; Self Assessment Tool; Audits; System Review; Goal setting; Accountability; Due Diligence; Review/Gap Analysis; Resource Allocation/Administration; Procurement with OHS Criteria; Supply with OHS consideration; Competent Supervision; Safe Working Procedures; Communication.

Figure 2. Examples of hazard distributions without interventions (left) and with interventions (right). (Pie-chart slice labels: Safe Systems 30%, Safe Person 29%; Safe Place 37.3%, Safe Person 33.3%.)
The top five ideas were voted upon using a weighting system: five for the most important idea down to
one for the least important of the five ideas selected.
The votes were made in confidence and collected for
counting. The purpose of having all of the members
together was to provide synergy and the opportunity
for explanation by the authors of the assessment tool,
as well as to share any lessons learnt from the pilot study.
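The weighted voting step can be tallied as below; the ballots and idea names are invented for illustration, and only the five-down-to-one weighting comes from the text:

```python
from collections import Counter

# Each ballot lists a member's top five ideas in rank order;
# rank 1 (most important) earns 5 points, rank 5 earns 1 point.
ballots = [
    ["risk review", "training", "audits", "housekeeping", "consultation"],
    ["training", "risk review", "consultation", "audits", "inductions"],
]

scores = Counter()
for ballot in ballots:
    for rank, idea in enumerate(ballot):
        scores[idea] += 5 - rank  # 5, 4, 3, 2, 1 points

top_ideas = scores.most_common()  # highest-scoring ideas first
```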
Figure 3. An example of the risk reduction graph in the revised report (risk ranking, 0-80, for Safe Place, Safe Person and Safe Systems; dark columns: without interventions; light columns: with interventions).
Inductions - Contractors/Visitors
All visitors and contractors to the workplace are made aware of any hazards that they are likely to encounter
and understand how to take the necessary precautions to avoid any adverse effects. Information regarding
the times of their presence at the workplace is recorded to allow accounting for all persons should an
emergency situation arise. Entry on site is subject to acceptance of site safety rules where this is applicable.
The risk is that people unfamiliar with the site may be injured because they were unaware of potential hazards.
Procedures for contractors and visitors are working well.
Incident Management
A system is in place to capture information regarding incidents that have occurred, to prevent similar incidents from recurring
in the future. Attempts are made to address underlying causes, whilst also putting in place actions to enable a quick
recovery from the situation. Root causes are pursued to the point where they are within the organisation's control or
influence. Reporting of incidents is encouraged with a view to improvement rather than blame. Near misses/hits are also
reported, and decisions to investigate are based on the likelihood of, or potential for, more serious consequences. Investigations are
carried out by persons with the appropriate range of knowledge and skills.
The risk is that information that could prevent incidents from recurring is lost, and employees and others at the workplace
continue to contract illnesses or be injured.
This is used more as a database for reporting rather than as a problem solving tool. Selective application of
root cause analysis, corrective action and evaluation may yield significant improvements in this area.
Figure 4.
Excerpts from the preliminary report illustrating the style of reporting used (above).
themes and paradigm shifts. In this sense, the validation process for qualitative research does not try to
achieve the same goals as quantitative research; the
aim is instead to provide multiple perspectives, and
in doing so overcome the potential for bias in each
individual method. A collage is then formed to give
depth and breadth to the understanding of complex
issues (Yin, 1989).
RESULTS
authors. The final framework comprised twenty elements for each aspect of the model, making a sixty-element matrix, three more than the original model.
A Risk Reduction graph was added to the final report
to increase clarity and assist in the interpretation of the
final results (see Figure 3).
Three new elements were added: Receipt/Despatch
to cover OHS issues associated with transportation of
materials to and from the workplace; Personal Protection Equipment to address issues related to the safe use
of PPE; and Contractor Management to ensure that all
the lines of responsibility are well understood and that
all the information necessary has been exchanged.
Details of changes within elements in the framework model were:
Training Needs Analysis was incorporated into
Training.
Work Organisation - Fatigue and Stress Awareness
was modified to remove stress awareness, which
became its own element.
Noise included more information on vibration.
DISCUSSION
Figure 5. Injury and incident results after completion of phase 2 of the pilot study (numbers of reports per month, June to January: LTIs, medical treatments, first aid treatments and reports).
CONCLUSION
The use of a pilot study and the Nominal Group Technique to trial the application of the safe place, safe
person, safe system model through the development
of an assessment tool was found to be very rewarding and worthwhile, and essential to the integrity
of the research being undertaken. The results have
enabled this research to be taken to the next level:
multiple case studies, which are currently in progress
and near completion. This qualitative approach is
highly recommended for this particular field of
research and preliminary results from the case studies
suggest that there is much scope for future development and further work, in particular for customising
the current OHS assessment tool for specific industry
fields. Furthermore, the application of this tool was
not limited to small to medium enterprises as originally thought, and may provide a useful benchmarking
exercise across larger organisations where they are
comprised of smaller subsidiary groups.
ACKNOWLEDGEMENTS
The authors gratefully acknowledge Dr. Carlo
Caponecchia, who was facilitator of the nominal group
session, and all contributors to the session.
REFERENCES
Delbecq, A.L., Van de Ven, A. & Gustafson, D.H. 1975a. Profile of Small Group Decision Making. In: Group Techniques for Program Planning. Glenview, Illinois: Scott, Foresman and Company, pp. 15-39.
Delbecq, A.L., Van de Ven, A. & Gustafson, D.H. 1975b. Group Decision Making in Modern Organisations. In: Group Techniques for Program Planning. Glenview, Illinois: Scott, Foresman and Company, pp. 1-13.
Landeta, J. 2005. Current validity of the Delphi method in social sciences. Technological Forecasting and Social Change, in press, corrected proof.
Linstone, H.A. & Turoff, M. 1975. Introduction. In: The Delphi Method: Techniques and Applications. Reading, Massachusetts: Addison-Wesley Publishing Company, pp. 1-10.
Gallagher, C. 1997. Health and Safety Management Systems: An Analysis of System Effectiveness. A Report to the National Occupational Health and Safety Commission. National Key Centre in Industrial Relations.
Makin, A.-M. & Winder, C. 2006. A new conceptual framework to improve the application of occupational health and safety management systems. In: Proceedings of the European Safety and Reliability Conference 2006 (ESREL 2006), Estoril, Portugal. Taylor and Francis Group, London.
Makin, A.-M. & Winder, C. 2007. Measuring and evaluating safety performance. In: Proceedings of the European
Standards Australia. 2001b. AS/NZS 4804:2001 Occupational Health and Safety Management Systems - General Guidelines on Principles, Systems and Supporting Techniques. Sydney: Standards Australia International Ltd.
Yin, R. 1989. Case Study Research: Design and Methods. Newbury Park, US: Sage Publications.
ABSTRACT: The field of knowledge translation and exchange is growing, particularly in the area of health services. Programs that advance bench-to-bedside approaches have found success in leveraging new research into
a number of medical fields through knowledge translation strategies. However, knowledge translation remains
an understudied area in the realm of occupational health, a factor that is interesting because workplace health
research is often directly applicable to risk reduction activities. This research project investigated knowledge
translation in one occupational setting, small machine shops, where workers are exposed to Metal Working
Fluids (MWF), which are well established dermal and respiratory irritants. Using the mental models approach,
influence diagrams were developed from the expert interviews and compared with qualitative interview data from
workers. Initial results indicated that the sphere of influence diagrams would benefit from the inclusion of other
stakeholders, namely policy makers and product representatives. Overall, findings from this research suggest
that there is only minimal transfer of scientific knowledge regarding the health effects of metal working to
those at the machine shop level. A majority of workers did not perceive metal working fluids to be hazardous
to their health. Of note was the finding that MWF product representatives were rated highly as key sources
of risk information. The translation of scientific knowledge to this occupational setting was poor, which may
be due to varying perceptions and prioritizations of risk between stakeholders, lack of avenues through which
communication could occur, an absence of accessible risk information and the small size of the workplaces. The
mental models approach proved successful for eliciting information in this occupational context.
INTRODUCTION
Work is a central feature of life for most adults, providing our livelihood and sense of place, and occupying
at least a third of our waking hours. Not surprisingly,
work can have a profound impact on health. There is
growing recognition that many diseases (e.g., asthma
and chronic obstructive lung disease, joint and tendon disorders, stress-related mental health problems,
and some cancers and communicable diseases) can be
caused or augmented by workplace exposures.
Because of its vital role in adult life, the workplace provides a valuable opportunity for promoting
occupational health. A key feature of occupational
health research is that it is often possible to translate research results into direct prevention activities
in the workplace. Thus, the communication or transfer of occupational health risk information has the
potential to have an immediate and profound effect
on work-related morbidity and mortality.
The process of communicating risk information to
workplaces involves workers, managers, engineers,
researchers, health experts, decision-makers, regulatory bodies and governments. Although most occupational health researchers are aware of the need to
communicate with front-line workers and decisionmakers, there has been little research on knowledge
translation or risk communication processes within
occupational settings. To date, understanding the factors affecting the application of knowledge in this field
remains extremely limited.
The primary objective of this research was to investigate how health information is translated in one
occupational setting, machine shops, where workers
are exposed to complex mixtures of chemical and
biological hazards in the form of metal working fluids (MWFs). These exposures have been linked to
occupational asthma, increased non-allergic airway
responsiveness to irritants, chronic airflow obstruction
and bronchitis symptoms (Cox et al., 2003) and have
been the focus of considerable recent regulatory attention in Canada and the US. This project is linked to an
ongoing study of risk factors for lung disease among
tradespeople in British Columbia, Canada, which has
2 METHODS

2.1 Data collection

Data was collected for this project using the Mental Models methodology developed by Morgan et al. at Carnegie Mellon University (Morgan, 2002). This method has been previously applied in an occupational context (Cox et al., 2003; Niewohner, Cox, Gerrard, & Pidgeon, 2004b). The data was collected in two phases, beginning with interviews with scientific experts in the field of MWF exposure and effects, followed by interviews with workers employed in machine shops.

2.1.1 Expert interviews
A member of the study team who is an academic expert on the health effects of MWF compiled a list of experts on MWF and health effects. The list comprised primarily academic researchers from the US and Europe, but also included US government researchers and occupational health professionals. Of this list of experts, the study team was able to contact 16, and 10 of these consented to participate.
The interviews, which were carried out by a single trained research assistant, were conducted over the phone and lasted an average of 30 minutes. The first two interviews were used to pilot test the survey instrument and were therefore not included in the final analysis. The respondents were asked open-ended questions about exposures, health effects, and mitigation strategies relating to MWF in the workplace. They were also asked about their attitudes and practices relating to the communication of their research results to decision-makers in industry and regulatory agencies.

2.1.2 Worker interviews
To recruit machinists, introductory letters were sent to 130 machine shops in the province of British Columbia, Canada, and were followed up with at least one phone call. Twenty-nine workers from 15 different machine shops agreed to participate in an interview. The interviews were conducted by four different trained interviewers. Twenty of the interviews were done at a private location at the worksite, and nine were done over the phone. Each interview took approximately 20-35 minutes. The respondents were asked open-ended questions that were created using guidance from the experts' Mental Model (see 2.2). Workers were queried about their work history and habits, as well as their knowledge of MWF exposure and the personal protection strategies they undertook in the workplace. They were also asked questions about health effects associated with MWFs, including where they would look for information on health effects, and what steps they would take to mitigate these effects. These open-ended questions were often followed up by probes designed to elicit more information on a particular subject matter.

2.2

2.3 Data analysis
For the health effects and information sources analysis, data for each of these constructs was abstracted
from NVivo and reviewed by two members of the
research team who had expertise in the areas of health
effects and risk communication. Data from the workers
were compared and contrasted with the expert model
and areas of both congruence and disconnect were
identified. Results were entered into tables to present
comparisons.
3 RESULTS

3.1 Demographics
Table 1. Machinist demographics.

Characteristic            n      %
Age
  20-29                   3     10%
  30-39                  10     34%
  40-49                  12     41%
  50+                     2      7%
  Unknown                 2      7%
No. of years in trade
  5 to 10                 7     24%
  11 to 15                7     24%
  16 to 20                6     21%
  21 plus                 7     25%
  Unknown                 2      7%
Shop size
  <10 people              5     17%
  11-50 people           13     45%
  50 plus                10     35%
  Unknown                 1      3%
Types of machines
  Manual                  4     14%
  CNC                     5     17%
  Both                   15     52%
  Unknown                 4     14%
  Other                   1      3%
identify the lungs as a potential site of health problems. Of note, nine of the workers (31%) described
having personal experience with either a lung effect
from MWF exposure or feeling MWF mists in their
lungs.
There was greater concurrence between experts'
and workers' awareness of specific dermal conditions
that can occur as a result of MWF exposure, including
rash, dry hands, itchy hands, dermatitis and eczema.
Sixty-two percent of workers could identify a specific
dermal health effect such as eczema, although a further
31% were only able to identify the skin in general as
a potential site for health effects. Forty percent of the
workers said that they had experienced adverse effects
on their hands from MWF exposure.
Four of the experts (44%) discussed the association between cancer and MWF exposure, although
proportionally fewer (17%) of the workers described
MWFs as cancer-causing agents. Of the workers who
described cancer, there was a general tendency to mention smoking and its carcinogenic potential in the same
discussion.
There were health effects that workers described
that experts did not, particularly irritation that could
occur in eyes. Two workers also suggested that MWF
could affect blood.
Within the cohort of workers, 21% stated that
MWFs were not harmful to health, even though in
some cases these workers did note that MWF exposure could cause skin problems. Finally, there were
two people in the worker group who stated that they
were unaware of any potential health effects of MWF
exposure.
3.3 Sources of information
Experts and workers were asked slightly different
questions regarding sources of health and safety
Table 2. Health effects of MWFs described by workers and experts.

                                                Workers (n = 29)   Experts (n = 10)
Described specific health effects that
  can occur in the lungs                            28%                70%
Described specific health effects that
  can occur on the skin                             62%                70%
Described a relationship between MWF
  exposure and cancer                               17%                40%
Central nervous system depression                    3%                10%
Eye irritation                                      17%                 0%
Problems with blood                                  7%                 0%
Poisonous                                            3%                 0%
Stated that MWFs do not cause health effects        21%                 0%
Tables 3 and 4. Sources of health and safety information (captions lost; the surviving item list is paired with the two columns of percentages in extracted order).

They don't - 40 / 86
Occupational health and safety training - 40 / 69
Trade media (magazines, pamphlets) - 30 / 66
Union - 30 / 48
General news media - 10 / 41
MSDS - 10 / 34
Govt agencies - 10 / 28

Table 5.
WorkSafeBC - 31
Government - 17
MSDS - 14
Manufacturer/supplier - 14
Other workers - 3
Union - 3
Researchers - 3
Don't know - 3

Table 6.
Workplace management - 70
Government health and safety agencies - 60
Workers - 50
Industry/Suppliers - 40
Unions - 40
Government (other than safety agency) - 10
Physician - 10
DISCUSSION
Information sources
. . . the communication [with workplace management] was unsuccessful in that I didn't get any
feedback [ . . . ] on what happened next.
CONCLUSION
ABSTRACT: Work-related traffic accidents represent an important economic and public health problem. In this
context, it is important to improve our knowledge of the main factors that influence these accidents, in order to
design better preventive activities. A database from an insurance company is analyzed, containing information on personal characteristics of the individual as well as characteristics of the labor situation.
A Cox model is used to construct a predictive model of the risk of work-related traffic accident. From the
obtained model we study whether personal or labor characteristics act as predictive factors of work-related traffic
accidents.
INTRODUCTION
DATA
Table 1. Categorical variables.

Variable         n        (%)
Sex
  Men            16954    76.2
  Women           5295    23.8
Situapro2cat
  Waged          22053    99.1
  Autonomous       196     0.9
Pertemp
  1              20905    94.0
  2                760     3.4
  3                364     1.6
  4                220     1.0
Topap
  No             21590    97.0
  Yes              659     3.0
Topni
  No             22034    99.0
  Yes              215     1.0
Topspa
  No              3802    17.1
  Yes            18447    82.9
Topspm
  No             20859    93.8
  Yes             1390     6.2
Topspp
  No             19803    89.0
  Yes             2446    11.0
Toptd
  No             21916    98.5
  Yes              333     1.5

Table 2. Cox model.

Variable    β         se(β)      z         p-value   exp(β)   lower .95   upper .95
Age        -0.0537    0.00252   -21.347    0.0e+00   0.948    0.943       0.952
Sex         0.4774    0.04511    10.584    0.0e+00   1.612    1.475       1.761
Pertemp2   -1.1896    0.23726    -5.014    5.3e-07   0.304    0.191       0.485
Pertemp3    1.3117    0.17971     7.299    2.9e-13   3.713    2.610       5.280
Pertemp4   -0.0707    0.23686    -0.299    7.7e-01   0.932    0.586       1.482
Topap       0.4566    0.10238     4.460    8.2e-06   1.579    1.292       1.929

h(t | x) = h0(t) exp(β′x)    (1)
Figure 1.
Nomogram.
with respect to the situation where the work centre belongs to the company. On the other hand, workers with
Pertemp = 3 have a risk 3.713 times the risk of workers with Pertemp = 1. That is to say, the risk increases
considerably if the work centre does not belong to the company and the relationship is through a company of temporary
work. With respect to preventive organization, one conclusion can be drawn: the risk when the preventive
organization is personally assumed by the employer is
1.579 times the risk when it is not.
Of course, these and other conclusions that may be
extracted from the model are provisional, and further
analysis is necessary, by improving the model, handling the information with other models, and also
looking for other databases.
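The hazard-ratio readings above follow directly from the proportional form of the Cox model: the baseline hazard cancels when two worker profiles are compared. A minimal sketch, using the coefficients reported in Table 2 (signs inferred from exp(β)):

```python
import math

# Coefficients from Table 2; Pertemp dummies are relative to Pertemp = 1.
beta = {"Age": -0.0537, "Sex": 0.4774, "Pertemp2": -1.1896,
        "Pertemp3": 1.3117, "Pertemp4": -0.0707, "Topap": 0.4566}

def hazard_ratio(profile_a, profile_b):
    """h(t|a) / h(t|b) = exp(beta . (a - b)); the baseline hazard h0(t) cancels."""
    lp_a = sum(b * profile_a.get(k, 0) for k, b in beta.items())
    lp_b = sum(b * profile_b.get(k, 0) for k, b in beta.items())
    return math.exp(lp_a - lp_b)

# Worker hired through a company of temporary work (Pertemp = 3) vs. the
# reference situation (Pertemp = 1), all else equal: roughly 3.71.
ratio = hazard_ratio({"Pertemp3": 1}, {})
```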
4 NOMOGRAM
The model may be represented by means of nomograms. A nomogram is an easily interpretable graphical tool, so it is an interesting way to take advantage
of the model. Figure 1 depicts a nomogram for predicting the probability of no occurrence of a work-related traffic
accident at one year and two years after the individual
joins the company. Typographical reasons force us to
rotate the figure, but the natural way of looking at it is
obvious.
To read the nomogram, draw a vertical line from each tick
mark indicating predictor status up to the top axis (Points).
These statistical analyses and the nomogram were performed using S-Plus software (PC Version 2000 Professional; Insightful Corp, Redmond, WA) with the additional Design library (Harrell 2001).
5 DISCUSSION
REFERENCES

Bomel 2004. Safety culture and work related road accidents. Road Safety Research Report 51, Department for Transport, London.
Boufous, S. & Williamson, A. 2006. Work-related traffic crashes: A record linkage study. Accident Analysis and Prevention 38 (1), 14-21.
Cellier, J., Eyrolle, H. & Bertrand, A. 1995. Effects of age and level of work experience on occurrence of accidents. Perceptual and Motor Skills 80 (3, Pt 1), 931-940.
Clarke, D., Ward, P., Bartle, C. & Truman, W. 2005. An in-depth study of work-related road traffic accidents. Road Safety Research Report 58, Department for Transport, London.
Cox, D.R. 1972. Regression models and life tables (with discussion). Journal of the Royal Statistical Society Series B 34, 187-220.
DfT 2003. Driving at Work: Managing work-related road safety. Department for Transport, HSE Books.
Harrell, F.E. 2001. Regression Modeling Strategies: With Applications to Linear Models, Logistic Regression, and Survival Analysis. Springer.
Harrell, F.E., Califf, R.M. & Pryor, D.B. 1982. Evaluating the yield of medical tests. JAMA 247, 2543-2546.
Híjar, M., Carrillo, C. & Flores, M. 2000. Risk factors in highway traffic accidents: a case control study. Accident Analysis & Prevention 32 (5), 703-709.
Lawton, R. & Parker, D. 1998. Individual differences in accident liability: A review and integrative approach. Human Factors 40, 655-671.
Lewin, I. 1982. Driver training: a perceptual motor skill approach. Ergonomics 25, 917-925.
López-Araujo, B. & Osca, A. 2007. Factores explicativos de la accidentalidad en jóvenes: un análisis de la investigación. Revista de Estudios de Juventud 79.
Prentice, R.L., Williams, B.J. & Peterson, A.V. 1981. On the regression analysis of multivariate failure time data. Biometrika 68, 373-389.
Wei, L.J., Lin, D.Y. & Weissfeld, L. 1989. Regression analysis of multivariate incomplete failure time data by modeling marginal distributions. Journal of the American Statistical Association 84, 1065-1073.
WRSTG 2001. Reducing at-work road traffic incidents. Work Related Road Safety Task Group, HSE Books.
ABSTRACT: Given the need of organisations to express their performance in the Health and Safety domain
through positive indicators, such as gains within that domain and proactive actions carried out to improve work
conditions, a research effort has been carried out to respond to this particular need, or at least to make a
valid contribution to that purpose. As a result of this effort, a performance scorecard on occupational Health and
Safety was developed and is briefly presented in this paper.
INTRODUCTION
Table 1. Example of the Performance Benchmarking Scorecard (summary) for the case study. [Flattened in extraction; columns: analytic domain; analytic segment; number of indicators; baseline weight; multiplier M (a); weighted segment (partial); multiplier M (b); weighted domain (partial). Domains and segments covered: organizational design (technical covering, systemic focus); organizational culture (values, behaviour standards, basic assumptions); occupational health services (surveillance, promotion); operational service of occupational safety & hygiene (organization, accidents incidence, formation, prevention, protection); internal emergency plan (planning, attributes and responsibilities, mechanisms); control of environmental work conditions (mechanisms of monitoring and/or measurement, corrective action); monitoring and/or measurement services (maintenance, safety instructions); safety equipment. Total: 90 indicators, overall score 0.740. The remaining cell values could not be unambiguously reassigned.]
(a) The letter M represents the multiplier associated with the baseline weight and with the maximum score that can be obtained in a specific segment. (b) The letter M represents the multiplier associated with the segment and with the maximum score that can be obtained in a specific domain.
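The weighting scheme described in the footnotes (baseline weight multiplied by a multiplier M) can be sketched as follows. The aggregation into a domain score is an assumption for illustration, since the paper does not spell out the exact formula, and the numbers are placeholders:

```python
# Hypothetical scorecard aggregation. Assumed (not stated explicitly in the
# paper): weighted segment = baseline weight * segment multiplier M(a), and
# weighted domain = domain multiplier M(b) * sum of weighted segments.
segments = [
    # (analytic segment, baseline weight, multiplier M(a)) -- illustrative values
    ("Technical covering", 1.00, 0.70),
    ("Systemic focus", 0.25, 0.30),
]
M_b = 0.70  # illustrative domain multiplier M(b)

weighted = {name: w * m for name, w, m in segments}
domain_score = M_b * sum(weighted.values())
print(weighted, round(domain_score, 4))
```

The same structure would be repeated per domain and the weighted domains summed into the overall score (0.740 in the case study).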
Internal emergency plan: excellent basis of structuring and planning, with the organization ensuring the main procedural mechanisms of response to emergencies (plans, responsibilities and devices). However, there is an operational weakness, because the organization has no evidence of its operational ability (for example, no fire drills were carried out).
Monitoring and/or measurement service: low level of monitoring of the environmental conditions arising from the adopted processes. The organization acknowledged the existence of some risk factors of an ergonomic nature and of occupational exposure to harmful agents, such as noise. However, it has not developed specific procedures for evaluating the exposure levels of the workers. This has become a critical segment, since it penalizes the organization.
Work safety equipment: great strategic importance is given both to the acquisition and maintenance,
CONCLUSIONS
The performance scorecard that has been developed and implemented shows applicability and technical-scientific relevance, allowing the diagnosis of a structural H&S management system. This diagnosis could be carried out both in terms of work conditions and organizational values, and in terms of H&S performance monitoring and/or measurement.
Finally, it is also necessary to highlight that there is still work to be done, but it is expected that the presented tool can be improved and refined into a reliable and useful instrument for performance assessment.
REFERENCES
Armitage, H. & Scholey, C. 2004. Hands-on scorecardingHow strategy mapping has helped one organization
see better its successes and future challenges. CMA
Management 78(6): 3438.
I.A. Papazoglou
TU Delft, Safety Science Group, Delft, The Netherlands
M. Mud
RPS Advies BV, Delft, The Netherlands
M. Damen
RIGO, Amsterdam, The Netherlands
J. Kuiper
Consumer Safety Institute, Amsterdam, The Netherlands
H. Baksteen
Rondas Safety Consultancy, The Netherlands
L.J. Bellamy
WhiteQueen, The Netherlands
J.G. Post
NIFV NIBRA, Arnhem, The Netherlands
J. Oh
Ministry Social Affairs & Employment, The Hague, The Netherlands
ABSTRACT: A general logic model for fall from height, developed under the Workgroup Occupational Risk Model (WORM) project financed by the Dutch government, is presented. Risk has been quantified for the specific cases of falls from placement ladders, fixed ladders, step ladders, fixed scaffolds, mobile scaffolds, (dis)assembling scaffolds, roofs, floor openings, fixed platforms, holes, moveable platforms and non-moving vehicles. A sensitivity analysis assessing the relative importance of measures affecting risk is presented, and risk increase and risk decrease measures are assessed. The most important measure for decreasing fatality risk owing to falls from fixed ladders is the way of climbing; for step ladders, their location; for roofs, floors and platforms, not working on them while they are being demolished; for mobile scaffolds, the existence of safety lines; for fixed scaffolds, protection against hanging objects; for work near holes and (de)installing scaffolds, the use of fall arrest; and for moveable platforms and non-moving vehicles, the existence of edge protection.
INTRODUCTION
Figure 1.
into the initiating event and the safety measures aiming at preventing a fall. The initiating event represents working on the high structure, while the primary safety measures preventing a fall are strength of the structure, stability of the structure, user stability and edge protection.
Strength of structure: The structure should be able to support the load imposed by the user and the associated loads (persons or equipment). It is applicable to all fall from height cases, with the exception of fall in hole. It is defined as a two-state event with the following cases: success or loss of strength.
Stability of structure: The structure itself, through its design and material, provides the necessary stability so that it does not tip over. It is applicable to all fall from height cases, with the exception of fall from roof, floor or platform and fall in a hole. It is defined as a two-state event with the following cases: success or loss of structure stability.
User stability: Given a strong and stable structure, the user should be able to remain on the structure without losing his stability. This measure is applicable to all fall from height cases. It is defined as a two-state event with the following cases: success or loss of user stability.
Table 1. Support safety barriers affecting primary safety barriers, for all fall from height accidents. [Flattened in extraction; for each type of fall (ladder, scaffold, roof/floor/fixed platform, moveable platform, non-moving vehicle) the table lists the support barriers affecting STRENGTH (e.g. type or condition of ladder; structural design and construction; roof surface condition; condition of lift/support; loading), STRUCTURE STABILITY (e.g. placement and protection; type or condition of ladder; anchoring; foundation; scaffold protection; foundation/anchoring; position of machinery/weight; load handling) and USER STABILITY (e.g. user ability; floor condition; external conditions; movement control; position of machinery or weight on platform; load handling; working surface).]
Edge protection: This measure includes the provision of guardrails that enhance the stability of the user. It is applicable to all fall from height cases with the exception of ladders and non-moving vehicles. It can be in one of the following states: present, failed or absent.
2.1
Table 2. [Barrier success probabilities and probabilities of influencing events (PIEs) for fall from roof; flattened in extraction. Barriers and PIEs: roof protection / edge protection (edge protection absent; no edge protection next to non-supporting parts; roof/working platform/floor (parts) being built or torn down; roof/working platform/floor (parts) not intended to support the exerted weight); roof surface condition; ability (capacity to keep balance on the roof: walking backwards; unfit/unwell; hands not free; overstretching; substandard movement (slip, trip); outside edge protection; weather; slope); collective fall arrest (CFA) and condition of CFA; personal fall arrest (PFA), anchor points and state of maintenance. The probability values (between 0.04 and 0.43) could not be unambiguously assigned to rows.]
Figure 2. Fatality rate (/hr) for each fall from height hazard: placement ladder, fixed ladder, step ladder or steps, fixed scaffold, mobile scaffold, installing scaffold, roof, floor, fixed platform, moveable platform, hole in ground and non-moving vehicle. [Chart not reproduced; fatality rates lie between about 1E-09 and 1E-05 per hour. The chart of risk increase and risk decrease for fall from placement ladder, with PIEs such as substandard movements, is also not reproduced.]
[Chart not reproduced: risk increase and risk decrease (percentage) for fall from fixed ladder, for various working conditions (PIEs): substandard movements; external force exerted on the person; use of both hands for climbing/descending; substandard condition/fitness of the person; substandard position of the person on the equipment; safety cage; fixed ladder not in good condition (damaged); design of the fixed ladder (dimension of rungs); design of the arrival area (ergonomic lay-out, etc.).]
IMPORTANCE ANALYSIS
To assess the relative importance of each factor influencing the risk from fall, two importance measures
have been calculated.
1. Risk decrease: This measure gives the relative
decrease of risk, with respect to the present state, if
the barrier (or PIE) achieves its perfect state with
probability equal to unity.
2. Risk increase: This measure gives the relative
increase of risk, with respect to the present state,
if the barrier (or PIE) achieves its failed state with
probability equal to unity.
Risk decrease prioritizes the various elements of the
model for the purposes of possible improvements.
It is more risk-effective to try to improve first a
barrier with higher risk decrease effect than another
with lower risk decrease.
Risk increase provides a measure of the importance
of each element in the model to be maintained at its
present level of quality. It is more important to concentrate on the maintenance of a barrier with high risk
increase importance than one with a lesser one. The
effect each PIE has on the overall risk is presented in
Figures 3–14.
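These two importance measures can be sketched in a few lines, assuming a toy risk model in which a fall requires both a loss of user stability and a failure of edge protection. The model and probabilities are illustrative, not the WORM quantification:

```python
def risk_decrease(risk_fn, p_fail, barrier):
    # Relative decrease of risk if `barrier` reaches its perfect state
    # (failure probability 0) with certainty.
    base = risk_fn(p_fail)
    return (base - risk_fn({**p_fail, barrier: 0.0})) / base

def risk_increase(risk_fn, p_fail, barrier):
    # Relative increase of risk if `barrier` reaches its failed state
    # (failure probability 1) with certainty.
    base = risk_fn(p_fail)
    return (risk_fn({**p_fail, barrier: 1.0}) - base) / base

# Toy model: a fall occurs only if the user loses stability AND the
# edge protection fails (independent barriers in series).
def fall_risk(p):
    return p["user_stability"] * p["edge_protection"]

p = {"user_stability": 0.05, "edge_protection": 0.1}
print(round(risk_decrease(fall_risk, p, "edge_protection"), 3))  # -> 1.0
print(round(risk_increase(fall_risk, p, "edge_protection"), 3))  # -> 9.0
```

The asymmetry of the two numbers illustrates the point made above: the same barrier can rank differently for improvement (risk decrease) and for maintenance (risk increase).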
[Charts not reproduced: risk increase and risk decrease (percentage) for falls from fixed scaffold, step ladder, (de)installing scaffold and mobile scaffold, for various working conditions (PIEs). For the fixed scaffold the PIEs include: fall arrestors and safety nets; protection of the scaffold against being struck by a vehicle; health checks based on clear criteria for people working on heights; no ladder placed on top of a scaffold; safe access to the scaffold; protection against hanging/swinging objects; footings capable of supporting the loaded scaffold without displacement; scaffold on a level and firm foundation; scaffold surfaces non-slippery and free from obstacles. For the step ladder: substandard movements, surface/support, location of ladder. For the (de)installing scaffold: fall arrestors, safety nets.]
[Chart not reproduced: risk increase and risk decrease for fall from fixed platform, for various working conditions (PIEs): state of maintenance and anchor points of fall arrest; personal fall arrest; condition of CFA; slope; weather; overstretching; unfit; walking backwards; platform overloaded; platform being built or demolished; edge protection absent.]
Figure 11. Risk fatality increase and risk decrease for fall
from platform, for various working conditions (PIEs).
[Charts not reproduced: risk increase and risk decrease for fall from floor and fall in hole, for various working conditions (PIEs): state of maintenance and anchor points of fall arrest; illumination; substandard movement (slip, trip); overstretching; hands not free; walking backwards; unfit; external force by machinery or equipment; cover absent.]
Figure 10. Risk fatality increase and risk decrease for fall
from floor, for various working conditions (PIEs).
Figure 12. Risk fatality increase and risk decrease for fall
in hole, for various working conditions (PIEs).
[Chart not reproduced: risk increase and risk decrease for fall from moveable platform, for various working conditions (PIEs): PFA condition/state; PFA anchoring; personal fall arrest; collective fall arrest for lifts; edge protection absent.]
Figure 13. Risk fatality increase and risk decrease for fall
from moveable platform, for various working conditions
(PIEs).
The most important measure for decreasing fatality risk is to avoid working on a platform that is being demolished, which decreases risk by 23% if applied 100% of the time while working on a fixed platform. The most important measure for maintaining risk at its present level is to maintain the edge protection of platforms; if this is not done, risk increases by 70%.
[Chart not reproduced: risk increase and risk decrease (percentage) for fall from non-moving vehicle, for various working conditions (PIEs): unknown surface; slope; no grip; round/rolling parts; unbalanced loading; unsecured load; loading and stabilising; position of the road; corrosion.]
Figure 14. Risk fatality increase and risk decrease for fall
from non moving vehicle, for various working conditions
(PIEs).
The most important measure for decreasing fatality risk is the existence of fall arrest, which decreases risk by 45% if used 100% of the time while working near a hole. The most important measure for maintaining risk at its present level is the existence of edge protection; if it is absent, risk will increase by 88%.
3.7
The most important measure for decreasing fatality risk is the existence of edge protection, which decreases risk by 65% if used 100% of the time while working on a moveable platform. The most important measure for maintaining risk at its present level is the fixation of the platform, since its absence will increase risk 9 times.
3.8
The most important measure for decreasing fatality risk is the existence of edge protection, which decreases risk by 39% if used 100% of the time while climbing on a non-moving vehicle. The most important measures for maintaining risk at its present level are securing and balancing the load, since their absence will increase risk by 139% and 137%, respectively.
4
CONCLUSIONS
A general logical model has been presented for quantifying the probability of fall from height and the various types of consequences following fall from height accidents. The model has been used for prioritizing risk-reducing measures, through the calculation of two risk importance measures: the risk decrease and the risk increase. The calculations were made for fatality risk.
REFERENCES
Ale B.J.M., Baksteen H., Bellamy L.J., Bloemhof A.,
Goossens L., Hale A.R., Mud M.L., Oh J.I.H., Papazoglou
I.A., Post J., and Whiston J.Y.,2008. Quantifying occupational risk: The development of an occupational risk
model. Safety Science, Volume 46, Issue 2: 176-185.
M. Mud
RPS Advies BV, Delft, The Netherlands
M. Damen
RIGO, Amsterdam, The Netherlands
H. Baksteen
Rondas Safety Consultancy, The Netherlands
L.J. Bellamy
White Queen, The Netherlands
J.G. Post
NIFV NIBRA, Arnhem, The Netherlands
J. Oh
Ministry Social Affairs & Employment, The Hague, The Netherlands
ABSTRACT: Chemical explosions pose a serious threat to personnel in sites producing or storing dangerous substances. The Workgroup Occupational Risk Model (WORM) project, financed by the Dutch government, aims at the development and quantification of models for a full range of potential risks from accidents in the workplace. Sixty-three logical models have been developed, each coupling working conditions with the consequences of accidents owing to sixty-three specific hazards. The logical model for vapour/gas chemical explosions is presented in this paper. A vapour/gas chemical explosion resulting in a consequence reportable under Dutch law constitutes the centre event of the model. The left-hand side (LHS) of the model comprises specific safety barriers that prevent the initiation of an explosion and specific support barriers that influence the adequate functioning of the primary barriers. The right-hand side (RHS) of the model includes the consequences of the chemical explosion. The model is quantified and the probability of three types of consequences of an accident (fatality, permanent injury, recoverable injury) is assessed. A sensitivity analysis assessing the relative importance of each element or working condition to the risk is also presented.
INTRODUCTION
This barrier (PSB1 in Figure 1) models explosions taking place due to uncontrolled flammable substance release and the introduction or existence of ignition sources in the same space. This barrier belongs to the type 1 explosion safety barriers and has one success state and three failure states:
State 1: Success state corresponding to no explosion, since substance release has been prevented (no release of flammable substance).
State 2: Failure state that models the release of flammable substance and subsequent explosion, given that an ignition source will be introduced by a human activity.
State 3: Failure state that models the release of flammable substance and subsequent explosion due to an ignition source introduced by an equipment malfunction. This state models the joint event of flammable substance release and the introduction of an ignition source because of equipment malfunction (e.g. sparks, shorts).
State 4: Failure state that models the release of flammable substance and subsequent explosion due to failure to separate the released flammable vapour from (normally) existing ignition sources. This state models the joint event of flammable substance release and the failure to isolate this vapour from existing ignition sources.
When this barrier is in any of the three failure states an explosion of type 1 may occur.
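A multi-state barrier of this kind can be represented as a probability distribution over its states, with the barrier's contribution to the explosion probability being the total probability of its failure states. A minimal sketch, with placeholder probabilities rather than the quantified WORM values:

```python
# Multi-state representation of a primary safety barrier (PSB1-style):
# one success state and three failure states. The probabilities below are
# illustrative placeholders, not the quantified WORM values.
psb1 = {
    "no_release": 0.97,                   # state 1: release prevented (success)
    "release_human_ignition": 0.012,      # state 2: ignition by human activity
    "release_equipment_ignition": 0.010,  # state 3: ignition by equipment malfunction
    "release_existing_ignition": 0.008,   # state 4: vapour not separated from sources
}
assert abs(sum(psb1.values()) - 1.0) < 1e-9  # states are exhaustive and exclusive

# A type 1 explosion may occur whenever the barrier is in any failure state.
p_type1 = sum(p for state, p in psb1.items() if state != "no_release")
print(round(p_type1, 3))  # -> 0.03
```

The same representation extends to the later barriers with more failure states (e.g. PSB2 with five).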
3.2
This barrier (PSB2 in Figure 1) models explosions taking place in closed spaces where flammable vapours are produced and are supposed to be removed by a ventilation system. Absence or failure of the ventilation system allows the build-up of the explosive vapour, and the explosion takes place either because of the erroneous introduction of an ignition source (by human activity or equipment malfunction) or because of failure to detect the presence of the vapour and to separate it from normally present ignition sources. This barrier prevents type 2 explosions and has six states (1 success state and 5 failure states), as follows:
State 1: Success state resulting in no explosion, since no ignition sources have been introduced or separation barriers are in full function.
State 2: Explosion takes place given that flammable vapours exist and an ignition source is introduced by human activity.
State 3: Explosion takes place given that flammable vapours exist and an ignition source is introduced owing to equipment failure. This involves equipment which fails and forms an ignition source (e.g. not possible to turn it off, electrical defect, missing insulation) or which is the wrong type of equipment (use of non-explosion-proof equipment).
State 4: Explosion takes place given normally present ignition sources. Flammable vapours are generated and remain undetected because no provisions or possibilities for the indication/detection of the presence of explosive mixtures have been taken.
State 5: Explosion takes place given normally present ignition sources. Indication/detection provisions are present but have failed, or diagnosis/response has failed, so flammable vapours have been generated.
State 6: Explosion takes place given normally present ignition sources. Flammable vapours are introduced from other areas, where the ignition sources are, through the removal of barriers.
When this barrier is in any of the states 2–5 above an explosion of type 2 may occur if flammable vapours are created.
3.3
State 1: Success state resulting in no explosion, because the process is designed in such a way that no explosive atmosphere is generated.
State 2: Failure state corresponding to generation of explosive atmosphere and subsequent explosion, given that an ignition source will be introduced by a human activity.
State 3: Failure state corresponding to generation of explosive atmosphere and subsequent explosion, due to an ignition source introduced by an equipment malfunction.
When this barrier is in either of the two failure states an explosion of type 3 may occur.
3.4
3.5
These barriers (PSB6, PSB8, PSB9 in Figure 1) represent the introduction of an ignition source through various human activities (e.g. hot works, human errors, smoking). Three barriers have been introduced for the three types of explosions that are due to human ignition causes (type 1, type 3 and type 4 explosions). The barriers have two states, namely success and failure, which correspond respectively to the successful and failed prevention of the introduction of ignition sources by human activities.
Failure of such a barrier can result in an explosion if combined with the corresponding flammable
PIEs and their values, as well as the failure probability of the barrier they influence, for the logic model of vapour/gas chemical explosions, are presented in Table 1.
QUANTIFICATION PROCESS
Table 1. [Flattened in extraction: PIEs, their characteristics and values, and the failure probability of the barrier they influence (barriers PSB6, PSB8, PSB9 and SSB1–SSB5), for the logic model of vapour/gas chemical explosions. PIE values lie between 0.06 and 0.3 and barrier failure probabilities between 0.17 and 0.3; the row-by-row assignment could not be recovered.]
Table 2. Risk rates per type of explosion and overall risk decrease for each safety barrier. [The per-type risk rates could not be unambiguously recovered from the extraction; the overall columns are reproduced below.]

Safety barrier                                      Overall risk rate   Risk decrease
Base case                                           7.78E-08            -
Prevention of ignition source (type 1 explosion)    7.35E-08            5.53%
Prevention of ignition source (type 3 explosion)    7.24E-08            6.94%
Prevention of ignition source (type 4 explosion)    7.65E-08            1.58%
Ventilation systems                                 6.60E-08            15.08%
Personal Protective Equipment                       4.69E-08            39.66%
Other protection than PPE                           5.02E-08            35.49%

Table 3. Risk rates per type of explosion and overall risk increase for each safety barrier. [Per-type risk rates not unambiguously recoverable.]

Safety barrier                                      Overall risk rate   Risk increase
Base case                                           7.78E-08            -
Prevention of ignition source (type 1 explosion)    9.89E-08            27.19%
Prevention of ignition source (type 3 explosion)    1.39E-07            79.35%
Prevention of ignition source (type 4 explosion)    9.72E-08            25.00%
Ventilation systems                                 2.61E-07            235.91%
Personal Protective Equipment                       1.50E-07            92.52%
Other protection than PPE                           1.60E-07            106.15%

Table 4. Risk rates and risk decrease for each safety barrier, per consequence type.

Safety barrier                                      Recoverable  Decrease  Permanent  Decrease  Lethal    Decrease
Base case                                           4.47E-08     -         2.38E-08   -         9.26E-09  -
Prevention of ignition source (type 1 explosion)    4.14E-08     7.38%     2.31E-08   2.94%     8.96E-09  3.24%
Prevention of ignition source (type 3 explosion)    4.19E-08     6.26%     2.12E-08   10.92%    9.26E-09  0.001%
Prevention of ignition source (type 4 explosion)    4.41E-08     1.34%     2.33E-08   2.10%     9.13E-09  1.40%
Ventilation systems                                 3.90E-08     12.75%    2.10E-08   11.76%    6.03E-09  34.88%
Personal protective equipment                       2.53E-08     43.40%    1.56E-08   34.45%    6.02E-09  34.99%
Other protection than PPE                           2.85E-08     36.24%    1.56E-08   34.45%    6.06E-09  34.56%
Emergency response                                  3.73E-08     16.55%    1.82E-08   23.53%    8.64E-09  6.70%

Table 5. Risk rates and risk increase for each safety barrier, per consequence type.

Safety barrier                                      Recoverable  Increase  Permanent  Increase  Lethal    Increase
Base case                                           4.47E-08     -         2.38E-08   -         9.26E-09  -
Prevention of ignition source (type 1 explosion)    6.09E-08     36.24%    2.73E-08   14.71%    1.07E-08  15.55%
Prevention of ignition source (type 3 explosion)    7.61E-08     70.25%    5.41E-08   127.31%   9.26E-09  0.01%
Prevention of ignition source (type 4 explosion)    5.34E-08     19.46%    3.24E-08   36.13%    1.14E-08  23.11%
Ventilation systems                                 1.33E-07     197.54%   6.82E-08   186.55%   6.00E-08  547.95%
Personal protective equipment                       8.99E-08     101.12%   4.30E-08   80.67%    1.68E-08  81.43%
Other protection than PPE                           9.31E-08     108.28%   4.85E-08   103.78%   1.87E-08  101.94%
Emergency response                                  6.95E-08     55.48%    4.27E-08   79.41%    1.14E-08  23.11%
section 3. For this specific case the most effective measure for reducing risk is an increase in the percentage of time that PPEs are used.
This conclusion might change if a different mission split is considered or if an explosion type is considered in isolation. For explosion type 2, for example, presence of ventilation systems in 100% of the cases decreases the risk rate by 65.5%, whereas PPE and other protection decrease it by 28.5% and 33.5%, respectively. Thus the risk of a type 2 explosion is reduced more by increasing, from the present level, the use of ventilation systems than by increasing the use of PPE or other protective measures.
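The risk decrease and risk increase percentages in Tables 2–5 are simple relative changes with respect to the base-case rate. A minimal check against the overall rates quoted above (the small differences from the printed percentages come from rounding of the published rates):

```python
def pct_change(base, modified):
    # Relative change of a risk rate with respect to the base case, in percent.
    return (modified - base) / base * 100.0

BASE = 7.78e-08  # base-case overall risk rate

# Risk decrease when PPE is always used (Table 2 reports 39.66%).
print(round(-pct_change(BASE, 4.69e-08), 1))  # -> 39.7

# Risk increase when ventilation systems always fail (Table 3 reports 235.91%).
print(round(pct_change(BASE, 2.61e-07), 1))   # -> 235.5
```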
CONCLUSIONS
REFERENCES
Ale B.J.M., Baksteen H., Bellamy L.J., Bloemhof A.,
Goossens L., Hale A.R., Mud M.L., Oh J.I.H., Papazoglou
I.A., Post J., and Whiston J.Y., 2008. Quantifying occupational risk: The development of an occupational risk
model. Safety Science, Volume 46, Issue 2: 176185.
Aneziris O.N. Papazoglou I.A., Baksteen H., Mud M.L., Ale
B.J.M, Bellamy L.J., Hale A.R., Bloemhoff A., Post J.,
Oh J.I.H., 2008a. Quantified risk assessment for fall from
height. Safety Science, Volume 46, Issue 2: 198220.
Aneziris O.N., Papazoglou I.A., Mud M.L., Damen M.,
Kuiper J., Baksteen H., Ale B.J.M., Bellamy L.J.,
Hale A.R., Bloemhoff A., Post J.G., Oh J.I.H, 2008b.
Towards risk assessment for crane activities Safety Science. doi:10.1016/j.ssci.2007.11.012
Baksteen, H., Samwe, M., Mud, M., Bellamy, L., Papazoglou, I.A., Aneziris, O., Konstandinidou, M., 2008
ScenarioBowtie modeling BT 27 explosions, WORM
Metamorphosis Report.
Bellamy L.J., Ale B.J.M., Geyer T.A.W., Goossens L.H.J.,
Hale A.R., Oh J.I.H., Mud M.L., Bloemhoff A, Papazoglou I.A., Whiston J.Y., 2007. StorybuilderA tool for
the analysis of accident reports, Reliability Engineering
and System Safety 92: 735744.
GISAI, 2005. Geintegreerd Informatie Systeem Arbeids
Inspectie: Integrated Information System of the Labor
Inspection in the Netherlands.
OSHA 2008. Bureau of Labor statistics http://data.bls.gov/
GQT/servlet/InitialPage.
Papazoglou I.A., Ale B.J.M., 2007. A logical model for quantification of occupational risk, Reliability Engineering &
System Safety 92 (6): 785803.
Papazoglou I.A, L.J. Bellamy, K.C.M. Leidelmeijerc, M.
Damenc, A. Bloemhoffd, J. Kuiperd, BJ.M. Alea, J.I.H.
Oh, Quantification of Occupational Risk from Accidents, submitted in PSAM 9.
RIVM 2008 WORM Metamorphosis Consortium. The Quantification of Occupational Risk. The development of
a risk assessment model and software. RIVM Report
620801001/2007 The Hague.
O. Doudakmani
Center for Prevention of Occupational Risk, Hellenic Ministry of Employment and Social Affairs, Thessaloniki, Greece
ABSTRACT: This paper presents the quantification of occupational risk in an aluminium plant producing
profiles, located in Northern Greece. Risk assessment is based on the Workgroup Occupational Risk Model
(WORM) project, developed in the Netherlands. This model can assess occupational risk at hazard level, activity
level, job level and overall company risk. Twenty-six job positions have been identified for this plant, such as
operators of press extruders, forklift operators, crane operators, painters, and various other workers across the
process units. All risk profiles of workers have been quantified and jobs have been ranked according to their
risk. Occupational risk has also been assessed for all plant units and the overall company.
INTRODUCTION
OCCUPATIONAL RISK
Figure 1.
Next the risk at the activity level is calculated. A general assumption is that if, during one of the actions, an accident occurs resulting in a consequence (recoverable injury, permanent injury or death), the activity is interrupted and the overall consequence of the activity is the same. That is, no further exposure to the same or additional hazards is possible.
Let n = 1, 2, ..., N_m be an index over all the hazards of the mth activity, and let
p_{1,n}: probability of no consequence in the nth hazard
p_{2,n}: probability of recoverable injury in the nth hazard
p_{3,n}: probability of permanent injury in the nth hazard
p_{4,n}: probability of death in the nth hazard
P_{1,m}: probability of no consequence for activity m
P_{2,m}: probability of recoverable injury for activity m
P_{3,m}: probability of permanent injury for activity m
P_{4,m}: probability of death for activity m
Then

P_{1,m} = \prod_{n=1}^{N_m} p_{1,n}
P_{2,m} = \sum_{n=1}^{N_m} p_{2,n} \prod_{r=1}^{n-1} p_{1,r}
P_{3,m} = \sum_{n=1}^{N_m} p_{3,n} \prod_{r=1}^{n-1} p_{1,r}
P_{4,m} = \sum_{n=1}^{N_m} p_{4,n} \prod_{r=1}^{n-1} p_{1,r}    (1)

[Equation (2), defining the annualised activity probabilities aP_{i,m}, is not recoverable from the extraction.]

R_1 = \prod_{m=1}^{M} aP_{1,m}
R_2 = 1 - R_1 - R_3 - R_4
R_3 = \sum_{m=1}^{M} aP_{3,m} \prod_{r=1}^{m-1} (aP_{1,r} + aP_{2,r})
R_4 = \sum_{m=1}^{M} aP_{4,m} \prod_{r=1}^{m-1} (aP_{1,r} + aP_{2,r})    (3)

where R_1, R_2, R_3 and R_4 are the total annual probabilities of no consequence, recoverable injury, permanent injury and death, respectively.
Again the assumption is made that recoverable
injury during activity (m) does not preclude undertaking of the remaining activities during the year.
Given a company with N jobs and T_n workers performing the nth job, the overall risk is approximated by the expected number of workers suffering each of the three consequences:

R_{2,o} = \sum_{n=1}^{N} R_{2,n} T_n,  R_{3,o} = \sum_{n=1}^{N} R_{3,n} T_n,  R_{4,o} = \sum_{n=1}^{N} R_{4,n} T_n    (4)

where R_{k,n} is the annual probability of consequence k for the nth job.

PLANT DESCRIPTION

Figure 2. [Plant flow diagram, not reproduced: billet storage, furnace 450, billet cutting, extruding, dies, stretching, profile ageing furnace, surface treatment (painting, anodizing), packaging, profile storage, tools.]
4.1 Extrusion
There are four job positions in this unit: extruder
press operator, extruder worker, stretching and cutting
operators.
a) Press extruder operator: He is responsible for cutting the billets to the required length, loading and unloading them on the press extruder, operating the press and completing the required documents. His job is therefore decomposed into four activities (cutting of the billets, loading/unloading billets on the press, press operation and completing documents), as presented in Figure 3 and Table 1. While cutting billets, which occurs every day for two hours, he is exposed to the following hazards: fall on the same level (for 2 hours), contact with falling object from crane (for 0.2 hours), contact with hanging or swinging objects (for 0.2 hours), contact with moving parts of machine (for 0.5 hours), trapped between objects (for 0.6 hours) and contact with hot surface (for 2 hours). Figure 2 presents the decomposition of this job into activities and their associated hazards, while Table 1 also presents the frequency and duration of the activities and the exposure to hazards. Similar tables for all jobs described in this section are provided by Doudakmani (2007). There are 6 press extruder operators in this plant working on an eight-hour shift basis.
b) Extruder worker: His activities are to heat the dies, install them in the press extruder, and transport them either by crane or by trolley to the die section. He is exposed to the same hazards as the extruder press operator, but with different durations. There are 6 extruder workers working on an eight-hour shift basis.
Figure 3. [Decomposition tree, not reproduced: the company is decomposed into jobs (press extruder operator, forklift operator, painter, ...), and the press extruder operator's job into the activities cutting of billets, loading/unloading, press operation and completing documents.]
Table 1. [Flattened in extraction: frequency and duration (hours) of the press extruder operator's activities and of his exposure to hazards: trapped between objects; contact with moving parts of a machine (operating); fall on the same level; contact with falling object (crane or load); contact with hanging/swinging objects; contact with hot or cold surfaces or open flame. The listed exposure durations range from 0.1 to 1 hour; the row-by-row assignment could not be recovered.]
c) stretching operator: He checks the profiles arriving at the stretching press, and is responsible for the press operation and the transportation of the profiles to the cutting machines. He is exposed to the same hazards as the extruder press operator, but with different durations. There are 6 stretching operators working on an eight-hour shift basis.
d) cutting operator: He operates the saw and cuts the profiles to the required length. He manually moves the cut profiles and arranges them in special cases, which are transported by crane to the ageing furnace. He is exposed to the following hazards: fall on same level, contact with falling object from crane, contact with hanging or swinging objects, contact with moving parts of machine, trapped between objects, and contact with falling objects during manual handling. There are 6 cutting operators working on an eight-hour shift basis.
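The job/activity/hazard decomposition described above lends itself to a small data structure. The sketch below uses the billet-cutting exposures of the extruder press operator quoted in the text; the figure of 225 working days per year is a hypothetical assumption for illustration, not a value from the paper.

```python
# Sketch of the job -> activity -> hazard decomposition described above.
# Exposure hours per day come from the "cut of billets" activity of the
# extruder press operator; 225 working days/year is an assumed value.

WORKING_DAYS_PER_YEAR = 225  # assumption, not from the paper

# activity -> {hazard: exposure hours per working day}
extruder_press_operator = {
    "cut of billets": {
        "fall on same level": 2.0,
        "contact with falling object (crane)": 0.2,
        "contact with hanging/swinging objects": 0.2,
        "contact with moving parts of machine": 0.5,
        "trapped between objects": 0.6,
        "contact with hot surface": 2.0,
    },
}

def annual_exposure_hours(job):
    """Sum daily exposure per hazard over all activities, scaled to a year."""
    totals = {}
    for hazards in job.values():
        for hazard, hours in hazards.items():
            totals[hazard] = totals.get(hazard, 0.0) + hours * WORKING_DAYS_PER_YEAR
    return totals

print(annual_exposure_hours(extruder_press_operator)["fall on same level"])  # 450.0
```

Multiplying the annual exposure hours by a per-hour hazard rate would then give the kind of per-hazard annual risk shown later in Figure 6.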
4.2 Surface treatment
Packaging
Storage areas
Machinery section
Occupational risk has been calculated for all job positions of the plant and is presented in Figures 4 and 5 and Table 2. Figure 4 presents the annual risk of death and Figure 5 the annual risk of permanent and recoverable injury for all job positions in this plant. The operator at the entrance of the painting unit has the highest probability of death (3.25 × 10⁻⁵/yr) followed
Figure 4. Annual risk of death for each job position (/year).
Figure 5. Annual risk of permanent and recoverable injury for each job position (/year).
Table 2. Number of workers and annual individual risk of fatality, permanent injury and recoverable injury per job position, with collective annual risk per unit.

Job position                      Workers  Fatality   Permanent injury  Recoverable injury
Extruder operator                 6        5.44E-06   8.38E-05          5.30E-05
Extruder worker                   6        8.75E-06   1.11E-04          7.18E-05
Cutting operator                  6        5.28E-06   1.04E-04          6.15E-05
Stretching op.                    6        6.53E-06   1.01E-04          9.33E-05
Forklift op.-billets              2        5.94E-06   4.28E-05          7.92E-05
Forklift op.-prof.                6        8.40E-06   5.59E-05          1.14E-04
Worker storage                    10       2.18E-05   1.22E-04          1.92E-04
Die installation                  1        6.50E-06   6.33E-05          5.40E-05
Die sandblasting                  1        1.91E-05   1.85E-04          1.17E-04
Die chemical                      1        1.23E-05   1.01E-04          1.08E-04
Die hardening                     1        1.23E-05   1.01E-04          1.08E-04
Op. die machine tools             1        9.00E-06   1.43E-04          7.53E-05
Operator entrance painting unit   5        3.25E-05   2.03E-04          2.33E-04
Painter                           1        5.99E-06   4.98E-05          1.20E-04
Cleaner                           1        3.02E-06   5.86E-05          8.70E-05
Forklift op.-paint                1        3.60E-06   2.46E-05          3.23E-05
Manual painting                   1        3.39E-06   6.78E-05          1.86E-04
Crane op.-anodizing               1        1.37E-05   1.06E-04          7.01E-05
Worker-entrance anodizing         4        1.54E-05   1.10E-04          7.74E-05
Operator of press for tools       1        1.08E-05   1.34E-04          6.28E-05
Operator of machine tools         1        2.77E-06   1.10E-04          6.47E-05
Insulation fitter                 1        4.90E-06   1.22E-04          6.99E-05
Helper packaging                  1        7.41E-06   2.22E-04          1.08E-04
Operator of packaging machine     4        7.21E-06   1.36E-04          9.11E-05
Worker packaging                  4        7.10E-06   9.95E-05          8.75E-05
Carpenter                         4        2.71E-06   1.74E-04          7.21E-05

Unit               Fatality   Permanent injury  Recoverable injury
Extrusion unit     1.56E-04   2.40E-03          1.68E-03
Storage area       2.80E-04   1.64E-03          2.76E-03
Die section        5.92E-05   5.93E-04          4.62E-04
Surface treatment  2.54E-04   1.76E-03          1.97E-03
Machinery          2.93E-05   1.06E-03          4.86E-04
Packaging unit     6.47E-05   1.16E-03          8.22E-04
Overall Risk       8.44E-04   8.63E-03          8.18E-03
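The unit-level figures in Table 2 appear to be collective risks obtained by weighting each position's individual risk by its number of workers; under that assumption, the extrusion-unit fatality figure can be reproduced from the per-position values as follows.

```python
# Collective fatality risk for the extrusion unit, sketched from Table 2:
# sum over job positions of (number of workers x individual annual risk).
extrusion_positions = [
    # (workers, individual annual fatality risk)
    (6, 5.44e-06),  # extruder operator
    (6, 8.75e-06),  # extruder worker
    (6, 5.28e-06),  # cutting operator
    (6, 6.53e-06),  # stretching operator
]

collective = sum(n * r for n, r in extrusion_positions)
print(f"{collective:.2e}")  # matches the 1.56E-04 extrusion-unit row in Table 2
```

The same weighting reproduces the other unit rows, and the "Overall Risk" row is the sum of the six unit rows.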
Figure 6. Risk of each hazard (/year) for the worker in storage, the operator at the entrance of the painting unit, and the operator at die sandblasting.
Figure 7. Annual risk (death, permanent injury, non-permanent injury) in each unit.
REFERENCES
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: The objective of the research presented in this paper is to improve understanding of risk regulation regimes in new European Union member states. To do so, the research focuses on drinking water safety regulation in Estonia. This paper tests the importance of rules, cultures, capacities and the design of regulatory bureaucracies in determining the processes and outcomes of a risk regulation regime. The effects of the fragmented nature of the regime, deep-rooted dominating pressures, institutional capacities, and the regulatory actors present in Estonian drinking water regulation are discussed.
1.1
utilities were transformed into public limited companies. Public water supply companies serving larger
settlements belong to the Association of Water Works.
Some smaller municipalities have joined in a municipal syndicate in order to be able to cope with expensive
development of the water supply systems.
This article tries to find explanations for the current functioning of drinking water safety regulation. The risk regulation regime approach is applied in order to address the mechanics and dynamics, as well as the outcomes, of the regulation.
Figure 1. The risk regulation bureaucracy (standard-setting, behaviour-modification, information gathering) and its context: organised pressure groups, public salience, bureaucratic cultures and knowledge controversies.

Figure 2. Functionality of the risk bureaucracy as shaped by the nature of rules, capacities and regulatory design.
local and regional administrations of accession countries (Weale et al., 2000; Homeyer 2004; Kramer 2004). The bureaucratic overload and the rush to adopt EU policies have been described as among the main reasons for insufficient policy analysis and the poor quality of legislation in Estonia (Raik 2004). Due to the disproportionate development of different scientific agendas in the past (Massa and Tynkkynen 2001), environmental health impact assessments are often lacking or incomplete in Eastern Europe. Inadequate scientific capacities at the national level encourage regulators in smaller countries to copy the research and policy innovations of larger countries (Hoberg 1999).
The sections above demonstrate the importance of regulatory design, embedded cultures and capacities in determining the functioning of regulation in EU accession countries. The next section looks at how the importance of these determinants was assessed in the case of Estonia.
3
3.3
the clients of smaller water suppliers has been compromised due to the neglect of health priorities that should be promoted by the Ministry of Social Affairs.
Another set of enforcement problems is related to the way regulation works. Investing regulatory attention in the larger companies may be explained by the Estonian bureaucratic culture, which prioritizes good performance with respect to the European Commission. As the records that interest the European Commission most (those on larger-company compliance) show conformity with EU requirements, the Inspectorates are seen by Brussels to be efficient. Thus, credibility among the public seems to be a less important driver for the Inspectorates, especially as they are used to top-down decision-making traditions.
One could presume that, if the smallest suppliers and private wells were not monitored, there would be a system for informing individual well users to take protective measures. Yet there have been no information campaigns, nor has there been any demand from users for more information regarding drinking water. The prevailing view is that private wells are something the owners have to manage themselves. Support schemes and information dissemination addressing drinking water quality in individual wells have not been institutionalized through Ministry orders either.
4 CONCLUSIONS
Aligning national safety standards with European Union rules requires bureaucracies to make careful decisions about the organisation of regulatory responsibilities and the approaches for attaining their objectives, as well as choices about the practical allocation of ever-scarce capacities to employ these strategies.
This paper focused on the bureaucratic determinants of drinking water safety regulation efficiency in Estonia. The standard-setting, monitoring and enforcement activities associated with drinking water regulation in Estonia may be described as a process of striving for EU allegiance. The search for power, patronage and reputation is the main compliance-driving force for inspectors at the national level, but may also determine state bureaucracies' behaviour at the European level.
Deeply rooted bureaucratic cultures may function as gatekeepers for the take-up or neglect of more innovative, non-hierarchical modes of enforcement. National scientific incapacities have carried over into poor awareness of drinking water safety issues among bureaucrats, leading to insufficient local policy analysis and the simple application of preset rules. Available financial and administrative capacities have led to a reinterpretation of the set standards and some neglect of smaller operators' quality control. Allocating scarce resources
Hood, C., Rothstein, H. & Baldwin, R. 2004. The Government of Risk: Understanding Risk Regulation Regimes. Oxford: Oxford University Press.
Karro, E., Indermitte, E., Saava, A., Haamer, K. & Marandi, A. 2006. Fluoride occurrence in publicly supplied drinking water in Estonia. Environmental Geology 50(3): 389–396.
Kramer, J.M. 2004. EU enlargement and the environment: six challenges. Environmental Politics 13(1): 290–311.
Lust, M., Pesur, E., Lepasson, M., Rajamäe, R. & Realo, E. 2005. Assessment of Health Risk caused by Radioactivity in Drinking Water. Tallinn: Radiation Protection Centre.
Löfstedt, R.E. & Anderson, E.L. 2003. European risk policy issues. Risk Analysis 23(2): 379.
Massa, I. & Tynkkynen, V.P. 2001. The Struggle for Russian Environmental Policy. Helsinki: Kikimora Publications.
Ministry of Social Affairs. 2001. Requirements for Drinking Water Quality and Control, and Analysis Methods. In RTL 2001, 100, 1369. Riigi Teataja: Tallinn.
Pavlinek, P. & Pickles, J. 2004. Environmental pasts & environmental futures in post-socialist Europe. Environmental Politics 13(1): 237–265.
Perkins, R. & Neumayer, E. 2007. Implementing multilateral environmental agreements: an analysis of EU Directives. Global Environmental Politics 7(3): 13–41.
Raik, K. 2004. EU accession of Central and Eastern European countries: democracy and integration as conflicting logics. East European Politics & Societies 18(4): 567–594.
Rothstein, H., Huber, M. & Gaskell, G. 2006. A theory of risk colonization: The spiralling regulatory logics of societal and institutional risk. Economy and Society 35: 91–112.
Sadikova, O. 2005. Overview of Estonian Drinking Water Safety. Tallinn: Health Inspectorate.
Skjaerseth, J.B. & Wettestad, J. 2006. EU Enlargement and Environmental Policy: The Bright Side. FNI Report 14/2006. Lysaker: The Fritjof Nansen Institute.
Swyngedouw, E. 2002. Governance, Water, and Globalisation: a Political-Ecological Perspective. In Meaningful Interdisciplinarity: Challenges and Opportunities for Water Research. Oxford: Oxford University Press.
Organization learning
S. Håbrekke
SINTEF, Norway
ABSTRACT: We have explored accident data from British Rail for the period 1946 through 2005. Our hypothesis is that safety is improved through learning from experience. Based on a quantitative analysis, this hypothesis is tested on the data using a simple regression model. We discuss the model and its limitations, benefits and possible improvements. We have also explored our findings in the light of qualitative theory from the field of organisational learning, and suggest key issues to be explored to improve safety and resilience during changes, such as safety cases, standardisation of training, unambiguous communication, and sharing of incidents and mitigating actions.
1 INTRODUCTION

1.1 Learning from experience
2 MODEL DESCRIPTION

2.1 Individual and organisational learning
Based on the preceding discussion, our proposed null hypothesis is: In the long run, accidents follow an exponentially decreasing regression line based on experience, where experience is expressed by time. In our quantitative analysis we have elaborated the hypotheses, see section 2.6.
In our proposed quantitative model, the level of historical accidents at time t (years after 1946), A(t), follows a regression line of the form

A(t) = a · e^(−bt)    (1)

or, equivalently, in logarithmic form,

ln A(t) = ln a − bt    (2)
Hypothesis testing

3 DATA ANALYSIS

3.1 Presentation of data
1 The value of 0.80 should not be considered a normative value; it is simply a value used to explore the goodness of fit, in addition to what can be seen from the plotted estimated regression line together with the observed data. In this case we are satisfied when the model explains 80 percent or more of the data variation.
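The log-linear form of the model makes the fit an ordinary least-squares problem. The sketch below fits ln A(t) = ln a − bt and computes the R² that the 0.80 threshold above refers to; the data are synthetic (the British Rail series is not reproduced in the paper text), so the fit here is exact.

```python
import math

# Least-squares fit of ln A(t) = ln a - b t for the model A(t) = a * exp(-b t),
# with t = years after 1946. Demonstrated on synthetic, exactly exponential data.
def fit_exponential(years, accidents):
    """Return (a, b, r2) for A(t) = a * exp(-b t)."""
    t = [y - 1946 for y in years]
    y = [math.log(c) for c in accidents]
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(y) / n
    # slope of y on t is -b, so negate it
    b = -sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / \
        sum((ti - tbar) ** 2 for ti in t)
    ln_a = ybar + b * tbar
    # R^2 of the log-linear fit, to compare against the 0.80 threshold
    ss_res = sum((yi - (ln_a - b * ti)) ** 2 for ti, yi in zip(t, y))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return math.exp(ln_a), b, 1 - ss_res / ss_tot

# Synthetic series: exact exponential decay with a = 100, b = 0.05
years = list(range(1946, 2006))
data = [100.0 * math.exp(-0.05 * (y - 1946)) for y in years]
a, b, r2 = fit_exponential(years, data)
print(round(a), round(b, 3), round(r2, 2))  # 100 0.05 1.0
```

On real accident counts the residual scatter would give R² below 1; the fit would be accepted under the criterion above whenever R² ≥ 0.80.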
3.2 Data analysis
Figure 1. Annual accident data 1946–2005 with the estimated regression line, plotted on a linear scale (left) and a logarithmic scale (right).
4
4.1
technology. An example is the implementation of electric motors replacing steam engines. At first the large
steam engines were replaced by large electric motors
in factories, using long chain drives to distribute the
power in the factory. Later small electrical motors were
distributed directly in the production process as they
were needed. This removed the danger from the long
chain drives. Thus the efficient and safe use of new
technology was based on exploration and learning in a
period that could take years. This is discussed further
in Utterback (1996).
Improvements in working procedures and safety
thus usually take place after some years of experiences
with new technology.
4.2 Consequences of our model of learning
from accidents
We have proposed a hypothesis suggesting a model where the level of accidents depends on experience; more precisely, the level of accidents per year fits an exponential trend. We have based our model on the experiences of the railways in Great Britain. Our simple model of accidents at time t is A(t) = a · e^(−bt).
The hypothesis indicates that experience, and the level of learning, is a key element in reducing accidents.
If we use the iceberg theory of Heinrich (1959), we assume that accidents, near accidents/incidents and slips are attributable to the same factors. The distribution between these categories was suggested by Heinrich to be:
1 Major accident
29 Minor accidents
300 incidents, no-injury accidents
3000(?) slips, unsafe conditions or practices.
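Heinrich's ratios above can be used to sketch the expected counts of lower-severity events implied by an observed number of major accidents. This is only an illustration of the ratio pyramid, not an analysis from the paper.

```python
# Heinrich's (1959) suggested ratios, as listed above.
HEINRICH_RATIOS = {
    "major accident": 1,
    "minor accidents": 29,
    "incidents (no-injury)": 300,
    "slips / unsafe conditions": 3000,  # the "(?)" in the text marks this figure as uncertain
}

def expected_events(major_accidents):
    """Scale the whole pyramid from an observed count of major accidents."""
    return {kind: ratio * major_accidents for kind, ratio in HEINRICH_RATIOS.items()}

print(expected_events(2)["incidents (no-injury)"])  # 600
```

Under the common-factors assumption, a fitted decrease in major accidents would therefore imply a proportional decrease at every level of the pyramid.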
4.3
We can use the suggested model to perform an analysis at different organisational levels and of different industries.
By different organisational levels we mean either: (1) within an organisation, looking at accident data from different parts of the organisation and analysing which part has the better organisational learning and thus safer practices; (2) comparing the same type of organisations within a country, identifying which organisations have the better organisational learning system; or (3) comparing the same type of organisations between countries, trying to identify which countries have the better organisational learning systems.
Organisations could be compared to discuss which
kind of organisation has the better accident learning
system. As an example, within aviation you could
identify the airlines having the better accident learning
system.
Different industries (e.g. aviation and shipping)
could be compared to discuss which kind of industries
has the better accident learning system.
In a system with better organisational learning, the consequence should be that the whole system learns faster and accidents decrease faster, i.e. a greater value of b. Improved organisational learning in this context should mean more information sharing among the involved actors in the whole system and a better understanding of the causes of accidents.
In a system with initially safer technology, the initial value of a should be lower, meaning that the level of accidents is initially lower.
In the airline industry there is international sharing of incidents and accidents, a system that seems to have improved organisational learning, due to more incidents and more learning experiences across countries and different organisations. Thus it should be checked whether the airline industry is learning faster and accidents are decreasing faster, i.e. whether b is greater in the same period than in other industries.
Note that the values of a and b from the regression results for different industries are only comparable when considering the same type of industry
English could be one way of exploring possible differences in understanding and of increasing learning in cross-border cooperation between Great Britain and France. Rules and procedures that are actually used should be kept as 'living documents', meaning that the documents are updated by the working professionals themselves.
5. Support learning based on experiences and incident reporting across organisations: There should be a clear obligation to report any condition that could imply a risk for other companies. All parties must share their databases regarding events that could improve or degrade safety, and also share the resulting recommendations. This would ensure a possibility for common learning and an increased level of safety for all operators. Both interfacing organisations will benefit from the ability to admit that they are different, without inferring value or preference. One partner's solution is not necessarily the only right solution; one should share experiences (from accidents, fatalities and good practices) to provide an opportunity to learn from each other.
6. Organisational learning, common procedures and perceptions: Procedures should be harmonised by project teams across organisational boundaries. Experience shows that groups with representatives from each of the companies (or countries) involved in operations should be established and meet face to face, to create confidence, common understanding and a good learning environment, and to establish harmonised procedures. As part of organisational
REFERENCES
Duffey, R. & Saull, J. 2002. Know the Risk: Learning from Errors and Accidents: Safety and Risk in Today's Technology. Butterworth-Heinemann. ISBN-13: 978-0750675963.
Evans, A.W. 2007. Rail safety and rail privatization in Britain. Accident Analysis & Prevention 39: 510–523.
Heinrich, H.W. 1959. Industrial Accident Prevention: A Scientific Approach. New York: McGraw-Hill.
Schön, D.A. 1983. Organisational learning. In Morgan, G. (ed.), Beyond Method. Beverly Hills: Sage, 114–129.
Utterback, J.M. 1996. Mastering the Dynamics of Innovation. Boston: HBS Press.
Weick, K.E. 1991. The non-traditional quality of organizational learning. Organization Science.
Yelle, L.E. 1979. The learning curve: historical review and comprehensive survey. Decision Sciences 10: 302–328.
Geir Guttormsen
SINTEF Technology and Society, Trondheim, Norway
ABSTRACT: In this article we argue that consequence analysis is about organisational change and thereby should methodologically be treated as part of it. Traditional methods of consequence analysis are not sufficient on their own: HSE (Health, Safety and Environment) is also about Organisational Development (OD). We consider this argument to be important in information and data gathering, in decisions and participation, and in the safe and secure implementation of suggested changes.
The article is based on R&D projects carried out in the Norwegian oil company StatoilHydro ASA under the heading of Integrated Operations (IO). The strategy was to choose several pilot projects in one asset to be analysed for their consequences as far as HSE was concerned. The idea was then to spread the successful pilots to other assets after a successful Consequence Analysis (CA).
Our approach to understanding organisations is inspired by Science and Technology Studies (STS) and sees organisations as complex seamless networks of human and nonhuman actants (Actor Network Theory (ANT), Latour 1986). We understand organisations as an ongoing process created by the interests of different actants such as ICT, rooms, work processes, new ways of working, and ways of being organised and managed. This, in addition to an understanding of communities of practice (Levy & Venge 1989), is the starting point for discussing CA as part of OD. Another method used is based on the risk analysis tool HAZID (Hazard Identification), which is used in the Norwegian offshore industry as a planning tool to identify hazardous factors and to evaluate risk related to future operations. HAZID was used as a basis for collecting qualitative data in our concept of consequence analysis. Different methods were used to identify positive and negative consequences related to the implementation of IO in two cases: the steering of smart wells from onshore, and a new operation model on an offshore installation.
We observed that the methods had qualities beyond the mere evaluation of consequences. During the interviews on smart wells, different groups of actants started to mobilize in response to the change process from pilot to broad implementation; new routines and improvements of the pilot were suggested by the production engineers, even though they had been operating along these lines for years. Now that the pilot might go to broad implementation, different interests initiated a change of the pilot from the process engineers.
During the interviews and the search conferences in the case of the new operational model, we observed that the discussions generated a new common understanding among the informants about the pilot and the whole change process. The method helped to clarify what the changes would mean in day-to-day operation, how they were going to work, and what the potential consequences could be. It also generated a new understanding of why the changes were proposed.
All these questions are important issues in change management and elements that can be discussed related to
organisational learning. Consequence analysis can be a useful change management and organisational learning
tool, if the traditional design and use of such analysis can be changed.
INTRODUCTION
technology. Traditional work processes and organisational structures are challenged by more efficient and integrated approaches to exploration and production. The new approaches reduce the impact of traditional obstacles, whether they are geographical,
Figure 1. From parallel, single-discipline teams that are dependent on physical location and base decisions on experience, to multidiscipline teams that are independent of physical location and base decisions on real-time data.
CA method: visualization of consequence categories
IO-project reports and descriptions of the new proposed IO-models were the basis for the document studies in both cases. As a basis for the interviews and the search conferences, we used the proposed change measures needed to implement the new model as an interview guide. In addition, we initially asked for the history of the initiation of the pilot to get access to the main actants and controversies, and to get in touch with the most important arguments.
The group interviews had the basic aim of gathering information; they may be less time-consuming than individual interviews, but may give access to more superficial information. The search conference as such is more a technique for creating an arena for a common dialogue on specific issues. A combination of these techniques has often been seen to be fruitful.
In the IO change processes we have seen conflicting interests between management representatives and trade unions. The search conference can be a useful tool for overcoming these change process barriers. The search conference can create openness among the participants (show that things are what they appear to be); create an understanding of a shared field (the people present can see they are in the same world/situation); create psychological similarity among the representatives; and generate mutual trust between parties. All these elements are found to be important for achieving effective communication within and between groups (Asch, 1952), and in this case for bringing the planned change process forward in a constructive direction.
2.4
In order to sort hypothetical negative and positive consequences after implementation of the suggested pilot, we used a matrix to help us sort the informants' statements within the categories 'organization and management', 'personnel and competence', 'operations and regularity', 'HSE', 'economy' and 'company reputation'. For the positive consequences we tried to describe the presumptions underlying these statements, and for the challenges found we tried to suggest compensating actions.
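A minimal sketch of such a sorting matrix is given below. The category names come from the text; the example statement and the compensating-action note are invented purely for illustration.

```python
# Sketch of the consequence-sorting matrix described above: informants'
# statements filed under categories, split into positive consequences
# (with underlying presumptions) and challenges (with compensating actions).
CATEGORIES = [
    "organization and management", "personnel and competence",
    "operations and regularity", "HSE", "economy", "company reputation",
]

matrix = {c: {"positive": [], "challenges": []} for c in CATEGORIES}

def add_statement(category, kind, statement, note=""):
    """kind is 'positive' (note = underlying presumption) or
    'challenges' (note = suggested compensating action)."""
    matrix[category][kind].append({"statement": statement, "note": note})

# Hypothetical example entry, not taken from the StatoilHydro cases:
add_statement("HSE", "challenges",
              "Fewer people offshore to handle an emergency",
              note="Revise the emergency-response staffing plan")
print(len(matrix["HSE"]["challenges"]))  # 1
```

Shown collectively on a large screen, a structure like this is what allows the matrix to be filled in concurrently during a search conference.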
New ICT (Information and Communication Technology), with the use of large screens, gives new possibilities for search conferences. In group interviews and in search conferences these matrices can be created collectively and shown on a large screen, which may foster enthusiasm, participation and grounding of the results. In a case like this there will be many arguments, and the matrices give a convenient way to tidy the messy arguments and easily provide an overview. The concurrent production of this matrix in a search conference may, in addition, be time-saving. A further step might be to use the search conference
Analysing data
To analyse the data, one methodological starting point was to find the controversies and paradoxes about the smart well technology, and to identify the different groups of actants involved in the controversies. By identifying the different controversies one also identifies the interests that are connected to them, and the constellations of interests in which the different actants are chained. Interests are to be seen as the driving forces for change. Interests are what make things happen, both in a positive and a negative way; interests are also what make things not happen. If one wants to understand the OD aspects of a CA, one has to understand the main interests. And if one wants to do organizational change, one has to be able to play with the main interests, to 'play the game', chaining in with the different interests in different enrolments and translations (Latour, 1986) to make a strong enough chain to be able to do change management; if not, it is all in vain.
Part of the analysis was also to describe the presumptions underlying the positive consequences found, and to suggest compensating actions for the challenges found. The main stakeholder in the analysis is, in these cases, SINTEF. Consequence analysis is something in between an evaluation and scenario thinking, and trained methodological and analytical skills are of course required. A higher degree of participation in the analysis, and testing out the analysis, might be a fruitful idea; with the search conference as a tool, this possibility is not so far away. But the final responsibility for the analysis should rest with the action researchers.
In addition to identifying the different aspects of potential consequences of the pilot mentioned above, positive as well as negative, the CA has to rank the different arguments by importance, e.g. by risk level or sometimes by interests (as seen in Figure 4). One way might be to find which arguments and chains of argumentation are used by visualizing the arguments through cluster analysis. We often end up with only a few central arguments as the basis for the conclusion.
The cluster analysis aimed to find clusters in statements regarding proposed negative consequences related to one or several IO-model measures. As a result, it was easier to see how several IO-model measures could cause interaction effects (e.g. severe negative consequences) within the different categories shown in Figure 2.
with a separate control system as in the pilot. But security is said to be well handled at the statoil@plant. It also involves a larger development cost to integrate than the separate solution of the pilot. The pilot has a script that makes operation from onshore preferred. Integration in OS opens for symmetry between onshore and offshore operations, and might thereby conserve the status quo as far as today's situation on who should operate the valves is concerned, whereas the pilot might push a change.
The CA started a discussion within the pilot as to whether the pilot was initially evaluated well enough to become a pilot at all. As the interviews proceeded, the production engineers started to suggest what could be changed in the pilot, as they realized that this might become the reality for many colleagues in other assets and that their practice might become the standardized practice, even though they had not done anything to improve it, or come up with the same suggestions, in the two to three years between the evaluation of the pilot and now.
4 CONCLUSIONS
ABSTRACT: This paper aims to discuss how the use of advanced information and communication technology impacts leadership practice. The paper is based on a research study carried out at the Kristin asset on the Norwegian continental shelf. The technology we explore is Integrated Operations (IO), and how organizations can benefit from using this kind of technology. We discuss the results of our study, focusing on virtual cooperation among leadership teams located onshore and offshore in the Kristin organization. To date, some research exists on how to succeed in virtual teams, but few studies explore leadership in virtual teams. The strength of this study is its in-depth insight into how and why IO shapes the work practice of leaders and operators/technicians. So far, few empirical research studies have shed light on how IO functions and how it is experienced by the people involved; the research has mostly focused on theoretical models of IO.
INTRODUCTION

1.1 Integrated operations
Today, several oil companies on the Norwegian continental shelf have implemented IO as a strategic tool to
achieve safe, reliable, and efficient operations. There
are a variety of concepts describing IO, also called
e-Operations and Smart operations. IO allows for a
tighter integration of offshore and onshore personnel,
operator companies, and service companies, by working with real-time data from the offshore installations.
The Norwegian Ministry of Petroleum and Energy (in White Paper no. 38) defines IO as: 'Use of information technology to change work processes to achieve improved decisions, remote control of processes and equipment, and to relocate functions and personnel to a remote installation or an onshore facility.' Thus, IO
is both a technological and an organizational issue,
focusing on the use of new and advanced technology as well as new work practices. The IO technology
implementation is not considered to be a major obstacle in StatoilHydro. The most challenging issue is to
2 METHOD
The empirical data for this study is comprised of observations and interviews. We have interviewed managers
both onshore and offshore, operators offshore within all the functions and disciplines represented at the platform (electro, mechanic, automation, instrument, and process, among others), and technical lead engineers onshore within most of the disciplines. The collected
material comprises semi-structured interviews with a
total of 69 informants, as well as extensive participating observations both onshore and offshore. Analyses
of interviews were conducted based on the principles
of grounded methodology (Strauss and Corbin, 1998)
with qualitative coding techniques. Examples of participating observations are being present at formal and
informal meetings in the collaboration rooms both
onshore and offshore, as well as following the work
of the operators when they were out in the process
plant doing maintenance and operation tasks.
The research approach has been co-generative
learning (Elden & Levin, 1991). The basic idea is
that practitioners and researchers create new practices
together and parallel to developing a deeper understanding of the research question in focus. Examples
of this close interaction between practitioners and researchers in our study are as follows:
During the project period, the researchers and key
practitioners met on a regular basis (every month)
for working sessions. This was central to the development of the analysis of the IO work practice. At
work sessions, difficult issues could be discussed,
misunderstandings were sorted out, and findings
that needed interpretation were discussed.
By holding informal meetings, being embedded
during data collection, etc., the project contributed
to co-learning and reflection on work practices
between researchers and practitioners. A set of
shared notions and concepts was developed, and
thus also a higher awareness of critical organizational aspects.
In addition, the researchers presented and discussed
the project results closely with all people involved in
the project: offshore operators, onshore and offshore management, and onshore technical lead discipline
According to Bass (1990), transformational leadership means that a leader communicates a vision, which
is a reflection of how he or she defines an organization's goals and the values which will support it.
Transformational leaders know their employees and
inspire and motivate them to view the organization's vision as their own (Bass and Avolio, 1994). Such
leadership occurs when one or more persons engage
with others in such a way that leaders and followers lift each other to higher levels of motivation. At
Kristin, the concept of integrated operations (what it really means for this organization) involved defining a vision and values concerning how the work ought to be performed, on board and between the offshore
and onshore personnel. Kristin has a lean and competent organization, where the operators/technicians in
the Operation & Maintenance (O&M) team offshore
possess expertise not necessarily found among their
superiors. The motivation at Kristin has been empowerment, which has affected the autonomous work of
the operators and the delegating leadership style.
Another leadership characteristic found at Kristin is
situational leadership, which means that leaders allow
for flexible solutions and actions adapted to the special conditions and situations in the organization. The
lean offshore organization at Kristin, with few persons within each discipline, necessitates flexible problem-solving, which includes cooperation across disciplines
to support each others work. Situational leadership is
the opposite of trying to generalize or standardize work
practices and routines. Situational leadership theories
in organization studies presume that different leadership styles are better in different situations, and that
leaders must be flexible enough to adapt their style to
the situation in which they find themselves. A good
situational leader is one who can quickly adapt his or
her leadership style as the situation changes. Hersey
and Blanchard (1977) developed situational leadership
theory. They categorize leadership style according to
the amount of control exerted and support given in
terms of task and relationship behaviours: persuasive, instructive, participating, and delegating behaviour.
Instructive behaviour means giving precise instructions and controlling execution. Persuasive behaviour
involves defining tasks, but seeking ideas and suggestions from the workers. A participating leadership
style is when the leader facilitates and takes part in
decisions. A delegating behaviour means that leaders delegate the responsibility for decision-making and
execution.
The level of competence among workers will influence whether a supportive or controlling leadership
behaviour is adopted. To lead personnel with a low degree of competence, a manager will define tasks
and supervise the employees closely. On the other
hand, leading highly skilled workers involves delegating tasks and responsibility, and the control lies with
3.2
We have examined which kinds of work practices, benefits, and outcomes the Kristin leadership teams, both
offshore and onshore, have achieved by the use of integrated operations. First, we present and discuss how
the management teams actually work in the collaboration rooms. Then we discuss the benefits and how
they are achieved.
At Kristin, the management is organized as follows:
There are two management teams, one onshore and
one offshore, each located in a collaboration room.
The collaboration is supported by the use of video
conferencing and data sharing facilities, where both
management teams can see each other at all times
during the workday. Also, process data is online and
available at both locations and can be shared.
The offshore management team at Kristin is comprised of four managers: a Platform Manager, an Operation Supervisor (O&M), an Operations Engineer, and
a Hotel & Administration Supervisor (H&A). The
management team offshore manages maintenance and
operations in close collaboration with the management onshore. The onshore management is comprised
of a Platform Manager, an Operation Supervisor, an
Operation Engineer, and a Technical Support Supervisor. They share the collaboration room with some
technical engineers, who support operations and modifications offshore. Both the offshore and onshore management teams use the collaboration room on a permanent basis, as their office, and not as a meeting room, as several other assets on the Norwegian continental shelf do.
The onshore management team is responsible for
giving day-to-day operational support to the offshore
organization, and for the planning of maintenance
programs and tasks on a long-term basis. This takes
place through formal daily meetings and through
informal and ad-hoc dialogue during the day. Each
morning the offshore and onshore management teams
have shared virtual meetings to inform and discuss the last 24 hours of operation and the next 24
hours to come. Here, representatives from different
Figure 1.
At Kristin, the operation and maintenance tasks performed by offshore operators are based on remote support from technical engineers onshore. Their function
is not the management of people, but the management
of technical tasks within operation and maintenance.
For example, there is one domain engineer within
the electro discipline who is planning and supporting the work of the electricians on the platform. This
engineer is a domain expert and system responsible.
A similar situation exists for the other disciplines on
board (mechanics, automation among others). He/she
remotely assists the operations performed on the platform on a daily and long-term basis, such as the
dependent on the skills and knowledge of these system
responsible engineers, and on their availability in the
daily decision-making and task-solving processes.
The work practice and virtual cooperation between
technical engineers onshore and operators offshore is
characterized by telephone meetings, telephone conversations, e-mails, and face-to-face cooperation on
the platform. Meetings rarely take place in the collaboration rooms. For example, the electricians and
mechanics on the platform have weekly telephone
meetings with their technical engineers onshore. In
addition, the technical engineers go offshore to Kristin 2-3 times a year, on average. This results in personnel onshore and offshore knowing each other well,
and they develop a shared situational awareness of the
operation and maintenance conditions.
We find a lot of shared characteristics between
the different disciplines, such as the close cooperation between operators within different disciplines and
technical engineers. The relation is characterized by
mutual trust, and they refer to each other as good colleagues:
"I have a close cooperation with the operators on Kristin. They are highly skilled, work independently, and know the platform very well. I'm in daily dialogue with them, and we have weekly telephone meetings. Together we discuss technical challenges and problems." (Technical engineer)
"We are very satisfied with the technical support from our discipline expert. We appreciate it when he
CONCLUSIONS
situational awareness, which is crucial for obtaining efficient problem-solving and decision-making
concerning safety, production and maintenance.
In our study, we find that IO enhances the
experience of integration and common understanding between the onshore and offshore organizations,
where the virtual contact through the use of collaboration is experienced as being in the same room. This
results in better and faster decisions, because both the
onshore and the offshore managements have in-depth
knowledge about the situations/problems.
The challenging aspect of the use of collaboration rooms is that it can impede the managers' hands-on relationships with people outside this room, such
as the relations with the operators/technicians offshore
and the technical engineers onshore. Both groups have
expressed a wish for more involvement from their
management onshore and offshore in problem-solving
tasks.
Our focus has been on how organizations can benefit from the use of new and advanced technology.
The challenge is not the technology itself, but the
organizational aspects, such as developing hands-on
leadership practices, clear roles and tasks, common
goals, trust, and knowledge and skills. These elements
are essential for developing an efficient organization
with motivated and skilled employees and managers.
REFERENCES
Alberts, D.S. and Hayes, R.E. (2005), Power to the Edge: Command and Control in the Information Age, CCRP Publication Series.
Artman, H. (2000), Team Situation Assessment and Information Distribution, Ergonomics, 43, 1111-1128.
Bass, B.M. (1990), From transactional to transformational leadership: Learning to share the vision. Organizational Dynamics, (Winter), 19-31.
Bass, B.M. and Avolio, B. (1994), Improving Organizational Effectiveness Through Transformational Leadership. Thousand Oaks, Calif.: Sage.
Elden, M. and Levin, M. (1991), Co-generative learning: Bringing participation into action research. In William Foote Whyte (Ed.), Participatory action research (pp. 127-142). Newbury Park, CA: Sage.
French, H.T., Matthew, M.D. and Redden, E.S. (2004), Infantry situation awareness. In Banbury, S. and Tremblay, S. (Eds.), A Cognitive Approach to Situation Awareness: Theory and Application, Ashgate Publishing Company, Burlington, VT, USA.
ABSTRACT: This paper presents a framework for the management of maintenance outsourcing in a service provider company. It proposes key aspects for taking decisions in a well-established and controlled organization. Cost is not the most important aspect to consider in outsourcing; the decision has to be a global and strategic one inside the company. Of course, not only the directors must take part, but also the technical maintenance personnel. We try to offer a basic guide to establishing an outsourcing service, with guidelines and its possible evolution. It is based on a practical view of knowledge management over ten years of professional experience focused on networks. Below, a case study demonstrates a methodology for decision-making and shows how to optimize the organization without losing differing levels of knowledge. For this, we employ quantitative and qualitative criteria to obtain a wide consensus and acceptance.
INTRODUCTION
OUTSOURCING
Improving Quality
Improving Security
Reducing Cost
Optimizing Resources
Figure 1. Network structure: source, primary connections, secondary connections, tertiary connections, and customer link.
Figure 3. Elements of the management model: mission and objectives, department strategy, processes and activities, control system, change management, and supplier selection.
to the characteristics of service provider companies: to guarantee the service, ensuring the proper functioning of the services supplied. Starting from this mission, we define responsibilities to meet the objectives of the department.
4.2
This is where the maintenance control system should be established and where the way to assess
4.6 Supplier selection
Experience in a sector
Flexibility on demand for services
Confidence
Technical and economic solvency
Willingness to collaborate, oriented to services as strategic support
Transparency of suitable pricing
Management of changes
Planning a correct transition is important: it is a learning phase oriented to the supplier fulfilling the agreed service levels. On the other hand, to ensure business continuity in outsourcing, a transitional phase should also be considered, and a possible reversion, distinguishing whether it occurs in the transitional phase, at any other time, or at the end of the contract.
Working with an outsourcing model of these characteristics implies important changes for everyone, especially for the responsible teams, which have to take a much more participatory role in management.
5 A CASE STUDY IN A TELECOMMUNICATIONS COMPANY
Table 1. Comparison scale used in the expert poll ("Same" denotes equal importance, value 1).

Weak:      1/3  (slightly less important)
Strong:    1/5  (less important)
Proven:    1/7  (much less important)
Absolute:  1/9  (absolutely less important)
Table 2. Pairwise comparison matrix of the maintenance objectives (diagonal entries equal 1; the last row gives the column sums).

              Quality  Cost   Production  Management  Security  Improvement
Quality        1.00    3.09    0.50        2.40        0.46      1.12
Cost           0.32    1.00    0.30        1.51        0.22      0.55
Production     1.99    3.37    1.00        3.36        0.93      1.70
Management     0.42    0.66    0.30        1.00        0.17      0.46
Security       2.19    4.57    1.07        5.73        1.00      1.26
Improvement    0.89    1.82    0.59        2.18        0.79      1.00
Sum            6.8    14.5     3.8        16.2         3.6       6.1

The consistency of the judgements is checked with the consistency index and ratio:

IC = (λmax − n) / (n − 1)   (1)

IC / ICrandom < 0.1   (2)
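The weighting and consistency check can be sketched in Python. This is an illustrative reconstruction, not the authors' code: the matrix values are those of the comparison matrix above, and the random index 1.24 for n = 6 is taken from Saaty's standard table.

```python
import numpy as np

# Pairwise comparison matrix of the six maintenance objectives
# (rows/cols: Quality, Cost, Production, Management, Security, Improvement).
A = np.array([
    [1.00, 3.09, 0.50, 2.40, 0.46, 1.12],
    [0.32, 1.00, 0.30, 1.51, 0.22, 0.55],
    [1.99, 3.37, 1.00, 3.36, 0.93, 1.70],
    [0.42, 0.66, 0.30, 1.00, 0.17, 0.46],
    [2.19, 4.57, 1.07, 5.73, 1.00, 1.26],
    [0.89, 1.82, 0.59, 2.18, 0.79, 1.00],
])
n = A.shape[0]

# Priority weights: normalise each column, then average across each row.
weights = (A / A.sum(axis=0)).mean(axis=1)

# Consistency index IC = (lambda_max - n) / (n - 1), eq. (1);
# lambda_max is the principal eigenvalue of A.
lambda_max = max(np.linalg.eigvals(A).real)
ic = (lambda_max - n) / (n - 1)

# Consistency ratio IC / ICrandom < 0.1, eq. (2); ICrandom = 1.24 for n = 6.
cr = ic / 1.24

print(weights.round(3), round(float(cr), 3))
```

With these values the resulting weights are close to those reported in Figure 5 (roughly 0.16, 0.07, 0.26, 0.06, 0.29, 0.16), and the consistency ratio falls well below the 0.1 threshold.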
1. Goal
2. Maintenance objectives as criteria
3. Activities as alternatives
For valuing the objectives, an expert group poll with qualitative criteria is used, depending on their strategic importance. Each technician of a group of six compares them employing Table 1, and afterwards the resulting matrix (Fig. 5) is built by weighing the average of the individual values (Fig. 4); e.g. 0.21 (in the second cell of the first row of Figure 5) is calculated dividing
Table 4. Matrix of activities rated according to their importance in relation to the strategic objectives.

                      Quality  Cost   Production  Management  Security  Improvement
Manage incident        0.117   0.058   0.076       0.057       0.056     0.040
Monitoring             0.094   0.104   0.107       0.068       0.109     0.145
On demand activities   0.026   0.022   0.021       0.028       0.026     0.027
Preventive             0.074   0.085   0.099       0.074       0.122     0.125
Predictive             0.166   0.109   0.121       0.121       0.115     0.185
Perfective             0.126   0.112   0.122       0.148       0.093     0.162
Logistics              0.039   0.069   0.072       0.031       0.039     0.050
Budget and Human R.    0.147   0.310   0.270       0.285       0.110     0.073
Security               0.064   0.048   0.037       0.040       0.115     0.069
Documentation          0.148   0.083   0.074       0.149       0.214     0.123
A further matrix gives the weighted scores, obtained by multiplying each entry of Table 4 by the weight of the corresponding objective.
Figure 5. Normalized comparison matrix wij; the bottom row gives the resulting weights of the objectives.

w_ij           Quality  Cost   Production  Management  Security  Improvement
Quality         0.15    0.21    0.13        0.15        0.13      0.18
Cost            0.05    0.07    0.08        0.09        0.06      0.09
Production      0.29    0.23    0.27        0.21        0.26      0.28
Management      0.06    0.05    0.08        0.06        0.05      0.08
Security        0.32    0.31    0.28        0.35        0.28      0.21
Improvement     0.13    0.13    0.16        0.13        0.22      0.16
Weights         0.159   0.073   0.257       0.062       0.294     0.156
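As an illustration of the final aggregation step, a global priority for each activity can be obtained as the weighted sum of its ratings in Table 4, using the objective weights above. This is a sketch, not the authors' implementation; it assumes the column order Quality, Cost, Production, Management, Security, Improvement and shows only a subset of the activities.

```python
import numpy as np

# Objective weights from the normalised comparison matrix
# (Quality, Cost, Production, Management, Security, Improvement).
w = np.array([0.159, 0.073, 0.257, 0.062, 0.294, 0.156])

# Activity ratings against each objective (columns in the same order as w).
activities = {
    "Manage incident": [0.117, 0.058, 0.076, 0.057, 0.056, 0.040],
    "Monitoring":      [0.094, 0.104, 0.107, 0.068, 0.109, 0.145],
    "Preventive":      [0.074, 0.085, 0.099, 0.074, 0.122, 0.125],
    "Predictive":      [0.166, 0.109, 0.121, 0.121, 0.115, 0.185],
    "Perfective":      [0.126, 0.112, 0.122, 0.148, 0.093, 0.162],
}

# Global priority of each activity: weighted sum across the objectives.
priorities = {name: float(np.dot(r, w)) for name, r in activities.items()}
for name, p in sorted(priorities.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {p:.3f}")
```

Ranking activities by these global priorities is what lets the company decide which maintenance activities to keep in-house and which to outsource without losing critical knowledge.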
CONCLUSION
REFERENCES
AEM, Asociación Española de Mantenimiento. 2005. El Mantenimiento en España: Encuesta sobre su situación en las empresas españolas.
Alexander, M. & Young, D. 1996. Strategic outsourcing. Long Range Planning 29 (1): 116-119.
Benoît Iung. 2006. CRAN Laboratory Research Team PRODEMAS in Innovative Maintenance and Dependability. Nancy University, Nancy Research Centre for Automatic Control (CRAN), CNRS UMR 7039 (http://www.cran.uhp-nancy.fr).
Bourne, M. & Neely, A. 2003. Performance measurement system interventions: the impact of parent company initiatives on success and failure. Journal of Operations Management.
Campbell, J.D. & Jardine, A. 2001. Maintenance excellence. New York: Marcel Dekker.
Carter, Russell A. 2001. Shovel maintenance gains from improved designs, tools and techniques. Elsevier Engineering Information.
Click, R.L. & Duening, T.N. 2005. Business Process Outsourcing: The Competitive Advantage. John Wiley & Sons, Inc.
CMMI Product Team, Software Engineering Institute. 2007. CMMI for Development, Version 1.2. CMMI-DEV, V1.2, CMU/SEI-2006-TR-008, ESC-TR-2006-008.
COBIT (Control Objectives for Information and related Technology). 1992. Objetivos de Control para la Información y Tecnologías relacionadas. ISACA (Information Systems Audit and Control Association) and ITGI (IT Governance Institute).
Crespo, M.A., Moreu de L., P. & Sánchez, H.A. 2004. Ingeniería de Mantenimiento. Técnicas y Métodos de Aplicación a la Fase Operativa de los Equipos. AENOR, España.
Crespo, M.A. 2007. The Maintenance Management Framework. Models and Methods for Complex Systems Maintenance. London, UK: Springer.
Davenport, T. 1993. Process Innovation: Reengineering Work through Information Technology. Harvard Business School Press.
Dixon, J.R. 1966. Design Engineering: Inventiveness, Analysis, and Decision Making. New York: McGraw-Hill, Inc.
Dyer, R.F. & Forman, E.H. 1992. Group decision support with the Analytic Hierarchy Process. Decision Support Systems.
Earl, M.J. 1994. The New and the Old of Business Process Redesign. Journal of Strategic Information Systems, vol. 3.
Earl, M.J. 1996. The Risks of Outsourcing IT. Sloan Management Review 37: 26-32.
Elfing, T. & Baven, G. 1994. Outsourcing technical services: stages of development. Long Range Planning 27 (5): 42-51.
EN 13306:2001. Maintenance Terminology. European Standard. CEN (European Committee for Standardization), Brussels.
Fixler, D.J. & Siegel, D. 1999. Outsourcing and Productivity Growth in Services. Structural Change and Economic Dynamics.
J. Hovden
Department of Industrial Economics and Technology Management,
Norwegian University of Science and Technology (NTNU), Trondheim, Norway
ABSTRACT: This paper presents and discusses four safety rule modification processes in the Norwegian
railway system. It focuses upon the impact from the processes upon railway knowledge and in particular the
ambitions to change from predominantly experience based prescriptive rules towards risk based outcome oriented
rules, i.e. a deductive top-down approach to rule development.
The cases met this challenge with an inductive bottom-up approach to rule development, a strategy given the name "reverse invention". Discussions about the new approach and the processes of reverse invention stimulated inquiries into railway knowledge that revived this knowledge. It remained uncertain whether the inquiries resulted in actual new knowledge. The new approach also stimulated a reduction of the relational and contextual elements of the railway knowledge. According to theory, these elements are important for the ability to decode theoretical knowledge and to judge its relevance for future use.
1 INTRODUCTION
Rules constitute an important record of the organization's learning about its operational dangers (Reason,
1997; Reason et al. 1998). Hale (1990) argues that
the greatest value of safety rules lies in the process of
actually finding out and writing down the rules. The
organization can retain this value by treating the rules
as a living repository of lessons learned in the life of
the system. Accordingly, safety rules are not only a
result of a learning process, they can be seen as a part
of that process.
There is limited scientific knowledge about performing safety rule modifications (Hale et al., 2003).
This also implies that there is limited scientific knowledge of the close relationship between safety rules and
knowledge of activities and related risks in a regulated
area and the possible impact from rule modification
upon this knowledge.
The purpose of this paper is to explore how safety
rule modification can influence organizational knowledge about operational dangers. It presents results
from a case study of four safety rule modification
processes in the Norwegian railway system. Special
attention is given to consequences of the ambition
to change from predominantly experience based prescriptive rules towards risk based outcome oriented
rules.
The Norwegian railway system has a tradition of prescriptive rules directed at the operative staff at the
lower levels of organizational hierarchies. The rules
have been developed with growing knowledge of the system's technology, activities and interactions, and
with experiences of unwanted events or accidents
(Gulowsen & Ryggvik, 2004; Ryggvik, 2004). Much
of the knowledge has been derived from practice and
consisted of collective tacit and explicit knowledge.
This knowledge was shared through an internal educational system, practice oriented trainee programs and
socialization. Here the rules served an important role
for the structure of the education and as knowledge
carriers.
In 1996, steps were taken to open the Norwegian
railway system to new traffic operators. The Norwegian
state owned railway company (NSB) was divided
into the National Railway Administration, which was
responsible for infrastructure management, and NSB
BA, a state owned traffic operator. An independent
regulatory body, the Norwegian Railway Inspectorate,
was established.
The railway sector, and in particular the Norwegian
Railway Inspectorate, has been influenced by the
safety management traditions of the Norwegian oil
industry (Ryggvik 2004). This tradition has emphasized internal control principles with extensive use of
risk analyses and outcome oriented rules. The development has resulted in initiatives, especially from the
Norwegian Railway Inspectorate, to change the tradition of experience based, prescriptive rules towards
outcome oriented rules based on results from risk
analyses.
The intention of a development towards a deductive and risk based top-down approach to safety-rule modifications was evident in two different projects.
One project was established for modification of the
traffic safety rules. In general, these rules were
detailed prescriptive action rules that coordinated
the activities of the operative staff involved in traffic
operations. The management of this project encouraged the rule developers to "think new" and to develop
outcome-oriented rules formulated as goals and to
base these upon risk analyses. From the beginning, the
rule-imposers were the Norwegian Railway Administration. Later this responsibility was transferred to the
Norwegian Railway Inspectorate.
The purpose of the other project was to improve the management of infrastructure maintenance. One
element in this project was to modify the maintenance rules. These rules were organized in different sets for each subsystem of the infrastructure.
They were mainly detailed prescriptive action or state
rules directed at the operative staff that served both
safety and other purposes. The different subsystems
had varying characteristics regarding time sequencing
of activities, communication and coordination. The
project organized subprojects for the modification of
each rule set.
Also, in this project the management encouraged
the rule developers to "think new". This meant increasing the use of triggering requirements and basing the rules on risk analyses. The triggering requirements should define conditions in the infrastructure that should trigger maintenance
activities, i.e. define outcomes for maintenance activities. The rule-imposers were the Norwegian Railway
Administration.
Against this background, the Norwegian railway system
represented an opportunity to study implementation of
the ongoing changes in rule traditions and its impact
upon knowledge about operations of the system and
associated dangers.
In this paper the term "railway knowledge" refers to the individual and collective understanding of the functions and interactions of the railway system. This
includes knowledge of the system itself, its activities
and their interactions, the inherent risks and preventive
means.
2
In all cases the participants of the modification processes tried to think new and to use the intended
deductive top-down approach to rule development, i.e.
to start with the development of higher order outcome
oriented rules based on knowledge from risk analyses. The chosen methods for the risk analyses used
experienced top events as outset for the analyses. The
core attention was directed at the operative level of the
railway system.
However, the cases critically judged outcome oriented rule solutions and risk analytic results through
inquiries into experience based railway knowledge.
This reflected a precautionary concern for safety
(Blakstad, 2006). For example, one of the persons
involved in the Traffic-rule case explained how one
of the railway professionals of the work group always
expressed his worries for safety. He did this even when
he was not able to express why he was worried. These
expressed worries led to inquiries and discussions that
revealed the foundations for these worries. In this way
experience based railway knowledge, even when it was
tacit, served as reference for safe solutions.
The cases soon abandoned the deductive and risk
based top-down strategy to the rule development.
Instead, all cases used a bottom-up approach where
existing low level prescriptive rules and associated
knowledge were used as outset for the development
of outcome-oriented rules. This strategy is given the
name "reverse invention" in this study.
When the cases changed into processes of reverse
invention the work built upon the railway knowledge of
the existing rules, i.e. knowledge directly expressed in
3.2
The descriptions above reveal that the existing prescriptive rules served as an important knowledge base
for the rule development.
The descriptions also illustrate that the processes
of the cases can be seen as a revival of railway
knowledge. Railway knowledge was spread around
the organization. It was sometimes difficult to retrieve
because some of it had a more or less individual and
tacit form. Therefore the inquiries and the following
work with the achieved knowledge implied an articulation of this knowledge and that more people had
access to it. It remained uncertain whether the inquiries
contributed with actually new knowledge.
The knowledge retrieved from the inquiries was combined and sorted out, discussed, systematized and to some extent documented. This implied a direction of attention where some knowledge came more into focus than other knowledge and was therefore included in the work.
The processes were governed by a combination of the incentives for the rule solutions, the frameworks that the risk analytic methods provided, and the risk perception of the participants. For instance, the final report of the Traffic-rule case comments that the risk analyses did not contribute any particular unknown conditions. However, they had a systematizing function, contributed an overview, and drew attention to conditions known from before. An interviewee of
the Maintenance-rule cases made some of the same
reflections:
"At least the systematizing of it [the risk analyses, authors' comment] forces one to evaluate and document what one does." And he continues: ". . . before it was very much based on individuals, the experience one had within the areas."
The interviewees were asked who the core contributors to the work were. Their answers revealed that
those performing the rule development and the risk
analyses were the main contributors. Their networks
contributed as important supplements. However, there
were differences between the cases. The organizing of
the cases and the tasks differed and influenced how
much the knowledge of these actors became articulated, made collective and combined. There were also
DISCUSSION
The results above reveal that the cases used railway knowledge as the core knowledge base for the
rule modification process. The rule modification process revived railway knowledge by making formerly
tacit knowledge explicit. The processes increased the
confidence in this knowledge.
4.1
The cases did not adopt the rationalistic, deductive top-down strategy that was intended for the modification
work. The main explanation was that existing experience based railway knowledge, and in particular the
knowledge associated with existing prescriptive rules,
was seen as too valuable for safety to be abandoned.
Hence, they are in line with Lindblom's critique of the rationalistic strategy (Lindblom, 1959).
Furthermore, Reason (1997) argues that the stage reached in an organization's life history will influence
the opportunities for feed forward and feedback control of activities. The Norwegian railway system was
old enough to have the necessary experience to develop
prescriptive rules in accordance with feed forward
control principles.
ABSTRACT: The prevention of major accidents is at the core of high-hazard industries' activities. Despite the increasing safety level, industrial sites are asking for innovative tools. The notion of weak signals may enable industries to anticipate danger and improve their safety management. Our preliminary results show the great interest and relevance of weak signals, but also the difficulty of treating them concretely within safety management. We found that organizational features act as weak-signal blockers: bureaucratic management of safety, linear and bottom-up communication, and a reactive safety management. In order to favor the treatment of weak signals, we should act on these organizational factors. This is the main objective of this PhD research.
1 INTRODUCTION
2 DEFINITION
2.2
3
3.1
4.1 Methodology
As we mentioned previously, weak signals are obvious after the accident. Once they were identified, people learnt lessons from weak signals, particularly in the new design of the damaged installations. For instance, the Design Department, in charge of the bunker building, took the experts' recommendations into account by implementing new safety barriers and better detection systems (acetylene and temperature). However, we must admit that weak signals are still ignored before they lead to an accident. The explanation seems to lie in organizational features playing the role of blockers. This hypothesis will be tested in the last period of case studies we will carry out (from April 2008).
CONCLUSION
In conclusion, weak signals seem to have practical
relevance at such a site, yet they are considered difficult to
pick up. This difficulty lies not in detection but
in the capacity to make sense of several pieces of information and the possibility of transmitting them. We found
that weak signals were indeed detected, and we identified
channels that can bring the information to the relevant people. However, certain factors block this process
and impede the opportunities to learn. The next stage
of the research will be precisely to analyze the data and
reveal which factors (organizational, cultural, individual) block the opportunities to prevent the accidents
investigated.
ACKNOWLEDGEMENTS
I gratefully thank the French Foundation for an
Industrial Safety Culture (FonCSI), which provides
financial support and access to fieldwork.
REFERENCES
Amalberti, R. & Barriquault, C., 1999. Fondements et limites du retour d'expérience. Annales des Ponts et
Chaussées: Retours d'expérience, No. 91, pp. 67–75.
Author index
Asada, Y. 33
Astapenko, D. 2021
Astarita, G. 27
Aubry, J.F. 2549
Auder, B. 2107
Augutis, J. 1867, 2569,
2575, 3101
Ault, G.W. 2601
Aven, T. 323, 365, 1207,
1335, 2081
Avram, D. 477
Azi, M. 611
Badía, F.G. 575
Baggen, J.H. 1519
Baicu, F. 1027
Baksteen, H. 767, 777
Balakrishnan, N. 1915
Bank, M. 351, 2675
Baraldi, P. 2101
Barbarini, P. 1049
Barker, C. 619
Barnert, T. 1463
Barone, S. 2251
Barontini, F. 2345
Barros, A. 2003, 3125
Bartlett, L.M. 2021
Basco, A. 3085
Basnyat, S. 45
Bayart, M. 3245
Beard, A.N. 2765
Becerril, M. 1415
Bedford, T. 515, 987
Belhadaoui, H. 1829,
2549
Bellamy, L.J. 767, 777,
2223
Benard, V. 3245
Benjelloun, F. 2369,
3067
Bérenguer, C. 469, 531,
593, 2003, 3125
Berg, H.P. 1439
Bernhaupt, R. 45
Berntsen, P.I.B. 3311
Berrade, M.D. 575
Bertsche, B. 875,
2233
Beyer, S. 1539
Bhattacharya, D. 1915
Bianco, C. 2727
Bigham, J. 3109
Birkeland, G. 365
Bischof, N. 2789
Bladh, K. 227
Blakstad, H.C. 839
Bloemhoff, A. 777
Blokus-Roszkowska, A.
2269
Bockarjova, M. 1585,
2781, 2817
Boek, F. 2613
Bolado Lavín, R. 2899
Bonanni, G. 2501
Bonvicini, S. 1199
Bordes, L. 593
Borell, J. 83, 3061
Borgia, O. 211
Borsky, S. 973
Bosworth, D.A. 2353
Bouissou, C. 3191
Bouissou, M. 1779
Boussouf, L. 2135
Bóveda, D. 1533
Bründl, M. 2773, 2789
Braarud, P.Ø. 267
Braasch, A. 2239, 2245
Bragatto, P.A. 137
Brandowski, A. 3331
Brandt, U.S. 3031
Briš, R. 489
Brik, Z. 1937
Brissaud, F. 2003
Brunet, S. 3007
Buchheit, G. 2549
Buderath, M. 2175
Bünzli, E. 2641
Burciu, Z. 3337
Burgazzi, L. 1787, 2899
Burgherr, P. 129
Busby, J.S. 415, 993, 1251,
1325
Bye, R.J. 1377
Costescu, M. 99
Coulibaly, A. 1001
Courage, W.M.G. 2807
Cozzani, V. 1147, 1199,
2345, 2397, 2749, 3153
Craveirinha, J. 2627
Crespo Márquez, A. 669,
929
Crespo, A. 687, 829
Cugnasca, P.S. 1503
DAuria, F. 2899
Damaso, V.C. 497
Damen, M. 767, 777
Dandache, A. 2549
da Silva, S.A. 243
David, J.-F. 981
David, P. 2259
de Almeida, A.T. 627, 1165
De Ambroggi, M. 1431
De Carlo, F. 211
de M. Brito, A.J. 1165
De Minicis, M. 1495
De Souza, D.I. 919
De Valk, H. 2609
de Wit, M.S. 1585
Debón, A. 2447
Debray, B. 3191
Dehghanbaghi, M. 2379
Dehombreux, P. 2117
Deleuze, G. 1309, 3093
Deloux, E. 469
Delvenne, P. 3007
Delvosalle, C. 2369, 3067
Denis, J.-B. 2609
Depool, T. 2409, 2415
Dersin, P. 2117, 3163
Despujols, A. 531
Destercke, S. 697, 905
Deust, C. 3191
Di Baldassarre, G. 2749
Di Gravio, G. 1495
Di Maio, F. 2873
Dien, Y. 63
Dijoux, Y. 1901
Diou, C. 2549
Dohnal, G. 1847
Doménech, E. 2275,
2289
Dondi, C. 2397
Dong, X.L. 2845
Doudakmani, O. 787
Downes, C.G. 1739, 1873
Driessen, P.P.J. 369
Duckett, D.G. 1325
Duffey, R.B. 941, 1351
Dunjó, J. 2421
Dutfoy, A. 2093
Dutta, B.B. 3323
Dutuit, Y. 1173
Dvořák, J. 2613
Dwight, R.W. 423
Ebrahimipour, V. 1125,
2379
Egidi, D. 2397
Eide, K.A. 1747, 2029
Eisinger, S. 365, 2937
El-Koujok, M. 191
Engen, O.A. 1423
Erdos, G. 291
Eriksson, K. 83, 3061
Esch, S. 1705
Escriche, I. 2275, 2289
Escrig, A. 2743
Esperón, J. 3, 121
Espi, E. 2609
Espluga, J. 1301, 1371,
2867
Eusgeld, I. 2541
Eustquio Beraldo, J. 1273
Expósito, A. 3, 121
Eymard, R. 155
Faber, M.H. 1567
Faertes, D. 2587
Fallon, C. 1609, 3007
Fan, K.S. 2757
Faragona, B. 3217
Farré, J. 1301
Fako, P. 1671
Faure, J. 2185, 2217
Fechner, B. 147
Fernández, A. 1533
Fernández, I. 3, 121
Fernández, J. 205, 1395
Fernndez-Villodre, G.
1755
Fernandez Bacarizo, H. 559
Ferreira, R.J.P. 1165
Ferreiro, S. 2175
Feuillard, V. 2135
Fiévez, C. 2369, 3067
Figueiredo, F.A. 627
Finkelstein, M.S. 1909
Flage, R. 1335, 2081
Flammini, F. 105
Fleurquin, G. 2117
Fodor, F. 1309
Forseth, U. 3039, 3047
Fouladirad, M. 567, 593,
2003
Frackowiak, W. 3331
Franzoni, G. 1049
Hryniewicz, O. 581
Hsieh, C.C. 1267
Huang, W.-T. 1651
Hurrell, A.C. 749
Huseby, A.B. 1747, 2029,
2199
Hwang, M. 2861
Iacomini, A. 2501
Ibáñez, M.J. 2743
Ibáñez-Llano, C. 2051
Idasiak, V. 2259
Ide, E. 1901
Innal, F. 1173
Iooss, B. 2107, 2135,
2899
Ipiña, J.L. 727
Isaksen, S.L. 1747, 1891,
2029, 2937
Izquierdo, J.M. 121, 163
Jallouli, M. 2549
Jamieson, R. 1447
Jammes, L. 305
Janilionis, V. 1819
Jarl Ringstad, A. 813
Jeong, J. 2619
Jiang, Y. 863, 1663
Jo, K.T. 913
Jóźwiak, I.J. 1929
Jodejko, A. 1065
Joffe, H. 1293
Johansson, J. 2491
Johnsen, S.O. 805
Jongejan, R.B. 1259
Jönsson, H. 2491
Jordá, L. 727
Jore, S.H. 3077
Joris, G. 1609
Jóźwiak, I.J. 1455
Jóźwiak, K. 1455
Jun, L. 1943
Jung, K. 1629, 1635
Jung, W. 221
Jung, W.S. 2913
Juocevicius, Virg. 1641
Juocevicius, Virm. 1641
Juocevicius, V. 1677
Kalusche, W. 2431
Kamenick, J. 891
Kangur, K. 797
Kanno, T. 33
Kar, A.R. 3323
Karlsen, J.E. 1595
Kastenholz, H. 361
Kayrbekova, D. 2955
Kazeminia, A. 2245
Kellner, J. 2613
Kermisch, C. 1357
Khan, F.I. 1147
Khatab, A. 641
Khvatskin, L. 483
Kim, K.Y. 2913
Kim, M.C. 2909
Kim, S. 2851
Kiranoudis, C. 281
Kleyner, A.V. 1961
Kloos, M. 2125
Kobelsky, S. 1141
Kohda, T. 1035
Koivisto, R. 2511
Kollmann, E. 2641
Kolowrocki, K. 1969, 1985
Konak, A. 2657
Kongsvik, T. 733
Konstandinidou, M. 281,
767, 777
Kontic, B. 2157
Korczak, E. 1795
Kortner, H. 1489
Kosmowski, K.T. 249, 1463
Kosugi, M. 2305, 2311
Koucky, M. 1807, 1813
Koutras, V.P. 1525
Kovacs, S.G. 99
Kowalczyk, G. 449
Kratz, F. 2259
Krikštolaitis, R. 2575, 3101
Kröger, W. 2541
Krummenacher, B. 2773
Kubota, H. 2305, 2311
Kudzys, A. 1677, 1685
Kuiper, J. 767, 777
Kujawski, K. 1929
Kulot, E. 1049
Kulturel-Konak, S. 2657
Kurowicka, D. 2223
Kuttschreuter, M. 1317
Kvernberg Andersen, T.
3039, 3047
Labeau, P.E. 455
Labeau, P.-E. 559, 1357
Laclemence, P. 3093
Laheij, G.M.H. 1191
Lamvik, G.M. 2981
Landucci, G. 3153
Langbecker, U. 3275
Langeron, Y. 3125
Larisch, M. 1547
Laulheret, R. 2185, 2217
Le Bot, P. 275
Le Guen, Y. 2987
Massaiu, S. 267
Mateusz, Z. 3237
Matuzas, V. 2569
Matuziene, V. 2575, 3101
Mavko, B. 1771
Mazri, C. 3191
Mazzocca, N. 105
McClure, P. 2295
McGillivray, B.H. 993
McMillan, D. 2601
Mearns, K. 1415
Medina, H. 1073
Medonos, S. 1239
Medromi, H. 2549
Mehers, J.P. 2317
Mehicic Eberhardt, S. 2431
Meier-Hirmer, C. 3183,
3231
Meléndez, E. 121, 2051
Meliá, J.L. 243, 1415
Membré, J.-M. 2295
Mendes, J.M. 1577
Mendizábal, R. 379, 2827,
2837, 2891
Meneghetti, A. 2727
Menoni, S. 3023
Mercier, S. 155, 603
Merz, H.M. 2773
Meyer, P. 275
Meyna, A. 2239
Mikulov, K. 1671
Milazzo, M.F. 1019, 3143
Miles, R. 1251
Mínguez, R. 2473, 2689
Minichino, M. 2501
Missler-Behr, M. 2431
Mlynczak, M. 57
Mock, R. 2641
Moeller, S. 2431
Molag, M. 3153
Moltu, B. 813
Monfort, E. 2743
Monteiro, F. 2549
Montoro-Cazorla, D. 1955
Montoya, M.I. 1089
Moonis, M. 2353
Morales, O. 2223
Moreno, J. 3
Moreu de León, P. 669,
687, 829, 929
Morra, P. 2345
Mosleh, A. 113
Motoyoshi, T. 2685
Muñoz, M. 1119
Muñoz-Escoí, F.D. 1539
Mud, M. 767, 777
Mulley, C. 299
Mullor, R. 441
Muslewski, L. 2037
Mutel, B. 1001
Næsje, P. 259, 821, 1407
Næsje, P.C. 2981
Nøkland, T.E. 1207, 2929
Naked Haddad, A. 919
Napolitano, N. 1495
Natvig, B. 1747, 2029
Navajas, J. 1301, 1371,
2867
Navarro, E. 757
Navarro, J. 1915
Navarro-Esbrí, J. 175
Navrátil, J. 2613
Nebot, Y. 2827, 2873
Nedelec, B. 3191
Neto, H.V. 761
Newby, M. 619
Nguyen, H. 3331
Nicholls, J. 291
Nicol, A.-M. 749
Nieto, F. 2051
Niezgoda, T. 449
Nivolianitou, Z. 281
Njå, O. 3077
Nogueira Díaz, E. 899
Norberg, T. 1041
Nordgård, D.E. 2561
Nowakowski, T. 1055,
1065, 1455, 1929
Nunes, E. 587
Núñez Mc Leod, J.E. 1129,
1135
Núñez, N. 1949
Nuti, C. 2519
Oh, J. 767, 777
Oliveira, A. 3177
Oliveira, L.F.S. 1919, 2587
Olmos-Peña, S. 11
Oltedal, H.A. 1423
Oltra, C. 1301, 1371, 2867
Or, I. 3257
Osrael, J. 1539
Özbaş, B. 3257
Palacios, A. 1119
Palanque, P. 45
Pandey, M.D. 431
Pantanali, C. 2727
Papazoglou, I.A. 767, 777,
787
Paridaens, J. 89
Park, J. 221, 2909
Quigley, J. 987
Quijano, A. 1395, 1401
Rabbe, M. 2199
Rachel, F.M. 1503
Raffetti, A. 3217
Rajabalinejad, M. 717
Rakowsky, U.K. 2045,
3055
Raman, R. 1239
Raschky, P.A. 965, 973
Rasulo, A. 2519
Rauzy, A. 1173, 1937, 2051
Real, A. 175
Reer, B. 233
Reinders, J. 3153
Remenyte-Prescott, R. 1739
Renaux, D. 3245
Renda, G. 3135
Revilla, O. 3223
Rey-Stolle, I. 1949
Rezaie, K. 1125, 2379
Rhee, T.J. 703
Rheinberger, C.M. 1365
Riedstra, D. 1191
Rietveld, P. 2817
Rimas, J. 1819
Rivera, S.S. 1129, 1135
Robin, V. 331
Rocco S., C.M. 1803
Rocha Fonseca, D. 919
Rodríguez, G. 3, 121
Rodríguez, V. 2707
Rodríguez Cano, D. 899
Røed, W. 2929
Roelen, A.L.C. 2223
Rohrmann, R. 1567
Román, Y. 2013
Romang, H. 2789
Rosn, L. 1041
Rosness, R. 839
Roussignol, M. 155
Rowbotham, A.L. 2353
Rubio, B. 727
Rubio, G. 757
Rubio, J. 727
Rücker, W. 1567
Rudolf Müller, J. 2665
Ruiz-Castro, J.E. 1755
Runhaar, H.A.C. 369
Sætre, F. 2635
Sabatini, M. 1199
Sadovský, Z. 1671
Sagasti, D. 727
Saleh, J.H. 659
Salzano, E. 3085
Čepin, M. 1771, 2883
Veiga, F. 205
Verga, S. 315
Verhoef, E.T. 2817
Verleye, G. 3117
Vetere Arellano, A.L. 341,
2593
Vikland, K.M. 1377
Vílchez, J.A. 2421
Viles, E. 205
Villamizar, M. 505
Villanueva, J.F. 2827,
2837
Vinnem, J.E. 1181
Vintr, Z. 1813
Vivalda, C. 305
Vleugel, J.M. 1519
Voirin, M. 63
Vojtek, M. 1671
Volkanovski, A. 1771
Volovoi, V. 1961
Vrancken, J.L.M. 1519
Vrijling, J.K. 1259, 2797
Vrouwenvelder, A.C.W.M.
2807
Wagner, S. 2541
Walls, L. 987
Walter, M. 1705, 1829
Wang, C. 113
Wang, J. 3109
Wang, W. 523
Wang, Y. 863
Wemmenhove, E. 2295
Weng, W.P. 1267
Wenguo Weng, B. 79
Werbinska, S. 1055,
1851
Wiencke, H.S. 2929
Wiersma, T. 1223
Wiesner, R. 351
Wijnant-Timmerman, S.I.
1223, 2363
Wilday, A.J. 2353
Wilson, S.P. 949
Winder, C. 739, 1081
Winther, R. 183, 2635
Woltjer, R. 19
Woropay, M. 2037
Wu, C.C. 2757
Wu, S.H. 71, 1267
Wu, S.-H. 39, 2405
Xu, S. 2845
Xuewei Ji, A. 79
Yamaguchi, T. 1839
Yamamoto, H. 1715, 1839
Yang, J.-E. 2861
Yannart, B. 2369, 3067
Yeung, T.G. 3171
Yoon, C. 2861
Yu, L.Q. 2845
Yufang, Z. 1943
Yukhymets, P. 1141
Zaitseva, E. 1995
Zajicek, J. 635
Zanelli, S. 1199
Zanocco, P. 2899
Zendri, E. 2501
Zerhouni, N. 191
Železnik, N. 3015
Zhang, C. 863, 1663
Zhang, T. 649
Zhao, X. 593
Zhu, D. 113
Zieja, M. 449
Zilber, N. 3231
Zille, V. 531
Zio, E. 477, 703, 709,
1861, 2081, 2101, 2873
Zubeldia, U. 3223
Zuo, M.J. 1723
Żurek, J. 449
Žutautaite-Šeputiene, I. 1867
Zwetkoff, C. 1609
Editors
Sebastián Martorell
Department of Chemical and Nuclear Engineering,
Universidad Politécnica de Valencia, Spain
C. Guedes Soares
Instituto Superior Técnico, Technical University of Lisbon, Lisbon, Portugal
Julie Barnett
Department of Psychology, University of Surrey, UK
VOLUME 2
CRC Press/Balkema is an imprint of the Taylor & Francis Group, an informa business
© 2009 Taylor & Francis Group, London, UK
Table of contents
Preface
XXIV
Organization
XXXI
Acknowledgment
XXXV
Introduction
XXXVII
VOLUME 1
Thematic areas
Accident and incident investigation
A code for the simulation of human failure events in nuclear power plants: SIMPROC
J. Gil, J. Esperón, L. Gamo, I. Fernández, P. González, J. Moreno, A. Expósito,
C. Queral, G. Rodríguez & J. Hortal
11
Comparing a multi-linear (STEP) and systemic (FRAM) method for accident analysis
I.A. Herrera & R. Woltjer
19
Development of a database for reporting and analysis of near misses in the Italian
chemical industry
R.V. Gagliardi & G. Astarita
27
33
39
Formal modelling of incidents and accidents as a means for enriching training material
for satellite control operations
S. Basnyat, P. Palanque, R. Bernhaupt & E. Poupart
45
57
Organizational analysis of availability: What are the lessons for a high-risk industrial company?
M. Voirin, S. Pierlot & Y. Dien
63
71
79
83
89
Decision support systems and software tools for safety and reliability
Complex, expert based multi-role assessment system for small and medium enterprises
S.G. Kovacs & M. Costescu
99
105
113
121
129
137
Dynamic reliability
A dynamic fault classification scheme
B. Fechner
147
155
163
175
183
191
Fault detection and diagnosis in monitoring a hot dip galvanizing line using
multivariate statistical process control
J.C. García-Díaz
201
205
211
Human factors
A study on the validity of R-TACOM measure by comparing operator response
time data
J. Park & W. Jung
An evaluation of the Enhanced Bayesian THERP method using simulator data
K. Bladh, J.-E. Holmberg & P. Pyy
221
227
233
243
Functional safety and layer of protection analysis with regard to human factors
K.T. Kosmowski
249
259
Incorporating simulator evidence into HRA: Insights from the data analysis of the
international HRA empirical study
S. Massaiu, P.Ø. Braarud & M. Hildebrandt
267
Insights from the HRA international empirical study: How to link data
and HRA with MERMOS
H. Pesme, P. Le Bot & P. Meyer
275
Operators' response time estimation for a critical task using the fuzzy logic theory
M. Konstandinidou, Z. Nivolianitou, G. Simos, C. Kiranoudis & N. Markatos
281
291
299
305
315
323
331
On some aspects related to the use of integrated risk analyses for the decision
making process, including its use in the non-nuclear applications
D. Serbanescu, A.L. Vetere Arellano & A. Colli
341
351
361
365
369
379
391
399
407
415
423
A stochastic process model for computing the cost of a condition-based maintenance plan
J.A.M. van der Weide, M.D. Pandey & J.M. van Noortwijk
431
441
Aging processes as a primary aspect of predicting reliability and life of aeronautical hardware
J. Żurek, M. Zieja, G. Kowalczyk & T. Niezgoda
449
455
463
469
477
483
489
497
505
515
Modelling different types of failure and residual life estimation for condition-based maintenance
M.J. Carr & W. Wang
523
531
541
Non-homogeneous Markov reward model for aging multi-state system under corrective
maintenance
A. Lisnianski & I. Frenkel
551
559
567
575
581
Optimal periodic inspection of series systems with revealed and unrevealed failures
M. Carvalho, E. Nunes & J. Telhada
587
593
Optimal replacement policy for components with general failure rates submitted to obsolescence
S. Mercier
603
611
619
Preventive maintenance planning using prior expert knowledge and multicriteria method
PROMETHEE III
F.A. Figueiredo, C.A.V. Cavalcante & A.T. de Almeida
Profitability assessment of outsourcing maintenance from the producer (big rotary machine study)
P. Fuchs & J. Zajicek
627
635
641
Study on the availability of a k-out-of-N System given limited spares under (m, NG )
maintenance policy
T. Zhang, H.T. Lei & B. Guo
649
659
669
675
687
697
703
709
717
Occupational safety
Application of virtual reality technologies to improve occupational & industrial safety
in industrial processes
J. Rubio, B. Rubio, C. Vaquero, N. Galarza, A. Pelaz, J.L. Ipiña, D. Sagasti & L. Jordá
Applying the resilience concept in practice: A case study from the oil and gas industry
L. Hansson, I. Andrade Herrera, T. Kongsvik & G. Solberg
727
733
Development of an assessment tool to facilitate OHS management based upon the safe
place, safe person, safe systems framework
A.-M. Makin & C. Winder
739
Exploring knowledge translation in occupational health using the mental models approach:
A case study of machine shops
A.-M. Nicol & A.C. Hurrell
749
757
New performance indicators for the health and safety domain: A benchmarking use perspective
H.V. Neto, P.M. Arezes & S.D. Sousa
761
767
777
787
797
Organization learning
Can organisational learning improve safety and resilience during changes?
S.O. Johnsen & S. Hbrekke
805
813
821
829
839
847
Author index
853
VOLUME 2
Reliability and safety data collection and analysis
A new step-stress Accelerated Life Testing approach: Step-Down-Stress
C. Zhang, Y. Wang, X. Chen & Y. Jiang
863
869
Collection and analysis of reliability data over the whole product lifetime of vehicles
T. Leopold & B. Bertsche
875
881
891
899
905
913
Life test applied to Brazilian friction-resistant low alloy-high strength steel rails
D.I. De Souza, A. Naked Haddad & D. Rocha Fonseca
919
Non-homogeneous Poisson Process (NHPP), stochastic model applied to evaluate the economic
impact of the failure in the Life Cycle Cost Analysis (LCCA)
C. Parra Márquez, A. Crespo Márquez, P. Moreu de León, J. Gómez Fernández & V. González Díaz
929
Risk trends, indicators and learning rates: A new case study of North sea oil and gas
R.B. Duffey & A.B. Skjerve
941
Robust estimation for an imperfect test and repair model using Gaussian mixtures
S.P. Wilson & S. Goyal
949
957
965
973
981
987
993
1001
1009
1019
1027
1035
1041
1049
1055
1065
1073
Chemical risk assessment for inspection teams during CTBT on-site inspections of sites
potentially contaminated with industrial chemicals
G. Malich & C. Winder
1081
1089
Conceptualizing and managing risk networks. New insights for risk management
R.W. Schröder
1097
1103
1113
1119
1125
1129
1135
Geographic information system for evaluation of technical condition and residual life of pipelines
P. Yukhymets, R. Spitsa & S. Kobelsky
1141
1147
1157
1165
1173
On causes and dependencies of errors in human and organizational barriers against major
accidents
J.E. Vinnem
1181
Quantitative risk analysis method for warehouses with packaged hazardous materials
D. Riedstra, G.M.H. Laheij & A.A.C. van Vliet
1191
1199
1207
Risk analysis in the frame of the ATEX Directive and the preparation of an Explosion Protection
Document
A. Pey, G. Suter, M. Glor, P. Lerena & J. Campos
1217
1223
1231
Why ISO 13702 and NFPA 15 standards may lead to unsafe design
S. Medonos & R. Raman
1239
1251
1259
1267
1273
1283
1293
Do the people exposed to a technological risk always want more information about it?
Some observations on cases of rejection
J. Espluga, J. Farré, J. Gonzalo, T. Horlick-Jones, A. Prades, C. Oltra & J. Navajas
1301
1309
1317
1325
1335
1341
Risk management measurement methodology: Practical procedures and approaches for risk
assessment and prediction
R.B. Duffey & J.W. Saull
1351
1357
1365
The social perception of nuclear fusion: Investigating lay understanding and reasoning about
the technology
A. Prades, C. Oltra, J. Navajas, T. Horlick-Jones & J. Espluga
1371
Safety culture
Us and Them: The impact of group identity on safety critical behaviour
R.J. Bye, S. Antonsen & K.M. Vikland
1377
Does change challenge safety? Complexity in the civil aviation transport system
S. Hyland & K. Aase
1385
1395
1401
Empowering operations and maintenance: Safe operations with the one directed team
organizational model at the Kristin asset
P. Næsje, K. Skarholt, V. Hepsø & A.S. Bye
1407
1415
Local management and its impact on safety culture and safety within Norwegian shipping
H.A. Oltedal & O.A. Engen
1423
1431
1439
1447
1455
1463
Drawing up and running a Security Plan in an SME type company: An easy task?
M. Gerbec
1473
1481
1489
1495
Some safety aspects on multi-agent and CBTC implementation for subway control systems
F.M. Rachel & P.S. Cugnasca
1503
Software reliability
Assessment of software reliability and the efficiency of corrective actions during the software
development process
R. Savic
1513
1519
1525
1533
1539
1547
1555
1567
Building resilience to natural hazards. Practices and policies on governance and mitigation
in the central region of Portugal
J.M. Mendes & A.T. Tavares
1577
Governance of flood risks in The Netherlands: Interdisciplinary research into the role and
meaning of risk perception
M.S. de Wit, H. van der Most, J.M. Gutteling & M. Bockarjova
1585
Public intervention for better governance: Does it matter? A study of the Leros Strength case
P.H. Linde & J.E. Karlsen
1595
1601
Using stakeholders' expertise in EMF and soil contamination to improve the management
of public policies dealing with modern risk: When uncertainty is on the agenda
C. Fallon, G. Joris & C. Zwetkoff
1609
1621
1629
1635
1641
1651
1655
1663
1671
1677
1685
Author index
1695
VOLUME 3
System reliability analysis
A copula-based approach for dependability analyses of fault-tolerant systems with
interdependent basic events
M. Walter, S. Esch & P. Limbourg
1705
1715
1723
A new approach to assess the reliability of a multi-state system with dependent components
M. Samrout & E. Chatelet
1731
1739
1747
1755
1763
Application of the fault tree analysis for assessment of the power system reliability
A. Volkanovski, M. Čepin & B. Mavko
1771
1779
1787
Calculating steady state reliability indices of multi-state systems using dual number algebra
E. Korczak
1795
1803
1807
1813
1819
Efficient generation and representation of failure lists out of an information flux model
for modeling safety critical systems
M. Pock, H. Belhadaoui, O. Malassé & M. Walter
1829
Evaluating algorithms for the system state distribution of multi-state k-out-of-n:F system
T. Akiba, H. Yamamoto, T. Yamaguchi, K. Shingyochi & Y. Tsujimura
1839
1847
1851
1861
1867
1873
1881
1891
1901
1909
1915
1919
1929
1937
1943
1949
1955
Reliability prediction using petri nets for on-demand safety systems with fault detection
A.V. Kleyner & V. Volovoi
1961
Reliability, availability and cost analysis of large multi-state systems with ageing components
K. Kolowrocki
1969
Reliability, availability and risk evaluation of technical systems in variable operation conditions
K. Kolowrocki & J. Soszynska
1985
1995
2003
2013
2021
2029
The operation quality assessment as an initial part of reliability improvement and low cost
automation of the system
L. Muslewski, M. Woropay & G. Hoppe
2037
2045
Variable ordering techniques for the application of Binary Decision Diagrams on PSA
linked Fault Tree models
C. Ibáñez-Llano, A. Rauzy, E. Meléndez & F. Nieto
2051
2061
2071
2081
2093
2101
2107
2117
2125
2135
2143
Reliability assessment under uncertainty using Dempster-Shafer and vague set theories
S. Pashazadeh & N. Grachorloo
2151
Types and sources of uncertainties in environmental accidental risk assessment: A case study
for a chemical factory in the Alpine region of Slovenia
M. Gerbec & B. Kontic
Uncertainty estimation for monotone and binary systems
A.P. Ulmeanu & N. Limnios
2157
2167
2175
2185
2191
2199
2207
The Preliminary Risk Analysis approach: Merging space and aeronautics methods
J. Faure, R. Laulheret & A. Cabarbaye
2217
Using a Causal model for Air Transport Safety (CATS) for the evaluation of alternatives
B.J.M. Ale, L.J. Bellamy, R.P. van der Boom, J. Cooper, R.M. Cooke, D. Kurowicka, P.H. Lin,
O. Morales, A.L.C. Roelen & J. Spouge
2223
Automotive engineering
An approach to describe interactions in and between mechatronic systems
J. Gng & B. Bertsche
2233
2239
2245
2251
Towards a better interaction between design and dependability analysis: FMEA derived from
UML/SysML models
P. David, V. Idasiak & F. Kratz
2259
2269
2275
2285
2289
Exposure assessment model to combine thermal inactivation (log reduction) and thermal injury
(heat-treated spore lag time) effects on non-proteolytic Clostridium botulinum
J.-M. Membré, E. Wemmenhove & P. McClure
2295
2305
Public information requirements on health risks of mercury in fish (2): A comparison of mental
models of experts and public in Japan
H. Kubota & M. Kosugi
2311
Review of diffusion models for the social amplification of risk of food-borne zoonoses
J.P. Mehers, H.E. Clough & R.M. Christley
2317
Risk perception and communication of food safety and food technologies in Flanders,
The Netherlands, and the United Kingdom
U. Maris
Synthesis of reliable digital microfluidic biochips using Monte Carlo simulation
E. Maftei, P. Pop & F. Popentiu Vladicescu
2325
2333
2345
2353
2363
Influence of safety systems on land use planning around Seveso sites; example of measures
chosen for a fertiliser company located close to a village
C. Fiévez, C. Delvosalle, N. Cornil, L. Servranckx, F. Tambour, B. Yannart & F. Benjelloun
2369
2379
2389
2397
2405
Reliability study of shutdown process through the analysis of decision making in chemical plants.
Case of study: South America, Spain and Portugal
L. Amendola, M.A. Artacho & T. Depool
2409
2415
2421
Civil engineering
Decision tools for risk management support in construction industry
S. Mehicic Eberhardt, S. Moeller, M. Missler-Behr & W. Kalusche
2431
2441
2447
2453
2463
2473
Critical infrastructures
A model for vulnerability analysis of interdependent infrastructure networks
J. Johansson & H. Jnsson
Exploiting stochastic indicators of interdependent infrastructures: The service availability of
interconnected networks
G. Bonanni, E. Ciancamerla, M. Minichino, R. Clemente, A. Iacomini, A. Scarlatti,
E. Zendri & R. Terruggia
Proactive risk assessment of critical infrastructures
T. Uusitalo, R. Koivisto & W. Schmitz
2491
2501
2511
Seismic assessment of utility systems: Application to water, electric power and transportation
networks
C. Nuti, A. Rasulo & I. Vanzi
2519
Author index
2531
VOLUME 4
Electrical and electronic engineering
Balancing safety and availability for an electronic protection system
S. Wagner, I. Eusgeld, W. Kröger & G. Guaglio
Evaluation of important reliability parameters using VHDL-RTL modelling and information
flow approach
M. Jallouli, C. Diou, F. Monteiro, A. Dandache, H. Belhadaoui, O. Malassé, G. Buchheit,
J.F. Aubry & H. Medromi
2541
2549
2561
Incorporation of ageing effects into reliability model for power transmission network
V. Matuzas & J. Augutis
2569
2575
2581
Security of gas supply to a gas plant from cave storage using discrete-event simulation
J.D. Amaral Netto, L.F.S. Oliveira & D. Faertes
2587
2593
2601
2609
2613
The estimation of health effect risks based on different sampling intervals of meteorological data
J. Jeong & S. Hoon Han
2619
2627
2635
2641
2649
2657
2665
2675
2685
2689
Manufacturing
A decision model for preventing knock-on risk inside industrial plant
M. Grazia Gnoni, G. Lettera & P. Angelo Bragatto
2701
Condition based maintenance optimization under cost and profit criteria for manufacturing
equipment
A. Sánchez, A. Goti & V. Rodríguez
2707
2715
Mechanical engineering
Developing a new methodology for OHS assessment in small and medium enterprises
C. Pantanali, A. Meneghetti, C. Bianco & M. Lirussi
2727
2735
2743
Natural hazards
A framework for the assessment of the industrial risk caused by floods
M. Campedel, G. Antonioni, V. Cozzani & G. Di Baldassarre
2749
2757
2765
Decision making tools for natural hazard risk management: Examples from Switzerland
M. Bründl, B. Krummenacher & H.M. Merz
2773
How to motivate people to assume responsibility and act upon their own protection from flood
risk in The Netherlands if they think they are perfectly safe?
M. Bockarjova, A. van der Veen & P.A.T.M. Geurts
2781
2789
Risk based approach for a long-term solution of coastal flood defencesA Vietnam case
C. Mai Van, P.H.A.J.M. van Gelder & J.K. Vrijling
2797
2807
2817
Nuclear engineering
An approach to integrate thermal-hydraulic and probabilistic analyses in addressing
safety margins estimation accounting for uncertainties
S. Martorell, Y. Nebot, J.F. Villanueva, S. Carlos, V. Serradell, F. Pelayo & R. Mendizábal
2827
Availability of alternative sources for heat removal in case of failure of the RHRS during
midloop conditions addressed in LPSA
J.F. Villanueva, S. Carlos, S. Martorell, V. Serradell, F. Pelayo & R. Mendizábal
2837
2845
Distinction impossible!: Comparing risks between Radioactive Wastes Facilities and Nuclear
Power Stations
S. Kim & S. Cho
2851
Heat-up calculation to screen out the room cooling failure function from a PSA model
M. Hwang, C. Yoon & J.-E. Yang
Investigating the material limits on social construction: Practical reasoning about nuclear
fusion and other technologies
T. Horlick-Jones, A. Prades, C. Oltra, J. Navajas & J. Espluga
2861
2867
Neural networks and order statistics for quantifying nuclear power plants safety margins
E. Zio, F. Di Maio, S. Martorell & Y. Nebot
2873
M. Čepin & R. Prosen
2883
2891
2899
2909
2913
2921
2929
FAMUS: Applying a new tool for integrating flow assurance and RAM analysis
Ø. Grande, S. Eisinger & S.L. Isaksen
2937
2945
Life cycle cost analysis in design of oil and gas production facilities to be used in harsh,
remote and sensitive environments
D. Kayrbekova & T. Markeset
Line pack management for improved regularity in pipeline gas transportation networks
L. Frimannslund & D. Haugland
2955
2963
Optimization of proof test policies for safety instrumented systems using multi-objective
genetic algorithms
A.C. Torres-Echeverria, S. Martorell & H.A. Thompson
2971
2981
Preliminary probabilistic study for risk management associated to casing long-term integrity
in the context of CO2 geological sequestration: Recommendations for cement plug geometry
Y. Le Guen, O. Poupard, J.-B. Giraud & M. Loizzo
2987
2997
Policy decisions
Dealing with nanotechnology: Do the boundaries matter?
S. Brunet, P. Delvenne, C. Fallon & P. Gillon
3007
3015
Risk futures in Europe: Perspectives for future research and governance. Insights from a EU
funded project
S. Menoni
Risk management strategies under climatic uncertainties
U.S. Brandt
3023
3031
3039
Stop in the name of safety: The right of the safety representative to halt dangerous work
U. Forseth, H. Torvatn & T. Kvernberg Andersen
3047
3055
Public planning
Analysing analyses: An approach to combining several risk and vulnerability analyses
J. Borell & K. Eriksson
Land use planning methodology used in Walloon region (Belgium) for tank farms of gasoline
and diesel oil
F. Tambour, N. Cornil, C. Delvosalle, C. Fivez, L. Servranckx, B. Yannart & F. Benjelloun
3061
3067
On the methods to model and analyze attack scenarios with Fault Trees
G. Renda, S. Contini & G.G.M. Cojazzi
3135
Dynamic maintenance policies for civil infrastructure to minimize cost and manage safety risk
T.G. Yeung & B. Castanier
3171
Impact of preventive grinding on maintenance costs and determination of an optimal grinding cycle
C. Meier-Hirmer & Ph. Pouligny
3183
Optimal design of control systems using a dependability criterion and temporal sequences evaluation – Application to a railroad transportation system
J. Clarhaut, S. Hayat, B. Conrard & V. Cocquempot
3199
RAM assurance programme carried out by the Swiss Federal Railways SA-NBS project
B.B. Stamenkovic
3209
Safety analysis methodology application into two industrial cases: A new mechatronical system
and during the life cycle of a CAF's high speed train
O. Revilla, A. Arnaiz, L. Susperregui & U. Zubeldia
3223
Waterborne transportation
A simulation based risk analysis study of maritime traffic in the Strait of Istanbul
B. Özbaş, I. Or, T. Altiok & O.S. Ulusçu
3257
Design of the ship power plant with regard to the operator safety
A. Podsiadlo & W. Tarelko
3289
Modeling of hazards, consequences and risk for safety assessment of ships in damaged
conditions in operation
M. Gerigk
3303
Numerical and experimental study of a reliability measure for dynamic control of floating vessels
B.J. Leira, P.I.B. Berntsen & O.M. Aamo
3311
The analysis of SAR action effectiveness parameters with respect to drifting search area model
Z. Smalko & Z. Burciu
3337
Author index
3351
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
Preface
This Conference stems from a European initiative merging the ESRA (European Safety and Reliability Association) and SRA-Europe (Society for Risk Analysis-Europe) annual conferences into the major safety, reliability and risk analysis conference in Europe during 2008. This is the second joint ESREL (European Safety and Reliability) and SRA-Europe Conference after the 2000 event held in Edinburgh, Scotland.
ESREL is an annual conference series promoted by the European Safety and Reliability Association. The
conference dates back to 1989, but was not referred to as an ESREL conference before 1992. The Conference
has become well established in the international community, attracting a good mix of academics and industry
participants that present and discuss subjects of interest and application across various industries in the fields of
Safety and Reliability.
The Society for Risk Analysis-Europe (SRA-E) was founded in 1987, as a section of SRA International founded in 1981, to develop a special focus on risk-related issues in Europe. SRA-E aims to bring together individuals and organisations with an academic interest in risk assessment, risk management and risk communication in Europe, and emphasises the European dimension in the promotion of interdisciplinary approaches to risk analysis in science. The annual conferences take place in various countries in Europe in order to enhance access to SRA-E for both members and other interested parties. Recent conferences have been held in Stockholm, Paris, Rotterdam, Lisbon, Berlin, Como, Ljubljana and The Hague.
These conferences come for the first time to Spain, and the venue is Valencia, situated on the east coast by the Mediterranean Sea, a meeting point of many cultures. The host of the conference is the Universidad Politécnica de Valencia.
This year the theme of the Conference is "Safety, Reliability and Risk Analysis. Theory, Methods and Applications". The Conference covers a number of topics within safety, reliability and risk, and provides a forum for presentation and discussion of scientific papers covering theory, methods and applications to a wide range of sectors and problem areas. Special focus has been placed on strengthening the bonds between the safety, reliability and risk analysis communities, with the aim of learning from the past to build the future.
The Conferences have been growing with time, and this year the program of the Joint Conference includes 416 papers from prestigious authors coming from all over the world. Originally, about 890 abstracts were submitted. After review of the full papers by the Technical Programme Committee, 416 were selected and included in these Proceedings. The effort of the authors and the reviewers guarantees the quality of the work. The initiative and planning carried out by the Technical Area Coordinators have resulted in a number of interesting sessions covering a broad spectrum of topics.
Sebastián Martorell
C. Guedes Soares
Julie Barnett
Editors
Organization
Conference Chairman
Dr. Sebastián Martorell Alsina
Conference Co-Chairman
Dr. Blas Galván González
Leira, Bert, Norway
Levitin, Gregory, Israel
Merad, Myriam, France
Palanque, Philippe, France
Papazoglou, Ioannis, Greece
Preyssl, Christian, The Netherlands
Rackwitz, Ruediger, Germany
Rosqvist, Tony, Finland
Salvi, Olivier, Germany
Skjong, Rolf, Norway
Spadoni, Gigliola, Italy
Tarantola, Stefano, Italy
Thalmann, Andrea, Germany
Thunem, Atoosa P-J, Norway
Van Gelder, Pieter, The Netherlands
Vrouwenvelder, Ton, The Netherlands
Kröger, Wolfgang, Switzerland
Badia G, Spain
Barros A, France
Bartlett L, United Kingdom
Basnyat S, France
Birkeland G, Norway
Bladh K, Sweden
Boehm G, Norway
Webpage Administration
Alexandre Janeiro
Le Bot P, France
Limbourg P, Germany
Lisnianski A, Israel
Lucas D, United Kingdom
Luxhoj J, United States
Ma T, United Kingdom
Makin A, Australia
Massaiu S, Norway
Mercier S, France
Navarre D, France
Navarro J, Spain
Nelson W, United States
Newby M, United Kingdom
Nikulin M, France
Nivolianitou Z, Greece
Pérez-Ocón R, Spain
Pesme H, France
Piero B, Italy
Pierson J, France
Podofillini L, Italy
Proske D, Austria
Re A, Italy
Revie M, United Kingdom
Rocco C, Venezuela
Rouhiainen V, Finland
Roussignol M, France
Sadovsky Z, Slovakia
Salzano E, Italy
Sanchez A, Spain
Sanchez-Arcilla A, Spain
Scarf P, United Kingdom
Siegrist M, Switzerland
Sørensen J, Denmark
Storer T, United Kingdom
Sudret B, France
Teixeira A, Portugal
Tian Z, Canada
Tint P, Estonia
Trbojevic V, United Kingdom
Valis D, Czech Republic
Vaurio J, Finland
Yeh W, Taiwan
Zaitseva E, Slovakia
Zio E, Italy
Universidad de Granada
Universidad Politécnica de Valencia
Universidad Politécnica de Valencia
Universidad de Las Palmas de Gran Canaria
Acknowledgements
The conference is organized jointly by the Universidad Politécnica de Valencia, ESRA (European Safety and Reliability Association) and SRA-Europe (Society for Risk Analysis-Europe), under the high patronage of the Ministerio de Educación y Ciencia, the Generalitat Valenciana and the Ajuntament de Valencia.
Thanks also to our sponsors Iberdrola, PMM Institute for Learning, Tekniker, Asociación Española para la Calidad (Comité de Fiabilidad), CEANI and the Universidad de Las Palmas de Gran Canaria. The support of all is greatly appreciated.
The work and effort of the peers involved in the Technical Program Committee in helping the authors to improve their papers are greatly appreciated. Special thanks go to the Technical Area Coordinators and the organisers of the Special Sessions of the Conference, for their initiative and planning, which have resulted in a number of interesting sessions. Thanks to authors as well as reviewers for their contributions to the review process. The review process has been conducted electronically through the Conference web page, whose support was provided by the Instituto Superior Técnico.
We would especially like to acknowledge the local organising committee and the conference secretariat and technical support at the Universidad Politécnica de Valencia for their careful planning of the practical arrangements. Their many hours of work are greatly appreciated.
These conference proceedings have been partially financed by the Ministerio de Educación y Ciencia de España (DPI2007-29009-E), the Generalitat Valenciana (AORG/2007/091 and AORG/2008/135) and the Universidad Politécnica de Valencia (PAID-03-07-2499).
Introduction
The Conference covers a number of topics within safety, reliability and risk, and provides a forum for presentation
and discussion of scientific papers covering theory, methods and applications to a wide range of sectors and
problem areas.
Thematic Areas
Nuclear Engineering
Offshore Oil and Gas
Policy Decisions
Public Planning
Security and Protection
Surface Transportation (road and train)
Waterborne Transportation
ABSTRACT: Step-stress ALT is a widely used method for life validation of products with high reliability. This paper presents a new step-stress ALT approach that applies the stress levels in the opposite order to traditional step-stress testing, the so-called Step-Down-Stress (SDS) test. The testing efficiency of SDS ALT is compared with that of step-stress ALT by Monte-Carlo simulation and a contrastive experiment. The paper also presents a statistical analysis procedure for SDS ALT under the Weibull distribution, and a practical ALT on bulbs is given at the end to illustrate the approach. SDS ALT can improve the testing efficiency of traditional step-stress ALT remarkably when applied to life validation of products with high reliability: it consumes less time for the same number of failures, and produces more failures in the same testing time, than traditional step-stress ALT with an identical testing plan. A corresponding statistical analysis procedure is also introduced, which establishes a uniform analysis framework and can easily be applied to different acceleration equations.
ACRONYMS
s-: implies statistical(ly)
ALT: accelerated life testing
AST: accelerated stress testing
SDS: step-down-stress
CEM: cumulative exposure model
CDF: cumulative distribution function
IPL: inverse power law
NOTATION
k: number of stress levels
Si: stress levels, i = 0, . . ., k
X: X = f(S)
Fi(t): CDF of failures under Si
m: shape parameter of the Weibull distribution
INTRODUCTION

Figure 1. Methods of ALT: a. Constant-stress ALT; b. Step-stress ALT.
In step-stress ALT, specimens are first subjected to a specified constant stress for a specified length of time; after that, they are subjected to a higher stress level for another specified time; the stress on specimens is thus increased step by step [18].
This paper presents a new step-stress ALT approach with the opposite exerting sequence of stress levels to traditional step-stress testing, the so-called step-down-stress (SDS) test, based on the assumption that changing the exerting sequence of stress levels will improve testing efficiency. The validity of SDS ALT is discussed through comparison with traditional step-stress ALT by Monte-Carlo simulation and a contrastive experiment, which show that SDS ALT takes less time for the same number of failures and produces more failures in the same time. A new s-analysis procedure is constructed for SDS ALT, which is applicable to different acceleration equations and can easily be programmed.
The rest of this paper is organized as follows: Section 2 describes SDS ALT, including basic assumptions, the definition, the s-analysis model, and Monte-Carlo simulation; Section 3 presents an s-analysis procedure for SDS ALT; Section 4 gives a practical example; Section 5 concludes the paper.
2 SDS ALT

[Figure 2 and Figure 3: captions not recoverable from extraction.]

2.1 Basic assumptions

Fi(t) = 1 − exp[−(t/ηi)^m],  t > 0    (1)

ln η = Σ_{j=0}^{n} aj Xj    (2)
2.2 Definition

SDS ALT is shown in Figure 1d. In SDS ALT, specimens are initially subjected to the highest stress level Sk until rk failures occur; the stress is then stepped down to Sk−1 for rk−1 failures, stepped down again to Sk−2 for rk−2 failures, and so on. The test is terminated at the lowest stress level S1 after r1 failures occur.

As described above, SDS ALT is symmetrical to traditional step-stress ALT, with a different varying
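The claimed efficiency gain can be checked with a small Monte-Carlo sketch. The snippet below is illustrative only: the inverse-power-law constants in `eta()`, the voltage levels and the failure-number plans are assumed values, not the paper's test data. It simulates failure-censored step tests under the cumulative exposure model (CEM) with a common Weibull shape m, and compares the average total test time of a step-up plan against the mirrored step-down plan.

```python
import math
import random

def eta(V, a=1e12, c=5.0):
    """Hypothetical inverse-power-law life-stress model: eta(V) = a / V^c."""
    return a / V ** c

def step_test_time(levels, r_per_level, n=40, m=2.0, rng=None):
    """Total time of a failure-censored step test under the cumulative
    exposure model: a unit fails when its accumulated exposure sum(dt/eta)
    exceeds its standardized Weibull lifetime u = (-ln U)^(1/m)."""
    rng = rng or random.Random()
    u = sorted((-math.log(1.0 - rng.random())) ** (1.0 / m) for _ in range(n))
    total, exposure, failed = 0.0, 0.0, 0
    for V, r in zip(levels, r_per_level):
        failed += r
        target = u[failed - 1]                 # exposure at the r-th failure
        total += eta(V) * (target - exposure)  # time spent at this level
        exposure = target
    return total

def average_time(levels, r_per_level, trials=200, seed=42):
    rng = random.Random(seed)
    return sum(step_test_time(levels, r_per_level, rng=rng)
               for _ in range(trials)) / trials

up = average_time([250, 270, 287, 300], [5, 5, 5, 20])    # step-up plan
down = average_time([300, 287, 270, 250], [20, 5, 5, 5])  # step-down plan
```

With these assumptions the step-down ordering finishes sooner on average, consistent with the paper's conclusion; the size of the gain depends on the life-stress model and on the plan.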
Table 1. Monte-Carlo simulation results comparing step-down-stress and step-stress testing for eight (m, η) scenarios. [The column layout was flattened by extraction and the pairing of the reported values is not recoverable.]
2.3 s-Analysis model [2]
3 STATISTICAL ANALYSIS

[Equations (3)–(9), which convert the failure times observed at each stress level into equivalent times-to-failure at Si under the assumptions of a common shape parameter (mi = mj) and acceleration factors Kij = ηj/ηi, were not recoverable from extraction.]

In this way, ni (ni = rk + rk−1 + · · · + ri) equivalent times-to-failure at Si are obtained, which form the quasi-sample xi(ηi) under the Weibull distribution:

x1(ηi) < x2(ηi) < · · · < xni(ηi)    (10)

uj(mi, ηi) = Σ_{k=1}^{j} x_k^{mi}(ηi) + (n − j) x_j^{mi}(ηi),  j = 1, 2, . . . , ni    (11)

The estimates m̂i and η̂i satisfy

Σ_{j=1}^{ni−1} ln[u_{ni}(m̂i, η̂i)/u_j(m̂i, η̂i)] = ni − 1    (12)

η̂i = { [ Σ_{j=1}^{ni} x_j^{m̂i}(η̂i) + (n − ni) x_{ni}^{m̂i}(η̂i) ] / ni }^{1/m̂i}    (13)

[Equation (14) was not recoverable from extraction.]

4 EXPERIMENT RESULT

Table 2. Test plans of the contrastive experiment. Step-up: Si = 250, 270, 287, 300 V, (n, r) = (40, [5 5 20]); Step-down: Si = 300, 287, 270, 250 V, (n, r) = (40, [20 5 5 5]). [The testing-time columns were not recoverable from extraction.]
Table 3. Estimates under each stress level (mi = mj assumed; Kij = ηj/ηi).

Si (V)    m̂        η̂ (h)
300       3.827     14.690
287       3.836     83.294
270       3.827     97.248
250       3.821     285.160
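Equations (12) and (13) lend themselves to a simple numerical procedure: the left-hand side of (12) minus (ni − 1) is negative for small m and grows without bound for large m, so the shape can be found by bisection and the scale then follows from (13). The sketch below is one possible reading of the procedure, not the authors' algorithm; the demo data at the end are synthetic.

```python
import math
import random

def u_j(j, m, x, n):
    """u_j(m) = sum_{k<=j} x_k^m + (n - j) * x_j^m for sorted quasi-samples x."""
    return sum(xk ** m for xk in x[:j]) + (n - j) * x[j - 1] ** m

def solve_shape_scale(x, n, m_lo=0.01, m_hi=50.0, iters=80):
    """Solve eq. (12) for the shape m by bisection, then eq. (13) for eta.
    x: the ni sorted equivalent times-to-failure out of n units on test."""
    ni = len(x)

    def g(m):
        # left side of (12) minus (ni - 1); g < 0 as m -> 0, g > 0 as m -> inf
        top = u_j(ni, m, x, n)
        return sum(math.log(top / u_j(j, m, x, n))
                   for j in range(1, ni)) - (ni - 1)

    lo, hi = m_lo, m_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    m = 0.5 * (lo + hi)
    eta = (u_j(ni, m, x, n) / ni) ** (1.0 / m)   # eq. (13)
    return m, eta

# demo on synthetic data: true m = 2, eta = 100; 30 failures out of 40 units
_rng = random.Random(7)
sample = sorted(100.0 * (-math.log(1.0 - _rng.random())) ** 0.5
                for _ in range(40))
m_hat, eta_hat = solve_shape_scale(sample[:30], 40)
```

Applied level by level to the quasi-samples of Section 3, this yields per-level estimates like those reported in Table 3.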
5 CONCLUSION

An SDS ALT and its s-analysis procedure are presented in this paper for life validation by AST. The validity of this new step-stress approach is discussed through Monte-Carlo simulation and a contrastive experiment, which show that SDS ALT may improve testing efficiency remarkably and accordingly decrease the cost of testing. The s-analysis procedure for SDS ALT under the Weibull distribution is constructed, and its validity is demonstrated through an ALT on bulbs.

Future efforts in the research of SDS ALT may include further discussion of efficiency, improvement of the s-analysis procedure, and optimal design of test plans for SDS ALT.

APPENDIX

A. Proof of (5)

The CDF under the Weibull distribution is

F(t) = 1 − exp[−(t/η)^m],  t > 0    (16)

[Equations (17)–(18) and (23)–(25), and Figure A1, were not recoverable from extraction.]

B. Numerical solution algorithm of (12) and (13)

To simplify the procedure, (12) and (13) are transformed; in particular

η̂i = [u_{ni}(m̂i, η̂i)/ni]^{1/m̂i}    (20)

[Equations (15), (19), (21) and (22) were not recoverable from extraction.]
REFERENCES

[2] Nelson, W. 1980. Accelerated Life Testing – Step-stress Models and Data Analysis. IEEE Trans. on Reliability R-29: 103–108.
[3] Tyoskin, O. & Krivolapov, S. 1996. Nonparametric Model for Step-Stress Accelerated Life Testing. IEEE Trans. on Reliability 45(2): 346–350.
[4] Tang, L. & Sun, Y. 1996. Analysis of Step-Stress Accelerated-Life-Test Data: A New Approach. IEEE Trans. on Reliability 45(1): 69–74.
[5] Miller, R. & Nelson, W. 1983. Optimum Simple Step-stress Plans for Accelerated Life Testing. IEEE Trans. on Reliability 32(1): 59–65.
[6] Bai, D., Kim, M. & Lee, S. 1989. Optimum Simple Step-stress Accelerated Life Tests with Censoring. IEEE Trans. on Reliability 38(5): 528–532.
[7] Khamis, I. & Higgins, J. 1996. Optimum 3-Step Step-Stress Tests. IEEE Trans. on Reliability 45(2): 341–345.
[8] Yeo, K. & Tang, L. 1999. Planning Step-Stress Life-Test with a Target Acceleration-Factor. IEEE Trans. on Reliability 48(1): 61–67.
[9] Zhang, C. & Chen, X. 2002. Analysis for Constant-stress Accelerated Life Testing Data under Weibull Life Distribution. Journal of National University of Defense Technology (Chinese) 24(2): 81–84.
ABSTRACT: The lognormal distribution is commonly used to model certain types of data that arise in several fields of engineering, such as different types of lifetime data or coefficients of wear and friction. However, a generalized form of the lognormal distribution can be used to provide better fits for many types of experimental or observational data. In this paper, a Bayesian analysis of a generalized form of the lognormal distribution is developed. Bayesian inference offers the possibility of taking expert opinions into account, which makes this approach appealing in practical problems in many fields of knowledge, including the reliability of technical systems. The full Bayesian analysis includes a Gibbs sampling algorithm to obtain samples from the posterior distribution of the parameters of interest. Empirical proofs over a wide range of engineering data sets have shown that the generalized lognormal distribution can outperform the lognormal one in this Bayesian context.
Keywords: Bayesian analysis, Generalized normal distribution, Engineering data, Lognormal distribution,
Markov chain Monte Carlo methods.
1 INTRODUCTION
The probability density function of the log-generalized-normal (logGN) distribution can be written as

f(x | μ, σ, s) = s^{1−1/s} / (2 σ Γ(1/s) x) · exp{ −(1/s) |(log x − μ)/σ|^s },

with x > 0, −∞ < μ < +∞, σ > 0 and s ≥ 1. Note that Γ denotes the gamma function. This distribution has the lognormal distribution as a particular case by taking s = 2 (with a suitable rescaling of σ).
2 BAYESIAN ANALYSIS

Bayesian analyses with both noninformative and informative prior distributions are addressed in this section.

Figure 1. Probability density functions for logGN distributions with μ = 0 and σ = 1, and several values of s (s = 1.0, 1.5, 2.0, 3.0).
The joint posterior, after introducing auxiliary variables u = (u1, . . . , un), is proportional to

f(μ, σ, s, u | x) ∝ s^{n−1} / (σ^{n+1} Γ^n(1/s)) · Π_{i=1}^{n} (e^{−ui}/xi) · I[e^{μ−σ ui^{1/s}} < xi < e^{μ+σ ui^{1/s}}],

with the noninformative prior π(μ) ∝ 1. [The likelihood L(μ, σ, s, u | x) and the expression of I(μ, σ, s) were not fully recoverable from extraction.]

The full conditional distributions are derived:

f(μ | σ, s, u, x) ∝ 1,    (1)

f(σ | μ, s, u, x) ∝ 1/σ^{n+1},  σ > max_i |log(xi) − μ| / ui^{1/s},    (2)

f(s | μ, σ, u, x) ∝ s^{n−1} / Γ^n(1/s), restricted to the values of s allowed by the indicator functions,    (3)

f(ui | μ, σ, s, x) ∝ e^{−ui},  ui > |(log(xi) − μ)/σ|^s,  i = 1, 2, . . . , n.    (4)
(4)
0 .4
0 .5
0 .6
I s
2 3s
(1)
0 .2
0 .3
0 .1
ai =
Figure 2.
Comparison of functions
10
log(ui )
, i = 1, 2, . . . , n.
log(| log(xi )|/ )
Random variates from these densities can be generated by using standard methods. Note that the densities given in (1), (2) and (4) are uniform, Pareto and exponential, respectively, and they are generated by using the inverse transformation method. The density given in (3) is non-standard, but it can also be easily generated by using the rejection method (see, e.g., Devroye (1986)). Iterative generation from the above conditional distributions produces a posterior sample of (μ, σ, s).
Under informative prior distributions, the full conditional distributions (5)–(11) take analogous forms; in particular

f(σ | μ, s, u, x) ∝ σ^{−(a0+n+1)} e^{−b0/σ},  σ > max_i |log(xi) − μ| / ui^{1/s},    (8)

f(s | μ, σ, u, x) ∝ s^{n−c0−1} e^{−d0/s} / Γ^n(1/s), restricted to the feasible values of s,    (9)

f(ui | μ, σ, s, x) ∝ e^{−ui},  ui > |(log(xi) − μ)/σ|^s,  i = 1, 2, . . . , n.    (11)

[Equations (5)–(7) and (10) were not recoverable from extraction.]
Figure 3. [Posterior density plots; details not recoverable from extraction.]

Ū0 = (1/n) Σ_{i=1}^{n} log(p0(xi)),
0.05
to make inferences. This corresponds to the informative case. Then, the historical information and the prior
information provided by the engineer are embedded in
the posterior distribution.
The following step is to choose the hyperparameter values for the prior distributions of the parameters
of interest , , and s. There are many ways to elicit
these values. One possibility is to specify the values
according to previous direct knowledge on the parameters (see, e.g., Berger (1985) and Akman and Huwang
(2001)). Another one consists in using partial information elicited by the expert. In this case, there are
many criteria to obtain the hyperparameters values as,
for example, maximum entropy and maximum posterior risk (see Savchuk and Martz (1994)). A third
possibility considered here is to use expert information on the expected data and not on the parameters.
This is easier for engineers who are not familiarized
with parameters but have an approximate knowledge
of the process. Finally, it is remarkable that noninformative prior distributions can be used for any of the
parameters and informative prior distributions for the
remaining ones.
In this application, the hyperparameters are obtained by using a method similar to the one presented by Gutiérrez-Pulido et al. (2005). In this case the expert is asked to provide occurrence intervals for some usual quantities such as the mode, median and third quartile. The expert considered that these quantities should be in the following intervals: [LMo, UMo] = [0.935, 0.955], [LMe, UMe] = [0.95, 0.96], and [LQ3, UQ3] = [0.97, 0.985]. By using the informative prior distributions presented in subsection 3.2, with μ ∼ N(μ0, σ0), and following the development in Gutiérrez-Pulido et al. (2005), the
Ū1 = (1/n) Σ_{i=1}^{n} log(p1(xi)).
CONCLUSION
B. Bertsche
Institute of Machine Components, Universität Stuttgart, Germany
ABSTRACT: One of the biggest challenges in quantitative reliability analysis is a complete and significant data basis describing the whole product lifetime. Today such a data basis, with regard to the demanded and needed information, is not available. The circumstances that lead to a failure during field usage, e.g. operational and ambient information before and at failure occurrence, are of highest importance. In the development phase of products much more detailed data are collected and documented than during general customer field usage. Today, among the most important data bases for describing real field behavior are warranty and goodwill data. For an optimal correlation between failures that occur during testing and during field usage, these data are not sufficient. To improve this situation, an approach was developed that enables the collection of reliability-relevant data during customer usage over the whole product lifetime of vehicles. The basis of this reliability-oriented data collection is the use of CAN signals already available in modern vehicles.
INTRODUCTION
Figure 1. Configuration of a CAN-bus: CAN nodes 1–5 connected to a CAN-bus line with line termination.

[Further figure labels were not recoverable from extraction: connection via the diagnostics interface vs. enhancement of an existing control unit, assessed by the resolution of the data elements, the number of the data elements and a low effort of implementation.]

2.1
Monitor CAN-Bus
Data management
[Figure residue, not fully recoverable: data management over different time scales (data of trip, day and month; classed CAN signals such as "ABS Active"; log files such as 719270_t1_20071017, 719270_d_20071025 and 719270_m_20071031) and non-recurring data (kind of vehicle: truck; vehicle identification no. WDB9340...).]

Figure 5. Principal trend of the total costs over the number of vehicles.
software as well as additional unit costs and recurring costs for the data transfer from the vehicle to the company.
The fixed costs for the development of the two concepts investigated in detail, direct data transfer via radio networks and indirect data transfer via outstations, are nearly the same. In both cases, an existing ECU has to be enhanced with sufficient storage capacity, interfaces and the software which manages the data processing in the vehicle. The unit costs for the broadcasting modules differ only slightly. The initial investment of the concept with indirect data transfer is higher because of the costs for the outstations.
The main economic difference between the two concepts lies in the recurring costs for broadcasting the data. With a rising number of vehicles that directly transfer the data from the vehicle to the company, the broadcasting costs also increase. In contrast, the costs for the indirect data transfer do not rise with a higher number of equipped vehicles, because cost-incurring data transfers via the radio network arise only between the outstations and the company. The number of outstations stays nearly constant; the individual outstations simply have to broadcast a larger data volume.
The principal trend of the total costs for both concepts can be seen in Figure 5. The graphical visualization of the costs is a first estimation. The important conclusion is that a field data collection with many vehicles causes lower costs when the data transfer is realized with the indirect concept.
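The argument can be captured in a toy cost model. Every number below is an invented placeholder (the text gives only the qualitative trend): direct transfer pays a recurring radio cost per vehicle, while indirect transfer pays for a roughly constant set of outstations.

```python
def direct_cost(n_vehicles, fixed=100_000.0, unit=50.0,
                transfer_per_vehicle=120.0):
    """Illustrative total cost of direct data transfer via radio network:
    the recurring transfer cost grows with every equipped vehicle."""
    return fixed + (unit + transfer_per_vehicle) * n_vehicles

def indirect_cost(n_vehicles, fixed=100_000.0, unit=45.0,
                  outstations=20, outstation_cost=8_000.0,
                  transfer_per_outstation=500.0):
    """Illustrative total cost of indirect transfer via outstations:
    radio-network costs arise only between outstations and company,
    so they stay nearly constant in the number of vehicles."""
    return (fixed + outstation_cost * outstations
            + unit * n_vehicles + transfer_per_outstation * outstations)
```

With these placeholder numbers the two concepts break even at 1,360 vehicles; beyond that the indirect concept is cheaper, which matches the qualitative conclusion of the section.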
3 ECONOMIC CONSIDERATION

4 DATA PROTECTION
[Figures 6–8: histograms of the fraction of total operating time [%] over classed value ranges (<0, 0–9, 9–18, . . ., >90) for different vehicles; plot data not recoverable from extraction.]
belongs to a distribution vehicle, except for the high fraction of standstill periods and interurban operation. The typical character of a distribution vehicle is the high fraction of middle-sized vehicle speeds.
Another example of the analysis of the CAN-bus messages is a comparison between high and low time scales of the classed data. A very impressive comparison is the investigation of the braking activities of a commercial vehicle. The collection of the classed data was done for one month (high time scale) and for
The approach shown, which uses information already available in modern vehicles, is an appropriate solution for the collection of field data for reliability analyses. Data about failed and non-failed parts are available, as well as information about operational and ambient conditions over the whole lifetime of vehicles.
The most important steps of the data processing in the vehicle lead to a significant reduction of the huge amount of data and enable a reasonable data transfer between vehicle and company. Different time scales have to be considered, as realized by the data management and shown by an example. The order of transferring the data files to the company is executed by the file management according to a defined logic of prioritization.
A short economic consideration shows the principal trend of the total costs for two concepts: the concept of indirect data transfer via outstations is more economical than direct data transfer via radio networks.
Finally, legal aspects of the German federal data protection act do not appear to be an obstacle, although only a short summary of that investigation is shown.
Therefore, a collection and analysis of reliability data over the whole product lifetime of vehicles is possible and should be realized in forward-looking companies.
[Figure 9: histogram of the fraction of total operating time for a vehicle; plot data not recoverable from extraction.]

Figure 10. Difference of high and low time scales for brake pedal position. [3-D plot over brake pedal position [%] and duration [sec]; data not recoverable.]
one day (low time scale). The brake pedal position describes the braking activity. The differences in the results of the classed data are shown in Figure 10. The differences are clearly recognizable, partly up to a factor of 16, which can be seen as evidence for considering unequal time scales. The differences can become considerably higher if the differences in the loading of the vehicles increase between different days.
6 SUMMARY
C. Guedes Soares
CENTEC, Instituto Superior Técnico, Technical University of Lisbon, Portugal
ABSTRACT: Parametric estimation is performed for two models based on the Weibull distribution: the mixture of Weibull distributions and the additive Weibull model. The estimation is carried out by a method which uses the Kolmogorov-Smirnov distance as the objective function. Phase-type distributions are introduced, and a comparison is made between the fitted Weibull models and the phase-type fit to various sets of life data.
INTRODUCTION
F(t) = 1 − exp[−(t/θ)^β],  t ≥ 0,    (1)

F(t) = Σ_{i=1}^{n} pi [1 − exp(−(t/θi)^{βi})],    (3)

[Equations (2), (4) and (5) were not recoverable from extraction.]
PHASE-TYPE DISTRIBUTIONS
ESTIMATION METHOD

The empirical plotting positions are the median ranks

F̂(t(i)) = (i − 0.3)/(n + 0.4),  i = 1, . . . , n,    (8)

[Equation (9) was not recoverable from extraction.]
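A crude version of such an estimation method can be sketched as follows. The random search below is only a stand-in for whatever optimizer the paper actually uses; the point is the objective function, namely the Kolmogorov-Smirnov distance between the mixture CDF and the median-rank plotting positions of eq. (8).

```python
import math
import random

def mixture_cdf(t, params):
    """CDF of a two-component Weibull mixture.
    params = (beta1, theta1, p1, beta2, theta2); p2 = 1 - p1."""
    b1, th1, p1, b2, th2 = params
    F1 = 1.0 - math.exp(-(t / th1) ** b1)
    F2 = 1.0 - math.exp(-(t / th2) ** b2)
    return p1 * F1 + (1.0 - p1) * F2

def ks_distance(data, params):
    """K-S distance to the median-rank positions (i - 0.3)/(n + 0.4)."""
    n = len(data)
    return max(abs(mixture_cdf(t, params) - (i - 0.3) / (n + 0.4))
               for i, t in enumerate(sorted(data), start=1))

def fit_mixture(data, trials=20000, seed=3):
    """Crude random search minimizing the K-S distance (an illustrative
    stand-in for the paper's optimization method)."""
    rng = random.Random(seed)
    best, best_d = None, float("inf")
    for _ in range(trials):
        params = (rng.uniform(0.2, 5.0), rng.uniform(1.0, 2000.0),
                  rng.uniform(0.05, 0.95),
                  rng.uniform(0.2, 5.0), rng.uniform(1.0, 2000.0))
        d = ks_distance(data, params)
        if d < best_d:
            best, best_d = params, d
    return best, best_d
```

Because the two components are interchangeable, the fitted labels can come out swapped relative to the true ones; that does not affect the attained K-S distance.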
NUMERICAL APPLICATIONS
Table 1. Estimated parameters.

β1 = 0.3072, θ1 = 113.1296, p1 = 0.211
β2 = 2.0816, θ2 = 903.0699, p2 = 0.789

[The associated phase-type generator matrix L was flattened by extraction and is not reproduced.]
Figure 2.
Table 2. True and estimated parameters for simulated Weibull mixtures, with Kolmogorov-Smirnov test results. [The table was flattened by extraction and the row pairing is only partially recoverable. The true models combine a first Weibull component (β1 = 0.5; θ1 = 100, 500 or 1000; p1 = 0.5 or 0.75) with a second component (β2 = 1, 2 or 4; θ2 = 1000). In every case the critical value is D0.95 = 0.136, and the attained K-S distances (between 0.0393 and 0.0535) lie below it.]

Figure 3.
Figure 4.
Table 3. Fitted PH-representations (α, T) with K-S test results. [The matrices were flattened by extraction; recoverable initial-probability vectors include α = (0, 0.924658, 0.075342, 0), α = (0.036284, 0.963716), α = (0, 1, 0, 0), α = (0.121536, 0.160842, 0.717622), α = (0, 0.208189, 0, 0.791811) and α = (0.025961, 0.974039). In every case D0.95 = 0.136 exceeds the attained K-S distances (between 0.0461 and 0.1039).]
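For reference, the distribution function of a phase-type law with initial vector α and sub-generator T is F(t) = 1 − α exp(Tt) 1, which is straightforward to evaluate numerically. The sketch below uses a scaled Taylor series for the matrix exponential to stay dependency-free; in practice a library routine (e.g. scipy.linalg.expm) would be used.

```python
import math

def mat_mul(A, B):
    """Dense matrix product of two lists-of-lists."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def mat_exp(A, terms=30, squarings=10):
    """exp(A) by a scaled Taylor series with repeated squaring."""
    n = len(A)
    S = [[A[i][j] / 2 ** squarings for j in range(n)] for i in range(n)]
    E = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    P = [row[:] for row in E]
    for k in range(1, terms):
        P = [[v / k for v in row] for row in mat_mul(P, S)]  # S^k / k!
        E = [[E[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    for _ in range(squarings):
        E = mat_mul(E, E)
    return E

def ph_cdf(t, alpha, T):
    """F(t) = 1 - alpha * exp(T t) * 1 for a phase-type distribution."""
    n = len(T)
    E = mat_exp([[T[i][j] * t for j in range(n)] for i in range(n)])
    survival = sum(alpha[i] * sum(E[i][j] for j in range(n))
                   for i in range(n))
    return 1.0 - survival
```

With alpha = [1.0] and T = [[-lam]] this reduces to the exponential CDF, and an Erlang chain [[-1, 1], [0, -1]] gives F(t) = 1 - e^(-t)(1 + t), which makes the sketch easy to sanity-check.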
Table 4. Estimated parameters.

β1 = 0.2682, θ1 = 10.5641, p1 = 0.2006
β2 = 1.8272, θ2 = 150.2917, p2 = 0.2444
β3 = 2.5330, θ3 = 919.6259, p3 = 0.5550

Figure 5.

Figure 6.
Figure 7.

Figure 8.

[The fitted PH-representation (α, T) for this data set was flattened by extraction and is not reproduced.]
Table 5. Estimated parameters.

β1 = 0.6486, θ1 = 103.8854
β2 = 5.6458, θ2 = 849.1117

Figure 9.
Figure 10.

Figure 11.

Figure 12.

Table 6. Estimated parameters.

β1 = 0.5555, θ1 = 100.1662
β2 = 2.0420, θ2 = 628.2805
β3 = 2.0373, θ3 = 628.2824

L =
[ −0.040346   0.003227   0.028734 ]
[  0.002439  −0.004663   0.002056 ]
[  0.488996   0.037781  −0.621015 ]
Figure 13.
Figure 14.
Figure 15.
Figure 16.
DISCUSSION OF RESULTS
CONCLUSIONS
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: Electric power is an essential condition of every modern economy. It is not enough merely to maintain existing power plants; new ones must also be developed, and in this development companies need to design machines with the highest possible reliability (the lowest failure rate and the shortest repair times, hence the lowest unavailability). Estimating the availability of a newly developed machine that has not yet operated is very complicated. One option is to estimate reliability parameters from older machines of similar design; it is also possible to identify the weak parts of these older machines and improve them.
To find such parts, the machine has to be analysed, usually by statistical methods, so the question becomes where to obtain the input data for the analysis. The presented methodology describes how to collect relevant data, how to identify and remove unwanted data, and, of course, how to process the meaningful remainder. It shows the construction of a failure-frequency histogram and the fitting of an exponential distribution to the mean time between failures of the equipment, including a chi-square test of this hypothesis. It also shows how to perform an ABC analysis of failure consequences. A very important part of this paper is Appendix 1, which gives an example of the methodology's application.
[Nomenclature: A, CR, MTBF, MTBF_L, MTBF_U, MTTR, MTTR_L, MTTR_U, U, λ_L, λ_U, T, t, TTR, r, r_i, T_p, χ²; the definitions column was lost in extraction.]

MOTIVATION

DATA PART
3.3

[Equation (1) not recoverable from the source.]

[Equation (2), the χ² test statistic built from Σ_{i=1}^{m} (r_i − A)², could not be fully recovered from the source; the test is performed at a significance level of 10%.]

MTBF = T / r = (Σ_{i=1}^{n} t_i) / r   [h]   (3)

λ = r / T   [h⁻¹]   (4)

λ_L = χ²_{0.05}(2r) / (2T)   [h⁻¹]   (5a)

λ_U = χ²_{0.95}(2r + 2) / (2T)   [h⁻¹]   (5b)

where χ²_p(v) means the p-fractile of the distribution function of the χ² distribution with v degrees of freedom.

MTBF_L = 1 / λ_U   [h]   (6a)

MTBF_U = 1 / λ_L   [h]   (6b)

The lower and upper limits of the mean time between failures at the 90% confidence level indicate that with 90% probability the real MTBF lies within the interval ⟨MTBF_L, MTBF_U⟩.

The next observed reliability indicator is the mean time to repair (MTTR). Provided the repair-time data are available, this parameter is estimated as the sum of all times to repair divided by the number of failures:

MTTR = T_p / r   [h]   (7)

MTTR_L = 2·T_p / χ²_{0.95}(2r + 2)   [h]   (8a)

MTTR_U = 2·T_p / χ²_{0.05}(2r)   [h]   (8b)

ANALYSIS FLOW-CHART

U = MTTR / (MTBF + MTTR)   [–]   (9)
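The chi-square confidence recipe for MTBF can be sketched in code. This is a minimal illustration, not part of the paper; in particular, the Wilson-Hilferty approximation of the χ² fractiles replaces the statistical tables the methodology assumes, and the numeric inputs are the Appendix 1 totals:

```python
from statistics import NormalDist  # Python 3.8+

def chi2_fractile(p: float, v: int) -> float:
    """p-fractile of the chi-square distribution with v degrees of freedom,
    via the Wilson-Hilferty approximation (adequate for large v)."""
    z = NormalDist().inv_cdf(p)
    c = 2.0 / (9.0 * v)
    return v * (1.0 - c + z * c ** 0.5) ** 3

def mtbf_with_90pct_limits(total_time_h: float, failures: int):
    """Point estimate of MTBF and its 90% confidence limits."""
    T, r = total_time_h, failures
    mtbf = T / r                                       # point estimate
    lam_low = chi2_fractile(0.05, 2 * r) / (2 * T)     # lower failure rate
    lam_up = chi2_fractile(0.95, 2 * r + 2) / (2 * T)  # upper failure rate
    return 1.0 / lam_up, mtbf, 1.0 / lam_low           # MTBF_L, MTBF, MTBF_U

# Appendix 1 totals: 1,339,032 h of cumulated operation, 135 failures.
low, point, up = mtbf_with_90pct_limits(1_339_032, 135)
```

With these inputs the point estimate is about 9,919 h, with 90% limits of roughly 8,600 h and 11,500 h.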
Table 1.

Failure mode   Sum of subs   Submode 1   Submode 2   Submode 3   Submode 4
Mode 1         74            27          23          22          2
Mode 2         11            5           3           3           -
Mode 3         23            9           7           5           2
ACKNOWLEDGEMENT

This research was supported by the Ministry of Education, Youth and Sports of the Czech Republic, project No. 1M06059 "Advanced Technologies and Systems for Power Engineering", and was carried out at the Technical University of Liberec, Faculty of Mechatronics and Interdisciplinary Engineering Studies.

REFERENCES

CSN IEC 50(191) (010102): International Electrotechnical Vocabulary - Chapter 191: Dependability and quality of service.
CSN IEC 60605-4 (01 0644-4): Equipment reliability testing - Part 4: Statistical procedures for the exponential distribution - Point estimates, confidence intervals, prediction intervals and tolerance intervals.

[Figures 1 and 2. Histograms of the number of failures (0-140) per failure mode, Mode 1 through Mode 8.]
Table A1. Sample of the raw pump failure records (columns: PUMP, PWR_P, HTC, DAY, CNSQ, RSN, BS_C, AD_C, SKR_C, ST, MWH, GJ, TIME); the tabular layout could not be recovered from the flattened source. The records cover seven failures of ECH pumps on blocks B2-B4 between 1.7.82 and 31.7.82.
Table A2. Sample of maintenance work orders for equipment 1VC01D00 1 (columns: Description, Pr. system = SOVP, WRK = SO, EQ type = CER, Equipment, Date, TTR, Req. date); the tabular layout could not be fully recovered from the source. Recorded repair dates run from 17.04.00 (TTR 200) through 04.10.00 (TTR 12), 16.10.00 (TTR 24) and 23.11.00 (TTR 20) to 30.04.01.

Table A3. [Caption truncated in the source] ... failures (columns: ID number, Repair description, TTR); no rows survived extraction.
Table A4.

Unique ID     Date         Est. operational time [h]
P_P1_B1_P1    3.8.1985     184 248
P_P1_B1_P2    18.12.1986   172 368
P_P1_B1_P3    15.2.1990    145 080
P_P1_B1_P4    25.4.1986    177 960
P_P1_B2_P1    18.9.1987    165 888
P_P1_B2_P2    6.4.1988     161 136
P_P1_B2_P3    25.9.1987    165 720
P_P1_B2_P4    17.8.1987    166 632

Total cumulated operational time: 1 339 032 h
Total number of failures: 135
Table A5. Numbers of failures for the individual failure modes (seal system 74 in total, with submodes such as resealing, seal cleaning, plugged drain, O-ring montage, air venting and oil change; oil system; bearing; cooling; revision); the flattened layout could not be reliably reconstructed from the source.

[Figure A3. Number of failures and cumulated number of failures per year of operating, 1985-2006.]

χ² = 9.71

[Figure A4. Number of failures per failure mode (occurrence): seal system, oil system, bearing, sensors, chain, gearbox, clutch, flange, shaft, noisiness, prevention, alignment, cooling spiral, design.]
MTBF = 1 339 032 / 135 [h] = 9 919 h

MTTR = 3 270 / 135 [h] = 24.2 h

Confidence limits of the mean time to repair are determined by (8a) and (8b):

MTTR_L = (2 · 3 270) / 311.5 [h] = 21 h

MTTR_U = (2 · 3 270) / 232.95 [h] = 28 h

U = 24.2 / (24.2 + 9 919) = 2.4 · 10⁻³
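The repair-time limits and the unavailability figure can be reproduced the same way. A minimal sketch, again using the Wilson-Hilferty χ² approximation as an implementation choice (the unavailability formula U = MTTR/(MTTR + MTBF) is inferred from the worked value 2.4·10⁻³, not quoted from the paper):

```python
from statistics import NormalDist

def chi2_fractile(p: float, v: int) -> float:
    """Wilson-Hilferty approximation of the chi-square p-fractile."""
    z = NormalDist().inv_cdf(p)
    c = 2.0 / (9.0 * v)
    return v * (1.0 - c + z * c ** 0.5) ** 3

T_p, r = 3_270.0, 135        # total repair time [h], number of failures
mtbf = 1_339_032 / r         # MTBF from the same data set [h]

mttr = T_p / r                                         # point estimate
mttr_low = 2 * T_p / chi2_fractile(0.95, 2 * r + 2)    # lower 90% limit
mttr_up = 2 * T_p / chi2_fractile(0.05, 2 * r)         # upper 90% limit
unavailability = mttr / (mttr + mtbf)                  # ~2.4e-3
```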
Pareto analysis

Operators' experience shows that the three most common failures are seal leakage, oil system failure
ABSTRACT: Reliability evaluation based on degradation is very useful in systems with scarce failures. In this paper a new degradation model based on the Weibull distribution is proposed. The model is applied to the degradation of Light Emitting Diodes (LEDs) under different accelerated tests. The results of these tests are in agreement with the proposed model, and the reliability function is evaluated.
INTRODUCTION
CLASSIC MODEL
p(t) = p₀ − A·t   (1)

where:
p₀ — mean initial value;
A — constant that indicates the speed of degradation;
t — time.

The linear trend presents a problem for t ≥ p₀/A because, in this period of time, the functionality parameter takes values lower than zero.
p(t) = p₀ · e^(−t/C)   (2)

being:
p₀ — mean initial value;
C — constant that represents the time for which the parameter has degraded to 36.7% of its initial value.

[Equation (3), the assumed linear time trends of the mean μ(t) and standard deviation σ(t), could not be recovered from the source.]

f(p, t) = (1 / (σ(t)·√(2π))) · exp(−½·((p(t) − μ(t)) / σ(t))²)   (4)

Figure 1 shows the previous model assuming a linear variation of both the average and the standard deviation. Because the standard deviation increases with time, the normal distribution flattens with time, and therefore at a certain instant of time the normal curve is integrated over the failure region:

F(t) = ∫ (1 / (σ(t)·√(2π))) · exp(−½·((p(t) − μ(t)) / σ(t))²) dp   (5)

[Equations (6)-(8) could not be fully recovered from the source; they involve the failure threshold p_F, the initial value p₀ and the degradation speed A, e.g. the time to threshold (p₀ − p_F)/A.]

Figure 1.
Using the classic model, and depending on the degradation parameter, it is possible to obtain results without any physical sense. For example, under the classic model a percentage of the devices may appear to improve their performance (functionality parameter), as can be seen in Figure 2.
As Figure 2 shows, there is a percentage of devices (calculated as μ(t) + 3σ(t)) that improves its performance with time, which is not possible in a
[Figure 3. Normal distribution power values (μ+σ, μ, μ−σ) with mean and standard deviation following linear trends according to the classical model (A = 3B).]

PROPOSED MODEL

p(t + Δt) / p(t) = (p₀ · e^(−(t+Δt)/η)) / (p₀ · e^(−t/η)) = e^(−Δt/η)   (9)

[Equation (10) not recoverable from the source.]

p(t) = p₀ · e^(−((t − t₀)/η)^β)   (11)

being:
t₀ — location parameter;
η — scale parameter;
β — shape parameter (or slope).

If β = 1 the functionality parameter varies with respect to time following an exponential, which means that the degradation rate is constant over the whole period of time.   (12)
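The proposed model can be fitted by the same double-logarithm linearization the paper's Weibull plots use: with t₀ = 0, Pm(t) = P₀·exp(−(t/η)^β) becomes the straight line ln(−ln(Pm/P₀)) = β·ln t − β·ln η. A minimal sketch (the least-squares helper and the synthetic data are illustrative, not from the paper):

```python
import math

def fit_weibull_degradation(times, p_over_p0):
    """Least-squares fit of ln(-ln(p/p0)) = beta*ln(t) - beta*ln(eta).
    Returns (beta, eta). Assumes t0 = 0 and 0 < p/p0 < 1."""
    xs = [math.log(t) for t in times]
    ys = [math.log(-math.log(ratio)) for ratio in p_over_p0]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    beta = slope
    eta = math.exp(-intercept / beta)
    return beta, eta

# Synthetic check: noiseless data from beta = 2.16, eta = 20 is recovered.
beta, eta = 2.16, 20.0
days = [10, 17, 26]
ratios = [math.exp(-(t / eta) ** beta) for t in days]
b_hat, e_hat = fit_weibull_degradation(days, ratios)
```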
EXPERIMENTAL TESTS

Pressure cooker test — measured power per LED and day:

LED    Day 10   Day 17   Day 26
L1     0.552    0.322    0.183
L2     0.574    0.488    0.316
L3     0.613    0.549    0.476
L4     0.591    0.538    0.294
L5     0.616    0.560    0.534
L6     0.606    0.524    0.400
L7     0.634    0.384    0.265
L8     0.606    0.623    0.551
L9     0.610    0.567    0.481
L10    0.623    0.385    0.171
L11    0.604    0.489    0.000
L12    0.628    0.570    0.462
L13    0.629    0.600    0.521
L14    0.575    0.519    0.000
L15    0.565    0.535    0.410
[Figure 7. Normal distribution representation at three different instants of time (day 10, blue, right; day 17, pink, middle; day 26, yellow, left). Pressure cooker test (110 °C/85% RH).]

[Figure 8. Mean power Pm versus time.]

[Figure 9. Standard deviation of power versus time.]

[Figure 10. Weibull plot of Pm/P₀ versus t: ln(−ln(Pm/P₀)) against ln(t); linear fit y = 2.1642x − 7.2746, R² = 0.9088.]

Pm(t) = 0.62 · e^(−t^2.16 / 691.86)   (13)
RELIABILITY EVALUATION
Table 2. Accumulated failures at different days (110 °C/85% RH).

Days   Accumulated failures
12     1
13     3
16     4
20     8
22     9
23     11
24     14
25     15
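The coordinates behind a Weibull plot of these accumulated failures can be sketched as follows. The paper does not state which plotting-position estimator it used; Bernard's median-rank approximation F ≈ (j − 0.3)/(n + 0.4) is one common choice, assumed here, with n = 15 LEDs on test:

```python
import math

days = [12, 13, 16, 20, 22, 23, 24, 25]
cum_failures = [1, 3, 4, 8, 9, 11, 14, 15]
n = 15  # LEDs on test (L1-L15)

points = []
for t, j in zip(days, cum_failures):
    F = (j - 0.3) / (n + 0.4)           # Bernard's median-rank approximation
    x = math.log(t)                      # ln(time)
    y = math.log(-math.log(1.0 - F))     # ln(-ln(1 - F(t)))
    points.append((x, y))
```

A straight-line fit through `points` gives the slope (Weibull shape) and intercept of the plot.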
[Weibull plot of F(t) versus ln(time); linear fit y = 4.6621x − 14.219, R² = 0.9309.]

CONCLUSIONS
Once the average and standard deviation of the power luminosity have been evaluated, it is easy to evaluate reliability based on the proposed model. The first results show that:
- All the experiments done at different conditions (110 °C/85% RH, 130 °C/85% RH and 140 °C/85% RH) inside the pressure cooker chamber can be fitted by the proposed model.
- First results of the reliability evaluation with the proposed model show that all of the devices have similar degradation behaviour, but at different rates depending on the accelerated test.
- After several hours of operation (the time depends on the acceleration factor), the LEDs start to degrade and the failure rate increases with time.
- Tests at different temperature/humidity conditions are ongoing in order to analyse the influence of the different parameters (temperature, humidity and pressure) on reliability.
- Reliability at normal working conditions will be evaluated when all the tests are finished.
ABSTRACT: In this paper, we are interested in the problem of evaluating, analyzing and synthesizing information delivered by multiple sources about the same badly known variable. We focus on two approaches that can be used to solve the problem: a probabilistic and a possibilistic one. They are first described and then applied to the results of uncertainty studies performed in the framework of the OECD BEMUSE project. The usefulness and advantages of the proposed methods are discussed and emphasized in the light of the obtained results.
1
INTRODUCTION
In the field of nuclear safety, the values of many variables are tainted with uncertainty. This uncertainty can be due to a lack of knowledge or of experimental data, or simply to the fact that a variable cannot be directly observed and must be evaluated by some mathematical model. Two common problems encountered in such situations are the following:
1. The difficulty of building synthetic representations of our knowledge of a variable;
2. The need to compare, analyse and synthesize results coming from different mathematical models of a common physical phenomenon.
Both issues can be viewed as problems of information fusion in presence of multiple sources. In the first
case, the information can come from multiple experts,
sensors, or from different experimental results. Taking into account these multiple sources to model input
uncertainty is therefore desirable. In the second case,
the output of each single mathematical model or computer code can be considered as a single source of
information. The synthesis and analysis of the different outputs can then be treated as an information fusion
problem.
Both probability and possibility theories offer formal frameworks to evaluate, analyze and synthesize multiple sources of information. In this paper, we recall the basics of each methodology derived from these theories and then apply them to the results of the BEMUSE (Best Estimate Methods - Uncertainty and Sensitivity Evaluation) OECD/CSNI program (OCDE 2007), in which the IRSN participated.
The rest of the paper is divided into two main sections. Section 2 details the ideas on which the methods are based and then gives some basics about the formal

METHODS

Modeling information
[Figure 1. Quantile-based representation of the information (q0%, q5%, q50%, q95%, q100%, with 5% and 45% probability masses), temperature axis 500 K to 1000 K.]

[Figure 2. Corresponding possibility distribution over 500 K to 1000 K.]

These representations can be interpreted as lower and upper probability measures (Dubois and Prade 1992), thus defining a set P of probability distributions. [The defining inequality over events and the accompanying footnote were lost in extraction.]
I(p, u) = Σ_{i=1}^{B} p_i · log(p_i / u_i)

and

I(r, p) = Σ_{i=1}^{B} r_i · log(r_i / p_i)

[Figure 3. Synthesis of two sources of equal weight 0.5, each given by quantiles q_l, q5%, q50%, q95%, q_u; the synthesized representation has quantiles q3.75%, q8.75%, q42.5%, ...]

[Figure 4. Possibility distribution of maximal height 1/h and its mean.]

p(x) = Σ_{i=1}^{N} λ_i · p_i(x)   (1)

[Equation (2), the min/max combination over the sources i = 1, ..., N, could not be recovered cleanly from the source.]

π(x) = Σ_{i=1}^{N} λ_i · π_i(x)   (3)
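Equation (1) (and its possibilistic counterpart (3)) is a weighted arithmetic pooling of the sources, and the relative-entropy expression I(p, u) scores informativeness against a reference distribution. A minimal sketch for discrete distributions over the same bins — the weights and bin values below are illustrative, not BEMUSE data:

```python
import math

def linear_pool(distributions, weights):
    """Weighted arithmetic mixture of discrete distributions (eq. (1))."""
    assert abs(sum(weights) - 1.0) < 1e-12
    n_bins = len(distributions[0])
    return [sum(w * d[i] for w, d in zip(weights, distributions))
            for i in range(n_bins)]

def relative_entropy(p, u):
    """I(p, u) = sum_i p_i * log(p_i / u_i); informativeness of p against u."""
    return sum(pi * math.log(pi / ui) for pi, ui in zip(p, u) if pi > 0)

p1 = [0.2, 0.5, 0.3]   # source 1
p2 = [0.4, 0.4, 0.2]   # source 2
pooled = linear_pool([p1, p2], [0.5, 0.5])
info = relative_entropy(p1, [1 / 3] * 3)  # score against the uniform reference
```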
Table 1. Lower (Low), reference (Ref) and upper (Up) values of the peak cladding temperatures PCT (K), the injection time Tinj (s) and the quenching time Tq (s) supplied by the ten participants (CEA, GRS, IRSN, KAERI, KINS, NRI1, NRI2, PSI, UNIPI, UPC), together with the experimental values; the column layout could not be reliably recovered from the flattened source.

[Figure 5. Synthesized distributions for the second PCT; temperature axis T(K), roughly 592 K to 1228 K.]
3.1 [Subsection title not recoverable from the source]

3.2 Evaluation
Table 2 shows the results of the evaluation steps performed on the results of the BEMUSE programme,
with the models described above. From a methodological point of view, we can notice that the scores and the
ranking between sources are globally in agreement,
even if there are some differences coming from the
differences between formalisms.
From a practical standpoint, interesting things can be said from the analysis of the results. First, our results are in accordance with informal observations made in previous reports (OCDE 2007): PSI and UNIPI have high informativeness scores, which reflects their narrow uncertainty bands, and very low calibration scores, due to the fact that, for each of them, two experimental values fall outside the interval [Low, Up]. This consistency between conclusions drawn from our methods
Table 2. Informativeness (Inf.), calibration (Cal.) and global scores of each participant under the probabilistic and possibilistic approaches. Participants and used codes: CEA (CATHARE), GRS (ATHLET), IRSN (CATHARE), KAERI (MARS), KINS (RELAP5), NRI1 (RELAP5), NRI2 (ATHLET), PSI (TRACE), UNIPI (RELAP5), UPC (RELAP5). [The 6 × 10 grid of scores could not be reliably re-aligned from the flattened source.]
[Figure 6. Results of synthesis for PCT2, probabilistic (F(x)) and possibilistic (π(x)) approaches, for the subgroups RELAP5 (KINS, NRI1, UNIPI, UPC), CATHARE (IRSN, CEA), ATHLET (GRS, NRI2), (GRS, IRSN, NRI1, UNIPI) and (IRSN, KINS, NRI1, UNIPI), plotted over T(K) from about 300 to 1300 (- - -: experimental value).]
Figures 6.C and 6.D show synthetic possibility distributions resulting from the application of a conjunctive operator (Equation (1)). In this case, the disagreement between the sources of a particular subgroup is directly visible, both graphically and quantitatively (disagreement is measured by the maximal height of a distribution: the lower the distribution, the higher the disagreement). We can thus see that the information given by ATHLET users is more conflicting than that given by CATHARE users (this could be explained by the higher number of input data parameters in the ATHLET code). Similarly, Figure 6.D shows

CONCLUSIONS
K.T. Jo
Digital Printing Division, Samsung Electronics, Suwon, Korea
ABSTRACT: To improve the reliability of a newly designed product, we introduced new processes and methods. The definition of reliability involves probability, intended function, a specified period and stated conditions. It is therefore necessary to investigate the operating conditions, the current and target levels of reliability, and the potential failure modes and mechanisms of the product over the specified period.
We conducted a 4-step test program: Architecture and Failure Mode Analysis, Potential Failure Mechanism Analysis, Dominant Failure Extraction and Compliance Test. Based upon the Architecture and Failure Mode Analysis, we selected the stress factors for an ALT and reproduced the field failure mode by the ALT. Based on the results of this research, test plans are designed to satisfy the reliability target. Stress analysis is also a useful tool to improve the reliability of a printed circuit assembly. HALT and HASS are used, respectively, to improve reliability by finding root causes of latent defects at the development stage and by screening weak products at the pre-mass-production stage in a short time. We applied all of these processes and methods to improve the reliability of electronic devices developed for the first time.
INTRODUCTION

It is well known that the failure rate of a product in the field follows the shape shown in Figure 1. The bathtub curve consists of three periods: an infant mortality period with a decreasing failure rate, followed by a normal life period (also known as useful life) with a relatively constant and low failure rate, and concluding with a wear-out period that exhibits an increasing failure rate.
The most desirable shape of the failure rate is illustrated as the target curve in Figure 1. Ryu called it the hockey stick line (Ryu, 2003). He mentioned that the failure rate of a bad product proceeds along the bathtub

[Figure 1. Current (bathtub) versus target ("hockey stick") failure-rate curves over time t, with the tools applied at each stage: HALT, ALT/stress analysis, screening/HASS.]
balance between stress and strength. We also conducted transient analysis to protect against abnormal problems in the field. For example, the stress ratio of the main components is analyzed for mode changes and for on/off testing of the power supply, motor, heater, etc. More than 20 parts of the original design were changed through stress analysis to assure the reliability of the product.

2.1

[Figure 2. Stress-strength interference: stress and strength distributions with their means and variances; failures occur in the overlap region, and stress analysis restores the balance between stress and strength.]
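The stress-strength picture in Figure 2 can be quantified. When stress and strength are both modeled as independent normal variables — an assumption of this sketch, not stated in the paper — the interference (failure) probability has the closed form P(stress > strength) = Φ((μ_stress − μ_strength)/√(σ_stress² + σ_strength²)):

```python
from statistics import NormalDist

def interference_failure_prob(mu_stress, sd_stress, mu_strength, sd_strength):
    """P(stress > strength) for independent normal stress and strength."""
    margin_mean = mu_strength - mu_stress
    margin_sd = (sd_stress ** 2 + sd_strength ** 2) ** 0.5
    return NormalDist().cdf(-margin_mean / margin_sd)

# Illustrative numbers: strength well above stress -> tiny failure probability.
p_fail = interference_failure_prob(50.0, 5.0, 80.0, 5.0)
```

Raising strength or reducing the variance of either distribution shrinks the overlap, which is exactly what the stress-analysis redesign aims at.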
[Table 1 and Figure 3. Allocation of the product/set reliability target (n%) to units: Unit #1, Unit #2 and Unit #3, each assigned 1%.]

2.4
[Equations (1)-(4), the χ²-based test-plan relations, could not be fully recovered from the source. They involve χ²(α, 2r + 2)/2, the allowed number of failures r, the quantity n·(r + 1), the test duration h, the acceleration factor AF, the target life and a lower bound L_B; the resulting sample sizes are n = 29, 41 and 63 for r = 0, 1 and 2, respectively.]
Step 1: Architecture and failure mode analysis
Step 2: Potential failure mechanism analysis
Step 3: Dominant failure extraction
Step 4: Compliance test

[Equations (2) and (3) not recoverable from the source; the test duration is h = 500 hrs.]

Figure 4.
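The sample sizes quoted for r = 0, 1 and 2 come from a χ²-based test-plan relation that did not survive extraction. As a hedged sketch, one standard relation of this family (extended success-run testing under a Weibull assumption) is n = χ²_CL(2r + 2) / (2 · Lᵥ^β · ln(1/R)), where Lᵥ is the ratio of (accelerated) test duration to target life; the exact relation and constants the paper used may differ, and the inputs below are illustrative:

```python
import math
from statistics import NormalDist

def chi2_fractile(p: float, v: int) -> float:
    """Wilson-Hilferty approximation of the chi-square p-fractile."""
    z = NormalDist().inv_cdf(p)
    c = 2.0 / (9.0 * v)
    return v * (1.0 - c + z * c ** 0.5) ** 3

def sample_size(confidence, r, reliability, life_ratio, beta):
    """Units needed to demonstrate `reliability` at `confidence`,
    allowing r failures, testing life_ratio * target life, Weibull shape beta."""
    chi2 = chi2_fractile(confidence, 2 * r + 2)
    return math.ceil(chi2 / (2 * life_ratio ** beta * math.log(1 / reliability)))

# Illustrative plan: 90% confidence, R = 0.90, test run to one target life.
n0 = sample_size(0.90, 0, 0.90, 1.0, 1.15)
```

Allowing more failures (larger r) always costs more samples, which is the trade-off behind the 29/41/63 progression in the paper.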
Table 2.

Key part         Material
1. Gear cap      Plastic
2. Ball bearing  Stainless steel
3. Outer pipe    Aluminum
4. Mica sheet    Mica
5. E-coil        Ni-Cr wire
6. Pi film       500 NH
7. Inner pipe    Aluminum

Table 3.

Key part         Potential failure mechanism
1. Gear cap      Melt/Crack
2. Ball bearing  Wear
3. Outer pipe    Deformation
4. Mica sheet    Tear off
5. E-coil        Breakdown
6. Pi film       Creep
7. Inner pipe    Deformation
In the field, the failure time of the product is determined by the dominant failure mechanism, stimulated by environmental stress factors such as temperature, humidity and voltage. Table 4 shows the scoring of the potential failure mechanisms; as shown there, the dominant failure mechanism is breakdown of the e-coil wire.
3.1

Table 4. Scoring of the potential failure mechanisms against the environmental stress factors temperature (0-50 °C), humidity (0-85% RH) and voltage (21-26 V). (Scoring weights: 5, 3 and 1; the per-factor symbols were lost in extraction.)

Key part       Potential failure mechanism   Point
Gear cap       Melt/Crack                    6
Ball bearing   Wear                          5
Outer pipe     Deformation                   1
Mica sheet     Tear off                      6
E-coil         Breakdown                     9
Pi film        Creep                         6
Inner pipe     Deformation                   1

3.4 Compliance test
Table 5. Compliance test.

Potential failure mechanism   Point
Melt/Crack                    6
Wear                          5
Deformation                   1
Tear off                      6
Breakdown                     9
Creep                         6
Deformation                   1

Test types: thermal cycle, temperature/humidity and power on/off, with column totals of 126, 53 and 57 points respectively and a resulting ranking. [The per-cell scores were lost in extraction.]

[Figure 6. Thermal cycle profile (1 cycle = 5 hours; residual values -5 °C, 22 °C, 32 °C).]

Table 6. Environmental conditions.

                    Min   Mean   Max   Remark
Temperature (°C)    0     30     50    Operating
Humidity (%RH)      10    65     85

[Figure 7. Test system.]

Test plans

[Table 7. Test plans; contents not recoverable from the source.]

4.1 Test results

[Figure 8. Test profile residue: humidity 60-90% RH at set level.]
f(t) = (β/η) · (t/η)^(β−1) · exp(−(t/η)^β)   (5)

[Equation (6) not recoverable from the source.]

[Figure 9. Failure times.]

Table 8. Weibull parameters.

Parameter   Set level   Unit level
β           1.15        1.15
η           556.41      145.53

[Table 9. Test conditions and sample size (surviving values: 300, 400 and 12); layout not recoverable from the source.]
4.3

In this paper, we introduced a reliability assurance process and related tools such as the 4-step test program. In particular, parts stress analysis and ALTs at the unit level have had excellent effects in decreasing the initial service call rate in recent years. We explained this using the e-coil type heater roller in a laser beam printer. The unit-level accelerated test was developed to save the time and money of set-level testing; in the unit-level test, we reproduced the same failure of the heater roller as in the set-level test. We also developed ALT plans to assure the reliability goal of the module. The heater roller was redesigned to reduce the failure rate and was evaluated by the ALT plan suggested in this paper.
The most important thing is to analyze, and try to decrease, the gap between the estimated Bx life and the one obtained from real field data.

Acceleration factor:

[Equation (7), the acceleration factor, not recoverable from the source.]
CONCLUSIONS

REFERENCES

ALTA reference manual. 1998. Reliasoft.
Kececioglu, D. & Sun, F.B. 1995. Environmental Stress Screening. Prentice Hall.
Nelson, W. 1990. Accelerated Testing. John Wiley & Sons.
Park, S.J. 2000. ALT++ reference manual. Samsung Electronics Co., Ltd.
Park, S.J., Park, S.D. & Kim, K.S. 2006. Evaluation of the Reliability of a Newly Designed Duct-Scroll by Accelerated Life Test. ESREL 2006.
Reliability Toolkit: Commercial Practices Edition. 2001. RAC Publication, CPE.
Ryu, D.S., Park, S.J. & Jang, S.W. 2003. The Novel Concepts for Reliability Technology. 11th Asia-Pacific Conference on Non-Destructive Testing, Nov. 3-7.
Ryu, D.S. & Jang, S.W. 2005. The Novel Concepts for Reliability Technology. Microelectronics Reliability 45: 611-622.
ABSTRACT: In this work we apply a combined approach of sequential life testing and accelerated life testing to friction-resistant low-alloy, high-strength steel rails used in Brazil. One possible way to translate test results obtained under accelerated conditions to normal use conditions is the application of the Maxwell Distribution Law. To estimate the three parameters of the underlying Inverse Weibull sampling model we use a maximum likelihood approach for censored failure data, assuming a linear acceleration condition. To evaluate the accuracy (significance) of the parameter values obtained under normal conditions for the underlying Inverse Weibull model, we apply to the expected normal failure times a sequential life test using a truncation mechanism developed by De Souza (2005). An example illustrates the application of this procedure.
INTRODUCTION

f(t) = (β/α) · (α/(t − θ))^(β+1) · exp(−(α/(t − θ))^β);   t ≥ θ;   α, β, θ > 0   (1)
MTE represents the number of molecules at a particular absolute Kelvin temperature T (Kelvin = 273.16 plus the temperature in Centigrade) that possess a kinetic energy greater than E among the total number of molecules present, Mtot; E is the activation energy of the reaction and K represents the gas constant (1.986 calories per mole). Equation 1 expresses the probability of a molecule having energy in excess of E. The acceleration factor AF2/1 at two different stress temperatures, T2 and T1, is given by the ratio of the numbers of molecules having energy E at these two temperatures, that is:

AF2/1 = MTE(2)/MTE(1) = e^(−E/KT2) / e^(−E/KT1) = exp[(E/K) · (1/T1 − 1/T2)]   (2)

Applying the natural logarithm to both sides of Equation 2 and after some algebraic manipulation, we obtain:

ln AF2/1 = ln[MTE(2)/MTE(1)] = (E/K) · (1/T1 − 1/T2)   (3)

From Equation 3 we can estimate the term E/K by testing at two different stress temperatures and computing the acceleration factor on the basis of the fitted distributions. Then:

E/K = ln AF2/1 / (1/T1 − 1/T2)   (4)

AF2/n = exp[(E/K) · (1/Tn − 1/T2)]   (5)

[Equations (6) and (7), the hypotheses of the sequential test of the form H0: parameter ≥ nominal value versus H1: parameter < nominal value, could not be fully recovered from the source.]
SEQUENTIAL TESTING

According to Mood & Graybill (1963), an approximate expression for the expected sample size E(n) of a sequential life test is given by:

E(n) = E(Wn) / E(w)   (11)

where

w = ln[ f(t; α1, β1, θ1) / f(t; α0, β0, θ0) ]   (12)

E(Wn) ≈ P(α, β, θ) · ln(A) + [1 − P(α, β, θ)] · ln(B)   (13)

E(n) ≈ { P(α, β, θ) · ln(A) + [1 − P(α, β, θ)] · ln(B) } / E(w)   (14)

[The sequential probability ratio SPR built from the Inverse Weibull densities, and the continue-test inequality on the accumulated statistic X — Equations (9), (10) and (15) — were garbled beyond reliable reconstruction in the source.]
The likelihood function for the first r ordered failure times of a censored sample of size n is:

L(α; β; θ) = k! · Π_{i=1}^{r} f(ti) · [1 − F(tr)]^(n−r), or yet:

L(α; β; θ) = k! · Π_{i=1}^{r} f(ti) · [R(tr)]^(n−r);   t > 0

To find the values of α, β and θ that maximize the log-likelihood function, we take the α, β and θ derivatives and set them equal to zero. [The explicit log-likelihood L and the resulting score equations — Equations (16)-(25) — were garbled beyond reliable reconstruction in the source; they follow from substituting the Inverse Weibull density of Equation (1) and are solved numerically for the censored sample.]
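As an illustration of the maximum-likelihood machinery, the censored log-likelihood can be written down and evaluated directly. The sketch below uses a two-parameter Inverse Weibull with F(t) = exp(−(α/t)^β) — i.e. location θ = 0, a simplification of the paper's three-parameter case — and the data generator and parameter values are illustrative only:

```python
import math
import random

def invweibull_loglik(times_sorted, n, r, alpha, beta):
    """Censored log-likelihood of the r smallest failure times among n items,
    Inverse Weibull with F(t) = exp(-(alpha/t)**beta) and location 0.
    The combinatorial constant ln(k!) is omitted; it does not affect the MLE."""
    ll = 0.0
    for t in times_sorted[:r]:
        # ln f(t) with f(t) = (beta/alpha)*(alpha/t)**(beta+1)*exp(-(alpha/t)**beta)
        ll += (math.log(beta / alpha) + (beta + 1) * math.log(alpha / t)
               - (alpha / t) ** beta)
    t_r = times_sorted[r - 1]
    # the n - r unfailed items survive past t_r with probability 1 - F(t_r)
    ll += (n - r) * math.log(1.0 - math.exp(-(alpha / t_r) ** beta))
    return ll

random.seed(7)
alpha_true, beta_true = 100.0, 2.0
n = 60
# inversion sampling: F(t) = u  <=>  t = alpha * (-ln u)**(-1/beta)
sample = sorted(100.0 * (-math.log(random.random())) ** (-1.0 / 2.0)
                for _ in range(n))
r = 45  # censor after the 45th failure
ll_true = invweibull_loglik(sample, n, r, alpha_true, beta_true)
ll_bad = invweibull_loglik(sample, n, r, 1000.0, beta_true)
```

A numerical optimizer maximizing this function over (α, β) plays the role of the score equations in the paper.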
[Equation (26), a Simpson's-rule summation with weights (1, 2 or 4) of the same form as Equation (27), could not be recovered cleanly from the source.]

EXAMPLE
[Tables 1 and 2. Failure times (hours) observed under accelerated testing: 765.1, 862.2, 909.4, 973.2, 843.6, 877.3, 930.9, 1,014.7, 850.4, 891.0, 952.4, 1,123.6 at 480 K, and 673.6, 705.1, 769.2, 816.0, 683.1, 725.4, 776.6, 981.9 at 520 K; the original table layout could not be recovered.]
At 480 K: β1 = 8.41; α1 = 642.3 hours; θ1 = 117.9 hours.
At 520 K: β2 = 8.41; α2 = 548.0 hours; θ2 = 100.2 hours.

The shape parameter did not change, staying at about 8.4. The acceleration factor for the scale parameter AF2/1 is given by:

AF2/1 = α1/α2 = 642.3/548.0

Using Equation 4, we can estimate the term E/K:

E/K = ln AF2/1 / (1/T1 − 1/T2) = ln(642.3/548.0) / (1/480 − 1/520) = 990.8

Using now Equation 5, the acceleration factor for the scale parameter, to be applied at the normal stress temperature, AF2/n, will be:

AF2/n = exp[990.8 · (1/296 − 1/520)] = 4.23

For the location parameter, AF2/1 = θ1/θ2 = 117.9/100.2, so:

E/K = ln(117.9/100.2) / (1/480 − 1/520) = 1,015.1

AF2/n = exp[1,015.1 · (1/296 − 1/520)] = 4.38

ln(A) = ln(β/(1 − α)) = ln(0.10/(1 − 0.05)) = −2.2513

ln(B) = ln((1 − β)/α) = ln((1 − 0.10)/0.05) = 2.8904
[Figure 1. Sequential life test for the three-parameter Inverse Weibull model: values of X (0 down to −40) versus number of items, with the ACCEPT H0 and REJECT H0 regions.]
we will have:

P(α, β, θ) · ln(A) + [1 − P(α, β, θ)] · ln(B) = 0.01 · (−2.2513) + 0.99 · 2.8904 = 2.8390

Then: E(n) = 2.8390 / 0.6115 = 4.6427 ≈ 5 items.

So, we could make a decision about accepting or rejecting the null hypothesis H0 after the analysis of observation number 5. Using Equations 9 and 10 and the twelve failure times obtained under accelerated conditions at 520 K given in Table 2, multiplied by the acceleration factor AF of 4.3, we calculate the sequential life-testing limits. Figure 1 shows the sequential life test for the three-parameter Inverse Weibull model.
Since we were able to make a decision about accepting or rejecting the null hypothesis H0 after the analysis of observation number 4, we did not have to analyze the number of observations corresponding to the truncation point (5 observations). As can be seen in Figure 1, the null hypothesis H0 should be accepted, since the final observation (observation number 4) lies in the region related to the acceptance of H0.
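The decision limits and the expected test length follow from the classical Wald sequential probability ratio test boundaries, with the paper's convention A = β/(1 − α) and B = (1 − β)/α. A sketch using the example's α = 0.05, β = 0.10, and the values P = 0.01 and E(w) = 0.6115 taken from the text:

```python
import math

alpha, beta_err = 0.05, 0.10                 # producer's and consumer's risks
ln_A = math.log(beta_err / (1.0 - alpha))    # lower (reject) boundary term
ln_B = math.log((1.0 - beta_err) / alpha)    # upper (accept) boundary term

P = 0.01          # probability term from the paper's example
E_w = 0.6115      # expected log-likelihood-ratio increment (from the paper)
E_Wn = P * ln_A + (1.0 - P) * ln_B
expected_n = E_Wn / E_w                      # expected number of observations
truncation = math.ceil(expected_n)           # truncation point: 5 items
```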
CONCLUSIONS

There are two key limitations to the use of the Arrhenius equation. First, at all the temperatures used, linear specific rates of change must be obtained. This requires that the rate of reaction, whether or not it is measured or represented, be constant over the period of time in which the aging process is evaluated. If the expected rate of reaction varies over the time of the test, then one would not be
REFERENCES

Bain, Lee J. 1978. Statistical Analysis of Reliability and Life-Testing Models, Theory and Method. Marcel Dekker, Inc., New York, NY, USA.
Blischke, W.R. 1974. On non-regular estimation II. Estimation of the Location Parameter of the Gamma and Weibull Distributions. Communications in Statistics 3: 1109-1129.
Chornet & Roy. 1980. Compensation of Temperature on Peroxide Initiated Cross linking of Polypropylene. European Polymer Journal 20: 81-84.
Cohen, A.C., Whitten, B.J. & Ding, Y. 1984. Modified Moment Estimation for the Three-Parameter Weibull Distribution. Journal of Quality Technology 16: 159-167.
De Souza, Daniel I. 2000. Further thoughts on a sequential life testing approach using a Weibull model. In Cottam, Harvey, Pape & Tait (eds.), Foresight and Precaution, ESREL 2000 Congress 2: 1641-1647. Edinburgh, Scotland: Balkema.
De Souza, Daniel I. 2001. Sequential Life Testing with a Truncation Mechanism for an Underlying Weibull Model. In Zio, Demichela & Piccinini (eds.), Towards a Safer World, ESREL 2001 Conference, 16-20 September, 3: 1539-1546. Politecnico di Torino, Italy.
De Souza, Daniel I. 2004. Sequential Life-Testing with Truncation Mechanisms for Underlying Three-Parameter Weibull and Inverse Weibull Models. In Raj B.K. Rao, B.E. Jones & R.I. Grosvenor (eds.), COMADEM Conference, Cambridge, U.K., August 2004: 260-271. Comadem International, Birmingham, U.K.
De Souza, Daniel I. 2005. A Maximum Likelihood Approach Applied to an Accelerated Life Testing with an Underlying Three-Parameter Inverse Weibull Model. In Raj B.K. Rao & David U. Mba (eds.), COMADEM 2005 - Condition Monitoring and Diagnostic Engineering Management: 63-72. Cranfield, Bedfordshire, UK: University Press.
De Souza, Daniel I. & Addad, Assed N. 2007. Sequential Life-Testing with an Underlying Three-Parameter Inverse Weibull Model - A Maximum Likelihood Approach. In IIE Annual Conference and Exposition: 907-912. Nashville, TN: The Institute of Industrial Engineering.
De Souza, Daniel I. & Lamberson, Leonard R. 1995. Bayesian Weibull Reliability Estimation. IIE Transactions 27(3): 311-320.
Erto, Pasquale. 1982. New Practical Bayes Estimators for the 2-Parameter Weibull Distribution. IEEE Transactions on Reliability R-31(2): 194-197.
Feller, Robert L. 1994. Accelerated Aging, Photochemical and Thermal Aspects. The Getty Conservation Institute. Printer: Edwards Bros., Ann Arbor, Michigan.
Harter, H. et al. 1965. Maximum Likelihood Estimation of the Parameters of Gamma and Weibull Populations from Complete and from Censored Samples. Technometrics 7: 639-643; erratum, 15 (1973): 431.
Kapur, K. & Lamberson, L.R. 1977. Reliability in Engineering Design. John Wiley & Sons, Inc., New York.
Mood, A.M. & Graybill, F.A. 1963. Introduction to the Theory of Statistics. Second Edition. McGraw-Hill, New York.
The density of the first failure time t1 in a sample of n items following a three-parameter inverse Weibull model with shape β, scale α and minimum life θ is

f(t1) = n (β/α) (α/(t1 − θ))^(β+1) exp[−(α/(t1 − θ))^β] {1 − exp[−(α/(t1 − θ))^β]}^(n−1)

The expected value of t1 is given by:

E(t1) = ∫ t1 f(t1) dt1

Letting U = (α/(t − θ))^β, dU = −(β/α)(α/(t − θ))^(β+1) dt and t = θ + α/U^(1/β), we will have:

E(t1) = α n ∫_0^∞ U^(−1/β) e^(−U) (1 − e^(−U))^(n−1) dU + θ n ∫_0^∞ e^(−U) (1 − e^(−U))^(n−1) dU

These integrals are evaluated numerically with the composite Simpson rule:

∫ f(x) dx = (g/3)(f1 + 4f2 + 2f3 + ... + 4fk + fk+1) + error

Making the error ≅ 0, and with i = 1, 2, ..., k + 1, we will have:

α n ∫_0^∞ U^(−1/β) e^(−U) (1 − e^(−U))^(n−1) dU ≈ α n (g/3) Σ_{i=1}^{k+1} (1, 2 or 4) U_i^(−1/β) e^(−U_i) (1 − e^(−U_i))^(n−1)   (A)

θ n ∫_0^∞ e^(−U) (1 − e^(−U))^(n−1) dU ≈ θ n (g/3) Σ_{i=1}^{k+1} (1, 2 or 4) e^(−U_i) (1 − e^(−U_i))^(n−1)   (B)

Finally:

E(t1) ≈ α n (g/3) Σ_{i=1}^{k+1} (1, 2 or 4) U_i^(−1/β) e^(−U_i) (1 − e^(−U_i))^(n−1) + θ n (g/3) Σ_{i=1}^{k+1} (1, 2 or 4) e^(−U_i) (1 − e^(−U_i))^(n−1)   (27)
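The composite Simpson's-rule weights (1, 2 or 4) used in equations (A) and (B) can be sketched in code; this is a generic illustration (the function name `simpson` and the sine-integral check are ours, not the paper's):

```python
import math

def simpson(f, a, b, k):
    """Composite Simpson's rule with k subintervals (k even):
    integral ~= (g/3) * (f1 + 4 f2 + 2 f3 + ... + 4 fk + f(k+1)),
    where g = (b - a) / k is the step size."""
    if k % 2:
        raise ValueError("k must be even")
    g = (b - a) / k
    total = f(a) + f(b)  # endpoint weights are 1
    for i in range(1, k):
        # interior weights alternate 4, 2, 4, 2, ...
        total += (4 if i % 2 else 2) * f(a + i * g)
    return g / 3 * total

# Sanity check on a known integral: int_0^pi sin(x) dx = 2
approx = simpson(math.sin, 0.0, math.pi, 200)
```

In practice the semi-infinite integrals in (A) and (B) are truncated at a point where the integrand has decayed to a negligible value before applying the rule.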
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: This paper aims to explore different aspects of the failure costs (non-reliability costs) within the Life Cycle Cost Analysis (LCCA) of a production asset. Life cycle costing is a well-established method used to evaluate alternative asset options. The methodology takes into account all costs arising during the life cycle of the asset. These costs can be classified as the capital expenditure (CAPEX) incurred when the asset is purchased and the operating expenditure (OPEX) incurred throughout the asset's life. In this paper we explore different aspects of the failure costs within the life cycle cost analysis, and we describe the most important aspects of a stochastic model, the Non-Homogeneous Poisson Process (NHPP). This model is used to estimate the failure frequency and the impact that the various failures could have on the total costs of a production asset. The paper also contains a case study in which we apply the above-mentioned concepts. Finally, the model presented provides maintenance managers with a decision tool that optimizes the life cycle cost analysis of an asset and increases the efficiency of the decision-making process related to the control of failures.
Keywords: Asset; Failures; Life Cycle Cost Analysis (LCCA); Non-homogeneous Poisson Process (NHPP);
Maintenance; Reliability; Repairable Systems
INTRODUCTION
3.1

Figure 1. Costs of non-reliability: costs for penalization and costs for corrective maintenance.
4.1
As good as new
As bad as old
Better than old, but worse than new
Better than new
Worse than old
λ_mean = [1/(t2 − t1)] ∫_{t1}^{t2} λ(t) dt   (1)

F(x | y) = [F(x + y) − F(y)] / [1 − F(y)]   (2)

S_n = Σ_{i=1}^{n} x_i   (3)
or system restored to a state better than new. Therefore, physically speaking, q can be seen as an index representing the effectiveness and quality of repairs (Yañez et al., 2002). Even though the q value of the GRP model constitutes a realistic approach to simulating the quality of maintenance, it is important to point out that the model assumes an identical q for every repair in the item's life. A constant q may not hold for some equipment and maintenance processes, but it is a reasonable approximation for most repairable components and systems.
The three models described above have advantages and limitations. In general, the more realistic the model, the more complex the mathematical expressions involved. The NHPP model has been shown to provide good results even for realistic situations with better-than-old but worse-than-new repairs (Yañez et al., 2002). Based on this, and given its conservative nature and manageable mathematical expressions, the NHPP was selected for this particular work. The specific analytical modeling is discussed in the following section.
4.4
The expected number of failures in the interval (t1, t2) is

N(t1, t2) = ∫_{t1}^{t2} λ(t) dt   (5)

For the power-law model, the cumulative and instantaneous failure intensities are

Λ(t) = ∫_0^t λ(t) dt   (6)

λ(t) = (β/α)(t/α)^(β−1)   (7)

In order to obtain the maximum likelihood (ML) estimators of the parameters of the power-law model, consider the following definition of conditional probability:

P(T ≤ t | T > t1) = [F(t) − F(t1)] / R(t1)   (8)

= [R(t1) − R(t)] / R(t1) = 1 − R(t)/R(t1)   (9)

where F(·) and R(·) are the probability of component failure and the reliability at the respective times. Assuming a Weibull distribution, Eq. (9) yields:

F(ti) = 1 − exp[(t_{i−1}/α)^β − (ti/α)^β]   (10)

f(ti) = (β/α)(ti/α)^(β−1) exp[(t_{i−1}/α)^β − (ti/α)^β]   (11)

This form comes from the assumption that the interarrival times between successive failures follow a conditional Weibull probability density function with parameters α and β. The Weibull distribution is typically used in the maintenance area due to its flexibility and applicability to various failure processes; however, solutions for Gamma and Log-normal distributions are also possible. This model implies that the arrival of the ith failure is conditional on the cumulative operating time up to the (i−1)th failure. Figure 3 shows a schematic of this conditionality (Yañez et al., 2002). This conditionality also arises from the fact that the system retains the condition of "as bad as old" after the (i−1)th repair. Thus, the repair process does not restore any added life to the component or system.

Figure 3.
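The conditional CDF of Eq. (10) can be sketched directly; the parameter and time values below are arbitrary illustrations, not the paper's data:

```python
import math

def conditional_weibull_cdf(t_i, t_prev, alpha, beta):
    """Eq. (10): probability that the ith failure has arrived by time t_i,
    given cumulative operating time t_prev at the (i-1)th failure."""
    return 1.0 - math.exp((t_prev / alpha) ** beta - (t_i / alpha) ** beta)

# Illustrative values only:
F = conditional_weibull_cdf(t_i=10.0, t_prev=8.0, alpha=6.83, beta=1.12)
```

At t_i = t_prev the conditional probability is zero (no added life is restored by the repair), and it approaches one as t_i grows, as expected.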
For the case of the NHPP, different expressions for the likelihood function may be obtained. We will use an expression based on estimation at a time t after the occurrence of the last failure and before the occurrence of the next failure; see details on these expressions in (Modarres et al., 1999).

4.4.1 Time-terminated NHPP maximum likelihood estimators

In the case of time-terminated repairable components, the maximum likelihood function L can be expressed as:
L = f(t1) ∏_{i=2}^{n} f(ti) · R(tn | t)   (12)

Therefore:

L = (β/α)(t1/α)^(β−1) exp[−(t1/α)^β] ∏_{i=2}^{n} (β/α)(ti/α)^(β−1) exp[(t_{i−1}/α)^β − (ti/α)^β] · exp[(tn/α)^β − (t/α)^β]   (13)

Maximizing the logarithm of L yields the ML estimators:

α̂ = tn / n^(1/β̂)   (14)

β̂ = n / Σ_{i=1}^{n} ln(tn/ti)   (15)

The expected number of failures in the future interval (tn, tn+s) is

Λ(tn, tn+s) = (1/α̂^β̂) [(tn + ts)^β̂ − tn^β̂]   (16)

tn = Σ_{i=1}^{n} ti   (17)
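As a quick sketch of Eqs. (14)-(15), the estimators can be computed from a set of cumulative failure times; the times below are hypothetical, not the paper's 24 recorded failures:

```python
import math

# Hypothetical cumulative failure times (months); illustration only.
times = [5.0, 12.0, 15.0, 23.0, 30.0, 41.0, 55.0, 70.0, 88.0, 117.0]

def power_law_mle(t):
    """ML estimators of Eqs. (14)-(15), terminated at the last failure t_n:
    beta = n / sum(ln(t_n / t_i)), alpha = t_n / n**(1/beta)."""
    n = len(t)
    tn = t[-1]
    beta = n / sum(math.log(tn / ti) for ti in t)
    alpha = tn / n ** (1.0 / beta)
    return beta, alpha

beta, alpha = power_law_mle(times)
```

By construction the fitted cumulative intensity reproduces the observed count, (tn/α̂)^β̂ = n, which is a convenient consistency check on an implementation.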
TCPf = Λ(tn, tn+s) · Cf   (18)

where Cf = 5000 $/failure. The obtained equivalent annual total cost represents the probable amount of money that will be needed every year to pay for the reliability problems caused by the failure event, during the years of expected useful life.
6. Calculate the total costs per failure in present value, PTCPf. Given a yearly value TCPf, this is the quantity of money in the present (today) that needs to be saved to be able to pay this annuity for the expected number of years of useful life (T), at a discount rate (i). The expression used to estimate the PTCPf is shown next:

PTCPf = TCPf · [(1 + i)^T − 1] / [i (1 + i)^T]   (19)

CASE STUDY

Times to failures.

Λ(tn, tn+s) = (1/α̂^β̂) [(tn + ts)^β̂ − tn^β̂]

where:
n = 24 failures
tn = Σ ti = 5 + 7 + 3 . . . 4 + 7 + 4 = 117 months
ts = 12 months
tn+s = 129 months

The parameters α and β are calculated from expressions (14) and (15):

α = 6.829314945
β = 1.11865901

The expected frequency of failures per year:

Λ(tn, tn+s) = 2.769896307 failures/year

TCPf = 2.769896307 failures/year × 5000 $/failure ≈ 13,849.48 $/year
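Eqs. (16), (18) and (19) can be chained together numerically. The sketch below uses the case-study values; the discount rate and useful life in the present-value step are chosen arbitrarily for illustration, since they are not given in the text above:

```python
alpha = 6.829314945   # scale parameter, from Eq. (14)
beta = 1.11865901     # shape parameter, from Eq. (15)
tn, ts = 117.0, 12.0  # cumulative history and forecast horizon, months
Cf = 5000.0           # cost per failure, $/failure

# Expected number of failures over the next ts months (Eq. 16):
lam = ((tn + ts) ** beta - tn ** beta) / alpha ** beta

# Equivalent annual total cost of failures (Eq. 18):
TCPf = lam * Cf

def present_value(annual_cost, i, T):
    """Present value of a T-year annuity at discount rate i (Eq. 19)."""
    return annual_cost * ((1 + i) ** T - 1) / (i * (1 + i) ** T)

# Assumed discount rate (10%) and useful life (10 years), illustration only:
PTCPf = present_value(TCPf, i=0.10, T=10)
```

With the fitted parameters this reproduces the expected frequency of about 2.77 failures/year reported above.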
FUTURE DIRECTIONS

The specific orientation of this work toward the analysis of the reliability factor and its impact on costs is due to the fact that a great part of the increase in total costs during the expected useful life cycle of a production system is caused by a lack of foresight in the face of the unexpected appearance of failure events, a scenario basically provoked by ignorance and by the absence of a technical evaluation, in the design phase, of the aspects related to reliability. This situation results in an increase in the total costs of operation (costs that were not considered at the beginning), affecting the profitability of the production process.
Table (model comparison). Background/difficulty: renewal theory (medium); renewal theory (medium); differential equations or integral equations (low); integral equations (medium); integral equations (high); partial differential equations with case-by-case solutions (high to very high).
REFERENCES

Ahmed, N.U. 1995. A design and implementation model for life cycle cost management system, Information and Management, 28, pp. 261-269.
Asiedu, Y. and Gu, P. 1998. Product lifecycle cost analysis: state of art review, International Journal of Production Research, Vol. 36, No. 4, pp. 883-908.
Ascher, H. and Feingold, H. 1984. Repairable System Reliability: Modeling, Inference, Misconceptions and their Causes, New York, Marcel Dekker.
Barlow, R.E., Clarotti, C.A. and Spizzichino, F. 1993. Reliability and Decision Making, Chapman & Hall, London.
Barringer, H. Paul and David P. Weber. 1996. Life Cycle Cost Tutorial, Fifth International Conference on Process Plant Reliability, Gulf Publishing Company, Houston, TX.
Barringer, H. Paul and David P. Weber. 1997. Life Cycle Cost & Reliability for Process Equipment, 8th Annual ENERGY WEEK Conference & Exhibition, George R. Brown Convention Center, Houston, Texas, organized by American Petroleum Institute.
Barroeta, C. 2005. Risk and economic estimation of inspection policy for periodically tested repairable components, Thesis for the Master of Science, University of Maryland, Faculty of Graduate School, College Park, Cod. Umi-umd-2712, pp. 77, August, Maryland.
Blanchard, B.S. 2001. Maintenance and support: a critical element in the system life cycle, Proceedings of the International Conference of Maintenance Societies, paper 003, May, Melbourne.
Blanchard, B.S. and Fabrycky, W.J. 1998. Systems Engineering and Analysis, 3rd ed., Prentice-Hall, Upper Saddle River, NJ.
Bloch-Mercier, S. 2000. Stationary availability of a semi-Markov system with random maintenance, Applied Stochastic Models in Business and Industry, 16, pp. 219-234.
Crow, L.H. 1974. Reliability analysis for complex repairable systems, Reliability and Biometry, Proschan F., Serfling R.J., eds., SIAM, Philadelphia, pp. 379-410.
Dhillon, B.S. 1989. Life Cycle Costing: Techniques, Models and Applications, Gordon and Breach Science Publishers, New York.
Dhillon, B.S. 1999. Engineering Maintainability: How to Design for Reliability and Easy Maintenance, Gulf, Houston, TX.
Dowlatshahi, S. 1992. Product design in a concurrent engineering environment: an optimization approach, Journal of Production Research, Vol. 30 (8), pp. 1803-1818.
Durairaj, S. and Ong, S. 2002. Evaluation of Life Cycle Cost Analysis Methodologies, Corporate Environmental Strategy, Vol. 9, No. 1, pp. 30-39.
DOD Guide LCC-1, DOD Guide LCC-2, DOD Guide LCC-3. 1998. Life Cycle Costing Procurement Guide, Life Cycle Costing Guide for System Acquisitions, Life Cycle Costing Guide for System Acquisitions, Department of Defense, Washington, D.C.
Ebeling, C. 1997. Reliability and Maintainability Engineering, McGraw-Hill Companies, USA.
Elsayed, E.A. 1982. Reliability Analysis of a container spreader, Microelectronics and Reliability, Vol. 22, No. 4, pp. 723-734.
A.B. Skjerve
Institute for Energy Technology, Norway
ABSTRACT: Industrial accidents, explosions and fires have a depressingly familiar habit of re-occurring, with similar if not identical causes. There is a continual stream of major losses that are commonly ascribed to poor operating and management practices. The safety risks associated with modern technological enterprises make it pertinent to consciously monitor the risk level. A comprehensive approach in this respect is being taken by the Petroleum Safety Authority Norway (PSA)'s program "Trends in Risk Levels: Norwegian Continental Shelf". We analyse the publicly available data provided by this program using the Duffey-Saull Method. The purpose of the analysis is to discern the learning trends and to determine the learning rates for construction, maintenance, operation and administrative activities in the North Sea oil and gas industry. The outcome of this analysis allows risk predictions, and lets workers, management and safety authorities focus on the most meaningful trends and high-risk activities.
INTRODUCTION
Table 1. Norway offshore injury data, 1996-2005.

Year | Manhours, Mh | Injuries, n | AccMh (10^6) | N* | Entropy, H | Injury rate/Mh
1996 | 4,670,117 | 145 | 4.670117 | 0.088633 | 0.27047 | 31.04847
1997 | 4,913,477 | 141 | 9.583595 | 0.181884 | 0.266685 | 28.69658
1998 | 4,967,799 | 133 | 14.551394 | 0.276167 | 0.258794 | 26.77242
1999 | 4,418,068 | 117 | 18.969462 | 0.360016 | 0.241637 | 26.48216
2000 | 4,696,224 | 121 | 23.665686 | 0.449144 | 0.246107 | 25.76538
2001 | 5,168,486 | 110 | 28.834172 | 0.547236 | 0.233505 | 21.28283
2002 | 5,506,589 | 103 | 34.340761 | 0.651744 | 0.224957 | 18.70486
2003 | 5,827,360 | 90 | 40.168122 | 0.762339 | 0.207881 | 15.44438
2004 | 6,248,973 | 54 | 46.417095 | 0.880937 | 0.150437 | 8.64142
2005 | 6,273,504 | 59 | 52.690599 | 1 | 0.159497 | 9.404633
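The entropy column in Table 1 can be reproduced directly from the injury counts: each year's contribution is -p ln p, with p the year's share of all recorded injuries. A short sketch (the variable names are ours):

```python
import math

# Annual injury counts, Norway offshore 1996-2005 (Table 1):
injuries = [145, 141, 133, 117, 121, 110, 103, 90, 54, 59]
total = sum(injuries)  # 1073 injuries in all

# Entropy contribution per year, H_i = -p_i * ln(p_i):
H = [-(n / total) * math.log(n / total) for n in injuries]
```

The first and last values come out as roughly 0.27047 and 0.159497, matching the tabulated Entropy, H column.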
Figure 1. Offshore injury rates per accumulated million manhours (Rate/Mh) for UK major injuries and Norwegian drilling, construction, maintenance and administration activities, with fitted exponential learning curves: Rate (Norway) = 52e^(-0.014Mh); Rate = 41e^(-0.0271Mh), R^2 = 0.8883; Rate = 21e^(-0.0289Mh), R^2 = 0.8865.
Figure 2. Information entropy, -p ln p, versus non-dimensional experience N*: offshore injuries in the UK and Norway (UK major-injury and 3-day entropies, US NMACs entropy), and the Norway offshore injury data 1996-2005 with the theory curve (a = 1) and the fit E = 0.28 - 0.11N*, R^2 = 0.478.
REFERENCES
Aase, K., Skjerve, A.B.M. & Rosness, R. 2005. Why Good
Luck has a Reason: Mindful Practices in Offshore Oil and
Gas Drilling. In: S. Gherardi & D. Nicolini (eds.), The
Passion for Learning and Knowing. Proceedings of the
6th International Conference on Organizational Learning
and Knowledge, vol. 1: 193-210. Trento: University of
Trento e-books.
DEST, 2005. The website of the Department of Education, Science and Training of Australia. http://www.dest.gov.au/sectors/training_skills/policy_issues_reviews/key_issues/nts/glo/ftol.htm#Glossary_-_L (Accessed January 2008)
DiBella, A.K. 2001. Learning practices: Assessment and
Action for Organizational Improvement. Upper Saddle
River, N.J: Prentice-Hall.
Duffey, R.B. & Saull, J.W. 2002. Know the Risk, First Edition,
Boston, USA, Butterworth and Heinemann.
Duffey, R.B. & Saull J.W. 2004. Reliability and Failures of
Engineering Systems Due to Human Errors, Proc. The
First Cappadocia Int. Mechanical Engineering Symposium (CMES-04), Cappadocia, Turkey.
Duffey, R.B. & Saull, J.W. 2007. Risk Perception in Society: Quantification and Management for Modern Technologies, Proc. Safety and Reliability Conference, Risk
Reliability & Societal Safety (ESREL 2007), Stavanger,
Norway, 24-27 June.
Duffey, R.B. & Saull, J.W. 2008. Risk Management Measurement Methodology: Practical Procedures and Approaches
for Risk Assessment and Prediction, Proc. ESREL 2008
and 17th SRA Europe Annual Conference, Valencia,
Spain, 22-25 September.
Ebbinghaus, H. 1885. Memory: A Contribution to Experimental Psychology. (Translated from: "Über das Gedächtnis"). http://psy.ed.asu.edu/classics/Ebbinghaus/index.
htm (Accessed January 2008).
Hoholm, T. 2003. Safety Culture in the Norwegian
Petroleum Industry: Towards an Understanding of interorganisational culture development as network learning.
Arbeidsnotat nr. 23/2003. Oslo: Center for Technology,
Innovation and Culture, University of Oslo.
IAEA, 1991. Safety Culture, Safety Series no. 75-INSAG-4,
Vienna: International Atomic Energy Agency.
IAEA, 2002. Recruitment, Qualification and Training of
Personnel for Nuclear Power Plants, Safety Guide no.
NS-G-2.8, Vienna: International Atomic Energy Agency.
Jaynes, E.T. 2003. Probability Theory: The Logic of Science, First Edition, Edited by G.L. Bretthorst, Cambridge
University Press, Cambridge, UK.
Johnston, R. & Hawke, G. 2002. Case studies of organisations with established learning cultures, The National
Centre for Vocational Education Research (NCVER),
Adelaide, Australia. http://www.ncver.edu.au/research/
proj/nr9014.pdf (Accessed January 2008)
Johnson-Laird, P. & Byrne, R., 2000. Mental Models Website. http://www.tcd.ie/Psychology/Ruth_Byrne/mental_
models/ (Accessed January 2008).
Moan, T. 2004. Safety of Offshore Structures, Second Keppel
Offshore and Marine Lecture, CORE Report No. 200504,
National University of Singapore.
Suresh Goyal
Bell Labs Ireland, Dublin, Ireland
ABSTRACT: We describe a technique for estimating production test performance parameters from typical data that are available from past testing. Gaussian mixture models are used because the data are often multi-modal, and the inference is implemented via a Bayesian approach. An approximation to the posterior distribution of the Gaussian mixture parameters is used to facilitate a quick computation time. The method is illustrated with examples.
INTRODUCTION
Many manufacturing processes for electronic equipment involve a complex sequence of tests on components and the system. Statistical imperfect test and
repair models can be used to derive the properties of
the sequence of tests, such as incoming quality, rate
of false positives and negatives, and the success rate
of repair, but require the value of these properties
to be specified. It is recognized that optimal testing
strategies can be highly sensitive to their value (Dick,
Trischler, Dislis, and Ambler 1994).
Fortunately, manufacturers often maintain extensive databases from production testing that should
allow these properties to be estimated. In this paper
we propose a technique to compute the properties of
a test from the test measurement data. It is a robust
technique that is designed to be applied automatically,
with no intervention from the test engineer in most
cases.
This learning process is not as straightforward as
it might appear at first for several reasons. First, the
properties of the test that interest us are not what are
recorded in the database. Rather, the test measurements themselves are what are stored. We address this
by defining a model for the measurements, introduced
in (Fisher et al. 2007a; Fisher et al. 2007b), that implicitly defines the test properties; we fit measurement
data to the model following the Bayesian approach
which then gives us estimates of the test properties.
The Bayesian approach has the advantage that it correctly propagates the uncertainty in the measurement model parameter estimates, as inferred directly from the data, through to the estimates of the test model parameters.
MODEL
The measurement model
A unit is tested by measuring the value of one property of the unit. The true value of the property being
measured is x. The value is measured with error
and we denote the observed value as y. A Gaussian
Figure 1.

p_x(x) = Σ_{k=1}^{K} p_k (1/√(2π σ_k^2)) exp[−(x − μ_k)^2 / (2σ_k^2)]   (1)

p_{y|x}(y | x, s^2) = (1/√(2π s^2)) exp[−(y − x)^2 / (2s^2)],  −∞ < y < ∞   (2)
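Under Eqs. (1)-(2) the observed value y is itself a Gaussian mixture, each component widened by the measurement error: y | k ~ N(μ_k, σ_k² + s²). The probability that a measurement falls inside test limits [L, U] can therefore be sketched with normal CDFs; the helper names and the two-mode example are ours, not the paper's:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pass_probability(p, mu, sigma, s, L, U):
    """P(L <= y <= U) when x follows the Gaussian mixture of Eq. (1)
    and y = x + Gaussian measurement error with std dev s (Eq. 2);
    each mixture component of y is then N(mu_k, sigma_k^2 + s^2)."""
    total = 0.0
    for pk, mk, sk in zip(p, mu, sigma):
        sd = math.sqrt(sk * sk + s * s)
        total += pk * (phi((U - mk) / sd) - phi((L - mk) / sd))
    return total

# Hypothetical two-mode example: a "good" and a "bad" population of units.
Py = pass_probability(p=[0.9, 0.1], mu=[0.0, 3.0], sigma=[1.0, 1.0],
                      s=0.5, L=-2.0, U=2.0)
```

With a single component, no measurement error, and limits at ±1.96 standard deviations, the pass probability recovers the familiar 95% figure.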
STATISTICAL INFERENCE
A Bayesian approach is adopted, so the goal is to compute the distribution p(GI, GG, BB, BG | data). The likelihood is most easily written in terms of θ, s², BG and also x, the true value of the unit used in the one-off tests. The deterministic relationships in Section 2.2 between the measurement model parameters (θ, s²) and the test model parameters then allow us to compute the posterior distribution of GI, GG, BB and BG from that of θ, s² and BG.
3.1 The likelihood

L = ∏_{j=1}^{m} p_{y|x}(z_j | x, s^2) ∏_{j=1}^{n} p_y(y_j | θ, s^2)   (3)

3.2

p(BG | second-pass data, GI, GG, BB)   (4)

and

P_y = P(L ≤ y ≤ U | θ, s^2)   (5)

Ps = [GG(1 − GG)GI + GG · BG · BB(1 − GI) + (1 − BB)(1 − BG)BB(1 − GI)] · [(1 − GG)GI + BB(1 − GI)]^(−1)   (7)
EXAMPLES

p(p_1, . . . , p_n)   (11)
the censored data. The analysis now gives posterior means and (2.5%, 97.5%) probability intervals as: GI = 0.950 (0.871, 0.997); GG = 0.939 (0.880, 0.993); BB = 0.697 (0.380, 0.962); BG = 0.55 (0.13, 0.99). We see that the posterior distributions have considerably higher variance, reflecting the loss of information from the censoring.
5 CONCLUSIONS
ABSTRACT: The main objective of this study is to define reliability requirements in relation to environmental impacts in areas that are critical in terms of environmental resource sensitivity. Nowadays many enterprises in Brazil are evaluated in this area against many different environmental requirements, but the environmental impact of the enterprise or the group of enterprises as a whole is not assessed, nor are their future modifications.
When the number of enterprises in a specific area increases, the risk of accidents also rises; in other words, reliability gets worse over time. Unfortunately, most cases in Brazil take into account neither the entire enterprise risk impact in a specific area nor the decrease of reliability over time.
The methodology in question takes into account all the critical events causing a serious environmental impact for each enterprise in the same area over time. By taking into account all relevant events, it is possible to produce the Environmental Block Diagram, which covers all related events and their probability of occurring over time. This means that a failure in any block represents an accident with potential environmental impact.
The environmental reliability target is associated with the tolerable number of environmental impacts in a specific area, taking into account all events over a specific period of time. The tolerable number of accidents depends on social perception and environmental sensitivity.
For this analysis a Monte Carlo simulation has to be carried out over a period of time in order to define the Environmental Availability and Environmental Reliability related to the number of tolerable events. Moreover, in the case of any enterprise modification or an increase in the number of enterprises, a new block will be input into the Environmental Block Diagram and the new results will be assessed.
INTRODUCTION
ANALYSIS METHODOLOGY
1 Environment Sensitivity
2 Critical Events
4 Simulation
5 Critical analysis
6 Conclusion
Figure 1.
ENVIRONMENTAL SENSITIVITY
Figure 2. Environmental sensitivity.

Table 1. Environmental sensitivity ranking (High, Medium High, Medium, Low Medium, Low) by habitat type: Saltmarsh; Sheltered Rocky Intertidal; Special Use (endangered species/marine protected areas); Seagrass Meadow (low intertidal to shallow subtidal); Open Water Enclosed Bays and Harbours; Exposed Sand/Gravel/Cobble Intertidal; Exposed Rocky Intertidal; Kelp Forest Subtidal; Open Water, Non-enclosed Nearshore and Offshore; Soft Bottom to Rocky Subtidal; with comments on marine and wetland habitats.
ENVIRONMENTAL RELIABILITY

R(t) = 1 − ∫_0^t f(t) dt

where f(t) is the failure density and R(t) the reliability.

Figure 3. Reliability, R(t) = 1 − F(t).
The concept takes into account the number of environmental impacts and their duration, as shown in the equation below (D(t) = availability):

D(t) = Σ_{i=1}^{n} t_i / Σ_{i=1}^{n} T_i

Figure 4. Event frequency (failure rate, oil spill in Japan; Gumbel-2P, RRX SRM MED FM, F = 16/S = 0).

Figure 5. Failure rate line (oil spill in Japan; Gumbel-2P, RRX SRM MED FM, F = 16/S = 0).
ENVIRONMENTAL AVAILABILITY

CASE STUDY

Figure 6.

Figure 7.

Figure 8.

Table 3. Blowout events.

Blowout | Probability
Underground Blowout | 1.50E-04
Submarine Blowout | 1.80E-04
Surface Blowout | 1.62E-06
Blowout (total) | 3.33E-04
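The "Blowout" total in Table 3 is consistent with the three events acting in series: for a series block diagram the probability that any event occurs is one minus the product of the individual survival probabilities, which for rare events is close to the simple sum. A quick check:

```python
# Per-drilling blowout probabilities (Table 3):
p_underground = 1.50e-4
p_submarine = 1.80e-4
p_surface = 1.62e-6

# Series combination: probability that at least one blowout occurs.
p_any = 1.0 - (1.0 - p_underground) * (1.0 - p_submarine) * (1.0 - p_surface)
```

This reproduces the tabulated total of about 3.33E-04.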
The main objective of the methodology is to estimate the period of time in which the analyzed event happens, based on its PDF characteristics. In this direct simulation it is possible to analyze the life of the system as a whole, to find out which event occurred and when it happened, and to get an idea of its direct and indirect impacts and of the environmental reliability and availability for a specific period of time. For instance, a group of events involving drilling activity in the Campos Basin is represented in an Environmental Block Diagram where each block takes into account blowout events over the lifetime of the drilling. Each drilling activity has three catastrophic events, which are represented in Figure 7. The events are in series in the block diagram because the occurrence of any event represents an environmental impact on the system as a whole.
The next step is to represent the group of enterprises located in the same area and to estimate environmental availability and reliability limits. In the same way that the events are in series within an individual enterprise, groups of enterprises are also in series. Figure 8 represents the drilling group of the Campos Basin. The same block is also used for the analysis.
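A minimal Monte Carlo sketch of this series Environmental Block Diagram, assuming six drilling blocks that each carry the total per-drilling blowout probability of Table 3 over the lifetime considered (the trial count and seed are arbitrary choices of ours):

```python
import random

P_BLOWOUT = 3.33e-4   # total blowout probability per drilling (Table 3)
N_DRILLS = 6          # drilling blocks in series (Figure 8)
TRIALS = 200_000

random.seed(1)
impacts = 0
for _ in range(TRIALS):
    # An environmental impact occurs if any block in the series fails.
    if any(random.random() < P_BLOWOUT for _ in range(N_DRILLS)):
        impacts += 1

# Fraction of simulated lifetimes with no environmental impact.
env_reliability = 1.0 - impacts / TRIALS
```

The estimate should land close to the analytical series value (1 − 3.33e-4)^6 ≈ 0.998; a full implementation would also sample event times from the fitted PDFs to accumulate downtime for the availability D(t).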
Table 4.

 | Env Avail | Env Unavail | Env Reliab | CE | ETC | EIT
1 | 99.99% | 0.01% | 96.80% | 0.03 | 100% | 23
5 | 99.92% | 0.08% | 82.00% | 0.20 | 100% | 141
10 | 99.84% | 0.16% | 64.40% | 0.40 | 100% | 283
20 | 99.69% | 0.31% | 46.00% | 0.74 | 99.47% | 523
40 | 99.42% | 0.58% | 25.20% | 1.41 | 99.44% | 1017
80 | 98.81% | 1.19% | 6.40% | 2.86 | 98.77% | 2082
Figure 9. Failure rate line, multiple 10 wells.

Figure 10. Availability and underground blowout (RS FCI).
CONCLUSION

Environmental reliability is a powerful tool to support decision making related to environmental protection, defining limits for enterprises with reliability requirements and numbers of enterprises, and establishing the most vulnerable areas for the location of emergency teams.
Unlike the usual methodology, it is possible to consider a group of enterprises and critical events in a simulation over a specific period of time. The difficulty is obtaining historical data about events and defining environmental limits for specific areas.
In the case of emergency teams it is assumed that they will be in the correct position and that all procedures and actions will happen correctly, avoiding any delay. In real life this does not happen; therefore the specific model has to be evaluated taking into account the performance of emergency teams.
The remarkable point about historical data is understanding why accidents happen and whether the data fit well enough to be used in the current simulation case.
In this case study only drilling activities which affected a specific area were taken into account, but in addition all enterprises and the lifetimes of platforms and ships also have to be considered.
The next step in the case study is to consider all enterprise data which have an influence on environmental sensitivity in the area in question. Because of the environmental effects of other enterprises, drilling limits will probably be reduced in order to keep the number of catastrophic accidents lower than one during the lifetime in question.
REFERENCES

Cassula, A.M. 1998. Evaluation of Distribution System Reliability Considering Generation and Transmission Impacts, Master's Dissertation, UNIFEI, Nov. 1998.
API (American Petroleum Institute). 1985. Oil spill response: Options for minimizing ecological impacts. American Petroleum Institute Publication No. 4398. Washington, DC: American Petroleum Institute.
Ballou, T.G., R.E. Dodge, S.C. Hess, A.H. Knap and T.D. Sleeter. 1987. Effects of dispersed and undispersed crude oil on mangroves, seagrasses and corals. American Petroleum Institute Publication No. 4460. Washington, DC: American Petroleum Institute.
Barber, W.E., L.L. McDonald, W.P. Erickson and M. Vallario. 1995. Effect of the Exxon Valdez oil spill on intertidal fish: A field study. Transactions of the American Fisheries Society 124: 461-476.
Calixto, Eduardo & Schimitt, William. Análise RAM do projeto Cenpes II. ESREL 2006, Estoril.
Calixto, Eduardo. The enhancement availability methodology: a refinery case study. ESREL 2006, Estoril.
Calixto, Eduardo. Sensitivity analysis in critical equipments: the distillation plant study case in the Brazilian oil and gas industry. ESREL 2007, Stavanger.
Calixto, Eduardo. Integrated preliminary hazard analysis methodology regarding environment, safety and social issues: The platform risk analysis study. ESREL 2007, Stavanger.
Calixto, Eduardo. The safety integrity level as HAZOP risk consistence: the Brazilian risk analysis case study. ESREL 2007, Stavanger.
Calixto, Eduardo. The non-linear optimization methodology model: the refinery plant availability optimization case study. ESREL 2007, Stavanger.
Calixto, Eduardo. Dynamic equipments life cycle analysis. 5th International Reliability Symposium SIC 2007, Brazil.
ABSTRACT: Research suggests that public support for natural hazard mitigation activities is distorted by choice anomalies. For that reason, preparedness measures are often implemented to an insufficient extent. From an ex-post perspective the lack of mitigation might result in the necessity of a risk transfer. On the other hand, based on the conclusions from the Samaritan's Dilemma, the anticipation of relief in case of a disaster event might induce individuals to diminish ex-ante protection activities.
In order to analyze the existence of this phenomenon in an international context, this paper discusses the impact of expected foreign aid in case of a natural disaster on the level of disaster mitigation activities. The results suggest that foreign aid in previous disaster years implies future ex-post charity and thus crowds out risk-management activities. The paper concludes with propositions on the enlightenment about natural hazards aiming to counter the crowding-out of prevention.
INTRODUCTION

of the people relevant as well as for the reconstruction of infrastructure. It is meant to limit financial as well as physical losses. On the other hand, based on the conclusions from the Samaritan's Dilemma, the anticipation of foreign aid in case of a disaster might induce people to diminish ex-ante protection activities (Buchanan 1975, Coate 1995), especially in the case of natural disasters, where the probability of occurrence is relatively low and the danger is underestimated. The question arising is whether the provision of foreign aid can induce adverse effects in the form of a reduction of ex-ante mitigating activities.
In order to build up sustainable development strategies and to support less-developed countries, an analysis of a country's dependency on foreign aid and its vulnerability to large-scale disasters is essential. Both a theoretical and an empirical analysis of this relationship are still missing.
The remainder of the paper is structured as follows: The next section presents the theoretical background of our analysis. The framework incorporates the ideas of two fields in the literature on the economics of natural hazards. The first strand consists of the ideas of choice heuristics in decisions on low-probability-high-loss events based on the theories by Kahneman & Tversky (1974). The second area touched on is the phenomenon of charity hazard, an adaptation of the Samaritan's Dilemma (Buchanan 1975, Coate 1995) to insurance decisions for disaster events. In Section 3 the connection between charity hazard and foreign aid is illustrated. In Section 4 preliminary descriptive statistics and a summary of the results of the study conducted by Raschky & Schwindt (2008) on
Table 1.

No. | Year | Type | No. Killed
1 | 1995 | Famine | 610,000
2 | 2004 | Tsunami | 165,708
3 | 2004 | Tsunami | 35,399
4 | 1999 | Flood | 30,000
5 | 2003 | Earthquake | 26,796
6 | 2003 | Heat wave | 20,089
7 | 2001 | Earthquake | 20,005
8 | 2003 | Heat wave | 19,490
9 | 1999 | Earthquake | 17,127
10 | 2004 | Tsunami | 16,389
11 | 2003 | Heat wave | 15,090
12 | 1998 | Hurricane | 14,600
13 | 1999 | Cyclone | 9,843
14 | 2003 | Heat wave | 9,355
15 | 2004 | Tsunami | 8,345
16 | 1995 | Earthquake | 5,297
17 | 1998 | Earthquake | 4,700
18 | 1996 | Meningitis | 4,346
19 | 1996 | Meningitis | 4,071
20 | 1997 | Typhoon | 3,682
21 | 1998 | Flood | 3,656
22 | 1998 | Hurricane | 3,332
23 | 1995 | Meningitis | 3,022
24 | 1998 | Cyclone | 2,871
25 | 1996 | Flood | 2,775
26 | 2004 | Hurricane | 2,754
27 | 2003 | Heat wave | 2,696
28 | 2004 | Flood | 2,665
29 | 1998 | Heat wave | 2,541
30 | 2002 | Unknown | 2,500
31 | 1998 | Earthquake | 2,323
32 | 1997 | Flood | 2,311
33 | 1997 | Meningitis | 2,274
34 | 2003 | Earthquake | 2,266
35 | 1999 | Earthquake | 2,264
36 | 1998 | Tsunami | 2,182
37 | 1997 | Diarrhoeal | 2,025
38 | 1994 | Flood | 2,001
39 | 2002 | Respiratory | 2,000
40 | 1995 | Earthquake | 1,989

OECD Total Aid / OECD Emerg. Aid:
4.72; 1,886.34; 540.29; 56.58; 111.73; 0.83; 14.49; 83.37; 4.02; 36.9; 1,186.97; 79.29; 822.19; 2,427.62; 264.65; 8.48; 222.41; 1,165.99; 35.37; 17.4; 581.95; 9.4; 102.1; 51.94; 218.5; 1,256.58; 2,358.79; 427.31; 131.25; 877.98; 3,423.18; 291.57; 59.24; 5.08; 1.54; 5.87; 14.02; 34.79; 2.14; 2.64; 7.62; 56.94; 291.57; 877.98; 1,291.94; 102.1; 47.85; 198.82; 317.2; 56.94; 2.64; 654.39; 59.24; 34.27; 3.96; 16.39; 182.18; 640.23; 1,984.16; 18.31; 11.54; 1.7
THEORETICAL BACKGROUND
Choice anomalies
EMPIRICS
geographical distribution of mortality risk from earthquakes worldwide. The darker the grid cells, the larger the mortality risk on a 10-point scale. Earthquakes are already used in the existing economic literature to analyse the effects of income and institutions (Kahn 2005), as well as of inequality (Anbarci et al. 2005), on a country's vulnerability.
In a subsequent study based on this paper, Raschky & Schwindt (2008) augment the existing theoretical and empirical literature on the determinants of a nation's vulnerability to earthquakes by identifying the effect of foreign aid. They collected data on the magnitude and outcomes of 335 major earthquakes occurring worldwide between 1980 and 2002. Table 2 presents the geographical distribution of the earthquakes in the sample.
Raschky & Schwindt (2008) apply standard OLS and 2SLS techniques to estimate the effect of foreign aid on the consequences of earthquakes. They regress
Table 2. Geographical distribution of the earthquakes in the sample.

Country           No. of quakes    Country               No. of quakes
Algeria           8                Italy                 15
Argentina         2                Japan                 19
Australia         3                Kenya                 1
Bangladesh        5                Malawi                1
Bolivia           3                Mexico                16
Brazil            1                Nepal                 1
Chile             5                New Zealand           1
China, P. Rep.    52               Nicaragua             1
Colombia          11               Pakistan              15
Costa Rica        7                Panama                1
Cuba              1                Papua New Guinea      7
Ecuador           7                Peru                  18
Egypt             3                Philippines           10
El Salvador       4                South Africa          3
Fiji              2                Spain                 1
Greece            16               Taiwan (China)        2
Guatemala         7                Tanzania, Uni. Rep.   3
Honduras          1                Turkey                22
India             14               United Kingdom        1
Indonesia         44               United States         18
Iran, Isl. Rep.   43               Venezuela             5
CONCLUSION
The aim of this work was to illustrate that the altruistic idea of foreign aid can result in a worsening of the situation of aid-receiving countries if the incentives created by aid distort the decision about prevention. This change for the worse increases the vulnerability of recipient countries and can have disastrous effects if the country indeed experiences a natural hazard event.
In order to counter the reduction in prevention, the causes of the distortion should be identified and combated. The analysis of the present work suggests three features of the decision process which could be improved.
First, the five heuristics imply that it is important to inform people about the risk of natural hazards and the possible consequences. To make sure that people are not overwhelmed by the information, it seems plausible to limit the information to the risk types a society is really confronted with. The misperception of the probability of occurrence is one reason why people, even without the consideration of foreign aid, do not support mitigation activities to a sufficient extent. Education about hazard risks might be a first step to counteract the misperception.
Second, the survey results of Flynn et al. (1999) suggest that the reduction in prevention can result from taxpayers' mistrust of governmental decisions. Politicians could try to reduce this lack of trust by gathering more information about taxpayers' preferences and, where possible, considering this information in their decision process.
The first two points just described are valid for the overall problem of insufficient support for risk mitigation activities. In order to reduce the adverse effects of foreign aid in particular, as a direct result of our approach, the way foreign aid is distributed should be reconsidered. Important questions which need to be answered in this context are whether to favour ex-ante or ex-post financial aid, and whether to choose financial or in-kind transfers. These questions are hardly new to the foreign aid literature, but the connection to natural hazards has been missing so far.
REFERENCES
Alesina, A.F. & Dollar, D. 2000. Who gives foreign aid to whom and why? Journal of Economic Growth 5 (1): 33–63.
Anbarci, N., Escaleras, M. & Register, C.A. 2005. Earthquake fatalities: The interaction of nature and political economy. Journal of Public Economics 89 (9–10): 1907–1933.
Browne, M.J. & Hoyt, R.E. 2000. The demand for flood insurance: Empirical evidence. Journal of Risk and Uncertainty 20 (3): 291–306.
Buchanan, J.M. 1975. The Samaritan's dilemma. In E.S. Phelps, ed., Altruism, Morality and Economic Theory: 71–85. Russell Sage Foundation.
Coate, S. 1995. Altruism, the Samaritan's dilemma, and government transfer policy. American Economic Review 85 (1): 46–57.
Dilley, M., Chen, R.S., Deichmann, U., Lerner-Lam, A.L., Arnold, M., Agwe, J., Buys, P., Kjekstad, O., Lyon, B. & Yetman, G. 2005. Natural Disaster Hotspots: A Global Risk Analysis. Disaster Risk Management Series No. 5. The World Bank and Columbia University.
Flynn, J., Slovic, P., Mertz, C.K. & Carlisle, C. 1999. Public support for earthquake risk mitigation in Portland, Oregon. Risk Analysis 19: 205–216.
Frey, B.S. & Schneider, F. 1986. Competing models for international lending activity. Journal of Development Economics 20 (2): 225–245.
Frey, B.S. & Eichenberger, R. 1989. How important are choice anomalies for economics? Jahrbuch der Nationaloekonomie und Statistik 206 (2): 81–101.
Kahn, M.E. 2005. The death toll from natural disasters: The role of income, geography and institutions. The Review of Economics and Statistics 87 (2): 271–284.
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: Thousands of ski enthusiasts surge into alpine ski resorts every year to indulge their passion and seek sensation and thrill. This sensation-seeking by individual skiers, an increasing density on the slopes and a heterogeneity of skills result in around 60,000 accidents per annum on Austrian ski runs alone. The recent trend towards adventure-seeking leisure activities and the accompanying adverse effects for individuals and society call for appropriate adaptation strategies by public policymakers and the relevant private industries alike. An appropriate approach, from an economic perspective, is to internalize the external costs of private risk-taking behaviour. We use data on 71 Austrian ski resorts and 3,637 reported ski accidents to estimate the marginal effect of one additional skier on the number of accidents, using standard OLS methods. These estimates form the basis for the evaluation of the external costs of risk-taking behaviour in alpine skiing. Based on this evaluation, instruments can be designed to reduce the socially sub-optimal demand for risk-taking activities.
INTRODUCTION
U^A = U^A(X_1, . . ., X_m, Y_1)   (1)
This means that the utility of an individual A is composed of activities (X_1, . . ., X_m), which are exclusively under his control or authority, and a single activity, Y_1, which is by definition under the control of a second individual. It is assumed that A will maximize his utility with respect to X_1, . . ., X_m and the externality Y_1, and that he modifies the values of X_1, . . ., X_m as Y_1 changes so as to remain in a state of equilibrium.
For the economic analysis of the external effect of risk-taking we use the model employed by Turvey (1963), expanded to the case of accidents introduced by Williamson, Olson and Ralston (1967). Let us suppose we have two classes of individuals, A and B, who interact in a non-market situation. In the case of risk-taking this means that B experiences physical injury as a result of his interaction with A. In other words, the class
(2)
[Equation (2), the accident regression, could not be recovered from the extraction.] ACC is the number of (severe) accidents in ski resort j, SKIERS is the number of skiers (in thousands)
Table [number lost]. OLS estimates of the accident regression. [Coefficient layout garbled in extraction; reported values include estimates 2.088, 12.960, 34.221, 17.205 and 11.419 with standard errors 0.627, 0.228, 3.233, 8.185, 1.944 and 0.103, for two specifications with N = 71 and R² = 0.735 and 0.676.]
Notes: OLS estimates; dependent variable: Number of accidents (Model 2); robust standard errors. *, **, *** indicate significance at the 1, 5 and 10% levels.
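The estimation step can be sketched as an ordinary least squares fit. The resort-level variables and controls of the actual study are not recoverable here, so the data below are synthetic stand-ins; only the mechanics of estimating the marginal effect of additional skiers on accidents are illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 71-resort sample (the real variables and
# controls are not recoverable from the paper): accidents rise roughly
# linearly with the number of skiers, measured in thousands.
skiers = rng.uniform(50, 500, size=71)
accidents = 5.0 + 0.3 * skiers + rng.normal(0.0, 10.0, size=71)

# OLS fit of ACC_j = b0 + b1 * SKIERS_j + e_j via least squares
X = np.column_stack([np.ones_like(skiers), skiers])
(b0, b1), *_ = np.linalg.lstsq(X, accidents, rcond=None)

# b1 estimates the marginal effect of 1,000 additional skiers on accidents
print(f"intercept = {b0:.2f}, marginal effect = {b1:.3f}")
```

The fitted slope is the quantity the paper multiplies by accident costs to value the externality; the real study adds resort-level controls and robust standard errors.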
Figure 3. [Diagram of the demand curve D and the marginal private, external and social cost curves (MCprivate, MCexternal, MCsocial), with quantities Q* and Q1 and cost levels R1 and R2.]
CONCLUDING REMARKS
To our knowledge this is the first attempt to empirically investigate the concept of risk-taking in high
ABSTRACT: As a specific and new industry, the biofuel industry combines a new way of processing agricultural and animal raw materials with rather classical industrial processes, in a rather chemical-looking plant.
A controversy has arisen about the stakes of the biofuel industry: sustainability and the indirect effects of allocating agricultural raw materials to the production of industrial commodities, so the debate reaches unexpected levels.
The presentation will tackle:
Requirements for application drafting and public debate in France for a specific plant transforming animal fats into biofuels,
Development of the debate, management of application-specific arguments and of industry- or policy-specific discussions to reach a final licensing decision,
Adaptation of public regulation to such a wide span of public expectations about an environmental and energy issue,
Considerations on environment management inside project management.
INTRODUCTION
2 APPLICATION DRAFTING
2.1 Project content
2.2 Environmental requirements
2.3 Licensing steps
[Figure: timeline of the licensing process, showing the biofuel national programme, the tax exemption application (January–September) and the tax exemption decision (2006), followed by the public inquiry and the opinion of the commissaire enquêteur.]
The main steps are as follows: filing of the application, public inquiry and opinion of a commissaire enquêteur, then presentation to a département-level commission, which is also consulted; its composition may vary in accordance with the type of facility in question, and notably includes representatives of the State.
[Figure, continued: the commission for environment and technological hazards reviews the licensing conditions (durations of 8 months and 4 months are indicated), and the process ends with a ruling of the préfet granting or refusing the license.]
That presentation is done by the classified-installations inspectorate, and is backed by provisions for licensing: conditions for supervision, discharge limits, and ways and means of informing the public and stakeholders.
2.4.1 Filing the application
When the application is sent to the préfet, the applicant is normally declared entitled to proceed with its project within a two-month time span. Here, perhaps due to the innovative nature of the project, this entitlement step lasted four months.
The content of the application lies mainly in the following key documents, beyond the administrative identification of the applicant and of its financial and technical reliability:
An environmental impact statement,
A risk study,
A declaration of compliance with occupational health and safety rules,
A declaration of the future use of the land (land-planning outlook), in order to foresee the conditions for land remediation at the end of operations.
2.4.2 Risk study
Hazards are mainly linked to explosion and fire.
The risk study described the range of influence of the various types of explosions likely to occur.
For the various classes of accident examined, it appeared that the effects would be confined inside the plant precinct.
In any case, solutions were described and an organization was shaped in order to guarantee an acceptable prevention level.
Due to the accuracy with which the several hazard scenarios had been studied, no questions were raised by the public.
Table 2. Discharge of the sewage plant (500 m³/day) into the River Meuse at low flow (3.3 m³/s): assessed impact and resulting quality.

Parameter      Concentration out of    Assessed       River Meuse quality at   Resulting quality at      River Meuse quality
               sewage plant (mg/l)     impact (mg/l)  Bras-sur-Meuse (mg/l)    discharge point (mg/l)    objective (mg/l)
TSS            150                     0.263          11.5                     11.763                    < 25
COD            450                     0.789          10.33                    11.119                    20–25
BOD5           450                     0.789          2.12                     2.909                     3–5
Nitrogen NTK   300                     0.526          3.67                     4.196                     –
Phosphorus     15                      0.026          0.11                     0.136                     0.1 to 0.3
However, some people, considering the neighbouring rendering plant, feared ill-smelling phenomena; those concerns were dropped after a simple presentation of the differences between the processes (chemical processing of fats versus a thermal process applied to raw animal waste).
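The assessed-impact figures in the discharge table are consistent with a simple mass-balance dilution of the 500 m³/day discharge into the Meuse at its low flow of 3.3 m³/s, with the small discharge flow neglected in the denominator. A sketch reproducing the table values under that assumption:

```python
# Mass-balance dilution of the plant discharge into the River Meuse at low
# flow; reproduces the "assessed impact" column of the discharge table.
river_flow = 3.3               # m^3/s, River Meuse at low level
discharge_flow = 500 / 86400   # m^3/s (500 m^3/day)

discharge = {"TSS": 150, "COD": 450, "BOD5": 450, "NTK": 300, "P": 15}   # mg/l
river = {"TSS": 11.5, "COD": 10.33, "BOD5": 2.12, "NTK": 3.67, "P": 0.11}

for param, c_d in discharge.items():
    impact = c_d * discharge_flow / river_flow   # added concentration, mg/l
    resulting = river[param] + impact            # quality at discharge point
    print(f"{param}: impact {impact:.3f} mg/l, resulting {resulting:.3f} mg/l")
```

For TSS this gives an impact of 0.263 mg/l and a resulting quality of 11.763 mg/l, matching the table.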
a coolant and fluid heating boiler (10 MW), emitting particulates, CO, SO2 , NOx in compliance with
general regulation and discharge level,
a scrubber emitting methanol.
LESSONS LEARNED
5.1 Innovation management
5.2 Project management
Improvements
ABSTRACT: We consider the problem of modelling incident escalation in explosive storage. Escalation is
a key stage in the process of transition between minor and major incidents, and therefore is worthy of careful
consideration. Our aim is to develop a model that can be used to inform the risk assessment process, by providing
supporting information (in this case its contribution to incident escalation) about the properties of the explosive
to be stored. Scarcity of data (especially for model validation) is often a significant difficulty in these problems,
so we focus on developing a model that is less data-dependent than a full physical model.
INTRODUCTION
987
Table 1. Recent incidents in explosive storage. [Incident descriptions lost in extraction.]

Date, Location
11 July, Afghanistan
21 July, Russia
8 June, Vietnam
3 March, Guinea

Source: BBC.
ESCALATION MODEL

P(y | x) = Σ_{z=0}^{y} C(x, z) p₊^z (1 − p₊)^{x−z} C(N−1−x, y−z) p₋^{y−z} (1 − p₋)^{N−1−x−y+z}   (1)

[Equation (2), involving the coefficients C_j, is garbled in the extraction.]

R(t) = exp(−∫₀ᵗ h_i(y) dy)   (3)

R(t) = exp(−Σ_{r=1}^{T} ∫₀ᵗ h_r(y − s_r) dy)   (4)

with the hazard specialised to detonation and burning contributions:

∫₀ᵗ h_d(y − s_r) dy   (5)

∫₀ᵗ h_b(y − s_r) dy   (6)

[Equation (7) is garbled in the extraction.]
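Equation (1) has the form of the distribution of the sum of two independent binomial counts: z ignitions among the x units already involved (each with probability p₊) and y − z among the remaining N − 1 − x units (each with probability p₋). Under that reading (the symbol names below are assumptions), a minimal sketch that also checks the probabilities form a proper distribution:

```python
from math import comb

def escalation_pmf(y, x, N, p_plus, p_minus):
    """P(y further units initiated | x units involved): convolution of a
    Binomial(x, p_plus) count and a Binomial(N - 1 - x, p_minus) count."""
    total = 0.0
    for z in range(0, min(x, y) + 1):
        if y - z > N - 1 - x:
            continue  # more ignitions than remaining units: impossible
        total += (comb(x, z) * p_plus**z * (1 - p_plus)**(x - z)
                  * comb(N - 1 - x, y - z) * p_minus**(y - z)
                  * (1 - p_minus)**(N - 1 - x - y + z))
    return total

# Sanity check: summing over all feasible y (0 .. N-1) gives 1.
N, x = 10, 3
s = sum(escalation_pmf(y, x, N, 0.4, 0.05) for y in range(N))
print(round(s, 10))  # 1.0
```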
SIMULATION
Figure 1. [R(t) versus t (t from 0 to about 3.5) for escalation intensities 0, 10⁻², 10⁻¹ and 10, and for K = 0.5, 1.2, 1.6, 2.1, 2.7, 3.2 and 3.5.]
Figure 2. [R(t) versus t (t up to 4000) for K = 0.5.]
CONCLUSIONS
REFERENCES
APT Research (2002). Risk-based explosives safety analysis. Technical report, Department of Defense Explosives Safety Board.
Azhar, S.S., M.A. Rahman, M. Saari, K.F. Hafiz, and D. Suhardy (2006). Risk assessment study for storage explosive. American Journal of Applied Sciences 3(1), 1685–1689.
Bienz, A. (2003). The new Swiss model for the probability of an explosion in an ammunition storage. Technical report, NATO/Bienz, Kummer & Partner Ltd.
Goldfarb, I. and R. Weber (2006). Thermal explosion: modern developments in modelling, computation and applications. Journal of Engineering Mathematics 56, 101–104.
Hardwick, M.J., N. Donath, and J.W. Tatom (2007). User's Reference Manual for Safety Assessment for Explosive Risk (SAFER) software.
HSE (1999). The Control of Major Accident Hazards Regulations 1999. The Health and Safety Executive.
HSE (2001). Reducing risks, protecting people. Norwich, UK: The Health and Safety Executive.
Kapila, A.K., R. Menikoff, J.B. Bdzil, S.F. Son, and D.S. Stewart (1999). Two-phase modelling of DDT in granular materials: Reduced equations. Technical Report LA-UR-99-3329, Los Alamos National Laboratory.
Mader, C.L. (1998). Numerical Modeling of Explosives and Propellants. Boca Raton, Florida: CRC Press.
Merrifield, R. and P. Moreton (1998). An examination of the major-accident record for explosives manufacturing and storage in the UK. Journal of Hazardous Materials A63, 107–111.
Pfitzer, B., M. Hardwick, and T. Pfitzer (2004). A comparison of QRA methods used by DOD for explosives and range safety with methods used by NRC and EPA. Technical report, APT Research.
Stewart, M.G. and M.D. Netherton (2008). Security risks and probabilistic risk assessment of glazing subject to explosive blast loading. Reliability Engineering and System Safety 93, 627–638.
USDoD (1998). Environmental exposure report. Online: http://www.gulflink.osd.mil/du/.
Valone, S.M. (2000). Development of a shocked materials-response description for simulations. Technical Report LA-13665-MS, Los Alamos National Laboratory.
van Donegen, P., M.J. Hardwick, D. Hewkin, P. Kummer, and H. Oiom (2000). Comparison of international QRA models on the basis of the set-up and results of the joint UK/Australian 40 tonne donor/acceptor trial. Technical report, APT Research/NATO.
Wu, C. and H. Hao (2006). Numerical prediction of rock mass damage due to accidental explosions in an underground ammunition storage chamber. Shock Waves 15(1), 43–54.
ABSTRACT: The widely used brominated flame retardant Deca-BDE is currently the subject of a controversial debate regarding its environmental persistence and human health safety. As an effective inhibitor of fire in numerous plastics and textiles, it also has a measured and well-characterized risk benefit to society. An inconclusive ten-year EU risk assessment has fuelled an ongoing search for evidence of harm. Within this context, accurate measurement of the chemical is necessary to determine whether risk reduction efforts are reducing environmental burdens and whether the chemical could degrade to more toxic components. An analysis of interlaboratory comparison data has shown that many competent laboratories remain unable to accurately determine trace levels of the compound in a variety of media. This inability to measure a chemical risk is problematic and the associated uncertainty does little to support effective risk management.
1 INTRODUCTION
1.1 Polybrominated diphenyl ethers
Uncertainty is an unavoidable part of any measurement. It is a parameter that defines the range of values that could reasonably be attributed to the measured quantity, and when uncertainty is evaluated and reported in a specified way it indicates the level of confidence that the value actually lies within that range. The uncertainty is a quantitative indication of the quality of the result: it answers the question of how well the result represents the value of the quantity being measured. Because uncertainty clouds analytical results, it is one of the most critical parameters characterizing an analytical determination.
Table 1. Deca-BDE biomonitoring surveys. [Row alignment lost in extraction.]
Surveys (subjects): WWF 2003 (general public in the UK); WWF 2004a (seven families in the UK: 33 subjects, being 6 grandmothers, 7 mothers, 6 fathers and 14 children); WWF 2004b (MPs from 13 European Union countries); WWF 2004c (general public in 17 European countries); SFT 2005 (pregnant women in Russia and Norway: 10 pregnant women, 12 of unknown gender, 50 females, 7 mothers).
Numbers of positive detections reported include 11 (7%), 7 (21%), 3 (21%), 16 (34%), 8 (100%), 18 (46%), 0 (0%), 5 (42%) and 40 (80%).
INTERLABORATORY COMPARISONS
CONCLUSIONS
ACKNOWLEDGEMENTS
The authors would like to thank The Leverhulme Trust
for funding research on characterising the nature of
risk migration processes. www.riskmigration.org
REFERENCES
Bidleman, T.F., S. Cussion and J.M. Jantunen. 2004. Interlaboratory study on toxaphene in ambient air. Atmos. Environ. 38: 3713–3722.
Covaci, A., S. Voorspoels and L. Ramos. 2007. Recent developments in the analysis of brominated flame retardants and brominated natural compounds. J. Chromat. A 1153: 145–171.
De Boer, J. 1999. Capillary gas chromatography for the determination of halogenated micropollutants. J. Chromatography A 843: 179–198.
De Boer, J., C. Allchin, R. Law, B. Zegers and J.P. Boon. 2001. Method for the analysis of PBDEs in sediment and biota. Trends in Analytical Chemistry 20: 591–602.
De Boer, J. and W.P. Cofino. 2002. First world-wide interlaboratory study on PBDEs. Chemosphere 46: 625–633.
De Boer, J. and W. Wells. 2006. Pitfalls in the analysis of brominated flame retardants in environmental, human
ABSTRACT: Accelerated testing is an effort to fail products quickly in order to obtain failure-related data. This paper presents an approach for implementing accelerated testing in industry. It introduces test samples, stressors and performance factors as basic elements. Sample size calculation, failure acceleration strategies and statistical concepts are elaborated as the main auxiliary tools. In order to prepare an estimation of service life to compare with test data, different types of field failure data with their expected results are studied. Finally, available and possible qualitative and quantitative accelerated testing methods are presented according to the required results. They are categorized into design modification methods, extrapolation methods, usage-rate methods, performance-testing-related methods, and comparison methods.
INTRODUCTION
AF = L_u / L_a   (1)
where L_u is the life under normal usage conditions and L_a the life under accelerated conditions.
[Equation (2) is garbled in the extraction.]
where (Age)_1 and (PF)_1 are the age and performance factor of the first aged sample, (Age)_2 and (PF)_2 are those of the second aged sample, and L is the estimated life of the samples.
2.5 Statistical analysis
Cumulative Distribution Functions (CDF): Consider some discrete life data, as presented in Figure 1, obtained as in subsection 2.1. By choosing a specified probability distribution function (PDF) such as the Weibull, normal, or lognormal distribution, and by using the graphical method, the least squares method or maximum likelihood estimation (e.g. ReliaSoft (2001)), the unknown parameters of the PDF and consequently the CDF can be obtained. For a product, the CDF diagram for the test lies to the left of the field CDF diagram, and the two are assumed to be parallel, as shown in Figure 2. In fact, AF is the relation between these two diagrams and is simply expressed as:
AF = (B10)_field / (B10)_test = (B50)_field / (B50)_test   (3)
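For parallel Weibull CDFs (same shape parameter β, scales differing by the acceleration factor), any B-life ratio in equation (3) gives the same AF. A minimal sketch with assumed parameter values:

```python
from math import log

def b_life(eta, beta, x):
    """B_x life of a Weibull(eta, beta): time by which x% of units have
    failed, from inverting F(t) = 1 - exp(-(t/eta)**beta)."""
    return eta * (-log(1.0 - x / 100.0)) ** (1.0 / beta)

beta = 2.0                          # common shape (parallel CDFs)
eta_test, eta_field = 100.0, 500.0  # assumed scales, i.e. AF = 5

af_b10 = b_life(eta_field, beta, 10) / b_life(eta_test, beta, 10)
af_b50 = b_life(eta_field, beta, 50) / b_life(eta_test, beta, 50)
print(round(af_b10, 9), round(af_b50, 9))  # both approximately 5
```

Because the shape parameter is shared, the ratio reduces to the scale ratio, which is why the B10 and B50 ratios in equation (3) coincide.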
Confidence level: If the target is to achieve a reliability of R in time t, the problem is to obtain the minimum required sample size N for testing such that, if the number of maximum allowable failed samples is r (so that N − r samples must survive the test),

Σ_{x=0}^{r} C(N, x) R(t)^{N−x} (1 − R(t))^{x} ≤ 1 − CL   (4)

where CL is the required confidence level.
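Equation (4) can be inverted numerically: increase N until the chance of seeing r or fewer failures, under reliability exactly R, drops to 1 − CL. A sketch:

```python
from math import comb

def min_sample_size(R, CL, r):
    """Smallest N such that observing at most r failures among N tested
    units demonstrates reliability R with confidence CL (equation 4)."""
    N = r + 1
    while sum(comb(N, x) * R**(N - x) * (1 - R)**x
              for x in range(r + 1)) > 1 - CL:
        N += 1
    return N

# With zero allowed failures this reduces to the success-run formula
# N >= ln(1 - CL) / ln(R): demonstrating R = 0.9 at 90% confidence.
print(min_sample_size(0.9, 0.9, 0))  # 22
```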
2.6 Sample size
Acceleration strategies
[Equation (5) is garbled in the extraction.]
Figure 3. [Caption garbled in extraction.]

AF = (Usage)_a / (Usage)_u   (6)

[Equation (7) is garbled in the extraction.]
(L_u)_2 / (L_u)_1 = (B10)_2 / (B10)_1   (8)

L(T) = C e^{B/T}   (9)

AF(T_a) = e^{B(1/T_u − 1/T_a)}   (10)

[Equation (11) is garbled in the extraction.]
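Equations (9) and (10) fit together: taking the ratio of Arrhenius lives at the use and accelerated temperatures cancels the constant C and yields the acceleration factor. A sketch with an assumed activation constant B:

```python
from math import exp

def arrhenius_life(T, B, C=1.0):
    """Arrhenius life model L(T) = C * exp(B / T), T in kelvin (eq. 9)."""
    return C * exp(B / T)

def acceleration_factor(T_use, T_acc, B):
    """AF = L(T_use) / L(T_acc) = exp(B * (1/T_use - 1/T_acc)) (eq. 10)."""
    return exp(B * (1.0 / T_use - 1.0 / T_acc))

B = 5000.0                   # kelvin; depends on the activation energy
T_use, T_acc = 323.0, 373.0  # 50 degC use, 100 degC accelerated test

af = acceleration_factor(T_use, T_acc, B)
ratio = arrhenius_life(T_use, B) / arrhenius_life(T_acc, B)
print(round(af, 6), round(ratio, 6))  # the two agree by construction
```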
Figure 4. [caption lost in extraction]
Figure 5. [caption lost in extraction]
Figure 6. [caption lost in extraction]
PF-related methods
Comparison methods
If the aim of AT is to judge comparatively between different aspects of design and material, in order to select the best option, it is considered a comparison method. The product may also be tested at different levels of stressors in order to determine the effect of each stressor on life and to remove its effect. It is based on performing the same test on all available products in order to obtain their life data. Such data can be drawn on a CDF diagram in order to decide which one is better from the viewpoint of a particular life. The diagrams are not necessarily parallel and may intersect each other. Accordingly, for example, if a diagram has the highest B10, there is no guarantee that it also has the highest B50.
CONCLUSION
ABSTRACT: This paper describes a new method for decomposing a coherent fault tree Φ(X) of n variables into a set of simpler functions. Each simple function is analyzed independently to determine the minimal cut sets (MCS) and the probabilistic parameters of interest, i.e. the top event failure probability and the importance measures of basic events. These results are suitably re-combined to obtain the results at Φ(X) level, such that i) the MCS of Φ(X) are the union of the MCS of the simple functions and ii) the probabilistic parameters at Φ(X) level are obtained by simply combining the results of the probabilistic analysis of all simple functions. It is shown that, in applying the cut-off techniques for determining the most important MCS to a decomposed fault tree, the truncation error can be correctly determined. An example is described to show the potential of the proposed method.
INTRODUCTION
Figure 1a. Function Top.
Figure 1b. [caption lost in extraction]

Φ(X) = ⋁_{i=1}^{2^m} (BC_i(S) ∧ φ_i(Y))   (1)

where:
X = (x_1, x_2, . . ., x_n); S = (x_1, x_2, . . ., x_m);
Y = (x_{m+1}, x_{m+2}, . . ., x_n); X = S ∪ Y; S ∩ Y = ∅.
Table 1. Boundary conditions BC_i(S) for m = 3, ordered by weight w.

i    x1  x2  x3
1    0   0   0
2    1   0   0
3    0   1   0
4    0   0   1
5    1   1   0
6    1   0   1
7    0   1   1
8    1   1   1

Table 2. Simple functions φ_i(Y) = Φ(X)|BC_i(S), i = 1, . . ., 8.
Pr(Φ(X)) = Σ_{i=1}^{2^m} Pr(BC_i(S)) Pr(φ_i(Y))   (2)

Boundary condition        Operator AND (G = E ∧ H)   Operator OR (G = E ∨ H)
BC(E) = 1 or Failed       G = H                      G = 1
BC(E) = 0 or Good         G = 0                      G = H
1010
that is to say:

Φ(x_1 = 0, . . ., x_m = 0, x_{m+1}, . . ., x_n) = 0

and

Pr_LB(Φ(X)) ≤ Pr(Φ(X)) ≤ Pr_UB(Φ(X))   (3)
Moreover:
The group with w = 1 is composed of m rows; the MCS of the corresponding simple functions φ_i(Y) (i = 1, . . ., m) are also MCS of Φ(X).
If φ_i(Y) = Φ(X)|BC_i(S) = 1, then the variables of BC_i set to 1 constitute a MCS of Φ(X) of order w. Consequently, for all BC_k > BC_i it is useless to construct the corresponding simple functions, because they would contain only non-minimal cut sets;
If the generic BC_i is of order w (2 ≤ w ≤ m) and its w variables at 1 do not combine with each other in Φ(X) (i.e. they do not belong to any MCS of Φ), then it is useless to construct the corresponding simple function, because it would contain only non-minimal cut sets;
If the generic BC_i is of order w (2 ≤ w ≤ m) and its w variables at 1 combine with each other in Φ(X), then the corresponding simple function has to be generated. Such a function may or may not contain MCS of Φ(X). It is easy to recognise that the MCS of φ_i(Y) can be obtained by minimising φ_i(Y) only with respect to those φ_j(Y) for which BC_i > BC_j. The following pseudo code describes the main operations for constructing φ_i(Y).
If BC_i > BC_j then
  if φ_i ⊆ φ_j then φ_i = 0
  else
    φ_i = φ_i |E_j = 0
    φ_j = φ_j |C_j = 0
    φ_i = φ_i − φ_j
  endif
endif
If φ_i(Y) ⊆ φ_j(Y) then all cut sets of φ_i(Y) are non-minimal cut sets of Φ(X), as BC_i subsumes BC_j. The check whether φ_i(Y) ⊆ φ_j(Y) can be performed either on the fault tree structure of φ_i(Y) and φ_j(Y), or on their BDD. In the first case the check can be limited to gates that are modified by the use of the BC set, which are those gates on the path from events in S to Top.
When φ_i(Y) ⊄ φ_j(Y), the MCS of φ_i(Y) are those that do not verify φ_j(Y). The complexity of this new function can be reduced by removing the sets E_j and C_j. E_j is the set of MCS of order 1 of φ_j(Y) that can be
RECOMBINATION OF RESULTS

4.1 Pr(Φ(X))

Pr(Φ(X)) = Σ_{i=1}^{Ns} Pr(BC_i) Pr(φ_i)   (4)

where

Pr(BC_i) = Π_{j=1}^{m} Pr(x_j)^{BC_i,j} (1 − Pr(x_j))^{1−BC_i,j}   (5)

4.2 Importance measures of basic events

For a basic event x_j, the Birnbaum importance is

I_j^B = Pr(⋁_{i=1}^{Ns} BC_i φ_i(1_j, Y)) − Pr(⋁_{i=1}^{Ns} BC_i φ_i(0_j, Y))   (6)

and therefore:

I_j^B = Σ_{i=1}^{Ns} Pr(BC_i) I_{ji}^B   (7a)

[Equations (7b)–(11), which express RAW_j = Pr(Φ(1_j, X))/Pr(Φ(X)), RRW_j = Pr(Φ(X))/Pr(Φ(0_j, X)) and the criticality importance I_j^C = I_j^B Pr(x_j)/Pr(Φ(X)) in terms of the per-function measures RAW_ji, RRW_ji, I_ji^C and the upper bound Pr_UB(Φ), are garbled in the extraction.]
Figure 2. [caption lost in extraction]

EXAMPLE

Table 3. [Boundary conditions on M8, M7, E20 (and E15) defining the six simple functions of the example; the binary entries are garbled in the extraction.]
Function 3. BC3 : M8 = 1; M7 = E20 = E15 = 0,
hence G_TOP3 = Top|BC3 :
Table 4. Results of the simple functions.

Function   Pr(BC)    Pr(φ_i)        Pr(BC) · Pr(φ_i)
1          0.1       2.782 × 10⁻²   2.782 × 10⁻³
2          0.1       0.1981         1.981 × 10⁻²
3          0.1       0.1089         1.089 × 10⁻²
4          0.1       3.439 × 10⁻²   3.439 × 10⁻³
5          0.01      0.0            0.0
6          0.01      1              1.0 × 10⁻²

Table 5. [Results with different boundary-condition probabilities.]

Function   Pr(BC)        Pr(φ_i)        Pr(BC) · Pr(φ_i)
1          7.29 × 10⁻²   2.782 × 10⁻²   2.028 × 10⁻³
2          7.29 × 10⁻²   0.1981         1.444 × 10⁻²
3          7.29 × 10⁻²   0.109          7.946 × 10⁻³
4          7.29 × 10⁻²   3.439 × 10⁻²   2.507 × 10⁻³
5          8.10 × 10⁻³   0.0            0.0
6          8.10 × 10⁻³   1              8.10 × 10⁻³

Table 6. MCS of the simple functions. [Assignment of MCS to function numbers partly lost in the extraction.]

E20 E2 E1; E20 E2 M1; E20 E2 M3; E20 E2 E9 M2;
M7 M3; M7 M1; M7 E9 M2;
M8 M3; M8 E9 M2;
E15 E1 M2; E15 M1 M2; E15 M2 M3; E15 E9 M2;
M7 M8.
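The recombination of equation (4) can be checked directly against the Table 4 values: weighting each simple-function probability by its boundary-condition probability and summing gives the top event probability.

```python
# Equation (4): Pr(Phi(X)) = sum_i Pr(BC_i) * Pr(phi_i), using the
# Pr(BC) and Pr(phi_i) columns of Table 4 for the six simple functions.
pr_bc  = [0.1, 0.1, 0.1, 0.1, 0.01, 0.01]
pr_phi = [2.782e-2, 0.1981, 0.1089, 3.439e-2, 0.0, 1.0]

pr_top = sum(b * p for b, p in zip(pr_bc, pr_phi))
print(round(pr_top, 6))  # 0.046921
```

The result is simply the sum of the last column of Table 4, which is how the decomposition lets each simple function be analysed in isolation.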
CONCLUSIONS
This paper presented a method to decompose a coherent fault tree Φ(X), based on the events of a path set S, into simple fault trees φ_i(Y). The probabilistic parameters at Φ(X) level have been obtained by recomposing the results of the probabilistic analysis of all φ_i(Y) and BC_i(S). The MCS of Φ(X) are the union of the MCS of the φ_i(Y). The number of simple fault trees and their dimensions (number of events and gates) depend on the number m of events making up the path set S and on the structure of the fault tree to be decomposed. Generally, the greater m is, the smaller the φ_i(Y) are.
A decomposition method is useful for the analysis of complex fault trees when the available FTA tool is not able to perform the analysis without applying (logical and/or probabilistic) cut-off techniques. In these cases the decomposition allows the analyst to overcome the difficulties of estimating the truncation error.
The fault tree to be decomposed has been supposed to be already modularised. Indeed, modularisation is in itself a decomposition method. Some interesting considerations could be drawn by considering the different types of modules. A type I module contains only non-repeated basic events; type II contains type I modules as well as all occurrences of one or more repeated basic events; finally, type III contains the previous two types of modules and all occurrences of one or more repeated sub-trees. It would be interesting to study the integration of the third modularisation type with the decomposition method.
It should be stressed that the proposed method is sensitive to the algorithms for determining the BC-table. Experiments will be undertaken to compare the different alternatives on real fault trees.
At the time of writing the software implementation has not yet been completed. The available partial experimental results are promising. It is important to complete the software and to perform the experimental verification, in order to verify the capability of the method to decompose very large fault trees.
If the experimental results are positive, the method can easily be extended to deal also with non-coherent fault trees.
ACKNOWLEDGEMENTS
This work was carried out by the Joint Research Centre within the action: Assessment Methodologies for
Nuclear Security of the European Commission Seventh multi-annual Framework Programme, as direct
research, in the area of nuclear security.
Thanks are due to G.G.M. Cojazzi, G. de Cola, and
L. Fabbri for the support given.
REFERENCES
Bouissou M, Bruyere F, Rauzy A., (1997) BDD based fault
tree processing: a comparison of variable ordering heuristics, C. Guedes Soares, ed., Proceedings of ESREL 97
Conference, Vol. 3, Pergamon Press.
G. Maschio
University of Padova-Dipartimento di Principi e Impianti di Ingegneria Chimica, Padova, Italy
ABSTRACT: Directive 99/92/EC (ATEX 137) deals with the safety and health protection of workers potentially exposed to explosive atmospheres. The application of the ATEX Directive requires the assessment of the specific risks due to the presence of potentially explosive atmospheres. The potential development of explosive atmospheres is generally typical of industries classified as major hazard, but it is also possible in other industries where flammable materials are handled. Since the assessment of the risk due to the formation of explosive atmospheres is required in both cases and is fundamental for safety in the workplace, a quantitative approach has been proposed. The objective of the paper is to describe the main aspects of a quantitative methodology based on probabilistic risk assessment, starting from a detailed knowledge of the analyzed system.
METHODOLOGICAL APPROACH
Figure 1. Flow chart of the methodology: description of the workplace and the activity; identification of the substances and of the emission sources; definition of the cloud formation; calculation of the release probability; classification of hazardous areas; identification of the presence of ignition sources and estimation of ignition probabilities; calculation of the probability of presence of workers; measures of prevention and protection.
The first step of the procedure is the classification of hazardous areas, which takes into account the characterization of all the emission sources of flammable substances. In order to evaluate the risk, the calculation of the effects of the explosion, the assessment of the probability of presence of ignition sources, and the number of exposed workers are also necessary.
The flow chart for the evaluation of the risk due to the presence of explosive atmospheres, shown in Figure 1, can be divided into two parts: the census phase and the risk evaluation phase.
RISK INDEX
According to the flow chart described above, it is possible to calculate the risk due to the presence of explosive atmospheres. The risk can be expressed through the risk index Rae, given by equation (1):

Rae = pe · pa · D    (1)

c. Gases
This class comprises gases such as liquefied petroleum gas or methane. These are usually stored under pressure in cylinders and bulk containers. Uncontrolled releases can readily ignite or cause the cylinder to become a missile.
d. Solids
Solids include materials such as plastic foam, packaging, and textiles, which can burn fiercely and give off dense black smoke, sometimes poisonous.
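A minimal sketch of equation (1); reading pe as the probability of formation of the explosive atmosphere, pa as the probability of effective ignition, and D as the expected damage is inferred from the surrounding text, and the input values are hypothetical:

```python
def risk_index(pe: float, pa: float, damage: float) -> float:
    """Risk index Rae = pe * pa * D, following equation (1).

    pe, pa: probabilities in [0, 1] (interpretation inferred, see text);
    damage: expected damage D in whatever units the analysis uses.
    """
    for p in (pe, pa):
        if not 0.0 <= p <= 1.0:
            raise ValueError("probabilities must lie in [0, 1]")
    return pe * pa * damage

# Hypothetical values: frequent atmosphere, rare ignition, moderate damage.
rae = risk_index(pe=0.05, pa=0.001, damage=4.0)
```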
Table 1. Classification of hazardous zones.

Zone    | Explosive atmosphere probability P (year-1)
Zone 0  | P > 0.1
Zone 1  | 0.1 >= P > 1 E-03
Zone 2  | 1 E-03 >= P > 1 E-05
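The zone thresholds of Table 1 can be expressed as a small classifier; treating probabilities at or below 1E-05 per year as non-hazardous is an inference from the table, not stated explicitly:

```python
def atex_zone(p_per_year: float) -> str:
    """Map the yearly probability of an explosive atmosphere to a zone,
    using the thresholds of Table 1."""
    if p_per_year > 0.1:
        return "Zone 0"
    if p_per_year > 1e-3:
        return "Zone 1"
    if p_per_year > 1e-5:
        return "Zone 2"
    return "non-hazardous"  # below the Zone 2 threshold (inferred reading)
```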
1. Hot surfaces;
2. Flames and hot gases or particles;
3. Mechanical sparks;
4. Electrical network;
5. Wandering electrical currents;
6. Cathodic protection;
7. Static electricity;
8. Lightning;
9. Heating cables;
10. Radio-frequency (RF) waves from 10^4 Hz to 3 x 10^12 Hz;
11. Electromagnetic waves with frequency from 3 x 10^11 Hz to 3 x 10^15 Hz;
12. Ionizing radiation;
13. Ultrasound;
14. Adiabatic compression and shock waves;
15. Exothermic reactions (including the spontaneous ignition of dusts).
In order to quantify the probability of effectiveness of each type of ignition source, it is possible to use literature data or, preferably, specific studies for the plant under analysis. Assessment methods such as historical analysis, fault tree analysis, FMEA or FMECA, or specific analytic procedures can be used.
4.4
Explosion consequences
The equivalent mass is obtained from the heat of combustion HC and the mass of the cloud mcloud:

mTNT = (HC · mcloud) / (4.196 × 10^6)    (2)

The shock wave model was developed by TNO and is described in the Yellow Book (1997). It is also known as the expanding piston or piston blast model and allows the peak overpressure and the duration time of the explosion to be estimated. This model is based on the assumption that the cloud is a hemisphere of unburnt gas of volume V0 which
CONCLUSIONS
The proposed methodology makes it possible to identify the critical points in the system (technical and procedural) and to reduce the exposure of the workers as far as possible. In particular, the quantitative approach avoids underestimating the risks for the exposed workers.
The quantitative evaluation of the ATEX risk improves the effectiveness of the prevention and protection interventions adopted by the company. A tool for the risk assessment has been created; it allows the calculations to be repeated and permits a faster verification of possible improvements of the risk prevention and mitigation measures for the system under analysis.
Finally, the quantitative analysis permits a detailed study of the accidental scenarios due to small releases. For industries at major risk a detailed analysis of such events is necessary, because they can represent potential sources of domino effects.
REFERENCES
Benintendi, R., Dolcetti, G., Grimaz, S. & Iorio, C.S. 2006. A comprehensive approach to modelling explosion phenomena within the ATEX 1999/92/CE directive. Chemical Engineering Transactions, Proceedings of the 2nd International Conference on Safety & Environment in Process Industry CISAP2.
Chems Plus v.2 Manual. 1991.
Council Directive 89/391/EEC on the introduction of measures to encourage improvements in the safety and health of workers at work. 1989.
Directive 94/9/EC on the approximation of the laws of the Member States concerning equipment and protective systems intended for use in potentially explosive atmospheres. 1994.
Directive 1999/92/EC on minimum requirements for improving the safety and health protection of workers potentially at risk from explosive atmospheres. 1999.
Decreto Legislativo 626/94 riguardante il miglioramento della salute e sicurezza dei lavoratori durante il lavoro. 1994.
D.P.C.M. 25 febbraio 2005 recante le Linee Guida per la predisposizione del piano d'emergenza esterna di cui all'articolo 20, comma 4, del decreto legislativo 17 agosto 1999, n. 334. 2005.
Decreto Legislativo 233/03. Regolamento recante norme per l'attuazione della direttiva 99/92/CE relativa alle
ABSTRACT: The work deals with risk assessment theory. A unitary risk algorithm is elaborated. The algorithm is based on parallel curves. The basic risk curve is of hyperbolic type, obtained as the product of the probability of occurrence of a certain event and its impact (expected loss). Section 1 presents the problem formulation. Section 2 contains some specific notations and the mathematical background of the risk algorithm. A numerical application based on the risk algorithm is presented in Section 3. Risk evolution in time is described in Section 4. A generalization of the curve of risk and other possibilities to define the risk measure are given in the last part of the work.
PROBLEM STATEMENT
Specific definitions
There are many Probability-Impact couples generating the same risk R, defining quadrangles of the same area, as illustrated in Figure 1.
If the vertices of such quadrangles that do not lie on the axes are linked through a continuous line, the result is a hyperbolic curve C, named the curve of risk. This curve allows the differentiation between the acceptable risk (Tolerable, T) and the unacceptable one (Non-Tolerable, NT).
1.2
Table 1. Classes of probability.

Class of probability | Frequency of occurrence (probability)
1 | 5 × 10^-6
2 | 5 × 10^-5
3 | 10^-4
4 | 2 × 10^-4
5 | 1.4 × 10^-3
6 | 3 × 10^-3
7 | 6 × 10^-3 h^-1
8 | 4 × 10^-2 h^-1
9 | 1 h^-1
MATHEMATICAL SUPPORT
Consider points (xi, yj) ∈ R+ × R+, i = 1, …, m; j = 1, …, n, and the basic risk curve

y = c/x.

The curve Γ parallel to it at constant normal distance h has the parametric equations

X(x, c, h) = x + ch / √(c² + x⁴),
Y(x, c, h) = c/x + hx² / √(c² + x⁴),    (1)

where h = d(M, N) is the constant distance between any two corresponding points M, N on the common normal (for the proof, see Popoviciu & Baicu 2007). For a given point (a, b) of the parallel curve, the parameter x satisfies

x⁴ − ax³ + bcx − c² = 0,    (2)

and the distance is recovered as

h = ((a − x)/c) √(c² + x⁴).    (3)

Remark 1. It is very difficult to eliminate the variable x from (1) and to obtain the explicit equation Y = F(X) for the curve Γ. In numerical applications we set c = R1.
In order to draw the parallel curves, say by a Mathcad Professional subroutine, we use the tangent and normal slopes

mtan = −c/a₁²,  mnorm = a₁²/c.
Br+1 = (normal in B1) ∩ (y = yn), Br+1(ar+1, br+1); C3 with h3 = 2h, and so on, up to Cr with hr = (r − 1)h.    (5)
R′j = Rj, B′j = Bj. The parallel curves are given by

Xj = x + c hj / √(c² + x⁴),  Yj = c/x + x² hj / √(c² + x⁴),  j = 2, …, 6.

Remark 3. In applications, the mathematical notations from Section 2 have the following meanings: x = 1, x = 2, …, x = 9 in Figure 2.
Table 2.

t | R1 | R2 | R3 | R4 | R5 | R6
0 | 9.00 | 13.37 | 18.56 | 24.37 | 31.40 | 39.07
1 | 24.40 | 36.26 | 50.34 | 66.63 | 85.15 | 105.95
2 | 66.18 | 98.34 | 136.51 | 180.70 | 230.92 | 287.33
3 | 179.48 | 266.68 | 370.20 | 490.04 | 626.22 | 779.20
4 | 486.71 | 723.20 | 1004.00 | 1329.00 | 1698.00 | 2113.00

Figure 2.
The risks at t = 0 are R1 = 9 and
R2 = a2 b2 = 13.373,
R3 = a3 b3 = 18.564,
R4 = a4 b4 = 24.573,
R5 = a5 b5 = 31.402,
R6 = a6 b6 = 39.037.
Generally, the value of risk increases in time. We consider the value Rj from step 7.2 as the value corresponding to time t = 0. For time t we propose the formula

Rj(t) = Rj e^(at),  a > 0,  j = 1, 2, …

ER = Pi Ij Pi+1 Ij+1. Example: ER = 4 · 4 · 5 · 5 = 400 or ER = 4 · 3 · 5 · 4 = 240.
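Assuming a growth constant a ≈ 1 (inferred from the ratio of successive rows of Table 2, not stated in this excerpt), the proposed growth law can be sketched as:

```python
import math

def risk_at(r0: float, t: float, a: float = 1.0) -> float:
    """Time evolution R_j(t) = R_j * exp(a * t), a > 0; a = 1 is a fitted
    assumption, not a value given in the text."""
    if a <= 0:
        raise ValueError("a must be positive")
    return r0 * math.exp(a * t)

r1_t1 = risk_at(9.0, 1.0)   # about 24.46, close to the tabulated 24.40
```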
Equations (8)-(9) involve the point N(X, Y) and the factor f(x)√(1 + f′²(x)) of the normal construction. The further risk measures are:

RIIIp(X, T) = M[((T − X)+)^p],  p ∈ N    (10)

D²(X) = σ²(X) = Var(X) = M[(m − X)²]    (11)

X̄ = (1/n) Σᵢ xᵢ,  Σᵢ pᵢ = 1    (12)

SD²(X) = S²(X) = SVar(X), so that RIV(X) = SVar(X) = M[((m − X)+)²]. An estimator of the semi-dispersion is

Ss²(X) = (1/(n − 1)) Σᵢ [(X̄ − xᵢ)+]²,    (13)

with RIV(X) = Ss²(X), while the sample dispersion is s²(X) = (1/(n − 1)) Σᵢ (X̄ − xᵢ)².
V. The Taguchi [6] measure of risk (used in industrial practice) is defined as

RT(X, T) = k[Var(X) + (m − T)²],  k > 0,    (14)

with the sample estimate

RT(X, T) = k[s² + (X̄ − T)²].    (15)
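The sample estimators above can be sketched as follows (the target T and weight k of the Taguchi measure are illustrative choices):

```python
def sample_stats(xs: list[float]) -> dict[str, float]:
    """Unbiased sample variance and semi-variance (downside deviations only),
    as in the estimators s^2 and Ss^2 above."""
    n = len(xs)
    mean = sum(xs) / n
    s2 = sum((mean - x) ** 2 for x in xs) / (n - 1)
    semi = sum(max(mean - x, 0.0) ** 2 for x in xs) / (n - 1)
    return {"mean": mean, "s2": s2, "semi": semi}

def taguchi_risk(xs: list[float], target: float, k: float = 1.0) -> float:
    """Taguchi risk estimate k * [s^2 + (mean - T)^2] (eq. (15))."""
    stats = sample_stats(xs)
    return k * (stats["s2"] + (stats["mean"] - target) ** 2)
```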
a. Consider a sample of 12 observations (values 15, 12, 11, 14, 18, 15, 5, 4, 5, 1, …), each with probability pᵢ = 1/12, giving X̄ = 8.917. The estimates of the dispersion and of risk measure (8) are s²(X) = 32.354 and RI(X) = 32.354.
b. The risk measure based on semi-dispersion is obtained as follows: Σᵢ [(X̄ − xᵢ)+]² = 184.73.
For the second sample, s²(X) = 100.72 and RI(X) = s²(X) = 100.72, while the semi-dispersion sum is Σᵢ [(X̄ − xᵢ)+]² = 520.008.
CONCLUSIONS
REFERENCES
Anderson, K. 2005. Intelligence-based threat assessments for information networks and infrastructures. A White Paper.
Baicu, Floarea & Baicu, A.M. 2006a. Audit and security of information systems. Victor Publishing House, Bucharest.
Baicu, Floarea & Baicu, A.M. 2006b. Risks associated to security of information systems. Hyperion University of Bucharest. Annals mathematics-informatics: 59-66. Bucharest.
Ball, D.J. & Floyd, P.J. 1998. Societal risks. Final report to HSE. HSE Books.
BS ISO/IEC 17799. 2005. Information technology. Security techniques. Code of practice for information security management.
Evans, W. Andrew. 2003. Transport fatal accidents and FN-curves: 1967-2001. Health & Safety Executive. HSE Books. United Kingdom.
Kendrick, Tom. 2003. Identifying and managing project risk: essential tools for failure-proofing your project. AMACOM. American Management Association.
Lerche, I. 2006. Environmental risk assessment: quantitative measures, anthropogenic influences, human impact. Berlin, Springer.
Panjer, H. Harry & Willmot, E. Gordon. 1992. Insurance risk models. Society of Actuaries. Actuarial Education and Research Fund. Library of Congress. USA.
Popoviciu, Nicolae & Baicu, Floarea. 2007. A new approach for an unitary risk theory. WSEAS Press. Proc. of the 7th WSEAS intern. conference on signal processing, computational geometry and artificial vision: 218-222. August. Athens.
Radulescu, M. 2006. Modele matematice pentru optimizarea investitiilor financiare. Editura Academiei Române.
http://en.wikipedia.org/wiki/Risk_assessment. 2008.
http://en.wikipedia.org/wiki/Risk. 2008.
ABSTRACT: General systems such as production plants and aircraft can be regarded as phased-mission systems, which perform several different functions depending on their operational stage. The definition of failure for a component also changes depending on the operation stage, and thus multiple failure modes must be considered for each component. Further, the occurrence of an accident in the system depends on the operation stage as well as on the total operation time. This paper proposes a novel, simple accident occurrence evaluation method for a phased-mission system composed of components with multiple failure modes. Firstly, based on the system accident conditions at each phase, the system accident occurrence condition at the present time is given analytically in terms of component state conditions. By evaluating the component state conditions, the system accident occurrence probability can be easily evaluated. An illustrative example of a simple redundant pump system shows the details and merits of the proposed method; the obtained results are compared with those obtained by the state transformation method.
INTRODUCTION
2.1 Assumptions
Let X(i, j, t) be a disjoint binary indicator variable: X(i, j, t) = 1 if component i is in state j at time t, and X(i, j, t) = 0 otherwise, where j = 0 denotes the normal state and j ≠ 0 a failure mode. Since the states are mutually exclusive and exhaustive, the indicators satisfy

Σⱼ X(i, j, t) = 1,    (2)

and the complement of an indicator is the sum of the indicators of all other states:

X̄(i, j, t) = Σ(k≠j) X(i, k, t).    (3)

Because failures are permanent within a mission, for t1 ≤ t2 the following relations hold:

X(i, 0, t1) X(i, 0, t2) = X(i, 0, t2),
X(i, j, t1) X(i, 0, t2) = 0  for j ≠ 0,
X(i, j1, t) X(i, j2, t) = 0  for j1 ≠ j2.

Applying De Morgan's laws (Hoyland & Rausand, 1994) to Eq. (1), the analogous relations among the complemented indicators, Eqs. (4)-(16), are obtained, including products over three time points t1 ≤ t2 ≤ t3.
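A toy trajectory can illustrate the indicator-variable identities; the failure time and mode numbering below are illustrative, not from the paper:

```python
def state_indicator(failure_time: float, mode: int):
    """Disjoint binary indicators X(j, t) for one component with a single
    permanent failure: mode 0 is 'normal', mode != 0 is the failure mode
    entered at failure_time (no repair within the mission)."""
    def X(j: int, t: float) -> int:
        current = 0 if t < failure_time else mode
        return 1 if j == current else 0
    return X

X = state_indicator(failure_time=3.0, mode=2)
modes = [0, 1, 2]
# Exactly one indicator equals 1 at any time (partition property).
ok_partition = all(sum(X(j, t) for j in modes) == 1
                   for t in (0.0, 2.9, 3.0, 10.0))
# No repair: failed (j != 0) at t1 excludes normal at a later t2.
ok_no_repair = all(X(2, t1) * X(0, t2) == 0
                   for t1 in (0.0, 5.0) for t2 in (5.0, 9.0) if t1 <= t2)
```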
Now we obtain the system accident occurrence condition at phase l. The condition above, Eq. (21), indicates whether the system is suffering the system accident at phase k; it does not always imply that the system accident occurs at phase k, because another system accident might have occurred at a previous phase. One of the most important characteristics is that the system accident condition at a later phase does not always lead to an immediate system accident even if it is satisfied. A system accident occurring at phase l requires that {no accident occurs before phase l} AND {an accident occurs at phase l}. Thus, the system accident occurrence condition at time t during phase l, denoted by the binary variable Ys(l, t), can be represented in terms of Y(k, t), k = 1, …, l as:

Ys(l, t) = [Π(k=1..l−1) Ȳ(k, t)] · Y(l, t)    (22)
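Equation (22) attributes the accident to the first phase whose condition holds; a minimal sketch, with the phase conditions given as 0/1 values at a fixed time (illustrative input):

```python
def system_accident_phase(Y: list[int]) -> list[int]:
    """Ys(l) = (no accident condition held in phases 1..l-1) AND
    (condition holds at phase l), following equation (22).
    Y lists the phase conditions Y(1..L) at a fixed time t."""
    out = []
    clear_so_far = 1          # product of complements over earlier phases
    for yk in Y:
        out.append(clear_so_far * yk)
        clear_so_far *= (1 - yk)
    return out

# The accident is attributed to the first phase whose condition holds.
ys = system_accident_phase([0, 1, 1])
```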
2.4
The binary variable Y(k, t) equals 1 if the system accident condition at phase k is satisfied at time t, and 0 otherwise. It is expressed through the minimal accident conditions C(k, m) of phase k as

Y(k, t) = ⋁ₘ ⋀((i,j)∈C(k,m)) X(i, j, t).    (24)

The failure probability of each mode follows from the transition rates λ(i, j, t). For j ≠ 0,

Q(i, j, t) = ∫₀ᵗ λ(i, j, τ) exp(−∫₀^τ λ(i, u) du) dτ,  where λ(i, t) = Σ(j=1..Mi) λ(i, j, t),    (25)-(27)

and for time-invariant rates this reduces to

Q(i, j, t) = (λ(i, j)/λ(i)) [1 − exp(−λ(i) t)],  where λ(i) = Σ(j=1..Mi) λ(i, j).    (28)-(30)
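For constant rates, the competing-risks form of Q(i, j, t) can be sketched as follows (the rate values are illustrative):

```python
import math

def mode_probability(rates: list[float], j: int, t: float) -> float:
    """Q(j, t) for constant transition rates: probability that the component
    has failed in mode j (1-based index) by time t,
    lambda_j / lambda_tot * (1 - exp(-lambda_tot * t))."""
    lam_tot = sum(rates)
    return rates[j - 1] / lam_tot * (1.0 - math.exp(-lam_tot * t))

rates = [0.02, 0.05]        # failure rates of modes 1 and 2 (illustrative)
t = 10.0
total_failed = sum(mode_probability(rates, j, t) for j in (1, 2))
survive = math.exp(-sum(rates) * t)
# Mode probabilities and the survival probability partition the unit mass.
```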
ILLUSTRATIVE EXAMPLE
This section considers a phased-mission system composed of two redundant pumps, as shown in Figure 1. The analysis results are compared with those of the conventional state transformation method. The expectation of Eq. (22) gives the system accident occurrence probability at phase l:

QS(l, t) = E[Π(k=1..l−1) Ȳ(k, t) · Y(l, t)]    (31)

Figure 1.
3.2 Mission operation
3.3
The system accident conditions for each phase are given by Eqs. (32)-(39). Thus, as the number of phases increases, the system accident condition becomes more complicated. The proposed method, on the other hand, does not need to increase the number of basic events.
3.4
During phase 1, the system accident occurrence probability QS(1, t) can be obtained from Eq. (32), using Eqs. (40)-(43):

QS(1, t) = [2λ(2)/(λ(1) + λ(2))] [1 − exp(−{λ(1) + λ(2)}t)] − [λ(2)²/(λ(1) + λ(2))²] [1 − exp(−{λ(1) + λ(2)}t)]²    (44)

and QS(2, t) is obtained analogously from the phase-2 condition (Eqs. (45), (47)).
Eqs. (44) and (45) show that the system accident probability increases during each phase as time passes. Since QS(k, t) is a cumulative probability of the system accident during phase k, the cumulative value changes at the transition from phase 1 to phase 2. Since the start of phase 2 corresponds to the end of phase 1, the increase of the accident occurrence probability at the start of phase 2 can be obtained as:

ΔQS(2, t1e) = [λ(1)²/(λ(1) + λ(2))²] [1 − exp(−{λ(1) + λ(2)} t1e)]²    (46)

CONCLUSIONS
This paper proposes a simple, novel method for the failure analysis of phased-mission systems composed of components with multiple failure modes. To represent the state transitions of components clearly, disjoint binary variables are introduced, which simplify the logical operations on state variables at different phases. Further, the failure probability for each failure mode can be obtained from the transition rate of that failure mode. The illustrative example shows the characteristics of the phased-mission system, where a latent failure condition leads to a system failure at the start of a new phase. A failure analysis system based on the proposed method is now under development.
Part of this research was performed under the support of Grant-in-Aid for Scientific Research (C) 19510170.
REFERENCES
ABSTRACT: It is recognized that the usual output of a fault tree analysis is, in some studies, not sufficiently informative. To add value to this widely used risk analysis instrument, a Markovian approach is suggested. It is shown how to extend the calculations of the standard fault tree gates, so that information is available not only on the failure probability at the top level, but also on estimates of its rate of failure and mean down time. In applying this to an integrated fault tree analysis of a municipal drinking water system, we further identified the need for gates that are variations of those currently in use.
INTRODUCTION

P(F) = MDT / MTBF,

where

MTBF = MTTF + MDT

and MTTF is short for Mean Time To Failure. This fact tells us that the probability P(F) is not particularly informative, since two systems with very different dynamic behavior can share the same P(F).
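The point is easy to check numerically; the MTTF/MDT values below are hypothetical:

```python
def unavailability(mttf: float, mdt: float) -> float:
    """P(F) = MDT / MTBF, with MTBF = MTTF + MDT."""
    return mdt / (mttf + mdt)

# Same P(F), very different dynamics:
# rare, long outages vs frequent, short ones (hypothetical hours).
slow = unavailability(mttf=9900.0, mdt=100.0)   # fails about once per 10000 h
fast = unavailability(mttf=99.0, mdt=1.0)       # fails about once per 100 h
```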
see (Lindhe et al. 2008). It was this study that motivated the present work. Finally, in Section 5 we discuss
and draw some general conclusions.
FAULT TREES
For an OR-gate the top event is F = ⋃ᵢ Fᵢ (1), and for an AND-gate it is F = ⋂ᵢ Fᵢ (2). Assuming independence,

P(F) = 1 − Πᵢ (1 − P(Fᵢ))    (3)

for the OR-gate and P(F) = Πᵢ P(Fᵢ) (4) for the AND-gate. With component unavailabilities

P(Fᵢ) = λᵢ / (λᵢ + μᵢ),

so, by (3),

P(F) = 1 − Πᵢ μᵢ / (λᵢ + μᵢ).    (5)
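Equations (5) and (11) can be sketched directly (the component rates are illustrative):

```python
def p_or(lams: list[float], mus: list[float]) -> float:
    """OR-gate output unavailability 1 - prod(mu_i / (lambda_i + mu_i)),
    equation (5); component unavailability is lambda_i / (lambda_i + mu_i)."""
    prod = 1.0
    for lam, mu in zip(lams, mus):
        prod *= mu / (lam + mu)
    return 1.0 - prod

def p_and(lams: list[float], mus: list[float]) -> float:
    """AND-gate output unavailability prod(lambda_i / (lambda_i + mu_i)),
    equation (11)."""
    prod = 1.0
    for lam, mu in zip(lams, mus):
        prod *= lam / (lam + mu)
    return prod
```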
The equivalent failure rate and mean down time of the OR-gate output follow from Eqs. (6)-(10); in particular, the output failure frequency combines the component frequencies λᵢμᵢ/(λᵢ + μᵢ) with the component availabilities μᵢ/(λᵢ + μᵢ) (8), and each component unavailability is λᵢ/(λᵢ + μᵢ) (10).
The AND-gate

P(F) = Πᵢ λᵢ / (λᵢ + μᵢ)    (11)
The balance equations for this subsystem read

p1 λ1 = (p01 + p0) μ1,
p01 (μ1 + λ2) = p1 λ1 (1 − q2),    (13)
p0 μ1 = p1 λ1 q2 + p01 λ2,

where p1 and p01 are the stationary probabilities for the system to be in either of its up-states, and p0 is the probability that the system is in its down state. Solving for p0 is straightforward. We get

p0 = (λ1/μ1) · (λ2 + q2 μ1)/(μ1 + λ2) · p1.

Consider next a subsystem comprising a power generator having another power generator as back-up. The back-up may fail to start on demand. Some people would refer to such a system with the phrase "cold stand-by". The state diagram of such a process is shown in Figure 6. The down states are 0 and 00; λ1 and 1/μ1 denote the failure rate and mean down time of the main power generator; q2 is the probability of failure on demand for the back-up generator, while its up- and down-times are independent and exponential(λ2) and exponential(μ2), respectively.
A straightforward analysis of the balance equations yields the stationary probabilities p0 and p00 (Eqs. (14)-(15)), and hence the failure probability P(F) = p0 + p00 together with the corresponding equivalent failure rate (Eqs. (16)-(17)).
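The stationary solution of the first stand-by subsystem can be verified numerically; the balance equations follow the reconstruction above, and the parameter values are purely illustrative:

```python
# States: 1 (main up), 01 (main down, back-up running), 0 (both down).
lam1, mu1, lam2, q2 = 0.1, 2.0, 0.05, 0.02   # illustrative rates

# Express p01 and p0 in terms of p1, then normalise p1 + p01 + p0 = 1.
p01_over_p1 = lam1 * (1.0 - q2) / (mu1 + lam2)
p0_over_p1 = (lam1 * q2 + p01_over_p1 * lam2) / mu1
p1 = 1.0 / (1.0 + p01_over_p1 + p0_over_p1)
p01 = p01_over_p1 * p1
p0 = p0_over_p1 * p1

# Residuals of the three balance equations (should all vanish).
r1 = p1 * lam1 - (p01 + p0) * mu1
r2 = p01 * (mu1 + lam2) - p1 * lam1 * (1.0 - q2)
r3 = p0 * mu1 - (p1 * lam1 * q2 + p01 * lam2)
```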
EXAMPLE
A similar remark applies to the mean down time 1/μ. Notice, however, that the probability P(F) of failure is calculated exactly at each level and that (7), i.e.,
P(F) =
[Table: uncertainty distributions Gamma(0.9, 0.0033), 1/Gamma(0.78, 0.52), Gamma(0.87, 0.19), 1/Gamma(0.78, 4.17) and Beta(11.2, 97), with the resulting quantities expressed in %, 1/year and hours.]
APPLICATION
We now briefly discuss how the above gate constructions were used to analyse a municipal drinking water
system (Lindhe et al. 2008).
The drinking water system was analyzed based on
an integrated approach. This means that the entire system, from source to tap, was considered. The main
reasons for an integrated approach are: (1) interactions between events exist, i.e., chains of events have
to be considered; and (2) failure in one part of the system may be compensated for by other parts, i.e., the
system has an inherent ability to prevent failure.
The top event, supply failure, was defined as
including situations when no water is delivered to the
consumer, quantity failure, as well as situations when
water is delivered, but it does not comply with existing water-quality standards, quality failure. The fault
tree was constructed so that the top event may occur
due to failure in any of the three main subsystems
raw water, treatment or distribution. Within each subsystem, quantity or quality failures may occur. Note that, for example, a quantity failure in the treatment may occur because the water utility detects unacceptable water quality and decides to stop the delivery.
To consider the fact that failures in one part of
the system may be compensated for by other parts,
cold stand-by AND-gates (cf. Sections 2.3 and 2.4)
were used. For example, if the supply of raw water
to the treatment plant is interrupted, this may be compensated for by the drinking water reservoirs in the
treatment plant and distribution network. Since only a
limited amount of water is stored, the ability to compensate is limited in time. Similarly, unacceptable raw water quality may be compensated for in the treatment. In
the case of compensation by means of reservoirs the
DISCUSSION
A major advantage of incorporating Markov processes into the fault tree calculations is the added dimensionality of the output, as can be seen already in simple examples such as the one in Section 3. The application to a municipal drinking water system (cf. Section 4) also showed that the variations of the AND-gate were of great importance. The information on failure rates and down times for each intermediate level provided valuable information on the system's dynamic behavior.
It is moreover worth mentioning here that many experts prefer to elicit their thoughts on dynamic systems in terms of percentiles for failure rates and mean down times, making it fairly easy to define uncertainty distributions that in the future may be updated by hard data.
We finally emphasize the fact that the resulting binary process at the top level typically is not Markov. Thus, it is wrong to think of λ and μ at the top level as being parameters of a Markov process, although we do not hesitate to do so approximately. For instance, in the example of Section 3, we approximated the down time distribution using an exponential(μ) density. This is also implicit in our referring to λ as the system failure rate.
ACKNOWLEDGMENTS
All authors acknowledge support from the Framework
Programme for Drinking Water Research at Chalmers
ABSTRACT: Alarm prioritisation helps operators to keep full control of the plant and to perform the appropriate corrective actions during an emergency, when a possible flood of alarms is likely to obscure the real source of trouble with irrelevant information. Alarm prioritisation is a very complex, time-consuming and iterative process that requires detailed information about the plant and its operations, such as: the time available for taking corrective actions, the number of operators and their preparedness in recognizing and facing emergencies, the availability of written procedures to be applied, and the likelihood of simultaneous occurrence of multiple alarms reducing the operator response time. Most of this information is normally not available at the plant design stage. Notwithstanding this, a basis for alarm prioritisation in accordance with the EEMUA guidelines can be established at the plant design phase, by starting from the results of the safety analyses carried out on all plant subsystems and by applying a set of general prioritisation rules. An example of prioritisation in accordance with the EEMUA guidelines is provided for a petrochemical plant currently under construction. The results of the preliminary prioritisation can be the basis for a complete prioritisation process to be performed at a later stage.
INTRODUCTION
Due to the great development of electronic applications, new plants are much more instrumented than in
the past, and a very large number of diagnostics are
usually provided in a plant.
Additionally the wide use of well-established systematic analysis techniques (e.g., FMEA, HAZOP)
allows the identification of the safeguards to be implemented in the design of the plant in order to prevent
or mitigate the consequences of potential accident
scenarios.
Typically, these safeguards include the provision of
new instrumentation for a wider monitoring and control of plant process parameters, as well as the implementation of a number of additional plant alarms.
However, in the absence of prioritisation, it is highly questionable whether this additional monitoring and these alarms allow the operators a better and more timely reaction during a major plant upset. In fact, when a plant upset
occurs, a flood of alarms is likely to obscure the
real sources of trouble and to significantly increase the
stress level of operators, thus reducing the probability
that the proper corrective actions are taken.
There is a general consensus that some major accidents that occurred in past years at petrochemical and nuclear plants were exacerbated by wrong or delayed decisions, taken by operators who were overwhelmed by irrelevant or misleading information. A typical example is the LOCA (Loss of Coolant Accident) that occurred at Three Mile Island in
ALARM PRIORITISATION
Priority levels are intended to provide a guide for determining in which order the operators have to respond.
Following the recommendations provided by EEMUA,
four different priority levels have been defined for the
Project. They are the following:
Emergency Priority Level (1): To this level belong
all alarms indicating a situation for which an operator action is immediately required in order to
prevent immediate hazardous conditions which can
2.3
Table 1. Impact assessment (on-site): 4 — Minor injury; … 7 — Fatality; 8 — Multiple fatalities.
Table 2. Impact assessment (off-site): site contamination, recovery within 1 year; local public needs to take shelter indoors; public evacuation; irreversible injury in the public; fatality in the public; multiple fatalities in the public; catastrophic event with many fatalities; impact on the environment.
Table 3. Expected consequence classes and equivalent LOPA target factors: Minimal (0-1), Minor (2-3), Major (4), Severe (5).
Table 4. Time response: priority level as a function of the operator response (Fast, Slow, Delayed) and of the expected consequence (Minimal, Minor, Major, Severe), ranging from Low through Medium and High to Emergency.
Results
Table 5. Distribution of alarms per priority level.

Priority level | No. of alarms | % of overall
Emergency | 55 | 0.6
High | 308 | 3.6
Medium | 1761 | 20.6
Low | 6414 | 75.2
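The shares in Table 5 follow directly from the alarm counts; a minimal check:

```python
counts = {"Emergency": 55, "High": 308, "Medium": 1761, "Low": 6414}
total = sum(counts.values())                  # 8538 alarms in total
shares = {k: round(100.0 * v / total, 1) for k, v in counts.items()}
```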
CONCLUSIONS
REFERENCES
[1] United States Nuclear Regulatory Commission, Special Inquiry Group, Three Mile Island: A report to the Commissioners and to the public, Washington D.C., U.S. Government Printing Office, 1980 (Rogovin Report).
[2] Health and Safety Executive, The explosion and fires at the Texaco Refinery, Milford Haven, 24 July
ABSTRACT: In the paper we consider a time dependent system of systems whose total task must be executed within a constrained time resource. To model the delay times of the operational system maintenance process, Fault Trees with Time Dependency (FTTD) analysis is used. In an FTTD, events and gates are characterized by time parameters, which are derived from the time characteristics of the performance of the system of systems processes. The use of FTTD analysis to calculate proper values of the time parameters of the system of systems is shown in this approach.
INTRODUCTION
When to (re)order? How many items to (re)order? There is a plethora of models in the literature on reliability theory regarding the optimization of the spares supply process. A significant portion of them is based on classic inventory theory, where the procurement process parameters are usually optimized taking into account cost constraints (see e.g. Sarjusz-Wolski, 1998). The surveys by Cho & Parlar (1991), Guide Jr & Srivastava (1997), Kennedy et al. (2002), Nahmias (1981), and Pierskalla & Voelker (1976) review the growing body of literature on the spare parts inventory problem.
The stock level of spare parts depends on the chosen maintenance policy. Therefore, maintenance programs should be designed to reduce both the maintenance and the inventory related costs. As a result, the problems of system performance optimization can be solved by a joint, rather than a separate or sequential, approach to defining the maintenance and inventory policies.
Consequently, a system of systems conception is used to model the interactions between the operational system and its logistic support system. This conception is widely used in the defense sector, where logistic processes are aimed at supporting the warfighter/mission requirements. According to the definition (Crossley, 2006), the system of systems context arises when a need or a set of needs is met with a mix of multiple systems, each of which is capable of independent operation but must interact with the others in order to provide a given capability. More information can be found in (Werbinska, 2007a, Werbinska in prep.).
Moreover, when designing such real-time systems as the logistic support of operational system performance, the following time parameters: parts
Figure 1. Time dependent system of systems model with a single-unit operable system, non-negligible replacement time,
and an (s, Q) inventory policy.
We start the analysis with calculations for the HAZARD, using the formulae from (Magott & Skrobanek 2002):

Δhs = hs_max − hs_min,    (1)
he_min = hs_min + Δhe_min,  he_max = hs_max + Δhe_max.    (2)
The causal XOR gate, with one output event z and two input events x and y, is given in Fig. 5.
Figure 2. [Event notation: an event name with its start interval <hs_min, hs_max> and end interval <he_min, he_max>.]
Figure 4. [The hazard event with its time intervals.]
Figure 5. [Causal XOR gate with delay intervals <d1_min, d1_max> and <d2_min, d2_max> between the input events x, y and the output event z.]
In this gate the occurrence of the output is given by:

occur(z) ⇒ (x ∧ duration(x) ≥ d1 ∧ xs + d1_min ≤ zs ≤ xs + d1_max) ∨ (y ∧ duration(y) ≥ d2 ∧ ys + d2_min ≤ zs ≤ ys + d2_max),    (3)

where d1_min, d1_max (d2_min, d2_max) represent, respectively, the minimal and maximal time delays between the occurrence of the cause x (y) and the effect z. If we know in which time interval the event z can start, e.g. <zs_min, zs_max>, we can calculate the time intervals for the events x and y. As a result, <xs_min, xs_max> (<xe_min, xe_max>) denotes the time interval in which event x must start (end) in order to cause the hazard. The details of the calculations are given in (Magott & Skrobanek 2002, Skrobanek 2005).
For causal XOR gates we have the following system of equalities and inequalities:

xs_min = zs_min − min{d1_max, xe_max},  xs_max = zs_max − d1_min,
xe_min = zs_min,  xe_max = xs_max + xe_max,    (4a)

ys_min = zs_min − min{d2_max, ye_max},  ys_max = zs_max − d2_min,
ye_min = zs_min,  ye_max = ys_max + ye_max,    (4b)

occur(z) ⇒ (occur(x) ∧ x = z) ∨ (occur(y) ∧ y = z),    (5)

xs = zs,  xe = ze,    (6a)
ys = zs,  ye = ze.    (6b)
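The backward interval propagation behind formulae (4a)-(4b) can be illustrated in a simplified form, ignoring the duration and end-time constraints of the full method:

```python
def cause_start_interval(zs: tuple[float, float],
                         d: tuple[float, float]) -> tuple[float, float]:
    """Given the interval <zs_min, zs_max> in which the effect z can start
    and the delay interval <d_min, d_max> of a gate input, the cause must
    start within <zs_min - d_max, zs_max - d_min> (simplified sketch:
    duration constraints of the full FTTD method are ignored)."""
    zs_min, zs_max = zs
    d_min, d_max = d
    return zs_min - d_max, zs_max - d_min

interval = cause_start_interval((10.0, 12.0), (2.0, 5.0))   # (5.0, 10.0)
```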
3.3
We start our analysis with hazard event identification and construction of the fault tree for the defined
top event. For the system of systems presented in
the section 2, we can describe hazard as operational
system fault lasts after time redundancy .
The FTTD for the investigated system of systems
with time dependency is given in Fig. 7. This FTTD is
described in (Magott et al. 2008).
Left branch of the FTTD concerns situations when
the operational system fault has occurred while there
were no available spare parts in the logistic support
system. Right branch of the FTTD concerns situations when the system of systems downtime occurs
due to over crossing the defined time resource by the
operational elements repair times.
The analysis begins with calculations for the hazard. In the FTTD given in Fig. 7 we have two cases. In the first, the hazard is equal to event 2; therefore, using formulae (1), (2), we obtain:
occur(z) ⇒ (occur(x) ∧ x = z) ∨ (occur(y) ∧ y = z)   (5)

[Equations (7)-(12): the start and end intervals of events 2 and 3, obtained from formulae (1), (2).]

Figure 7. The fault tree with time dependencies for the investigated system of systems.
[Equations (13)-(22): the start and end intervals of events 3-7, obtained from formulae (1), (2) (hazard = event 3) and from formulae (4a) for gates 2, 3 and 5, each valid only under accompanying "only if" conditions.]

The results of the presented calculations are given in Fig. 8.

Figure 8.

Figure 9.
The hazard can occur only if:

[Conditions (23)-(27).]

Figure 10.
Therefore, if:

[Conditions (29)-(32); the timing diagrams in Fig. 12 show the starts and ends of events 1-7 relative to the start of the hazard.]

Figure 12.
CONCLUSIONS
ABSTRACT: According to the definition of the Council of Logistics Management, logistics is that part of the supply chain process that plans, implements and controls the efficient, effective flow and storage of goods, services and related information from the point of origin to the point of consumption in order to meet customers' requirements. Problems of logistic system reliability assessment were discussed in (Nowakowski 2006). The major questions discussed there concern the definition of logistic system failure and the estimation of simple measures of system reliability. Reliability assessment of the distribution process was discussed in (Nowakowski 2007). This paper deals with two problems: developing a more adequate reliability measure than the one proposed earlier, and assessing the reliability of the supply/procurement process, which is critical for the operation system.
INTRODUCTION
RELIABILITY MEASURES
Pccu = Nccu / N ;  Pcti = Ncti / N ;  Pcdo = Ncdo / N   (1)

[Equation (2), defining Rd1 from the partial indices of Equation (1) with equal significance.]
or:
Rd2 = wcpr Pcpr + wcpl Pcpl + wccu Pccu + wcti Pcti + wcql Pcql + wcqt Pcqt + wcdo Pcdo
(3)
where w = weight of the partial index P.
System reliability Rd1 (Equation 2) assumes that the analyzed random events (faults, failures) are statistically independent and that all events have the same significance. This assumption is a strong simplification. The second approach (Equation 3) takes into consideration the weight of the potential consequences of a logistic failure. This indicator is more realistic, because every fault of the logistics system has a different effect on the links of the supply chain. The indicator allows one to identify the importance of a given component (parameter) with respect to the system performance of interest, and to evaluate the effect of a change in any direction of any parameter (Do Van et al. 2007). If every deviation from the planned values of the 7R formula is understood as a fault of the system, the direction of the deviation is significant.
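A minimal sketch of the weighted indicator of Equation (3); the weights and partial indices below are illustrative assumptions, not values from the paper.

```python
# Weighted logistics reliability indicator Rd2 (Equation 3): a weighted sum of
# partial indices P (here read as probabilities that a given delivery aspect
# is correct). All numbers below are illustrative.
weights = {"cpr": 0.25, "cpl": 0.10, "ccu": 0.10, "cti": 0.20,
           "cql": 0.15, "cqt": 0.15, "cdo": 0.05}   # weights sum to 1
P = {"cpr": 0.99, "cpl": 1.00, "ccu": 1.00, "cti": 0.70,
     "cql": 0.97, "cqt": 0.93, "cdo": 0.85}

Rd2 = sum(weights[k] * P[k] for k in weights)
print(round(Rd2, 3))  # -> 0.915
```

Unlike the unweighted Rd1, a heavily weighted but unreliable aspect (here "cti", time interval of delivery) dominates the result.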
A similar problem is known and has been solved, e.g. when defining the maintenance policy in a multi-unit, heterogeneous system. Bevilacqua & Braglia (2000) suggest that a criticality analysis based on the FMECA technique may be a very helpful tool. The authors present a list of possible criteria for assessing elements: safety, machine importance for the process, spare machine/parts availability, maintenance cost, access difficulty, failure frequency, downtime length, machine type, operating conditions, propagation effect and production loss cost. They perform the analysis in an electrical power plant, with criterion weights such as:
safety: 1.5,
machine importance for the process: 2.5,
maintenance costs: 2,
failure frequency: 1,
downtime length: 1.5,
operating conditions: 1.
To develop a more adequate reliability measure, the importance weights should be estimated. The new concept assumes that the weights may be based on the importance-factor models used in reliability theory to establish the importance of a component or an event. Reliability analysts have proposed various analytical and empirical measures that rank components according to their importance in a system. They make it possible to identify the components/events within a system that most significantly influence system behavior with respect to reliability, risk and safety (Espiritu et al. 2007). A short overview of the existing models is presented below.
3.1 Marginal importance factor (Birnbaum importance factor)
The indicator introduced by Birnbaum (1969) describes the probability that the system will be in an operating state in which component i is critical, while i is operating. It is the rate at which the system reliability increases as the reliability of component i increases (Villemeur 1992). Analytically it is defined by:

IiB (t) = ∂Rs (t) / ∂Ri (t)   (4)
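For a series system, Rs(t) is the product of the component reliabilities, so the partial derivative in Equation (4) reduces to the product of the other components' reliabilities. A minimal sketch with illustrative component reliabilities:

```python
def birnbaum_series(R, i):
    """Birnbaum factor I_B(i) = dRs/dRi for a series system Rs = prod(R):
    the product of all component reliabilities except the i-th."""
    prod = 1.0
    for j, r in enumerate(R):
        if j != i:
            prod *= r
    return prod

R = [0.99, 0.97, 0.98]   # illustrative component reliabilities
print([round(birnbaum_series(R, i), 4) for i in range(len(R))])
# -> [0.9506, 0.9702, 0.9603]
```

Note that the least reliable component (i = 1) has the highest Birnbaum importance: improving it yields the largest gain in system reliability.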
PAWi (t) = Fi (t) / Fs (t)   (5)

3.3

RAWi (t) = Rs (t; Ri (t) = 1) / Rs (t)   (9)

3.4

RRWi (t) = Rs (t) / Rs (t; Ri (t) = 0)   (11)

where fi (t) denotes the probability density function of the component lifetime distribution.
Table 1. [Importance measures by the analyzed reliability level of the ith element.]

  The influence of element reliability    Analyzed reliability level of ith element
  on system reliability                   Ri (t) = 0    Ri (t) = function of t    Ri (t) = 1
  Rs (t) increase                                       IiVF (t)                  RAWi (t)
  Fs (t) increase                                       IiB (t)                   RAWi (t)

Table 2. The type of logistic system failure with its criticality for a production process.

  The type of supply               Direction of discrepancy         Consequences
  process failure                  between order and delivery
  incorrect product                                                 may cause production process stoppage; cost, critical
  incorrect place                                                   may cause production process stoppage; cost, critical
  incorrect customer                                                may cause production process stoppage; cost, critical
  incorrect time interval          too early delivery               higher costs of inventory holding; cost, non-critical
  of delivery                      too late delivery                may cause production process stoppage; cost, critical
  incorrect quality of product     better quality than ordered      no consequences
                                   worse quality than ordered       may cause production process stoppage; cost, critical
  incorrect quantity of product    bigger delivery than ordered     no consequences
                                   lesser delivery than ordered     may cause production process stoppage; cost, critical
  incorrect documentation                                           cost
NUMERICAL EXAMPLE

Rs = (99/100) · 1 · 1 · (97/100) · (98/100) · 1 · 1 = 0.94   (12)
Table 3. [Failures of the logistics system over 100 deliveries.]

  The type of logistics          The number     The number of failures, which were
  system failure (incorrect)     of failures    critical for production process
  product                        1              1
  place                          0              0
  customer                       0              0
  time interval of delivery      30             3
  quality of product             3              2
  quantity of product            7              0
  documentation                  15             0

Table 4. [Assessment domains: Requirements, Quality, Logistics, Technology, Purchase.]
Rs (R1 = 1) = 1 · (97/100) · (98/100) · 1 · 1 · 1 · 1 = 0.95   (13)

Rs (R4 = 1) = (99/100) · 1 · (98/100) · 1 · 1 · 1 · 1 = 0.97   (14)

Rs (R5 = 1) = (99/100) · (97/100) · 1 · 1 · 1 · 1 · 1 = 0.96   (15)

PAW1 = 0.95 / 0.94 = 1.01   (16)

PAW4 = 0.97 / 0.94 = 1.03   (17)

PAW5 = 0.96 / 0.94 = 1.02   (18)
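The calculations in Equations (12) and (16)-(18) can be reproduced from the critical-failure counts of Table 3, reading PAWi as the ratio Rs(Ri = 1)/Rs used in the example. This is a sketch; the element numbering (1 = product, 4 = time, 5 = quality) is inferred from the order of Table 3.

```python
# Critical faults per failure type, out of N = 100 deliveries (Table 3).
critical = {"product": 1, "place": 0, "customer": 0, "time": 3,
            "quality": 2, "quantity": 0, "documentation": 0}
N = 100
R = {k: 1 - v / N for k, v in critical.items()}

def system_reliability(R, perfect=None):
    """Series product of element reliabilities; 'perfect' sets one Ri to 1."""
    prod = 1.0
    for k, r in R.items():
        prod *= 1.0 if k == perfect else r
    return prod

base = system_reliability(R)                        # Equation (12)
paw = {k: system_reliability(R, perfect=k) / base
       for k in ("product", "time", "quality")}     # Equations (16)-(18)
print(round(base, 2), {k: round(v, 2) for k, v in paw.items()})
# -> 0.94 {'product': 1.01, 'time': 1.03, 'quality': 1.02}
```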
EXPERIMENTAL EXAMPLE
The assessment criteria include:
material quality,
price,
punctuality of delivery,
completeness of delivery,
service level,
payment conditions,
documentation correctness,
package quality,
way of resolving disputes.
The criteria were ranked according to their level of importance by a group of 16 experts. Some results of this expertise are shown in Table 5 for the criteria concerning quality of delivery.
The method was implemented in the factory, and some data concerning the reliability/quality of the delivery process of brass sleeves from one supplier are shown in Figures 1-4.
The analysis concerns deliveries between 1999 and 2004, while the new assessment method was implemented in 2002.
The rate of deliveries with the correct number of parts varies among particular months, from 35% to
Table 5. [Assessment criteria and scoring for quality of delivery.]

  Assessment criterion                       Criterion weight   Maximal number of points for the criterion
  Delivery/Punctuality                       12.5%
    On time                                                     100
    1 day                                                       70
    2 days                                                      50
  Delivery/Completeness                      5.75%
    Correct                                                     100
    Deviation 3                                                 80
    Deviation 5                                                 50
    Deviation 10                                                30
  Delivery/shipment/packaging instruction    1.75%
    Full, no omissions                                          100
    Slight omissions, unimportant                               50
  Service                                    1.75%
    Marshaling warehouse                                        100
    Buffer warehouse                                            70
    DDP/DDU                                                     50

Figure 1. [... of time.]
Figure 2.
Figure 3.
Figure 4.
SUMMARY
ABSTRACT: When performing a risk analysis, the target is usually a certain tolerable risk level. The applied safety measures have a direct influence on the final value of the risk for a given plant, and thus increasing these safety measures is the usual way to reduce the risk to a tolerable level. However, there is also the possibility of reaching the desired risk value by searching for the best design through an optimization approach. When the design is modified to diminish the risk (for example, by substituting two or more smaller tanks for a large storage tank), the cost of the plant often increases and the consequences of the hypothetical accident decrease. Consequently, the cost of the accident decreases as well. Therefore, there is the possibility of reaching an optimum design, with minimum total cost under a tolerable-risk-level constraint, taking into account both the cost of the equipment and the cost of the accidents that can occur.
INTRODUCTION
OPTIMIZATION PROCEDURE
The use of optimization methodologies is an interesting way to reduce risk to an acceptable level with a minimum economic investment. The suggested methodology is represented schematically in Figure 2.

4.1 System definition
The system defines the borders of the process or set of equipment on which to focus the calculations. It can be applied to a section of a plant, and particularly to the equipment that is most susceptible to risk, either due to the process itself or to the substances that are processed.

4.2 Selection of variables
The selection of variables refers to the problem of selecting the input variables that are most predictive in risk analysis; this is related to the variables and parameters that influence the process design and the calculation of consequences. These variables can physically represent equipment size and operating conditions.

4.3 Establishment of the objective function
This is one of the most important steps in the application of optimization to any problem. It is often
Risk analysis is used to evaluate the hazards associated with an industrial installation, a particular activity or the transportation of a hazardous material. The methodologies of risk analysis provide a reasonable estimation of potential accidents, their frequency, and the magnitude of their effects or consequences. The reasons why organizations are concerned about risk analysis can be classified into three types:
Moral: related to human life and health (Tweeddale, 2003).
Legal: depending on the specific legal and legislative framework of the community; related to environmental laws and international safety standards.
Commercial: loss of benefits, and costs associated with damage to people, buildings, the environment and the image of the company.
In order to reduce the risk, investments are made in chemical plants in areas such as management, research, design, process, operation, plant siting, plant layout, equipment, instrumentation, fire protection, inspection and emergency planning. Generally speaking, as the investment in these fields increases, the risk associated with a given plant or activity decreases (Figure 1).
There is no doubt about the economic importance of loss prevention. The various safety factors which are incorporated in the plant design can increase the cost significantly. These can include designs with thicker vessel walls, the use of more costly materials of construction, the selection of more expensive equipment, and redundancy of certain items. Some costs arise through failure to take proper loss prevention measures; others are incurred through uninformed and unnecessarily expensive measures.
Figure 1. [Risk as a function of investment in safety.]
The significant accidents (in terms of their consequences and frequency) are selected depending on the features of the system, the type of substances that are handled and the operating conditions of the process.

4.5
This step is related to the magnitude that causes the damage and the time of exposure to it. It refers to the effects of an accident, for example the overpressure, the radiation or the concentration of a toxic gas.

4.6 Calculation of consequences
For heavier-than-air gases, the code uses ALOHA-Degadis, a simplified version of the heavy-gas model DEGADIS, which is one of the best-known and most widely accepted heavy-gas models. This model computes pollutant concentrations at ground level, where a dispersing chemical is most likely to contact people, and produces a footprint, plotted either on a grid or on a background map. Each footprint represents the area within which, at some point during the hour following the beginning of a release, ground-level pollutant concentrations will reach or exceed the level of concern entered by the user. The model produces graphs of the indoor and outdoor concentrations expected to be experienced by people at any specific location (Cameo, 2007). The program uses information concerning meteorology, chemical compounds, the source, the mass and type of release, etc.
CALCULATION OF CONSEQUENCES

[Equations (1) and (2): the probit relation linking the gas concentration and the time of exposure to the probability of fatality.]

OBJECTIVE FUNCTION

8.1 The objective function

TC = Σ(i=1..ni) CIi + Σ(j=1..nj) CCj   (3)

where TC = total cost; CI = cost of the equipment, including the cost of one or more storage tanks along with direct and indirect costs; and CC = costs of the consequences, including the costs of fatalities. Damage to buildings and equipment and damage to the environment are not considered.
The total capital cost of the equipment refers to a stainless-steel pressurized tank (Smith, 2005), updated to 2007 through appropriate indexes:

Equipment cost = 70354 · (V / 5)^0.53 · N   (4)

[Equation (5).]
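The trade-off behind Equations (3) and (4) can be sketched by evaluating the equipment-cost term for a growing number of tanks holding a fixed total volume. Reading Equation (4) as a per-tank correlation applied to the per-tank volume is an assumption, as is the example volume; the consequence-cost term of Equation (3) is omitted here.

```python
def equipment_cost(V_total, N):
    """Capital cost of N pressurized tanks holding total volume V_total (m3),
    using the correlation 70354 * (v / 5) ** 0.53 per tank of volume v.
    One plausible reading of Equation (4); illustrative only."""
    v = V_total / N                      # volume of each tank, m3
    return N * 70354 * (v / 5) ** 0.53

# Splitting a fixed inventory over more tanks raises the equipment cost
# (the exponent 0.53 < 1 penalizes many small vessels), while the
# consequences of a single-tank accident shrink.
for N in range(1, 6):
    print(N, round(equipment_cost(2.0, N)))
```

The optimum N of the paper is the one minimizing the sum of this curve and the decreasing consequence-cost curve.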
The selected scenario, taken from an existing installation, considers the storage of 1900 kg of chlorine in one or more pressurized tanks. The accident consists of the release of the full inventory of the tank over 10 min. There are inhabited buildings with 18 people indoors at a distance of 300 m downwind.
Weather conditions:
Wind speed = 4 m/s.
Temperature = 12 °C.
Stability = D type.
Night.
Humidity = 70%.
Type of gas = chlorine.
Time of release = 10 min.
Pressure in the tank = 5 atm.
Ground roughness coefficient = 10 cm.
Figure 4.
Figure 5. [Number of fatalities versus number of tanks N.]
Figure 6. [Equipment cost, consequences cost and total cost (€) versus number of tanks N.]
9.1
Figure 7. [Risk (fatalities/year) versus number of tanks N.]
Figure 8. [F-N curves (frequency (1/year) versus Nk) for 1 to 5 tanks.]
CONCLUSIONS
ACKNOWLEDGEMENTS
NOTATION
constants in Eq. (1) (-)
indoors concentration of the gas (ppm)
costs of the consequences (€)
cost of the equipment (€)
error function (-)
frequency (year⁻¹)
constant in Eq. (1) (-)
number of storage tanks (-)
number of equipment cost items (-)
number of consequences cost items (-)
number of fatalities (-)
time of exposure to the gas (min)
total cost (€)
volume of the storage tank (m³)
probit function (-)
C. Winder
School of Safety Science, The University of New South Wales, UNSW Sydney, Sydney, Australia
ABSTRACT: The Comprehensive Nuclear-Test-Ban Treaty (CTBT) bans nuclear weapon test explosions anywhere in the world in order to constrain the development and qualitative improvement of nuclear weapons and the development of advanced new types of these weapons. Compliance with the CTBT will be monitored by a global verification regime with international monitoring and on-site inspections (OSI). As part of normal operational activities, OSI teams may encounter a range of hazards and risks. One such risk is exposure to chemical hazards during investigations of sites where industrial, agricultural or other hazardous chemicals were manufactured, stored, used or disposed of, and where risks to OSI team members from hazardous chemicals may arise. Conventional risk management procedures can assist in dealing with such exposures through identification of chemical hazards, assessment of chemical risks taking into account a range of hazard and exposure factors, and control of risks assessed as being unacceptable. Chemical risk assessments under the OSI situation require consideration of a toxicity assessment component and an exposure assessment component to characterise risk. Further, using industrial chemicals of known hazard characteristics, practical hypothetical exposure situations and established IDLH recommendations for OSI activities in which exposures to industrial chemicals may occur, it is possible to conclude that, in the absence of information or evidence of the presence of industrial chemicals, Inspection Team members are unlikely to be at risk from exposure to toxic chemicals.
INTRODUCTION
The general principle of liability for negligent conduct has evolved over the last century into the general
concept of duty of care. This concept has been
crystallised into legislation in some countries, placing
mandatory obligations on individuals and organisations (duty holders), such as employers or service
providers. For employers and principal contractors,
this includes obligations to provide:
remains with the primary duty holder. For this reason, delegation of responsibilities should not be taken
lightly, and checks and balances must be included in
the delegation to assure that duty of care obligations
remain intact. Further, while these obligations apply to all conventional employer-employee (or indeed principal-subcontractor) relationships, there are special workplace circumstances where it is not always possible for a duty holder to meet these obligations.
For example, in military or emergency situations, it is
not always possible to meet the duty of care obligation
to provide a safe workplace. Under such circumstances, duty of care obligations must be met by other
arrangements, including:
A sequence of preliminary, operational and special
risk assessments.
Development and implementation of standardised
working arrangements.
Ensuring all employees have received sufficient
training to be competent in their work activities.
Arrangements for debriefing of employees and
review of working arrangements after each assignment to ensure effective operational readiness.
(in an uncontaminated area within or near the inspection area) that is to serve as a safe haven/clean area
from where it would conduct deployments to/within
the inspection area.
2.2 Challenges to OSI risk management
COMPREHENSIVE NUCLEAR-TEST-BAN
TREATY (CTBT)
An OSI would be regarded as a final verification measure. It may be launched by the CTBTO upon the
request of any State Party and subsequent approval by
the Executive Council. Once an OSI is approved, up to
forty inspectors and inspection assistants would deploy
to the field and utilise approved inspection equipment
in an inspection area of up to 1,000 km², abiding by a
very short time schedule with a launching period of just
six days. This period will also see the formation of the
inspection team from a roster of potential inspectors
since the CTBTO will not have its own inspectorate.
The purpose of an OSI would be to clarify whether
a nuclear explosion has been carried out in violation of
the Treaty and to gather any information which might
assist in identifying the potential violator. For this,
the inspection team, with the expected support of the
inspected State Party, would set up a base of operation
OSIs represent a particular challenge to risk management processes, as they pose a variety of health and safety concerns which are not only consequences of the tasks being performed by the team but also a function of the dynamic nature of the mission. For instance, the unique conditions of sites where OSIs are likely to be conducted are associated with ambiguous events that might or might not be a nuclear explosion, and are therefore of considerable concern for the health and safety of the team. These conditions relate, among other things, to the potential for ionising radiation, radioactive debris, explosive ordnance, tectonic activity and unstable buildings or terrain.
Additional challenges relate to those hazards that
are associated with the possible/likely conduct of an
OSI at a remote site and/or in a foreign environment.
Moreover, it is to be expected that site conditions
within the inspection area are unidentified at the outset of an OSI. This not only relates to natural terrain
features which may affect site accessibility but also to
remnants from past or present land use. For instance,
site contamination with unidentified chemical substances as a result of former industrial, agricultural
and/or military activities or uncontrolled waste disposal is a possible scenario that should not result in
unacceptable risks for the inspection team. While the
inspection team will request the inspected State Party
during the launching period to provide information
on site characteristics including previous hazardous
activities and possible contamination that will allow
conducting a preliminary risk assessment, there will
be no guarantee that the provided information is totally
accurate and comprehensive.
Underlying assumptions

Methodology
4.2 Exposure modelling
A simple risk assessment can be made by calculating the minimum amount of chemical that would be
required to be present in a specified air volume (low,
moderate, high) as estimated from its IDLH. While it
is accepted that there are many imponderables in estimating risks under operational OSI situations, if it is
assumed that all the available chemical is present in
the air volume fully mixed, this provides a crude estimate of risk. For example, if 100 g of a chemical is
required to be present to fill a specified air volume
to its IDLH, this will give an approximate measure
Table 1. [Widths of square and circular areas of 100 m², 1 km² and 1000 km².]

  Area       Shape    Width (km)   Width (m)    Area (km²)
  100 m²     Square   0.01         10.00        0.0001
             Circle   0.0056       5.64         0.0001
  1 km²      Square   1.00         1,000.00     1.00
             Circle   0.56         564.19       1.00
  1000 km²   Square   31.62        31,622.78    1,000.00
             Circle   17.84        17,841.24    1,000.00
Table 2. [Air volumes (m³) for areas of 100 m², 1 km² and 1000 km² at mixing heights of 5 and 10 m, up to 5,000,000,000 m³.]

Exposure assessment
Table 3. [Minimum mass of each chemical required to reach its 1994 IDLH in the high-, moderate- and low-risk air volumes.]

  Substance name                                  CAS Number    1994 IDLH    High          Moderate        Low
                                                                (mg/m³)      toxicity (g)  toxicity (kg)   toxicity (kg)
  Paraquat (mist)                                 1910-42-5     1.00         0.50          5.0             5,000
  Hydrogen selenide                               7783-07-5     3.32         1.66          16.6            16,600
  Acrolein                                        107-02-8      4.60         2.30          23.0            23,000
  Chlorinated biphenyls (54% chlorine)            11097-69-1    5.00         2.50          25.0            25,000
  Mercury (elemental vapour or inorganic dusts)   7439-97-6     10.00        5.00          50.0            50,000
  Hydroquinone (vapour)                           123-31-9      50.00        25.00         250.0           250,000
  Lead and its compounds (dust)                   7439-92-1     100.00       50.00         500.0           500,000
  Formaldehyde                                    50-00-0       246.00       123.00        1,230.0         1,230,000
  Manganese (dust)                                7439-96-5     500.00       250.00        2,500.0         2,500,000
  Phosphoric acid (mist)                          7664-38-2     1,000.00     500.00        5,000.0         5,000,000
  Carbon black                                    1333-86-4     1,750.00     875.00        8,750.0         8,750,000
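The screening estimate behind these masses reduces to mass = IDLH × volume. The three mixing volumes below are an inference from the tabulated masses (0.5 g of paraquat at an IDLH of 1 mg/m³ implies 500 m³, i.e. the 100 m², 1 km² and 1000 km² areas at a 5 m mixing height); they are assumptions, not values stated explicitly in the text.

```python
def mass_to_reach_idlh_g(idlh_mg_m3, volume_m3):
    """Minimum mass (g) of chemical needed to raise a fully mixed air
    volume (m3) to the IDLH concentration (mg/m3)."""
    return idlh_mg_m3 * volume_m3 / 1000.0   # mg -> g

IDLH = {"paraquat": 1.0, "formaldehyde": 246.0}            # mg/m3 (1994 IDLH)
volumes_m3 = {"high": 500.0, "moderate": 5e6, "low": 5e9}  # inferred volumes

print(mass_to_reach_idlh_g(IDLH["paraquat"], volumes_m3["high"]))               # -> 0.5 (g)
print(mass_to_reach_idlh_g(IDLH["formaldehyde"], volumes_m3["moderate"]) / 1e3)  # -> 1230.0 (kg)
```

The larger the assumed mixing volume, the more implausibly large the inventory needed to endanger the team, which is the basis of the paper's conclusion.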
Risk characterisation
DISCUSSION
REFERENCES
Grandjean, E. 1988. Fitting the Task to the Man: A Textbook of Occupational Ergonomics, fourth edition. London: Taylor and Francis, pp. 83-94.
NIOSH. 1994. Documentation for Immediately Dangerous to Life or Health Concentrations. NTIS Publication No. PB-94-195047. Cincinnati: US National Institute for Occupational Safety and Health. At: http://www.cdc.gov/niosh/idlh/idlh-1.html
Winder, C. 2004. Toxicity of Gases, Vapours and Particulates. Chapter 15 in: Occupational Toxicology, second edition, Winder, C., Stacey, N.H., editors. Boca Raton: CRC Press, pp. 399-423.
ABSTRACT: A comparison of different methodologies used to estimate the evacuation radius is presented. The evaluation is accomplished by modifying the way the indoor concentration (taking into account the profile of the outdoor concentration and possible sorption onto indoor surfaces) and the toxic load are computed. The casualty probability is estimated using probit analysis, but a parallel analysis to assess the effectiveness of shelter in place, based on the AEGL-3, is also made for comparison and to confirm the results obtained with the probit analysis. Five substances of different toxicity (chlorine, methyl isocyanate, acrolein, hydrogen fluoride and mustard gas) are used to study the performance of each methodology, finding that for cumulative substances there is an overestimation with the method used by the Catalonian government. Concerning meteorological conditions, larger distances were obtained when increasing the atmospheric stability.
INTRODUCTION
Table 1. [Indoor sorption models and their simplifications.]

  Model                 Simplifications           Reference
  Without adsorption    ka = kd = k1 = k2 = 0
  Deposition            kd = k1 = k2 = 0          Yuan 2000, Karlsson 1994, Casal et al. 1999b
  One-sink              k1 = k2 = 0
  Sink diffusion        k1 = k2                   Jørgensen et al. 2000, Singer et al. 2004, Zhang et al. 2002a

dm1,j / dt = ka,j Ci − kd,j m1,j ,   j = 1 … N   (2)

dm2,j / dt = k1,j m1,j − k2,j m2,j ,   j = 1 … N   (3)
METHODOLOGY

As mentioned in the first section, the casualty percentage is estimated through probit analysis. The probit variable (Y) is related to the probability (P) according to the following expression:

P = 1/2 [1 + erf((Y − 5) / √2)]   (4)
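Equation (4) can be evaluated directly; a minimal sketch (Y = 5 corresponds to P = 0.5 by construction):

```python
import math

def probit_to_probability(Y):
    """Probability of the effect from the probit variable Y (Equation 4)."""
    return 0.5 * (1.0 + math.erf((Y - 5.0) / math.sqrt(2.0)))

print(probit_to_probability(5.0))            # -> 0.5
print(round(probit_to_probability(6.0), 3))  # -> 0.841
```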
Y = A + B ln ∫(t1→t2) [C (t)]^n dt   (5)

SFo = (TLL / TLo)^(1/n)   (7)

SFi = (TLL / TLi)^(1/n)   (8)

TLRF = 1 − TLi / TLo = 1 − (∫(0→t) Ci^n dt) / (∫(0→t) Co^n dt)   (9)

SFM = (TLo / TLi)^(1/n)   (10)
The methodology currently used by the Catalonian government to determine the evacuation radius given an external concentration is represented by the following steps (PlaseQcat 2005):
1. Estimate the indoor concentration at different distances with Equation (11), using an air exchange rate (no) of 2 h⁻¹:

Ci / Co = 1 − (1 − exp(−no t)) / (no t)   (11)
In this work, in addition to the probit analysis, the shelter effectiveness was also estimated. To assess shelter-in-place effectiveness there are two types of indicators: ones used to evaluate whether a toxic load limit (TLL) is exceeded, that is, a limit above which certain health effects could take place; and others that are used
TLi = Ci^n t   (12)
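A sketch contrasting the averaged closed-form ratio of Equation (11) with a numerical integration of the differential indoor-concentration model dCi/dt = no (Co − Ci) of Table 2 (without the sorption term). For a constant outdoor concentration the time-averaged ODE solution coincides with Equation (11), which the code checks numerically; the exposure values are illustrative.

```python
import math

def averaged_ratio(no, t):
    """Time-averaged Ci/Co from Equation (11)."""
    return 1.0 - (1.0 - math.exp(-no * t)) / (no * t)

def ode_time_average(no, t, steps=100_000):
    """Euler integration of dCi/dt = no*(Co - Ci) with constant Co = 1,
    returning the time average of Ci over [0, t]."""
    dt = t / steps
    ci, integral = 0.0, 0.0
    for _ in range(steps):
        integral += ci * dt
        ci += no * (1.0 - ci) * dt
    return integral / t

no, t = 2.0, 1.0   # air exchange rate 2 1/h, one hour of exposure
print(round(averaged_ratio(no, t), 4), round(ode_time_average(no, t), 4))
```

This sensitivity to the air exchange rate no is exactly what the conclusions flag for methods 1 and 4.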
Table 2. [Summary of the four methods.]

  Parameter    Method 1                               Method 2                 Method 3                                Method 4
  no (1/h)     2                                      1                        1                                       1
  Ci model     Ci/Co = 1 − (1 − exp(−no t))/(no t)    dCi/dt = no (Co − Ci)    dCi/dt = no Co − no Ci − (A/V) ka Ci    Ci/Co = 1 − (1 − exp(−no t))/(no t)
  TLo          Co^n t                                 ∫(0→t) Co^n dt           ∫(0→t) Co^n dt                          ∫(0→t) Co^n dt
  TLi          Ci^n t                                 ∫(0→t) Ci^n dt           ∫(0→t) Ci^n dt                          Ci^n t

[Equation (13).]
Table 3. [Probit constants and AEGL-3 (30 min) values for the five substances.]

  Substance   A¹        B¹     n¹     AEGL-3 30 min, mg/m³ (ppm)²
  Cl          −6.35     0.5    2.75   81.194 (28)
  Iso         −1.2      1      0.7    0.933 (0.4)
  Acro        −4.1      1      1      5.732 (2.5)
  HF          −8.4      1      1.5    50.715 (62)
  MG³         −5.473    1.13   2      2.7 (0.41)

¹ Constants taken from Serida Database 1.3 (1999); concentrations must be in mg/m³ and time in min.
² Values taken from EPA (http://www.epa.gov/oppt/aegl/pubs-/results56.htm).
³ Values estimated from lethal dosages, Hartman (2002).
Table 4. [Evacuation radius and shelter-in-place indicators for each substance and method.]

  Substance    Method     Evacuation   TLL        SFo       SFi       SFM      TLRF      Po       Pi
                          radius (m)
  Chlorine     Method 1   203          5.35E+06   0.25214   0.68539   2.7183   0.93607   4.37%    0.10%
               Method 4   149                     0.14718   0.69079   4.6935   0.98576   16.62%   0.10%
               Method 2   163                     0.17187   0.6827    3.9721   0.97747   11.85%   0.10%
               Method 3   144                     0.13985   0.68389   4.8903   0.98728   18.44%   0.10%
  Methyl       Method 1   4695         28.587     0.52076   1.4156    2.7183   0.50341   0.84%    0.10%
  isocyanate   Method 4   3300                    0.30177   1.4164    4.6935   0.66119   2.23%    0.10%
               Method 2   1997                    0.21134   1.4146    6.6935   0.73573   3.93%    0.10%
               Method 3   1891                    0.18845   1.4145    7.5062   0.7561    4.66%    0.10%
  Acrolein     Method 1   716          171.96     0.1553    0.4222    2.7183   0.63212   1.83%    0.10%
               Method 4   518                     0.0895    0.4202    4.6935   0.78694   6.19%    0.10%
               Method 2   481                     0.0848    0.4233    4.9908   0.79963   6.87%    0.10%
               Method 3   437                     0.0715    0.4211    5.8881   0.83016   9.43%    0.10%
  Hydrogen     Method 1   223          10835      0.1855    0.5043    2.7183   0.77687   5.68%    0.10%
  fluoride     Method 4   163                     0.1075    0.5044    4.6935   0.90165   22.26%   0.10%
               Method 2   167                     0.1140    0.5075    4.4511   0.89351   19.70%   0.10%
               Method 3   150                     0.0944    0.5079    5.3826   0.91992   28.48%   0.10%
  Mustard      Method 1   1280         218.7      0.1918    0.5214    2.7183   0.86466   18.12%   0.10%
  gas          Method 4   910                     0.1112    0.5221    4.6935   0.9546    61.32%   0.10%
               Method 2   890                     0.1152    0.5264    4.6102   0.9529    58.29%   0.10%
               Method 3   800                     0.0939    0.5136    5.4701   0.9665    74.55%   0.10%
Figure 2.
Figure 3.
Figure 4. [Evacuation radius versus atmospheric stability.]

CONCLUSIONS

One of the most important limitations when assessing the indoor concentration is the lack of knowledge about the air exchange rate which, as seen with methods 1 and 4, is a very important parameter that conditions the evacuation radius obtained and could lead to under- or overestimations.
For the substances studied, method 3 was the one that predicted the lowest evacuation radius and the highest safety factor for the same distance; the other methods gave more conservative results. From a theoretical point of view, method 3 is also the most rigorous, since it incorporates an indoor concentration model that takes into account deposition on indoor surfaces, and assesses the toxic load according to the profile of the indoor concentration. Therefore, this methodology should be the one that best represents reality. Even so, since there are no experimental data available to validate the results obtained with the four methods, a final statement cannot be given.
When estimating an evacuation radius, a parallel analysis with another guideline level (like the AEGL-3, as shown here) is a good tool, since it could be used

REFERENCES

Casal J., Montiel H., Planas E., Vilchez J.A. 1999a. Análisis del Riesgo en Instalaciones Industriales. Barcelona: Ediciones UPC.
Casal J., Planas-Cuchi E., Casal J. 1999b. Sheltering as a protective measure against airborne virus spread. Journal of Hazardous Materials, A68: 179-189.
Chan W.R., Nazaroff W.W., Price P.N., Gadgil A.J. 2007a. Effectiveness of urban shelter-in-place. I: Idealized conditions. Atmospheric Environment, 41: 4962-4976.
Directriz básica 1196. 2003. Ministerio de Trabajo y Asuntos Sociales. Instituto Nacional de Seguridad e Higiene en el Trabajo.
El Harbawi M., Mustapha S., Choong T.S.Y., Abdul Rashid S., Kadir S.A.S.A., Abdul Rashid Z. 2008. Rapid analysis of risk assessment using developed simulation of chemical industrial accidents software package. Int. J. Environ. Sci. Tech., 5 (1): 53-64.
Espinar J. 2005. Estudi comparatiu dels paràmetres de toxicitat i vulnerabilitat de substàncies perilloses. Undergraduate final work. Universitat Politècnica de Catalunya.
Geeta B., Tripathi A., Marasimhan S. 1993. Analytical expressions for estimating damage area in toxic gas releases.
Hartman H.M. 2002. Evaluation of risk assessment guideline levels for the chemical warfare agents mustard, GB, and VX. Regulatory Toxicology and Pharmacology, 35 (3): 347-356.
Jetter J.J., Whitfield C. 2005. Effectiveness of expedient sheltering in place in a residence. Journal of Hazardous Materials, A119: 31-40.
Jørgensen R.B., Bjørseth O. 1999. Sorption behaviour of volatile organic compounds on material surfaces: the influence of combinations of compounds and materials compared to sorption of single compounds on single materials. Environment International, 25 (1): 17-27.
Jørgensen, Rikke B., Bjørseth, Olav, Malvik, Bjarne. 1999. Chamber testing of adsorption of volatile organic compounds (VOCs) on material surfaces. Indoor Air, 9: 2-9.
Jørgensen R.B., Dokka T.H., Bjørseth O. 2000. Introduction of a sink-diffusion model to describe the interaction between volatile organic compounds (VOCs) and material surfaces. Indoor Air, 10: 27-38.
Karlsson E. 1994. Indoor deposition reducing the effect of toxic gas clouds in ordinary buildings. Journal of Hazardous Materials, 38: 313-327.
Karlsson E., Huber U. 1996. Influence of Desorption on
the Indoor Concentration of Toxic Gases. Journal of
Hazardous Materials, 49: 1527.
Liu D., Nazaroff W.W. 2001. Modeling pollutant penetration
across building envelopes. Atmospheric Environment, 35:
44514462.
Mannan M.S., Lilpatrick D.L. 2000. The pros and cons of
shelter in place. Process Safety Progress, 19 (4): 210218.
PlaseQcat. 2005. Pla demergncia exterior del sector qumic
de Catalunya. Generalitat de Catalunya. Secreteria de
Seguretat Pblica, Direcci General dEmergencies i
Seguretat Civil.
Purple book. TNO. 1999. Guideline for quantitative risk
assessment. (CPR18E). Committee for the prevention of
disasters. The Hauge.
Seinfeld J.H., Pandis S.N. 1998. Atmospheric Chemistry and
Physics: From Air Pollution to Climate Change. Wiley
Interscience Publication, New York, NY, 926950.
Shair F.H., Heitner K.L. 1974. Theoretical Model for Relating Indoor Pollutant Concentrations to Those Outside.
Environmental Science and Technology, 8 (5): 444451.
Singer B.C., Revzan K.L., Hotchi T., Hodgson A.T., Brown
N.J. 2004. Sorption of organic gases in a furnished room.
Atmospheric Environment, 38: 24832494.
Singer B.C., Hodgson A.T., Destaillats H., Hotchi T., Revzan
K.L., Sextro R.G. 2005a. Indoor sorption of surrogates for
sarin and related nerve agents. Environmental Science and
Technology, 39: 32033214.
Singer B.C., Hodgson A.T., Hotchi T., Ming K.Y.,
Sextro R. G., Wood E.E. 2005b. Sorption of organic
gases in residential bedrooms and bathrooms. Proceedings, Indoor Air: 10th International Conference on Indoor
Air Quality and Climate, Beijing, China.
Yuan L.L. 2000. Sheltering Effects of Buildings from Biological Weapons. Science and Global Security, 8: 329355.
Zhang J.S., Zhang J.S., Chen Q., Yang X. 2002a. A critical
review on studies of volatile organic compound (VOC)
sorption on building materials. ASHRAE Transactions,
108 (1): 162174.
1096
Safety, Reliability and Risk Analysis: Theory, Methods and Applications, Martorell et al. (eds)
© 2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: Risks are connected with one another for cause-related and/or effect-related reasons. Two risks are interdependent either because they are caused by interrelated events, or because the events inducing them impact on interconnected targets. While the former relations can be represented by a cause-related risk network, the latter form an effect-related network. Together, the two networks merge into a comprehensive risk network, which requires changes, and possibly entirely new approaches, in the risk management process. This paper examines the nature of risk networks to provide insights into the risks' interrelations, their management, and their implications for the risk management process. Contrary to risk management as performed so far, the paper shows that information on risk networks is desirable in order to effectively identify, evaluate and mitigate all risks encountered.
INTRODUCTION
RISK NETWORKS
Figure 1. Risk dependencies (case 1: independence; case 2: cause-related dependencies; case 3: effect-related dependencies; case 4: cause- and effect-related dependent components).
Figure 2. Risk network.
The management of the effect-related risk dependencies starts with the formulation of the target system, as the definition of these relations is included therein. A separate identification of these dependencies is thus unnecessary, and the second phase of the process in figure 4 can be skipped. Instead, it is examined whether the formulated target system is complete and consistent. In doing so, the company has to pay particular attention to the supposed dependencies among the relevant targets. If any of the defined relations is inconsistent, an adjustment of the target system is obligatory.
As soon as the target system is accurately formulated, the evaluation of the incorporated dependencies begins. This means establishing not only the direction of the induced effects, but also their strength, e.g. whether a fairly small deviation from one target may lead to a substantial missing of another aim.
Regarding mitigation of the effect-related risk dependencies, the company may either accept them or rearrange its target system. When monitoring these relations, the company must bear such alterations in mind. The revision not only has to re-examine the consistency of the dependencies, but also to decide whether the actions taken to mitigate them were appropriate. If any deficiencies become apparent, the target system must again be reviewed and, if necessary, adjusted.
According to the depicted risk management process, the management of cause-related risk dependencies starts with an identification of potentially risk-inducing events. For this purpose, the company can rely on pattern recognition techniques. A pattern represents the combination of an object's features and particulars (Mertens 1977; Heno 1983). In the present context, we can treat the risk dependencies as the object to analyze.
For this analysis several techniques seem appropriate:
- the nearest-neighbor principle (Schürmann 1996),
- discriminant analysis (Backhaus et al. 2006),
- multinomial logistic regression (Anre et al. 1997; Backhaus et al. 2006) and
- artificial neural networks (Rojas 1996; Backhaus et al. 2006).
None of these techniques clearly stands out from the others, as several pairwise comparisons show (Schröder 2005 provides an overview of such studies). Thus, each company has to decide on its own which technique to apply for identifying the cause-related risk dependencies.
Having recorded all risk-inducing events, their evaluation starts. First, the occurrence probability of each event has to be verified. Then the effect of one event's supposed realisation on other, so far non-realised events has to be determined. For this purpose the company can apply conditional probabilities.
p(e_i | e_h) = p(e_i ∩ e_h) / p(e_h),  where i ≠ h   (2)
The value of these probabilities depends on the relation between the two events (fig. 3). If they are positively dependent, this relation causes a rise of the non-realized event's probability. For example, if the two events are equivalent regarding their occurrence probabilities, then the realization of one of the two events implies the entrance of the second event.
In case of negative dependency, the realization of one event diminishes the probability of the other event. This reduction can go as far as a probability of zero for the second event, i.e. its realization is no longer possible. Table 1 lists the conditional probabilities subject to the existent dependency of the two relevant events.
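The effect described here can be made concrete with a small numerical sketch applying Eq. (2) to a positively dependent, an independent and a negatively dependent event pair; the probabilities are invented for illustration, not taken from the paper.

```python
# Illustrative only: joint probabilities are invented to show how the
# dependency of two events shifts the conditional probability of Eq. (2).
def conditional(p_joint, p_h):
    """p(e_i | e_h) = p(e_i and e_h) / p(e_h)."""
    return p_joint / p_h

p_e1, p_e2 = 0.2, 0.3          # unconditional occurrence probabilities

p_pos = conditional(0.10, p_e2)          # positive dependency: joint > p_e1 * p_e2
p_ind = conditional(p_e1 * p_e2, p_e2)   # independence: joint = p_e1 * p_e2
p_neg = conditional(0.02, p_e2)          # negative dependency: joint < p_e1 * p_e2

# realization of e2 raises p(e1 | e2) under positive dependency
# and lowers it under negative dependency
assert p_neg < p_ind < p_pos
assert abs(p_ind - p_e1) < 1e-12
```

Under independence the conditional probability collapses to the unconditional one, which is the baseline against which both dependency directions are read.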
The identification and evaluation of the cause-related risk dependencies bear indications helpful for their mitigation. Firstly, an assurance taken to safeguard against the joint realization of two risks may be
Table 1. Effects of the events' dependency on the occurrence probability if one event is realized.

Events' dependency:
- Positive dependency (in general); specific forms: event identity, equivalency, one event is part of the other.
- Negative dependency (in general); specific forms: incompatibility, complementarity.
Figure 3. Positive and negative event dependencies.
Figure 4. Effect-related risk dependencies: identity, dependency or interdependency of the affected targets, leading to indifference, target complementarity or goal conflict.
A risk network consists of the risks a company is confronted with, i.e. any risk-inducing event together with its effects, and of their linkages (fig. 2). Accordingly, the network's management has to allow for the risks' management (fig. 5) and the management of any of their dependencies (fig. 6). Compared to sole risk management, the coordination of the three processes
Figure 5. The risk management process: 1st phase, formulation/review of the risk strategy and the target system; 2nd phase, risk identification and analysis (e.g. referring to weak signals); 3rd and 4th phases; 5th phase, risk monitoring; supported by a monitoring/risk accounting system.
Figure 6. The management process for the risk dependencies: 1st phase, (re-)formulation of relations between targets, i.e. effect-related risk dependencies; 2nd, 3rd and 4th phases; 5th phase, controlling of the target system's consistency and assessment of the effectiveness of implemented risk mitigation measures, as well as controlling and assessment of identified event dependencies; supported by a monitoring/risk accounting system.
presented above costs the company higher effort and additional time. In the following, we weigh these costs against the benefits of an integrated risk network management, considering (a) risk identification, (b) risk assessment, (c) risk mitigation and (d) risk control.
4.1 Risk identification

4.2 Risk assessment

4.3 Risk mitigation

4.4 Risk control

FINAL REMARKS
ABSTRACT: This paper shows an efficient order for calculating importance measures and develops several
new measures related to fault diagnostics, system failure intensity and system failure count. Useful interpretations
and applications are pointed out, and many roles of the Birnbaum importance are highlighted. Another important
topic is the accuracy of various alternative methods used for quantification of accident sequence probabilities
when negations or success branches of event trees are involved. Thirdly, the role of truncation errors is described,
and criteria are developed for selecting truncation limits and cutoff errors so that importance measures can be
estimated reliably and risk-informed decision making is robust, without unreasonable conservatism and without
unwarranted optimism.
INTRODUCTION
IMPORTANCE MEASURES
CI_k = CI_k(Y) = CI(Z_k; Y) = (y − y_k)/y   (1)

RRW_k = (1 − CI_k)^−1   (2)

P(Y | Z_k = 1) = (y − y_k)/z_k + y_k   (5)

and thus

RIF_k = P(Y | Z_k = 1)/y = 1 + CI_k (1 − z_k)/z_k   (6)
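The relations numbered (1), (2), (5) and (6) can be checked numerically on a toy fault tree; the sketch below assumes a simple TOP event Y = Z1 OR (Z2 AND Z3) with invented basic-event probabilities, which is not a model from the paper.

```python
# Hedged sketch: criticality importance CI, risk reduction worth RRW and
# risk increase factor RIF for Y = Z1 OR (Z2 AND Z3) with independent
# basic events. Tree and probabilities are invented examples.
def top(z1, z2, z3):
    return 1 - (1 - z1) * (1 - z2 * z3)

z = [0.01, 0.05, 0.02]
y = top(*z)

for k in range(3):
    z0 = list(z); z0[k] = 0.0       # component k made perfect
    z1v = list(z); z1v[k] = 1.0     # component k failed for sure
    yk, y1k = top(*z0), top(*z1v)

    CI = (y - yk) / y               # criticality importance, Eq. (1)
    RRW = 1 / (1 - CI)              # risk reduction worth, Eq. (2)
    RIF = y1k / y                   # risk increase factor

    assert abs(y1k - ((y - yk) / z[k] + yk)) < 1e-9         # Eq. (5)
    assert abs(RIF - (1 + CI * (1 - z[k]) / z[k])) < 1e-9   # Eq. (6)
    assert abs(RRW - y / yk) < 1e-9                         # RRW = y / y_k
```

The asserts rely on the pivotal decomposition y = z_k·P(Y|Z_k=1) + (1 − z_k)·y_k, which holds exactly for coherent structures with independent basic events.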
ACCIDENT SEQUENCES

∏_{j=1}^{n} [1 − P(G_j)]   (10)

G = (A1 + A2 + A3) + (D1 + D2 + D3 + D4 + D5)   (16)
with numerical probabilities P(A_j) = P(B_k) = P(D_i) = 0.1 and P(C_j) = 0.001. This example is adapted from Epstein & Rauzy (2005). Note that C_i
Table 1. Accident sequence results obtained with different quantification methods.

Method   Rare-event approximation   Min-cut upper bound   Complete S-P form
AS1      0.0080000                  0.0079721             0.0068590
AS2      0.0028000                  0.0034317             0.0029526
AS3      0.0016000                  0.0034317             0.0029526
AS4      0.0010000                  0.0010000             0.0010000
AS5      0.0002000                  0.0004305             0.0004305
AS6      0.0004305                  0.0004305             0.0004305
AS7      0.0005000                  0.0004966             0.0004305

AS1      0.500000                   0.499000              0.473684
AS2      0.428571                   0.443333              0.415205
AS3      0.250000                   0.443333              0.415205
AS4      0.0                        0.0                   0.0
AS5      0.500000                   0.111111              0.111111
AS6      0.111111                   0.111111              0.111111
AS7      0.0                        0.004010              0.111111
is missing from Equations 15 and 16 because of truncation with truncation limit TL = 2 × 10⁻⁵ (to be discussed later).
Numerical results for P(FG) obtained with different approximations (REA, MCUB) and with the complete S-P formula calculation are presented in Table 1. All TOP-events in each formula have been quantified with the approximations indicated. The results confirm that methods AS1–AS4 are inadequate, as was also shown by Epstein & Rauzy (2005). They did not consider methods AS5–AS7.
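For a coherent model without negations or success branches, the rare-event approximation is the most conservative and the exact value the lowest; the sketch below reproduces that ordering on invented cutsets (not the Epstein & Rauzy example). The inversions visible in Table 1 presumably stem from the negations involved there.

```python
# Hedged sketch: rare-event approximation (REA), min-cut upper bound
# (MCUB) and the exact TOP probability for a coherent model with
# independent basic events. Cutsets and probabilities are invented.
from itertools import product
from math import prod

cutsets = [("A", "B"), ("A", "C"), ("D",)]
p = {"A": 0.1, "B": 0.1, "C": 0.1, "D": 0.01}

def cs_prob(cs):
    return prod(p[e] for e in cs)

rea = sum(cs_prob(cs) for cs in cutsets)                # sum of cutset probs
mcub = 1 - prod(1 - cs_prob(cs) for cs in cutsets)      # min-cut upper bound

# exact value by enumerating all basic-event states
events = sorted(p)
exact = sum(
    prod(p[e] if on else 1 - p[e] for e, on in zip(events, states))
    for states in product([False, True], repeat=len(events))
    if any(all(dict(zip(events, states))[e] for e in cs) for cs in cutsets)
)

# without negations the methods are ordered conservatively
assert rea >= mcub >= exact - 1e-12
```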
Other conclusions and comments on this
example:
For large models TL needs to be orders of magnitude smaller than TE. With y = 10⁻⁴ this criterion easily leads to TL of the order of 10⁻⁸ or smaller.
Another criterion may be based on threshold values that are used for the criticality importances CI of components. A criticality threshold CIT could be a value used in ranking components for maintenance or safety categories. Such values CIT usually range from 10⁻⁶ to 10⁻³. There are two types of potential ranking errors due to truncation:
- For components that are not included in the truncated model or are essential in the deleted terms, the criticality (y − y_k)/y is underestimated and fails to rank the components properly for PSAA.
- On the other hand, with a large TE(y), use of Equation 17 leads to many unimportant components being classified highly in maintenance programs or safety categories, without serving real safety.
Thus, it makes sense to minimize TE(y) as much as possible. The condition TE(y)/y < 0.1 CIT yields

TE(y) < 0.1 · y · CIT   (19)
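Criterion (19) can be applied mechanically once the dropped terms are known; the sketch below uses invented cutset probabilities and an assumed criticality threshold, and estimates TE(y) as the rare-event sum of the truncated terms.

```python
# Hedged sketch: selecting a truncation limit TL so that the truncation
# error satisfies Eq. (19), TE(y) < 0.1 * y * CIT. Cutset probabilities
# and the threshold CIT are invented for illustration.
cutsets = [3e-5, 1e-5, 8e-6, 5e-7, 2e-7, 9e-8, 4e-8, 1e-8]
CIT = 1e-3                          # assumed criticality threshold

def truncation_error(probs, TL):
    # rare-event estimate: sum of the cutset probabilities that are dropped
    return sum(q for q in probs if q < TL)

y = sum(cutsets)                    # rare-event approximation of TOP probability
# coarsest truncation limit that still satisfies Eq. (19)
TL = next(t for t in (1e-5, 1e-6, 1e-7, 1e-8, 0.0)
          if truncation_error(cutsets, t) < 0.1 * y * CIT)
```

For these inputs the criterion forces TL down to the 10⁻⁸ range, in line with the order-of-magnitude remark above.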
In addition to risk assessment, certain importance measures have roles to play in failure diagnostics. Assume that Y is a fault-tree TOP-event, a Boolean function, and that any basic event is completely statistically independent of, or possibly mutually exclusive with, some subset of events. Failure of a component does not always reveal itself but needs to be identified after
P(Z_k | Y) = z_k P(Y | Z_k)/y = z_k RIF_k   (21)

(P(Y) − P(Y | Z_k = 0))/P(Y) = (y − y_k)/y = CI_k   (22)
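Relation (21) is essentially Bayes' rule and can be checked by enumeration; the sketch reuses a toy TOP event Y = Z1 OR (Z2 AND Z3) with invented probabilities, not a model from the paper.

```python
# Hedged sketch: the diagnostic importance of Eq. (21),
# P(Zk | Y) = zk * P(Y | Zk) / y = zk * RIFk, checked by brute-force
# enumeration for the invented TOP event Y = Z1 OR (Z2 AND Z3).
from itertools import product
from math import prod

z = [0.01, 0.05, 0.02]

def struct(s):                      # structure function of the toy tree
    return s[0] or (s[1] and s[2])

def p_state(s):                     # probability of one basic-event state
    return prod(zi if si else 1 - zi for zi, si in zip(z, s))

failed = [s for s in product([0, 1], repeat=3) if struct(s)]
y = sum(p_state(s) for s in failed)

for k in range(3):
    zc = list(z); zc[k] = 1.0
    y1k = 1 - (1 - zc[0]) * (1 - zc[1] * zc[2])      # P(Y | Zk) by independence
    bayes = z[k] * y1k / y                           # Eq. (21): zk * RIFk
    joint = sum(p_state(s) for s in failed if s[k])  # P(Zk and Y)
    assert abs(bayes - joint / y) < 1e-9             # posterior P(Zk | Y)
```

Ranking components by this posterior answers the diagnostic question of which component to inspect first after an observed system failure.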
w_s(t) = Σ_{k=1}^{K} w_{s,k}(t)   (23)

and

W_s(t) = Σ_{k=1}^{K} W_{s,k}(t)   (24)

Here w_{s,k}(t) is the sum of all terms BI_j(t) w_j(t) that contain as a factor either w_k(t) or z_k(t). This is how much the system failure intensity would be reduced if Z_k were made non-failable. Besides ranking components, this can be useful in a diagnostic problem. Assume that a system failure occurrence at time t is observed, without knowing which components were failed and contributed to the system failure. A question is: which component should be inspected or replaced first? A natural ranking indicator for Z_k is the intensity criticality importance defined here as

ICI_k = w_{s,k}(t)/w_s(t)   (25)

CCI_k = W_{s,k}(t)/W_s(t)   (26)
BPI_{k,series} = (1/λ_k) / Σ_{j=1}^{K} (1/λ_j)   (29)
CONCLUSIONS
REFERENCES
Čepin, M. 2005. Analysis of truncation limit in probabilistic safety assessment. Reliability Engineering & System Safety 87: 395–403.
Duflot, N. et al. 2006. How to build an adequate set of minimal cut sets for PSA importance measures calculation. Proceedings of PSAM 8 Conference, New Orleans, 14–19 May 2006. New York: ASME.
Epstein, S. & Rauzy, A. 2005. Can we trust PRA? Reliability Engineering & System Safety 88(3): 195–205.
Vaurio, J.K. 2006a. Developments in importance measures for risk-informed ranking and other applications. Proceedings of PSAM 8 Conference, New Orleans, 14–19 May 2006. New York: ASME.
Vaurio, J.K. 2006b. Configuration management and maintenance ranking by reliability importance measures. Proceedings of ESREL 2006 Conference, Estoril, Portugal, 18–22 September 2006. Leiden: Balkema, Taylor & Francis.
Vaurio, J.K. 2007. Definition and quantification of importance measures for multi-phase missions. Proceedings of ESREL 2007 Conference, Stavanger, Norway, June 2007. Leiden: Balkema, Taylor & Francis.
ABSTRACT: The legislation for registration of risk situations involving hazardous substances in The Netherlands came into effect in 2007. Competent authorities are now required to submit data on establishments and transport situations to the Register for risk situations involving hazardous substances. This Register is an Internet application that has been developed over the past years, based on older concepts of the legislation. With the final details of the legislation now established, the application had to be adapted considerably. After that, the data from the original database were converted to the final release. Other kinds of risks (e.g. forest fire, collapse of high buildings) are registered by a second Internet application, called ISOR; submission of data on such risks by competent authorities is likewise mandatory. To comply with future European legislation, registration of areas prone to flooding by sea or rivers has also recently been added. The data from both applications are combined and shown on maps via the Internet. Professionals (only civil servants of the government, provinces and municipalities) have access to the maps and all data through a protected site. The general public has access to the maps and a selection of the data through an open site. The general goal is that the public and the authorities gain insight into, and an overview of, risk situations. Municipalities can use the maps to fulfil their obligation to inform their citizens of risks within their territories. Both spatial and emergency planning are expected to improve with the combined data from the Register and ISOR. This will be enhanced by the fact that the maps also contain data on vulnerable objects (e.g. hospitals, schools, tall buildings).
INTRODUCTION
Figure 1. Relationship between RRGS and ISOR applications and the risk maps.

2 LEGISLATION

2.1 Environmental management Act and Registration Order external safety
The registrations mentioned above are based on several new pieces of legislation. First, an amendment of the Environmental management Act was established, which came into force at the end of March 2007 (Anonymous 2005). The amendment formalises the institution of a register for risk situations with hazardous substances. It imposes on competent authorities the obligation to submit data to this register on risk situations for which they have a formal responsibility. Another obligation is the authorization of the
submitted data. Authorization signifies that the data
are correct and can be shown on the risk map (see
Section 4). RIVM (the Dutch National Institute for
Public Health and the Environment) has been charged
with the management of this register. In turn, RIVM
subcontracted the management to IPO (Association of
Provincial Authorities).
The details of the register are regulated in an Order
in Council, called Registration Order external safety
(Anonymous 2007a). In this Order the categories of
risk situations are defined. Examples are Seveso-II
establishments, LPG filling stations, establishments
storing or processing fireworks and transport of hazardous substances. The Order also prescribes the data
3.1 Structure
RRGS has one database, containing three sets of data on risk situations with hazardous substances. The first set consists of the mandatory registration of situations based on the Registration Order external safety. The second set contains the mandatory registration of situations based on the Regulation provincial risk map. The third set is relatively small and comprises optionally registered situations (main effect distance not extending beyond the establishment's limits; see Section 2.2). For example, emergency response organisations may use the data in the third set for occupational safety measures.
The database is filled through an Internet application. Competent authorities can log in to a protected site. In all cases, sets of screens containing compulsory and optional fields are to be filled in. There are screens for general information on the establishment, including a drawing facility for the border of the premises of the establishment. Next, the type of establishment is specified through screens which define, for the installations present, the category from the Registration Order or the Regulation. For the installations, the location and the substances present, among other items, are filled in. Finally, there is a set of screens for additional information on the establishment, like the applicability of other licences or the presence of a plan of attack for the fire brigade. For transport routes comparable sets of screens apply.
3.3 Filling
For establishments the authority granting the Environmental license is responsible for submission of the
registration for that establishment. In almost 300 cases
the authority is the central government (Ministry of
Housing, Spatial Planning and the Environment for the
larger nuclear installations and certain defence establishments; Ministry of Economic Affairs for onshore
mining industry establishments), in almost 700 cases
(in general large establishments) a province and in
most cases (about 8500) a municipality. For transportation of dangerous substances via roads, railways
or inland waterways, the authority that should submit the registration is the maintenance authority; for
pipelines the Ministry of Housing, Spatial Planning
and the Environment.
Figure 2. Provincial risk map (province of South Holland near the town of Dordrecht) showing vulnerable objects and establishments with individual risk contours. The small map at the top left-hand side is part of the search facilities. The map legend at the right-hand side is adaptable to the user's wishes with +/− buttons and check boxes.
4 RISK MAPS

4.1 General description

of risk map, viz. the provincial risk map with all risks and vulnerable objects shown.

4.2
Figure 3. Flood scenario shown on the provincial risk map (same as Figure 2). The legend shows, under the headings Natuurrampen (natural disasters) and Overstromingsdiepte (flood depth), water depth ranges (minder dan = less than; meer dan = more than).
4.3 Floods
The European Directive on the assessment and management of flood risks (Eur-lex 2007) requires all member states to have flood maps by 2013 and disaster control programmes for floods by 2015. A layer with potentially floodable areas has been added to the provincial risk maps in anticipation of this legislation. Per dike district, the water depth has been calculated for squares of 50 by 50 m. These are shown on the map layer in different shades of blue per depth range (see Figure 3).
In the future more map layers concerning floods will be added, showing e.g. flooding risk or a simulation of a certain flooding scenario (collapse of dike X at location Y).
DISCUSSION
REFERENCES
Ale, B.J.M. 2002. The explosion of a fireworks storage facility and its causes. In E.J. Bonano et al. (eds.), PSAM6, 6th International Conference on Probabilistic Safety Assessment and Management, San Juan, Puerto Rico, USA, June 23–28, 2002: 87–92. Oxford: Elsevier.
Anonymous 2005. Staatsblad 2005, 483. Wet milieubeheer,
titel 12.2 (Registratie gegevens externe veiligheid
inrichtingen, transportroutes en buisleidingen).
Bulletin of Acts, Orders and Decrees 2005, 483. Environmental management Act, title 12.2 (Registration of
external safety data on establishments, transport routes
and pipelines).
Anonymous 2007a. Staatsblad 2007, 102. Registratiebesluit
externe veiligheid.
Bulletin of Acts, Orders and Decrees 2007, 102. Registration Order external safety.
ABSTRACT: Jet fires have frequently been the first step of severe accidents which have involved, due to the domino effect, further explosions or large fires. Nevertheless, the current knowledge of jet fires is still rather poor. Several authors have published experimental data, generally from small-scale jet fires or from flares. In this communication, the experimental results obtained with relatively large jet fires (with flame lengths up to 10 m) are discussed. The fuel was propane, and both sonic and subsonic jet exit velocities were obtained from different outlet diameters. The distribution of temperatures on the flame main axis was measured with a set of thermocouples. The jet fires were filmed with a video camera registering visible light (VHS) and a thermographic camera (IR). The main flame geometrical features (lift-off, flame shape, flame size) were analyzed as a function of the fuel velocity, mass flow rate and jet outlet diameter.
INTRODUCTION
EXPERIMENTAL SET-UP
The experimental facility was built at the Security Formation Centre Can Padró. A schema of the field test apparatus is shown in Figure 1. The facility consisted of a set of pipes which allowed obtaining vertical and horizontal jet fires. This paper concerns only vertical jet fires.
The gas pipe exit had an interchangeable cap which
allowed selecting different exit diameters (ranging
between 10 mm and 43.1 mm). In addition, pressure
measurements were taken at the gas outlet in order
to calculate the mass flow rate. The fuel (commercial propane) was contained in a tank located on an
upper site.
Jet flame geometric parameters were studied by analyzing the images filmed by two video cameras located orthogonally to the flame.
An AGEMA 570 Infrared Thermographic Camera (IR), located next to one of the previously mentioned video cameras, was used to determine the temperature and radiation distribution of the jet flame, and also to compare the geometric parameters obtained by the video cameras. The IR camera's field of view was 24° × 18°, and its spectral range 7.5 to 13 micrometers.
Type B (Pt 30% Rh / Pt 6% Rh) and S (Pt 13%
Rh /Pt) thermocouples were used to measure the flame
axial temperature distribution. They were arranged in
a mast as shown in Figure 1. A meteorological station was used to measure the ambient temperature, the
relative humidity and the wind direction and velocity.
A Field Point module was employed as the data collection system. It consists of a communication module FP-1001 (RS-485, 115 kbps), three connection terminals FP-TB-1 and three input/output (I/O) modules. The diverse measurement devices (thermocouples, pressure gauge, etc.) were connected to this system.
Two laptops were used to collect the data from the
different sensors. They controlled the measurement
Figure 1. Experimental set-up.
Figure 3. Variation of the lift-off (S/d) as a function of the fuel mass flow rate (kg s⁻¹) for different outlet diameters (12.75 to 43.1 mm), for both subsonic and sonic flow.
In all cases, large jet fires show a significant turbulence which makes it difficult to establish the flame length.
3.3 Lift-off

S/d = c · V/d   (2)
In this expression, the constant c has the dimension of time. For the present study, the experimental subsonic data could be correlated with relatively good accuracy with c = 0.0027 s: the data corresponding to orifice diameters ranging between 12.75 mm and 30 mm followed the same trend in an S/d vs. V/d plot, but the data for d = 43.1 mm gave larger values of S/d.
Eq. (2) can only be applied at subsonic conditions, as the gas velocity does not increase any further once sonic flow has been reached. However, if, once the sonic velocity has been reached, the pressure upstream of the jet outlet rises, the lift-off length becomes larger. This is due to the increase in the gas density upstream of the orifice, which finally leads to a larger fuel mass flow rate.
Nevertheless, most accidental jet fires occur at sonic exit velocity. Thus, an equation allowing the calculation of flame lift-off at sonic conditions would be of utmost interest.
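As a numerical illustration of the subsonic correlation above (Eq. 2 with c = 0.0027 s, equivalent to S = c·V), with an invented exit velocity:

```python
# Hedged sketch of the subsonic lift-off correlation S/d = c * V/d,
# i.e. S = c * V, with c = 0.0027 s. The exit velocity is an invented
# illustrative value, not a measurement from the paper.
c = 0.0027          # s
V = 300.0           # gas exit velocity, m/s
S = c * V           # lift-off distance, m (about 0.8 m here)
```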
3.4 Flame length

(Table: flame-length correlations of the form given by Eqs. 3a and 3b, reported for propane, methane, hydrogen and LPG by Sonju and Hustad (1984), Santos and Costa (2005) and Kiran and Mishra (2007), for Froude numbers up to about 10⁵.)

(Figure: log-log plot of L/d versus Fr for outlet diameters of 10, 12.75, 20, 30 and 43.1 mm, with the fitted correlation L/d = 53 Fr^0.12.)

FLAME TEMPERATURE
In large accidental fires, the temperature is not uniform over the flame due to the turbulence and to the existence of fluctuating luminous, less-luminous and non-luminous parts. This is especially significant in pool fires, in which the combustion is rather poor (Muñoz et al., 2004).
In accidental jet fires, due to the turbulence of the phenomenon, the important entrainment of air by the jet, and the fact that the fuel is often a gas, the combustion is much more efficient and thus the variation of the temperature is not as significant as in a pool fire.
Fr = V²/(g·d)
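Combining this Froude number definition with the fitted correlation L/d = 53 Fr^0.12 reported above gives a quick flame-length estimate; the outlet diameter and exit velocity below are invented illustrative values.

```python
# Hedged sketch: flame-length estimate from Fr = V^2 / (g * d) and the
# fitted correlation L/d = 53 * Fr^0.12. Inputs are invented examples,
# not measurements from the paper.
g = 9.81            # acceleration of gravity, m/s^2
d = 0.02            # outlet diameter, m (20 mm cap)
V = 250.0           # gas exit velocity, m/s

Fr = V**2 / (g * d)        # Froude number
L = 53 * Fr**0.12 * d      # flame length, m (a few metres for these inputs)
```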
Information concerning the distribution of temperatures over the jet fire is rather scarce in the literature.
Frequently it is not well detailed and generally regards
(Figure: axial flame temperature distribution, 0 to 1600 °C.)
CONCLUSIONS

NOTATION

A    constant in Eq. (3a)
c    constant (s)
d    orifice or outlet diameter (m or mm)
Fr   Froude number (V² g⁻¹ d⁻¹) (−)
g    acceleration of gravity (m s⁻²)
L    flame length (m)
m    fuel mass flow rate (kg s⁻¹)
n
Pin
P0
S    lift-off distance (m)
T    temperature
V    gas velocity (m s⁻¹)
ACKNOWLEDGEMENTS
The authors acknowledge the Spanish Ministerio de Educación y Ciencia (project no. CTQ2005-06231 and a doctoral grant for M. G-M) and the Autonomous Government of Catalonia (doctoral grant for A. P) for funding.
REFERENCES
Becker, H.A. & Yamazaki, S. 1978. Entrainment, Momentum Flux and Temperature in Vertical Free Turbulent Diffusion Flames. Combustion and Flame 33: 123–149.
Chamberlain, G.A. 1987. Developments in design methods for predicting thermal radiation from flares. Chemical Engineering Research and Development 65: 299–309.
ABSTRACT: This paper presents an approach to tunneling risk management aimed at tunnels that are safe and sustainable for people, equipment and, of course, employees. Identifying hazards, categorizing the risks according to their frequency and consequences, and employing risk assessment methods to reduce, diminish or accept those risks help to reach this goal. For this purpose an upgraded algorithm for FMEA, which is applicable to all phases of the tunneling process and is both more qualitative and more quantitative than conventional FMEA, is used with an eye on the specific attributes of each road or metro tunnel. The different parts of a tunnel, and their existence or absence in a given tunneling project, are considered so that the options for each and every tunnel are customized.
INTRODUCTION
One reason for people to be afraid of underground spaces is, of course, the lack of natural light, which decreases the ability of orientation in time and place. Another major reason for negative reactions toward traffic tunnels and underground space is the fear of accidents and fire in an enclosed area [1]. Therefore, efforts are made to decrease such incidents and accidents in underground environments in order to defeat this fear. Identifying the hazards which lead to accidents is the first step of this process; the next is looking for approaches to control or diminish them, which requires various methods of identification, assessment, analysis and, finally, management.
Numerous papers address different risk assessment and management methods, but only some of them follow the same goal as this paper.
In Korea, for two tunnels (14-3, 13-3), environmental risks such as ground conditions, water leakage, faults, etc. were diagnosed and noted from the design to the construction stages, and appropriate risk analysis methods were used to rank the identified risks and manage them to an acceptable level, or diminish them where possible [2].
Okazaki et al. [3] in Japan assessed the convergence displacement and settlement, as contained in the tunnel construction records, against lithofacies confirmed by boring cores. In addition, the magnetic susceptibility of boring cores was measured, and a comparative study was conducted regarding its correlation with the magnetic intensity distribution obtained by helicopter-borne magnetic survey.
Konstantinos et al. [4] at Athens University present a paper whose objective is to provide a generic
RISK MANAGEMENT
Risk management is a central part of any organization's strategic management. It is the process whereby organizations methodically address the risks attached to their activities, with the goal of achieving sustained benefit within each activity and across the portfolio of all activities. The focus of good risk management is the identification and treatment of these risks. Its objective is to add maximum sustainable value to all the activities of the organization. It marshals the understanding of the potential upside and downside of all those factors which can affect the organization.
RISK ASSESSMENT
usability
credibility
complexity
completeness
adaptability
validity
and cost
RISKS OF UNDERGROUND ENVIRONMENT
CONSTRUCTION
1. Tunnel attributes:
Dimensions
Equipment
Human and machinery traffic
Tunnel plan
Geological and geotechnical information

[Figure 1: flowchart of the proposed risk management algorithm, with appraisal questions ("Is it standard?", "Is it safe according to international regulations?"), decision branches ("Vital?", "Acceptable?") and treatment options ("Diminish", "Minimize or insurance", "End").]

[Figure 2: occurrence-severity diagram (occurrence and severity scales up to 10, with Low, Moderate and High regions).]

3. Hazard identification:
3.1. Use workers:
Train them to become familiar with hazard and risk definitions
Endorse their reports
Teach them standards and regulations
3.2. Categorization by tunnel users and groups affected by those risks:
Owner
Contractor
Designer
Employees
Supplier
Workers
People

[Figure 3: risk matrix.]

REFERENCES
CONCLUSION
ABSTRACT: Fuzzy FMEA provides a tool that deals better with vague concepts and insufficient information, but it is necessary to look for the membership function suitable to the case under study. To evaluate the revised matrix FMEA and the fuzzy considerations, we present their application to a discontinuous distillation plant for biofuel. First, the matrix obtained from the ordinary method is shown, with severity rankings and failure modes corresponding to the failure effects. Then an FMEA in matrix form is presented, and finally both are presented with the fuzzy technique applied. The most important conclusions resulting from the fuzzy techniques are shown, and the most important result of the FMEA of a discontinuous distillation plant for biofuel is presented.
INTRODUCTION
Traditional FMEA
[Figure: membership function (degree of membership, 0–1).]

2.2 Fuzzy FMEA
The conventional FMEA has been a well-accepted reliability and safety analysis tool due to its visibility and ease of use. However, FMEA teams usually suffer from several difficulties when conducting an FMEA in real industrial situations (Xu et al. 2002; Yeh & Hsieh 2007).
Many of the difficulties originate from the use of natural language and the necessity of assigning values to those concepts.
Fuzzy sets are sets whose elements have degrees of
membership.
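A degree of membership is typically computed with a parametric membership function; a minimal sketch of the triangular form used later in this work (the breakpoints are illustrative):

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 at the feet a and c, 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# e.g. membership of a crisp ranking of 3 in a class centered at 5
mu = tri(3, 1, 5, 9)
```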
[Figure: membership functions of the linguistic variables (labels H and VH visible) over a scale up to 10.]
Table 1.

Rule #   Severity   Occurrence   Detection     Risk
1        L          L, M, H      L, M, H, VH   L
2        L          VH           L, M          L
3        L          VH           H, VH         M
4        M          L, M         L, M          M
5        M          L, M         H, VH         M
6        M          H, VH        L, M          M
7        M          H, VH        H, VH         M
8        H          L, M         L, M          H
9        H          L, M         H, VH         M
10       H          H, VH        L, M          H
11       H          H, VH        H, VH         M
12       VH         L, M         L, M          VH
13       VH         L, M         H, VH         H
14       VH         H, VH        L, M          VH
15       VH         H, VH        H, VH         H
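The rule base of Table 1 can be applied as a direct lookup over the linguistic terms; a sketch assuming the columns are severity, occurrence and detection (the function name is ours):

```python
# Table 1 rules: (severity, occurrence set, detection set, risk class)
RULES = [
    ("L", {"L", "M", "H"}, {"L", "M", "H", "VH"}, "L"),
    ("L", {"VH"}, {"L", "M"}, "L"),
    ("L", {"VH"}, {"H", "VH"}, "M"),
    ("M", {"L", "M"}, {"L", "M"}, "M"),
    ("M", {"L", "M"}, {"H", "VH"}, "M"),
    ("M", {"H", "VH"}, {"L", "M"}, "M"),
    ("M", {"H", "VH"}, {"H", "VH"}, "M"),
    ("H", {"L", "M"}, {"L", "M"}, "H"),
    ("H", {"L", "M"}, {"H", "VH"}, "M"),
    ("H", {"H", "VH"}, {"L", "M"}, "H"),
    ("H", {"H", "VH"}, {"H", "VH"}, "M"),
    ("VH", {"L", "M"}, {"L", "M"}, "VH"),
    ("VH", {"L", "M"}, {"H", "VH"}, "H"),
    ("VH", {"H", "VH"}, {"L", "M"}, "VH"),
    ("VH", {"H", "VH"}, {"H", "VH"}, "H"),
]

def risk_class(s, o, d):
    """Return the linguistic risk class for severity s, occurrence o, detection d."""
    for rs, ro, rd, out in RULES:
        if s == rs and o in ro and d in rd:
            return out
    raise ValueError("no rule matched")
```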
Σᵢ yᵢ wᵢ    (1)
[Figure 2: safety devices of the plant: safety valve, pressure switch (presostato), plant pushbutton for total cut-off, fire protection system.]
REMARKS
dangerous. This deficiency can be eliminated by introducing a new technique to calculate the Criticality
Rank using fuzzy logic. This concept was applied in
this work.
Another difficulty in traditional FMEA is to trace and/or cross-reference a particular failure mode, cause or effect, because there are many-to-many relationships among failure modes, effects and causes. The matrix form proposed by the same authors is a pictorial representation of the relationships between several FMEA elements.
Some authors introduce fuzzy logic for prioritizing
failures for corrective actions in a system FMECA and
FMEA.
The resulting fuzzy inputs are evaluated using a linguistic rule base and fuzzy logic operations to yield a classification of the riskiness of the failure and an associated degree of membership in the risk class. This fuzzy output is then defuzzified to give the Criticality Rank for each failure.
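The defuzzification step can be sketched as a weighted average of rule outputs, a common choice consistent with the y_i w_i terms of Equation 1 (the representative values and weights below are illustrative):

```python
def defuzzify(classes):
    """Weighted-average defuzzification: classes is a list of
    (representative value y_i, membership weight w_i) pairs."""
    num = sum(y * w for y, w in classes)
    den = sum(w for _, w in classes)
    return num / den

# e.g. a failure 0.6 in a class centered at 250 and 0.4 in one centered at 750
rank = defuzzify([(250, 0.6), (750, 0.4)])
```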
RESULTS
Table 2.

Component   Failure mode   S   O   D   RPN
A           FA1            4   2   2   16
A           FA2            4   2   2   16
A           FA3            4   4   5   80
B           FB1            3   6   4   72
B           FB2            6   2   7   84
C           FC1            2   2   2   8
C           FC2            2   2   2   8
D           FD1            4   2   2   16
E           FE1            7   3   7   147
F           FF1            3   4   2   24
G           FG1            2   2   2   8
H           FH1            4   5   4   80
I           FI1            5   2   3   30
J           FJ1            4   2   5   40
J           FJ2            5   2   4   40
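The RPN column of Table 2 is the product of the three rankings; a minimal check against three rows of the table:

```python
def rpn(s, o, d):
    """Risk Priority Number: product of severity, occurrence and detection rankings."""
    return s * o * d

# three rows of Table 2: failure mode -> (S, O, D)
table2 = {"FE1": (7, 3, 7), "FB2": (6, 2, 7), "FA3": (4, 4, 5)}
ranked = sorted(table2, key=lambda fm: rpn(*table2[fm]), reverse=True)
```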
Table 3. Fuzzy FMEA.

Component   Failure mode   Fuzzy S   Fuzzy O   Fuzzy D   Fuzzy R
A           FA1            M         L         L         M
A           FA2            M         L         L         M
A           FA3            M         M         M         M
B           FB1            M         H         M         M
B           FB2            H         L         H         M
C           FC1            L         L         L         L
C           FC2            L         L         L         L
D           FD1            M         L         L         M
E           FE1            H         L         H         M
F           FF1            M         M         L         M
G           FG1            L         L         L         L
H           FH1            M         M         M         M
I           FI1            M         L         M         M
J           FJ1            M         L         M         M
J           FJ2            M         L         M         M
[Fuzzy matrix FMEA: a matrix relating components and failure modes; the nonzero entries are the fuzzy risk classes of Table 3 (L or M), and all remaining entries are 0.]
Table 4. Matrix FMEA.

[Matrix arrangement of the crisp RPN values of Table 2 (16, 16, 80, 72, 84, 16, 147, 24, 80, 30, 40, 40), relating failure modes to components.]

CONCLUSIONS

It is not clear whether it is necessary to construct the membership functions by gathering expert knowledge. In this work, simple triangular membership functions are used that represent the whole range used in a traditional FMEA.

The results of the fuzzy techniques applied to this case demonstrate consistency.

The risk priority number is used to prioritize which items require additional quality planning or action. The fuzzy logic approach applied to the matrix FMEA form makes it easy to detect the main contributors to the risk.

ACKNOWLEDGEMENTS

To the Cuyo National University, which supported this project. To Eng. Pablo Gusberti and to the staff of the Bioenergy Program.

REFERENCES
ABSTRACT: This work arises from the need to diminish the risk of data loss during the operation of a data harvesting station. The station collects GPS and weather information on the top of Aconcagua Mountain, located in Argentina. Access is only possible during the summer season; between April and November of each year, access to the station is impossible due to the extreme climatic conditions. Under such conditions the station should be capable of continuing its operation after a total loss of energy.
A failure mode and effects analysis was carried out by components. Critical failure modes were identified and eliminated through redesign.
The analysis was oriented by the strong influence of the environment on the behavior of each component. The correct characterization of the environment turned out to be fundamental. The design was concluded with a series of modifications arising from the analysis.
INTRODUCTION
[Figure 1: GPS subsystem: batteries, GPS Trimble receiver and photovoltaic load regulators.]
A failure mode and effects analysis (FMEA) is a procedure for the analysis of potential failure modes within a system, for classification by severity or determination of the failures' effect upon the system. It is widely used in the manufacturing industries in various phases of the product life cycle. Failure causes are any errors or defects in process or design, especially ones that affect the customer, and can be potential or actual. Effects analysis refers to studying the consequences of those failures.
In FMEA, failures are prioritized according to how serious their consequences are, how frequently they occur and how easily they can be detected. An FMEA
also documents current knowledge and actions about
the risks of failures, for use in continuous improvement. FMEA is used during the design stage with an
aim to avoid future failures. Later it is used for process control, before and during ongoing operation of
the process. Ideally, FMEA begins during the earliest
conceptual stage of design and continues throughout
the life of the product or service.
In this work two subsystems are studied: the GPS and the meteorological stations. The first gathers GPS information, measuring the x, y, z coordinates of the point where the box with both systems is anchored. The second subsystem is a meteorological station for the measurement of temperature, atmospheric pressure, humidity, and wind velocity and direction.
SYSTEM DESCRIPTION
The station was built and installed on the top of Aconcagua Mountain in the summer of 2006 (Blanco, unpubl.).
[Figure 2: meteorological subsystem: battery, ISS, Envoy and photovoltaic load regulator.]
METHODOLOGY
4.1.1 Cables
Cold improves the conductivity of the cables but worsens their mechanical properties. Nevertheless, the only cables under tensional requirements are those external to the container. Cables adequate for extreme temperatures (below zero Celsius) were acquired.
4.1.2 Anemometer and weathervane
There is no information for the analysis of the physical failure of the anemometer and the vane under the conditions prevailing on the top of Aconcagua Mountain. For this reason it is considered that the selection fits the requirements.
4.1.3 Interference
Radio frequency interference as a root cause of failure has not been considered.
4.1.4 Other faults
Failures originated by inadequate handling of the equipment have not been considered. Human failures during the installation of the station have not been considered either.
At the top of Aconcagua Mountain, reflexes and cognitive activities are degraded by the lack of oxygen. This process can lead to errors, which the design has tried to diminish, if not eliminate, by reducing the decision making required during assembly.
SUCCESSFUL STATES

Recalling that the most important point is the safeguarding of the satellite data process, the following states of success have been considered:
The harvesting and storage device (NetRS Trimble) and the transmission device (Freewave) in operation.
The harvesting and storage device (NetRS Trimble) and the transmission device (Freewave) in delay. This case would occur in the event of climatic conditions such that for more than 3 days they prevent the generation of electrical energy from the
Table 1. Components.

[Columns: #, ID, description, failure mode; rows recovered only for the battery components.]

6 REMARKS

6.1 Batteries
It is considered that both the charging and the discharging of the batteries are strongly influenced by the temperature variation (Enersys Inc. 2002, Enersystem 2005). A battery of the type used in the Aconcagua station stops accepting charge when the temperature descends to −15 °C, the point at which, with a charge below 50%, the battery begins to freeze. Finally, from −20 °C the possibility of discharging the battery stops.
Under sustained conditions of extreme low temperatures, only the electrical generation of the solar panels will be able to put the various devices into operation; it will not achieve the charging of the batteries (Batteryworld 2005).
Conservatively, the heating of the atmosphere inside the container was not considered, due to the lack of suitable models to simulate its effects (Cirprotec 2005).
[Table 1, continued; recoverable entries: failure modes "open circuit" and "frost"; effect on the system: none; cause of failure: connectors; detection: none; mitigation/restoration: redundant device, and none (solar panels are available).]

FMEA TABLE

The columns of the FMEA table are: component ID; component description or function; failure mode; effect on the system; cause of failure; detection way; mitigation/restoration.

6.2 Mitigation/restoration
RESULTS
The results of the analysis can be grouped into suggestions of design changes and recommendations of another nature.
8.1 Suggestions
Provision of primary energy to the Trimble device
(GPS) from the set of 3 redundant batteries. In the
original design energy was provided from this set to
the secondary connector exclusively.
Provision of secondary energy to the Trimble device
(GPS) from the set of 2 redundant batteries.
Recommendations
CONCLUSIONS
Broadcast of data was initiated successfully; nevertheless, with the first storm the station stopped functioning. In the summer of 2007 the cause of the failure was determined: a change carried out during the installation on the top of the mountain. The solar panels were covered by a metallic fabric. The fabric allowed the snow to accumulate and to resist defrosting.
The results showed that the methodology of analysis was correct: none of the residual failure modes related to the original design affected the station.
The meteorological station must be re-examined to improve availability and to increase storage capability and remote management.
ACKNOWLEDGEMENTS

To Engs. L. Euillades & M. Blanco (personnel of the Satellital Image Division of the CEDIAC Institute) for supplying us with the relevant information to accomplish this study.
REFERENCES
Blanco, M. 2005. Circuito de Conexiones del Proyecto Aconcagua, Documento Interno. Instituto CEDIAC, Facultad de Ingeniería, Universidad Nacional de Cuyo. Mendoza, Argentina.
Batteryworld 2005. Understanding batteries. Battery World. www.batteryworld.com.au.
CIRPROTEC 2005. Protecciones de señales de radiofrecuencia. www.cirprotec.com.
Enersys Inc 2002. Genesis NP 12-12, Sealed Rechargeable Lead-Acid Battery. Specification sheet. USA.
Enersystem 2005. Catálogo de baterías YUASA. Series NP/NPH/NPX. Selladas y Recargables, Baterías de Plomo-Ácido.
FT 2005. Freewave, FGR Ethernet Series. Specification sheet. Freewave Technologies, Boulder, Colorado, USA.
Núñez Mc Leod, J.E. & Rivera, S. Estación de Recolección de Información GPS y Meteorológica Aconcagua. Mímico de Conexiones. CEDIAC-GD-IN002-06. Instituto CEDIAC, Facultad de Ingeniería, Universidad Nacional de Cuyo.
Solartec S.A. 2005a. Regulador de Carga Solartec R5 y R8. Manual del Usuario. Buenos Aires, Argentina.
Solartec S.A. 2005b. Generadores Eléctricos KS5-KS7-KS10-KS12-KS16-KS20. Manual del Usuario. Buenos Aires, Argentina.
TNL 2003. NetRS GPS Receiver User Guide. Trimble. Trimble Navigation Limited. USA.
TNL 2005. Estación de referencia GPS Trimble NetRS. Trimble Navigation Limited. Hoja de datos. USA.
R. Spitsa
Institute of Geography, Kiev, Ukraine
S. Kobelsky
Strength Problem Institute, Kiev, Ukraine
ABSTRACT: The GIS principal scheme includes a database of the technical state of pipeline sections and an analytic block for making the corresponding decisions. A distinctive feature of the GIS is the opportunity to estimate pipeline accident risk taking into account: prediction of the pipelines' remaining life using statistical treatment of pipeline thickness measurements in pits and/or during in-line inspection; determination of the stress-strain state of typical pipeline elements with corrosive defects using FEA; and estimation of hazardous geodynamical processes capable of upsetting the balance between nature and the man-made system in the pipeline territory. At present, a demo version of the system is realized for one pipeline of a Ukrainian oil producing company.
INTRODUCTION

Efficient technical state monitoring is a major condition for the safe operation of pipeline systems. The continuing aging of pipelines affects their actual reliability level and is accompanied by a rising accident rate (Maiorova 2005). Under certain conditions, the growth of operational life, in combination with the impact of thermal and mechanical loads and the influence of the environment, can trigger corrosion and flow-accelerated processes and the saturation of the base metal by new chemical elements. As a result, defects that had an allowable size at the beginning of operation may start to develop. Besides that, the failure probability of some pipeline sections increases as a consequence of bulging and corrugation forming, and of stress rising due to landslides and mechanical damage. Decisions on operation extension should therefore be made differentially, based on the particular operating conditions of the pipeline section and its technical state. These circumstances, together with the change from traditional scheduled pipeline technical service to service according to technical state, predetermined the GIS principal scheme, which includes a database of the technical state of pipeline sections and an analytic block for making the corresponding decisions.
MapInfo was used as the base software product for the GIS development. Its main advantages are low cost, tolerance of hardware resources and ease of use.
DATABASE DESCRIPTION
ANALYTICAL BLOCK
Predicted category of hazard consequence:
III: Low
II: Moderate
I: Severe
B: Very severe
Table 3.

Probability   Description
Low           Essentially impossible
Moderate      Potentially possible
High          Might happen during operation time
Very high     It is likely that the event has occurred at the site if the facility is more than a few years old
Table 1.

Characteristic                 Value/Reference
Point ID
Distance along pipeline, m
Height, m
Technological No.
Diameter × wall thickness
Assembling date
Last inspection date
State
Drawing
Photo
Table 4.

Probability \ Severity   Low   Moderate   Severe   Very severe
Low                      1     1          2        3
Moderate                 1     2          3        3
High                     2     3          3        4
Very high                3     3          4        4
Table 5.

Residual life, years   Probability description
>30                    Essentially impossible
20–30                  Potentially possible
10–20                  Might happen during operation time
<10                    It is likely that the event has occurred at the site if the facility is more than a few years old

[Figure 1: decision scheme: technical documentation and inspection results feed the residual life estimation, the hazard consequence categories, the pipeline section failure probability point estimation by major criteria (A1, A2, ..., A10), the separated criterion risk estimation and the total failure point estimation.]
Risk estimation is performed for each of the above-mentioned criteria separately. After that, the total failure risk ST of the pipeline section is defined by adding the 10 separate estimations obtained for each criterion:

ST = A1 + A2 + · · · + A10    (2)

Tp = …    (3)
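As a sketch, each criterion's probability/severity pair can be scored through the risk matrix of Table 4 and the ten scores added as in Equation 2 (the matrix row/column assignment and the example criteria are our assumptions):

```python
# risk matrix of Table 4: (probability, severity) -> points (assumed row/column order)
MATRIX = {
    ("Low", "Low"): 1, ("Low", "Moderate"): 1, ("Low", "Severe"): 2, ("Low", "Very severe"): 3,
    ("Moderate", "Low"): 1, ("Moderate", "Moderate"): 2, ("Moderate", "Severe"): 3, ("Moderate", "Very severe"): 3,
    ("High", "Low"): 2, ("High", "Moderate"): 3, ("High", "Severe"): 3, ("High", "Very severe"): 4,
    ("Very high", "Low"): 3, ("Very high", "Moderate"): 3, ("Very high", "Severe"): 4, ("Very high", "Very severe"): 4,
}

def total_risk(criteria):
    """Equation 2: ST = A1 + ... + A10, each Ai scored through the matrix."""
    return sum(MATRIX[c] for c in criteria)
```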
eaj
ey
(4)
Geodynamical stability

Numerous studies carried out in recent years have demonstrated the negative influence of soil subsidence, landslides, karst and erosion processes, and impoundment,
Figure 3.
[Figure 4: residual life calculation scheme: inspection results for uniform and local corrosion feed the static calculation of the pipeline residual life Tp and of the damaged section residual life Tss, and the cyclic calculation of the damaged section residual life Tsc; the governing value is min(Tp, Tss, Tsc).]

[Figure 6.]

Figure 5. Volumetric surface defect modeled with a half-ellipsoid: c, length; b, width; d, depth; h, pipe wall thickness.
moderately active, low-active and non-active structures, and the territory geodynamic stability index is one of the criteria of the integrated estimate of the pipeline technical state.
5 CONCLUSIONS
REFERENCES
DSTU 4046-2001. Technological equipment of oil-refining, petrochemical and chemical manufacture. Kiev: Derjstandart of Ukraine.
F.I. Khan
Faculty of Engineering & Applied Science, Memorial University, St. John's, NL, Canada
P.R. Amyotte
Department of Process Engineering & Applied Science, Dalhousie University, Halifax, NS, Canada
ABSTRACT: The design of the layout of chemical processes offers significant opportunities for improving
safety through the implementation of inherent safety principles. However, the application of inherent safety as
a guideline for layout design is seldom explored and practical support tools for the design activity are totally
missing. In the current contribution, a metric is introduced to assess the safety level in the analysis of layout
options. Several parameters are mapped by the indices in the metric: the hazard of the plant units, the domino
hazard of the layout plan, the application of inherent safety principles, the use of traditional risk control devices
and the safety economics. The framework is simple, auditable and easily applicable. Specific support rules are
defined in order to reduce subjectivity in the assignation of scores. The application to a case study shows that the assessment yields effective indications through a swift procedure.
INTRODUCTION
Inherent safety aims for the elimination, or the reasonably practicable reduction, of the hazards in a system (Kletz 1978). Moreover, inherently safer systems can reduce the high costs usually associated with the full plant lifecycle, from hazard management to regulatory liabilities and safety system maintenance (Kletz 1998, Khan & Amyotte 2005, Gupta et al. 2003). The basic principles of inherent safety can be described through a set of key guidewords (Hendershot 1999): minimization, substitution, attenuation, simplification, and limitation of effects. The guidewords attenuation and limitation of effects can generally be combined as moderation. Layout design is, however, an area where the distinction of the latter two guidewords is deemed beneficial.
The inherent safety guidewords can be applied both
at different stages of the system design cycle and at
different levels of the safety strategies for control.
In the present work, the design stage of concern is
layout design, with particular reference to the early
phases (design of process items and utilities location,
building locations, on-site roads and access routes,
etc.). The safety strategies for control can be conventionally and hierarchically classified as inherent,
passive (engineered), active (engineered), and procedural. Application of the inherent safety guidewords to
the inherent safety strategies themselves is obviously
the most effective and straightforward approach, and
has received the majority of attention in prior development of assessment tools (Khan & Amyotte 2003).
However, the guidewords can also be applied at the other levels of the hierarchy, for example leading to add-on measures that are more reliable, effective and thus, in a broad sense, inherently safer. In the current work, both strictly inherent measures and passive measures have been investigated for their ability to improve the safety performance of the layout
plot. The perspective of layout design considered here,
therefore, is one in which the entire set of items placed
in the facility (no matter if they are pieces of equipment or blast walls) contributes in defining the global
hazard of the plant as a system (e.g. the potential for
maximum domino effect escalation). This shift in perspective justifies the choice to consider both safety
strategy levels (inherent and passive) in the current
analysis. Active and procedural safety strategies are
not considered here because, by their definition, they
do not generally belong to the first stages of layout
design.
The design of plant layout requires consideration
at once of several different issues: process requirements, cost, safety, services and utilities availability,
plant construction, regulations, etc. Furthermore, constraints related to choices in previous design steps
domino escalation events, considering the integrated action of inherent and passive strategies.
Note that this is a different aspect than the one
considered for the applicability of the attenuation guideword. With attenuation, the focus
was on reduction of the embedded hazard, such
reduction being attained only by inherent measures. With limitation of effects, the focus is on
the effects themselves that can be controlled by
both inherent and passive strategies.
ii. limitation of the damage potential to target
buildings: appropriate location of buildings
(workshops, administrative buildings, trailers,
etc.) and control or emergency structures (control room, medical centre, etc.) in the layout plan
so as to limit harm to people and impairment of
accident response.
iii. limitation of the affected area: limitation (generally by passive measures) of the spatial area
affected by the consequences of an accidental
event, regardless of the effects on other units,
buildings, etc.
The conclusion from the above analysis is that, out of the five guidewords, three (attenuation, simplification and the two limitation guidewords) are of particular interest for the safety assessment of layout plans.
I2SI = ISPI / HI    (1)
I2SI_system = ( ∏_{i=1}^{N} I2SI_i )^{1/2}    (2)
The Inherent Safety Potential Index (ISPI) is comprised of two sub-indices: an Inherent Safety Index (ISI) and a Hazard Control Index (HCI). The ISPI for single units is computed as shown in Equation 3:

ISPI = ISI / HCI    (3)
HI = DI / PHCI    (4)
[Figure 1: monographs (a), (b) and (c), converting a reference index (0–1, abscissa) into an index value (0–100, ordinate) for the corresponding guideword.]

3.2
ISI
Non-applicable: 0
Minor complication & no substantial hazard increase: 10
Minor complication & hazard increased moderately: 20
Complicated moderately & moderate hazard increase: 30
Complicated moderately & hazard increased: 40
Complicated & hazard increased: 50
Complicated & new hazards introduced: 60
Complicated to large extent & moderate hazard incr.: 70
Complicated to large extent & hazard increase: 80
Complicated to large extent & significant hazard incr.: 90
Complicated to large extent & hazard highly increased: 100
where the subscripts refer to the considered guidewords (a for attenuation, si for simplification and
l for limitation of effects). Equation 5 allows negative values for the simplification parameter, although
limiting to 0 the minimum of the final ISI. In
the subsequent analysis, the minimum of the ISI range
(i.e. ) is set equal to the minimum of HCI ( = 5),
in order to balance the calculation of ISPI.
3.2.1 ISI for attenuation
Figure 1 reports the monograph proposed to convert the extent of applicability of the guideword attenuation into an ISI value. The extent of applicability of this guideword is assessed mainly as the ability of the layout option to reduce the hazard potential from domino effects. To overcome subjectivity in the assessment, a reference index based on the ratio of the Domino Hazard Index (DHI) between the unit of concern and the base option is adopted:
Reference Index = DHI_option / DHI_base case    (6)
DHI_i = Σ_k max_h (DHS_i,k,h)    (7)
combined by Equation 8:

[Equation 8: combination of the positive contributions a_i,j (a_i,j > 0).]

CSCI = C_ConvSafety / C_Loss    (11)

ISCI = C_InhSafety / C_Loss    (12)

[Equations 9–10: definition of ISI_lb from the distances D_i,j and the damage distances B_i.]

LSI_option = …    (14)
where Di,j is the distance between the i-th unit and the
j-th building, and Bi is the maximum damage distance of the i-th unit for fire, explosion and acute toxic
effects.
iii) Limitation of the affected area (ISIla ) accounts
for the effects of passive measures to decrease the
area susceptible to dangerous consequences, no matter if particular structures are located there (e.g. units
or buildings), but simply because final targets (e.g.
people, environment) can potentially be present. The
suggested guideline for quantitative assessment of this
aspect is based on the percentage decrease of damage
area compared (i.e. ratio) to the same unit in the base
option. The parameter used in the ratio is thus the
affected area (AA) exposed to the consequence from
the unit considered (e.g. if no protective devices exist,
this is the area encompassed by the damage radius; if
protective devices exist, the upwind protected areas
are subtracted).
s_k …    (15)

[Figure: weighting factor s_k (0.0–1.0) as a function of DHS_i,k.]
EXAMPLE OF APPLICATION
[Figure: layout options (a) and (b), showing the placement of the numbered units with a 50 m scale bar.]
Table 2. Summary of the principal indices evaluated in the assessment. AA: acrylic acid, AcA: acetic acid, Sol: solvent, P: propylene.

Option 1 (base case)
#   Unit          ISI    HCI   DI   PHCI   I2SI     CSCI   LSI
1   Reactor       5.0    42    47   92     0.23     0.46   0.46
2   AA storage    5.0    24    36   56     0.32     0.28   0.28
3   AA storage    5.0    24    36   56     0.32     0.28   0.28
4   AA storage    5.0    24    36   56     0.32     0.28   0.28
5   AcA storage   5.0    24    38   56     0.31     0.28   0.28
6   AcA storage   5.0    24    38   56     0.31     0.28   0.28
7   Sol storage   5.0    24    14   55     0.82     0.15   0.15
8   P storage     5.0    36    40   73     0.25     0.66   0.66
9   P tanker      5.0    36    27   73     0.38     0.18   0.18
    Total                             0.0077

Option 2
#   Unit          ISI    HCI   DI   PHCI   I2SI     ISCI   LSI
1   Reactor       5.0    42    47   92     0.23     0.39   0.19
2   AA storage    44.0   24    36   56     2.85     0.25   0.46
3   AA storage    32.6   24    36   56     2.11     0.25   0.44
4   AA storage    33.0   24    36   56     2.14     0.25   0.44
5   AcA storage   35.4   24    38   56     2.17     0.25   0.45
6   AcA storage   40.6   24    38   56     2.49     0.25   0.46
7   Sol storage   68.7   24    14   55     11.3     0.14   0.81
8   P storage     5.0    36    40   73     0.25     0.63   0.57
9   P tanker      5.0    36    27   73     0.38     0.18   0.18
    Total                             4.1

Option 3
#   Unit          ISI    HCI   DI   PHCI   I2SI     ISCI   LSI
1   Reactor       5.0    42    47   92     0.23     0.33   0.27
2   AA storage    50.8   24    36   56     3.29     0.25   0.47
3   AA storage    50.8   24    36   56     3.29     0.25   0.47
4   AA storage    50.8   24    36   56     3.29     0.25   0.47
5   AcA storage   68.5   24    38   56     4.20     0.25   0.50
6   AcA storage   68.5   24    38   56     4.20     0.25   0.50
7   Sol storage   94.9   24    14   55     15.5     0.15   0.82
8   P storage     5.0    36    40   73     0.25     0.71   0.52
9   P tanker      60.5   36    27   73     4.54     0.15   0.47
    Total                             53.0
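The per-unit values in Table 2 follow from Equations 3 and 4 together with Equation 1 (I2SI = (ISI/HCI)/(DI/PHCI)), and the reported totals are consistent with the square root of the product of the unit values; a sketch reproducing the Option 1 (base case) column:

```python
import math

def i2si(isi, hci, di, phci):
    """I2SI = ISPI / HI, with ISPI = ISI/HCI and HI = DI/PHCI."""
    return (isi / hci) / (di / phci)

# Option 1 (base case) rows of Table 2: (ISI, HCI, DI, PHCI)
units = [(5.0, 42, 47, 92), (5.0, 24, 36, 56), (5.0, 24, 36, 56),
         (5.0, 24, 36, 56), (5.0, 24, 38, 56), (5.0, 24, 38, 56),
         (5.0, 24, 14, 55), (5.0, 36, 40, 73), (5.0, 36, 27, 73)]
values = [i2si(*u) for u in units]
total = math.sqrt(math.prod(values))  # close to the reported total of 0.0077
```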
CONCLUSIONS
Layout plays a key role in defining the safety of a process plant. An attempt to bring inherent safety criteria
into layout design has been presented in the current
paper. A novel indexing approach was developed to
guide inherently safer choices in the early phases of
layout design. Application of the proposed safety
K. Hausken
Faculty of Social Sciences, University of Stavanger, Norway
ABSTRACT: Protecting against intentional attacks is fundamentally different from protecting against accidents or natural cataclysms. Choosing the time, place, and means of attacks, the attacker always has an advantage over the defender. Therefore, the optimal defense policy should take into account the attacker's strategy. When applied to multi-state systems, the damage caused by the destruction of elements with different performance rates will be different. Therefore, the performance rates of system elements should be taken into account when the damage caused by the attack is estimated. This paper presents a generalized model of the damage caused to a complex multi-state system by an intentional attack. The model takes into account a defense strategy that presumes separation and protection of system elements. A min-max defense strategy optimization methodology is suggested, in which the defender chooses a separation-protection strategy that minimizes the damage that the attacker can cause by the most harmful attack.
Keywords: multi-state system, survivability, defense, attack, minmax, universal generating function.
1 INTRODUCTION

2 THE MODEL

2.1 Basic definitions

2.2

[Equations (1)-(7), which define the separation of the system elements into protection groups, the associated numbering and costs, and the damage function d(θ, φ, F), are garbled in the source.]
In the case when the attacker has perfect knowledge about the system and its defenses, the attacker's strategy Θ maximizes the expected damage, which involves y_nm, the inherent value of the m-th protection group (PG) of component n, and v_n(θ_nm, φ_nm), the expected vulnerability (destruction probability) of PG m in component n for attack θ_nm and protection φ_nm. [Equations (5) and (8), which give the attacker's optimal strategy, are garbled in the source.]

We consider the case when the defender has a limited budget b. The defender builds the system over time, and the attacker takes the defense as given when choosing the attack strategy. Therefore, we analyze a two-period minmax game in which the defender moves in the first period and the attacker moves in the second period. This means that the defender chooses a strategy in the first period that minimizes the maximum damage that the attacker can cause in the second period.
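The two-period minmax logic described above can be sketched by brute force for a toy instance. This is not the paper's algorithm (the paper uses multi-state damage models and, later, genetic algorithms); all values and the single-target attacker are illustrative assumptions.

```python
from itertools import product

# Hypothetical toy instance: the defender assigns each of 3 elements a
# protection level (0 = unprotected, 1 = protected) subject to a budget;
# the attacker then strikes the single element whose destruction yields
# the largest expected damage. All numbers are illustrative.
VALUES = [10.0, 6.0, 3.0]    # inherent value y of each element
VULN = {0: 0.9, 1: 0.3}      # destruction probability given protection level
COST = {0: 0.0, 1: 1.0}      # cost of each protection level
BUDGET = 2.0

def worst_case_damage(protection):
    """Attacker's best response: expected damage of the most harmful attack."""
    return max(v * VULN[p] for v, p in zip(VALUES, protection))

def minmax_defense():
    """Defender minimizes the maximum damage the attacker can cause."""
    feasible = (s for s in product((0, 1), repeat=len(VALUES))
                if sum(COST[p] for p in s) <= BUDGET)
    return min(feasible, key=worst_case_damage)

best = minmax_defense()
print(best, worst_case_damage(best))  # protects the two most valuable elements
```

With the budget covering two protections, the defender protects the two highest-value elements, leaving the attacker a worst case of 10.0 × 0.3 = 3.0.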
3 ATTACK MODELS

[Equations (9)-(12) are garbled in the source; they express the binary protection choices φ_nm ∈ {0, 1}, the budget constraint Σ_n Σ_m φ_nm C_nm ≤ B on the total protection cost C(Φ), and the corresponding attack strategy.]
u_θ(z) = Σ_{j=0}^{J} α_j z^{θ_j}   (14)

U_{(θ,ψ)}(z) = u_θ(z) ⊗_φ u_ψ(z) = (Σ_{j=0}^{J} α_j z^{θ_j}) ⊗_φ (Σ_{i=0}^{I} β_i z^{ψ_i}) = Σ_{j=0}^{J} Σ_{i=0}^{I} α_j β_i z^{φ(θ_j, ψ_i)}   (15)

This polynomial represents all of the possible mutually exclusive combinations of realizations of the variables θ and ψ, by relating the probability of each combination to the value of the function φ(θ, ψ) for this combination.
In our case, the u-functions can represent the performance distributions of individual system elements and of their groups. Any element k of component n can have two states: functioning with nominal performance x_nk (with probability p_nk), and total failure (with probability 1 − p_nk). The performance of a failed element is zero. The u-function representing this element is therefore

u_nk(z) = p_nk z^{x_nk} + (1 − p_nk) z^0   (16)

[Equations (17) and (18) are garbled in the source.]
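The universal generating function technique described above can be sketched in a few lines: a u-function is a map from performance values to probabilities, and composition enumerates all combinations of realizations through a structure function. The dictionary representation and the parallel (sum) structure function are illustrative choices, not prescribed by the paper.

```python
# Minimal sketch of the universal generating function (UGF) technique:
# a u-function is a map {performance: probability}; composition applies
# a structure function phi to every pair of realizations.
def u_element(p, x):
    """Two-state element: nominal performance x w.p. p, failed (0) w.p. 1-p."""
    return {x: p, 0: 1.0 - p}

def compose(u1, u2, phi):
    """U(z) = u1(z) composed with u2(z) under phi, merging equal performances."""
    out = {}
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            g = phi(g1, g2)
            out[g] = out.get(g, 0.0) + p1 * p2
    return out

# Two elements in parallel: performances add.
U = compose(u_element(0.9, 5), u_element(0.8, 3), lambda a, b: a + b)
print(U)  # probability mass on performances 8, 5, 3 and 0
```

The same `compose` routine handles series structures by passing `min` as the structure function, which is what makes the UGF approach convenient for multi-state systems.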
Double-GA
CONCLUSION
ABSTRACT: A multicriteria decision model for risk assessment and risk ranking of sections of natural gas pipelines is proposed. The model is based on Multi-Attribute Utility Theory (MAUT). The reasons for a multicriteria approach to risk assessment are discussed. The model takes into account the human, environmental and financial dimensions of the impacts of pipeline accidents. Therefore, three dimensions of impact and the need to translate decision-makers' preferences into risk management decisions are highlighted. The proposed model approaches these factors by using a multi-attribute utility function, which is combined with probability distributions for consequences and with prior probabilities of accident scenarios in order to produce a multicriteria risk measurement. Pipeline hazard scenarios and risk assessment on natural gas pipelines are discussed. To help natural gas companies prioritize critical sections of pipeline, the multi-attribute approach also allows sections of pipeline to be ranked into a risk hierarchy. In order to illustrate the use of the model, a numerical application based on a real case study is presented.
INTRODUCTION
Risk assessment is a widely used tool for identifying and estimating technological risks, such as in
pipeline facilities. It is traditionally based on the survey of probabilities related to human fatalities (Jo &
Ahn 2005). However, accidents recorded around the
world indicate that, due to the complexity of their
consequences, the environmental and financial dimensions of accident impacts must also be considered, in
order to avoid incomplete and inadequate approaches
to pipeline risk assessment. These arise from the
need to reconcile the concerns of society, the State
and gas companies regarding the safe operation of
pipelines.
The decision-making process involving risks in pipelines is complex. It often involves analyzing conflicting aspects, usually produces long-term results, and is not rarely performed against a background of scarce information. Despite these characteristics, several methodologies used for risk assessment are still vague, subject to doubtful interpretations, and not effective in supporting reliable decisions, such as prioritizing pipeline sections so that, as appropriate, they might receive supplementary safety investments within a limited budget.
According to Papadakis (2000), the limitations
of traditional deterministic techniques deployed for
risk assessment may be overcome by probabilistic
approaches such as risk ranking, which combines
probabilities and consequences in a single measure
of risk. This has been shown to be more adequate for
PROBLEM ANALYSIS

Among the several modes of transporting dangerous substances, such as natural gas, pipelines are one of the safest options (Papadakis 2000). Their accident frequencies are lower than those related to road or rail haulage. However, even with low probabilities, the consequences of accidents arising from a natural gas leakage cannot be neglected. These consequences are often associated with a wide-ranging set of monetary and non-monetary impacts. By taking into
2.1.3 Scenario θ3: CVCE
When gaseous mixtures accumulate in confined spaces and undergo delayed ignition, a scenario called Confined Vapor Cloud Explosion (CVCE) happens. This is very dangerous to buildings within the area where the gas is not outside its flammability limits, due to the possibility of the confined vapor cloud exploding inside these buildings or in underground spaces (Sklavounos & Rigas 2006).

where π_i(θ) are the probabilities of the accident scenarios and of the operational normality scenario for a given pipeline section s_i.
values will be influenced by the physical and geographical particularities of each section of pipeline. These assumptions allow the probabilities P(h|θ, s_i), P(e|θ, s_i), and P(f|θ, s_i) to be separately estimated by means of the probability density functions f(h|θ, s_i), f(e|θ, s_i) and f(f|θ, s_i), respectively. The above probabilities can be obtained by eliciting experts' knowledge or by using the results of object exposure analyses. These probability functions may have several shapes, which depend on the mathematical models and on the computer simulation tools adopted.

As presented in Equation 1, for the three-dimensional set of consequences, the utility function U(h, e, f) is combined with the probability density function f(h, e, f|θ, s_i). Equation 1 becomes:

L(θ, s_i) = ∫∫∫ U(h, e, f) f(h, e, f|θ, s_i) dh de df   (4)

As previously stated, an additive multi-attribute utility function and the independence of P(h|θ, s_i), P(e|θ, s_i), and P(f|θ, s_i) have been assumed (Brito & Almeida 2008). Thus, Equation 4 becomes:

L(θ, s_i) = ∫_H f(h|θ, s_i) k_1 U(h) dh + ∫_E f(e|θ, s_i) k_2 U(e) de + ∫_F f(f|θ, s_i) k_3 U(f) df   (5)

r(s_i) = Σ_θ π_i(θ) [ ∫_H f(h|θ, s_i) k_1 U(h) dh + ∫_E f(e|θ, s_i) k_2 U(e) de + ∫_F f(f|θ, s_i) k_3 U(f) df ] + (1) π_i(θ_N)   (6)
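The combination of consequence densities, utilities and scenario probabilities can be sketched numerically. The exponential disutility form, the exponential consequence densities, and every parameter value below are illustrative assumptions, not the paper's elicited model; the structure (density times scaled utility, integrated per dimension, weighted by the scenario probability) follows Equations 5 and 6.

```python
import math

# Sketch of the multi-attribute risk measure: for one scenario theta and one
# section s_i, combine assumed consequence densities f(.|theta, s_i) with
# assumed single-attribute utilities U(.) and scaling constants k1..k3.
def disutility(x, lam):
    """Assumed exponential (dis)utility of a consequence value x."""
    return 1.0 - math.exp(-lam * x)

def expected_term(density, lam, k, hi, n=2000):
    """k * integral of density(x)*U(x) dx over [0, hi], trapezoidal rule."""
    h = hi / n
    total = 0.0
    for i in range(n + 1):
        x = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * density(x) * disutility(x, lam)
    return k * total * h

def expo_pdf(rate):
    """Illustrative exponential consequence density."""
    return lambda x: rate * math.exp(-rate * x)

# Human, environmental and financial terms for one scenario, then weighted
# by the scenario probability pi to give its contribution to r(s_i).
pi = 0.01
L = (expected_term(expo_pdf(0.5), 0.12, 0.4, 50.0)       # human dimension
     + expected_term(expo_pdf(0.01), 0.0017, 0.3, 2000.0)  # environmental
     + expected_term(expo_pdf(1e-5), 3.5e-7, 0.3, 2e6))    # financial
r_contribution = pi * L
print(r_contribution)
```

Summing such contributions over all scenarios and adding the normality term yields the section's risk measure r(s_i), which is what the risk ranking uses.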
[Tables 1 and 2: accident scenario probabilities and consequence parameters for pipeline sections (P1)-(P5); the numeric columns are not recoverable from the source.]
Tables 1 and 2. In order to perform an object exposure analysis, several simulations on computer tools were performed for each scenario θ and section of pipeline s_i.

The estimation of the probability density functions demanded intensive and focused efforts. Some discussions were held with the company support group and the consultants on how to model f(h|θ, s_i), f(e|θ, s_i) and f(f|θ, s_i). Based on the simulation results, these consequence probabilities were fitted to some well-known families of probability density functions, such as the Lognormal, Gamma and Exponential. The parameters of these functions were changed from section to section based on technical knowledge of the particularities of each section.

In order to obtain a multi-attribute utility function U(h, e, f), consultants put several questions to the DM, based on a structured elicitation protocol (as presented in Keeney & Raiffa 1976). The decision-maker was asked to make choices between deterministic consequences and lotteries involving these consequences. This is a process which allows some utility values to be obtained for the human, environmental and financial dimensions of consequences. These values were plotted and used for exponential curve regressions, which have been shown to represent the decision-maker's preferences satisfactorily. From this elicitation procedure, the following shape parameters were obtained for the exponential utility functions:

For U(h): λ_h = 0.12 (R² = 0.90);
For U(e): λ_e = 0.0017 (R² = 0.88);
For U(f): λ_f = 3.5 × 10⁻⁷ (R² = 0.92).

The scaling constants k_1, k_2 and k_3 presented in Equation 3 were also obtained from similar elicitation procedures (Raiffa 1970, Keeney & Raiffa 1976). These procedures were also formed by probabilistic
Table 3. Table 4. [Multi-attribute loss and risk values per section; the numeric columns are not recoverable from the source.]

Risk ranking r(s_i) of the pipeline sections:

Section   r(s_i)
S3        0.176
S2        0.164
S1        0.097
E2        0.095
N4        0.094
N3        0.083
W1        0.079
N2        0.069
N1        0.047
E1        0.033
CONCLUSIONS

This paper has approached risk assessment of natural gas pipelines from a multicriteria perspective. It has argued that risk management involves complex decision-making processes which must reconcile concerns from different stakeholders: society, the State and gas companies. Therefore, it tackles how to
REFERENCES

Almeida, A.T. de & Bohoris, G.A. 1996. Decision theory in the maintenance strategy of a standby system with gamma distribution repair time. IEEE Transactions on Reliability 45(2): 216-219.
Bartenev, A.M., Gelfand, B.E., Makhviladze, G.M. & Roberts, J.P. 1996. Statistical analysis of accidents on the Middle Asia-Centre gas pipelines. Journal of Hazardous Materials 46: 57-69.
Berger, J.O. 1985. Statistical Decision Theory and Bayesian Analysis. New York: Springer.
Brito, A.J. & Almeida, A.T. de. 2008. Multi-attribute risk assessment for risk ranking of natural gas pipelines. Reliability Engineering and System Safety. doi 10.1016/j.ress.2008.02.014.
Dziubinski, M., Fratczak, M. & Markowski, A.S. 2006. Aspects of risk analysis associated with major failures of
A. Rauzy
IML/CNRS, Marseille, France

J.-P. Signoret
Total SA, Pau, France

ABSTRACT: The aim of this paper is to give a new insight into some fundamental concepts of the IEC 61508 standard. In the first part, we examine the low demand and high or continuous demand modes of operation. We study how to determine the accident frequency when the system under study is made of one element under control and its associated safety instrumented system. In the second part, we study the relationship between the average probabilities of failure on demand and the risk reduction factor. Finally, we consider the probability of failure per hour of a safety instrumented system, and we propose different ways to compute it.

1 INTRODUCTION

2.1 Official definitions
[Figure 1: Markov model of the element under control and its SIS, with OK states (1, 2, 3), a STOP state and a KO/ACCIDENT state (4). Equations (1)-(5), relating the demand frequency w_d, the SIS failure frequency w_SIS, the accident frequency w_acc and the SIS mean down time MDT_SIS, are garbled in the source.]

The above considerations show that the criterion suited to discriminating the low demand mode of operation from the high or continuous demand mode is obtained by comparing the product w_d · MDT_SIS to unity, as follows.
2.3 A generic model for the accident frequency

Low and high demand modes of operation are considered separately in the IEC 61508 standard. This point of view is simplistic, because any SIS can be successively in both of them. As an illustration, consider a shutdown valve which has to close when an overpressure occurs in an upstream part of a pipe. The considered SIS works in accordance with the low demand mode until the overpressure arises; but, after that and until the overpressure is exhausted, the SIS works in the continuous mode of operation to protect the pipe against this overpressure.

This general configuration must be taken into account when the EUC is not put instantaneously into a safe state after the demand. This corresponds to state 2 of the model depicted in Figure 2, which extends the previous one (Fig. 1). The meaning of all the states and parameters used in this new model remains the same as in the previous one. Here d stands for the inverse of the mean latency time of the demand, and state 2 is the demand state.
Table 1. A discriminating criterion.

Mode of operation    Condition
Low demand           w_d · MDT_SIS < 1
High demand          w_d · MDT_SIS ≥ 1
Continuous demand    w_d · MDT_SIS ≫ 1
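The discriminating criterion of Table 1 is mechanical enough to sketch directly. The cut-off used for "much greater than 1" is an illustrative assumption (the standard and the paper only give the qualitative condition).

```python
# Sketch of Table 1's criterion: classify a SIS's mode of operation from the
# demand frequency w_d (per hour) and the SIS mean down time MDT_SIS (hours).
# The threshold for ">> 1" (taken here as 10) is an illustrative assumption.
def mode_of_operation(w_d, mdt_sis, much_greater=10.0):
    x = w_d * mdt_sis
    if x < 1.0:
        return "low demand"
    if x >= much_greater:
        return "continuous demand"
    return "high demand"

print(mode_of_operation(1e-4, 500.0))  # w_d * MDT_SIS = 0.05 -> low demand
```

The shutdown-valve example of Section 2.3 would move from the first branch to the last as the overpressure persists, which is exactly why the paper argues the two modes must be treated in one model.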
[Figure 2: Extended Markov model including the demand state: OK (1), DEMAND (2), OK (3), STOP (4) and KO/ACCIDENT (5), with MDT_SIS governing the repair transition.]
3 PFDavg VS RRF

w_acc = (1/T) ∫_0^T w(t) dt   (6)

[The detailed expression of w_acc in terms of the state probabilities p_2(t) and p_3(t), together with equations (8), (9) and (11), is garbled in the source.]

w_acc ≈ PFD_avg · w_d   (7)

w_d / w_acc = 1 / PFD_avg = RRF   (10)

[Figure 3: Two protection layers in series: a demand of frequency w_d passes layer 1 with probability p_1 = PFD_1 (safe outcome with probability 1 − p_1, RRF_1 = 1/p_1), so that p_1 · w_d reaches layer 2, which it passes with probability p_2, giving the accident frequency.]
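The relations in Equations 7 and 10 can be sketched for independent protection layers. The independence assumption and the numeric values are illustrative; the paper itself goes on to show when the simple product of PFDs is and is not exact.

```python
# Sketch of the PFD/RRF relations: for independent protection layers the
# demand frequency is attenuated by each layer's PFD, and the overall risk
# reduction factor is the ratio w_d / w_acc. Values are illustrative.
def accident_frequency(w_d, pfds):
    """w_acc ~= w_d * PFD_1 * PFD_2 * ... for independent layers (Eq. 7)."""
    w = w_d
    for pfd in pfds:
        w *= pfd
    return w

def risk_reduction_factor(w_d, w_acc):
    """RRF = w_d / w_acc = 1 / PFD_avg (Eq. 10)."""
    return w_d / w_acc

w_d = 0.1  # demands per hour (illustrative)
w_acc = accident_frequency(w_d, [1e-2, 5e-2])
print(risk_reduction_factor(w_d, w_acc))  # approx 2000
```

Under independence, the combined RRF is simply the product of the layers' individual RRFs (here 100 × 20).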
3.1 Standard approach

[Equation (13), the standard approximation of the combined PFD_avg as the product of the individual average values built from λ_DU1, λ_DU2 and the test intervals, is garbled in the source.]

3.2 Exact approach

PFD_avg = (1/T_2) ∫_0^{T_2} PFD_1(t) PFD_2(t) dt   (14)

By approximating PFD_1(t) and PFD_2(t) over a proof-test period by λ_DU1 · t and λ_DU2 · t respectively, we have:

[Equations (15) and (16), which carry out this integration over the staggered test intervals and express the resulting λ_DU1 λ_DU2 T_1² term as the inverse of the exact risk reduction factor RRF_ex, are garbled in the source.]

3.3 Conclusion

4 PFH

". . . Determine the required probability of failure of the safety function during the mission time and divide this by the mission time, to give a required probability of failure per hour."

In the sequel, we examine these two pseudo-definitions and propose further investigations in order to converge on an acceptable definition. What does the expression given in the numerator of this ratio mean? A reasonable interpretation is that it is the cumulative distribution function F(t), in other words the unreliability, of the SIS.
PFH = F(T) / T   (18)

PFH = (1 − R(T)) / T = (1 − exp(−λ_avg · T)) / T   (19)

PFH ≈ λ_avg · T / T = λ_avg   (20)

(1/T) ∫_0^T f(t) dt = f_avg   (21)

w_avg = (1/T) ∫_0^T w(t) dt = W(0, T) / T   (24)

[Equations (22) and (23) are garbled in the source.]
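The candidate PFH definitions above can be compared numerically. The constant failure rate, the interval length, and the use of a non-repairable exponential model (for which the unconditional failure intensity coincides with the density) are illustrative assumptions.

```python
import math

# Numerical sketch comparing candidate PFH definitions (Eqs. 18-20, 24) for
# a constant failure rate lam over an interval T. For a non-repairable
# exponential model, F(T) = 1 - exp(-lam*T) and W(0, T) = F(T).
lam = 1e-4   # failures per hour (illustrative)
T = 1000.0   # observation interval in hours (illustrative)

pfh_unreliability = (1.0 - math.exp(-lam * T)) / T  # Eqs. 18-19: F(T)/T
pfh_rate = lam                                      # Eq. 20: asymptotic rate
w_avg = (1.0 - math.exp(-lam * T)) / T              # Eq. 24: here equals F(T)/T

# For lam*T well below 1 the candidates agree to first order.
print(pfh_unreliability, pfh_rate, w_avg)
```

With lam·T = 0.1 the definitions differ by only a few percent; the gap widens as lam·T grows, which is the point of the paper's discussion of what PFH "really" is.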
w_S(t) = Σ_i I_B(S, c_i) · w_i(t)   (25)
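Equation 25 weights each component's failure intensity by its Birnbaum importance. A minimal sketch, using a finite difference on the system unavailability for a 1oo2 structure (the architecture of the paper's example); the unavailabilities and intensities are illustrative.

```python
# Sketch of Eq. 25: the system failure intensity w_S is the sum of component
# failure intensities w_i, each weighted by the Birnbaum importance
# I_B(S, c_i) = dQ_S / dq_i. For a 1oo2 structure Q_S = q_1 * q_2.
def system_unavailability(q):
    q1, q2 = q
    return q1 * q2  # 1oo2: both channels must fail

def birnbaum(q, i, eps=1e-9):
    """Birnbaum importance of component i via a central finite difference."""
    hi = list(q); hi[i] += eps
    lo = list(q); lo[i] -= eps
    return (system_unavailability(hi) - system_unavailability(lo)) / (2 * eps)

q = [0.01, 0.02]   # channel unavailabilities (illustrative)
w = [1e-5, 2e-5]   # channel failure intensities, per hour (illustrative)
w_sys = sum(birnbaum(q, i) * w[i] for i in range(2))
print(w_sys)       # approx 4e-7 per hour
```

The finite difference generalizes to any structure function, which is how fault-tree tools evaluate Equation 25 without symbolic derivatives.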
5 AN ILLUSTRATIVE EXAMPLE

[Figure 4: Fault tree for the dangerous failure of the 1oo2 system (gate G1), combining CCF failures (DD-CCF and DU-CCF, gates G2-G3) with independent failures of channels 1 and 2 (gates G4-G5; basic events e1-e6 for the DD and DU failures).]

[Figure 5: Markov model of the 1oo2 system, with states 2OK, 1KO (DD), 1KO (DU), 2KO (DD) and 2KO (DU), and transitions governed by λ_DD, λ_DU and the CCF factors β and β_D.]

[Equations (26)-(28), which compute PFH_avg from the critical-state probabilities π_i p_i(t) of the model over [0, T], and the associated state/transition matrices, are garbled in the source.]
Table 2. [Model variables: Real-valued LDDIN, LDUIN, LDDCD, LDDCU, BETA, DC, LAMBDA, BETAD and Boolean A_KO, B_KO, CCF_DD, CCF_DU, TEST, 1oo2_KO; the definitions/initial values (one of which is 5.0E-06) are not reliably recoverable from the source.]
5.1 Approaches

[Figure 6: the three compared models: FT-model, MPM-model and NP-model.]

5.2 Numerical results

PFH results for DC = 0%, 60% and 90% and β = 10%, 20% (with λ = 2λ_D):

Model       DC = 0%               DC = 60%              DC = 90%
            β=10%     β=20%       β=10%     β=20%       β=10%     β=20%
FT-model    2.90E-7   5.40E-7     1.90E-7   3.70E-7     1.40E-7   2.80E-7
MPM-model   2.93E-7   5.33E-7     1.93E-7   3.65E-7     1.42E-7   2.79E-7
NP-model    2.93E-7   5.33E-7     1.93E-7   3.65E-7     1.42E-7   2.79E-7

[A fourth set of values, computed with λ = λ_D instead of λ = 2λ_D (2.97E-7, 5.37E-7, 1.91E-7, 3.66E-7, 1.46E-7, 2.81E-7), cannot be reliably attributed to a model from the source.]
CONCLUSION

This paper summarizes some qualitative and quantitative results from recent advanced work (Innal, in prep.) on several topics related to the IEC 61508 standard. New insights have been given about some of the main definitions and concepts of this standard, including the following:
the low demand and high demand or continuous modes of operation;
the relationship between the whole risk reduction factor obtained by associating several layers of protection and the combination of their individual PFD_avg;
the true nature of the PFH.
We hope our contribution will provide the reader with a better understanding of part 6 of the standard. Some important concepts, however, such as the safe failure fraction and spurious failures, have not been discussed here due to lack of space.
REFERENCES

Birnbaum, Z.W. 1969. On the importance of different components in a multicomponent system. In Krishnaiah, P.R. (ed.), Multivariate Analysis II. New York: Academic Press.
ABSTRACT: The BORA project has developed a model based on the use of Fault Trees, Event Trees, Influence Diagrams, Risk Influencing Factors and simplified modelling of the dependencies between these factors. The work has been further developed in the OTS project, which in 2007 carried out a pilot project in order to test the approach to assessing the performance of Risk Influencing Factors with effect on human and organisational barriers. One of the challenges that has so far not been researched extensively is the influence of common cause failures on human and organisational barriers. The topic is well known with respect to technical barriers, but is far from well developed for human and organisational barriers. Some case studies of incidents on offshore petroleum installations are reviewed in order to illustrate the possible effect of common cause failures on human and organisational barriers. The dependencies that are thus created are important to integrate into risk analysis studies relating to human and organisational barriers. Common cause failure in human and organisational barriers is one of the main topics in a new research project at UiS, with emphasis on modelling the risk of major hazards relating to organisational, human and technical barriers. Some of the challenges focused on in the project are discussed.
1 INTRODUCTION

1.1 Background

1.2 BORA project
1.3 OTS/TTS

[Figure 1: OTS/TTS verification structure: 22 safety barriers and 9 RIFs are assessed against Performance Standards (PS) and Performance Requirements (PR), covering the main work processes, PUFF (Plan, Do, Check, Improve), Functionality, Integrity, Vulnerability and Management, by means of check lists and assessment activities.]

The OTS system is an extension of the TTS verification system, as outlined above. The TTS and OTS systems may be integrated when fully developed.
1.4 OTS project

The objective of OTS is to have a system for assessing the operational safety condition of an offshore installation or onshore plant, with particular emphasis on how operational barriers contribute to the prevention of major hazard risk and on the effect of human and organisational factors (HOFs) on barrier performance. This objective is reached through a development process with the following targets:
1. Identify and describe human and operational barriers for selected accident scenarios with major hazard risk potential.
2. Identify those tasks that are most critical from a risk point of view, either through initiation of failures (initiating events) or through failures of important barrier functions.
1.5 Abbreviations

BBN    Bayesian Belief Network
BORA   Barrier and Operational Risk Analysis
FPSO   Floating Production, Storage and Offloading
MTO    Man, Technology and Organization
OMT    Organization, huMan and Technology
PSA    Petroleum Safety Authority
PSF    Performance Shaping Factor
RIF    Risk Influencing Factor
RNNS   Risk level project
SMS    Safety Management System
[Table: distribution of leak causes (%) for UK installations versus Norwegian installations, and for 2 Norwegian companies versus other companies; the column pairings are not recoverable from the source.]
[Figure 2: Relationship between Risk Influencing Factors and operational leaks, based on Steen et al. (2007): operational errors resulting in a leak stem from work practice in maintenance operations, which is in turn influenced by communication, procedures & documentation, competence, physical working environment, working time factors, change management, supervision and management.]

3.3 Modelling of dependencies
The Risk Level Project (Vinnem et al., 2005) has collected considerable amounts of incident data (lagging indicators) and barrier element test data (leading indicators). The data has not been analyzed extensively with respect to identifying differences in risk management practices and approaches, to the extent that the effectiveness of the various alternatives has been established. A limited study has, on the other hand, been
[Figure 3: Causes of the hydrocarbon leaks: 1. Incorrect blinding/isolation (11); 2. Incorrect fitting of flanges or bolts during maintenance (10); 3. Valve(s) in incorrect position after maintenance (6); 4. Erroneous choice or installation of sealing device (2); 5. Maloperation of valve(s) during manual operation (6); 6. Maloperation of temporary hoses (1).]

2001-2006, which have occurred during, or as a consequence of, maintenance activities in the hydrocarbon processing areas. Previous studies have also considered the same data, see for instance Vinnem et al. (2007b), but the present study is the first to discuss the detailed failure scenarios and the risk influencing factors involved.
[Figure 4: Distribution of errors over work phases: Preparation (11), Execution (12), Resetting (4), Planning & preparation (7), Planning & resetting (2), Execution & resetting (1), Preparation & execution (1).]

[Figure 5: Personnel involved in the errors: Area technician (20), Maintenance personnel (15), CCR personnel (6), Planner (5), Area responsible (4), Prod technician (2).]

The work phases considered are:
Planning
Preparation
Execution
Resetting & restarting
The distribution of errors in the leaks over the different work phases, including the cases where errors occurred in two work phases, is shown in Figure 4. The errors that occur during the planning phase all require at least one additional error in order to cause a leak, and most of the additional errors occur during preparation. In virtually all of the cases with a second failure in the preparation phase, these errors appear to be strongly correlated with the error made during the planning phase. As an illustration, if the planning phase failed to complete a checklist of which valves to operate and which blindings to install, then the failure to close all relevant valves, or the omission to install blindings, are examples of strongly correlated errors. In the remaining phases (preparation, execution and resetting) the majority of the errors are single errors that lead to latent failures, which have caused leaks when the systems are being, or have been, started.

It is also worthwhile to consider which personnel have been involved in the errors made during the
[Figures 6-8: Distribution of root causes over the work phases (Planning, Preparation, Execution, Resetting), in percent, for the categories Communication, Competence, Management/supervision, Work practice, Documentation and Work time factors. Overall counts: Work practice (28), Management/supervision (16), Competence (12), Communication (6), Documentation (5), Work time factors (1).]
high contribution during execution and resetting. Management makes a lower contribution during execution. These differences are most likely not significant in a statistical sense.

It should be mentioned that the root causes identified were determined by the research team, based on interpretation of the descriptive texts in the investigation reports. This implies that the root causes are identified in a systematic manner, and are thus more reliable than what is often given as root causes in investigation reports. But the reliability in each case obviously depends on the correctness of the description of circumstances in the investigations.
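The effect of the planning/preparation correlation discussed above can be illustrated with a two-error model. All probabilities are invented for illustration; the point is only that a common cause that couples the two errors inflates the leak frequency far beyond what an independence assumption in a barrier model would predict.

```python
# Illustrative sketch (not data from the paper): compare the probability of
# a leak requiring both a planning error and a preparation error under an
# independence assumption versus a correlated (common cause) model.
p_plan = 0.05             # probability of a planning error (assumed)
p_prep = 0.05             # unconditional probability of a preparation error
p_prep_given_plan = 0.60  # conditional probability once planning has failed

independent = p_plan * p_prep
correlated = p_plan * p_prep_given_plan
print(correlated / independent)  # the common cause inflates the risk 12-fold
```

This is the quantitative reason why the paper argues that dependencies between human and organisational errors must be modelled explicitly rather than assumed away.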
DISCUSSION
CONCLUSIONS

Dependencies amongst the root causes of human and organizational errors that may cause major hazards are very challenging. It has been shown that little scientific work has been done in this field, especially based on empirical data. The considerable challenge involved in such work is probably one of the reasons.

This paper has reported some results from an analysis of the circumstances and root causes of major accident precursors on offshore installations. 38 cases where leaks have occurred as a result of maintenance work in the process area have been considered in detail. The results may illustrate some of the dependencies, but are not capable of showing what these dependencies are. The study has demonstrated that inadequate work practice is a contributing factor in more than 90% of the hydrocarbon leaks that occurred on Norwegian installations in the period 2001-06. It has further been shown that inadequate management practices have contributed in about 50% of the cases.

A hierarchical structure of risk influencing factors is suggested, whereby work practice is the immediate cause of operational leaks, with a number of influencing factors, amongst which the management system may be the ultimate common cause. This structure is at present a hypothesis; we will attempt to confirm or modify it through the programme.

The Risk OMT programme will continue until the end of 2010, and will make the study of common causes and dependencies amongst human and organizational errors one of its key subjects during the second half of the programme period. The work to identify possible risk reducing measures and the most effective risk management approaches may be considered the most valuable input to the enhancement of safe operation. This will be thoroughly researched in the programme, and we hope to be able to devote papers to this topic in the future.
ACKNOWLEDGEMENT
The Research Council of Norway and StatoilHydro are
acknowledged for the funding of the programme. Also
the team members of the BORA and OTS projects
are gratefully acknowledged for their contributions,
in addition to Professor Andrew Hale for valuable
discussions around this topic.
REFERENCES

Chen, H. & Moan, T. 2005. DP incidents on mobile offshore drilling units on the Norwegian Continental Shelf. European Safety and Reliability Conference, Gdansk, Poland, 26-30 June 2005.
Hale, A. 2007. Modelling safety management & culture, presentation to Risk OMT workshop, 11-12 December 2007.
Hurst, N.W., Young, S., Donald, I., Gibson, H. & Muyselaar, A. 1996. Measures of safety management performance and attitudes to safety at major hazard sites, J. Loss Prev. Process Ind., 9, 161-172.
Robson, L.S. et al., 2007. The effectiveness of occupational health and safety management system interventions: A systematic review, Safety Science, 45, 329-353.
Sklet, S., Vinnem, J.E. & Aven, T. 2006. Barrier and operational risk analysis of hydrocarbon releases (BORA-Release); Part II Results from a case study, Journal of Hazardous Materials, A137, 692-708.
Steen et al., 2007. OTS: Methodology for execution of OTS verification (in Norwegian only), StatoilHydro report, December 2007.
Vinnem, J.E. et al., 2003a. Operational safety analysis of FPSO Shuttle tanker collision risk reveals areas of improvement, presented at the Offshore Technology Conference, Houston, 5-8 May 2003.
Vinnem, J.E. et al., 2003b. Risk assessment for offshore installations in the operational phase, presented at the ESREL 2003 conference, Maastricht, 16-18 June 2003.
Vinnem, J.E., Aven, T., Hauge, S., Seljelid, J. & Veire, G. 2004a. Integrated Barrier Analysis in Operational Risk Assessment in Offshore Petroleum Operations, presented at PSAM7, Berlin, 14-18 June 2004.
ABSTRACT: In The Netherlands, Quantitative Risk Assessment (QRA) is applied in land use planning around establishments handling, processing or storing dangerous substances. The QRA method for warehouses with packaged hazardous materials has recently been evaluated and redrafted. This study discusses the revised QRA method proposed to the Dutch Ministry of Housing, Spatial Planning and the Environment. Important risk-determining parameters, such as the conversion factor of nitrogenous substances to the toxic combustion product NO2 and the survival fraction of non-combusted extremely toxic substances, have been adapted. The consequences for the calculated individual risk 10^-6/year distances are presented.
INTRODUCTION
2
2.1
Table 1. Probability per fire area scenario and applied ventilation rate per firefighting system.

Firefighting system (protection level 1: firefighting system or company fire brigade), with the probability of fire sizes 20 / 50 / 100 / 300 / 900 m2:

Automatic sprinkler installation: 45% / 44% / 10% / 0.5% / 0.5%
Automatic in-rack sprinkler installation: 63% / 26% / 10% / 0.5% / 0.5%
Automatic deluge installation: 63% / 26% / 10% / 0.5% / 0.5%
Automatic inerting gas installation: 99% / n.a. / n.a. / n.a. / 1%
Automatic high expansion foam installation with OUTSIDE air: 89% / 9% / 1% / 0.5% / 0.5%
Automatic high expansion foam installation with INSIDE air: 89% / 9% / 1% / 0.5% / 0.5%
Company fire brigade with manual deluge system: 35% / 45% / 10% / 5% / 5%
Company fire brigade only: n.a. / 20% / 30% / 28% / 22%

[The ventilation rate column (4 hr-1 for the automatic systems) and the remaining rows for warehouses without protection level 1 (300 and 900 m2 fires only, with splits such as 55%/45% and 78%/22%) are not fully recoverable from the source.]
the size of the fire. The maximum burn rate for most
substances is 0, 025 kg/m2 s (if the fire is not ventilation controlled). For flammable liquids a burn rate of
0, 100 kg/m2 s is applied.
2.3.1 Toxic combustion products for ventilation controlled fires
For a warehouse with an average composition
CaHbOcCldNeSf·X the release rates for NO2, SO2
and HCl (HF and HBr are considered as HCl) are:

NO2 = min(Bvc, Bfr) · e · 46 / Mw    (1)

SO2 = min(Bvc, Bfr) · f · 64 / Mw    (2)

HCl = min(Bvc, Bfr) · d · 36.5 / Mw    (3)

with

Bfr = Bmax · A    (4)

(5)
(6)
(7)
(8)
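Relations (1)–(4) can be sketched as follows. The SO2 and HCl expressions are assumed to mirror Eq. (1), with the atom counts f and d and the molar masses 64 and 36.5 in place of e and 46; the function name and numeric inputs are placeholders, and the nitrogen conversion factor discussed in the results section is not included here:

```python
# Sketch of the source-term relations (1)-(4) for a warehouse fire with
# average composition CaHbOcCldNeSf and molar mass mw [kg/kmol].
def release_rates(d, e, f, mw, area_m2, b_vc, b_max=0.025):
    """d, e, f: Cl, N, S atoms in the average molecule; b_vc:
    ventilation-controlled burn rate [kg/s]; b_max: maximum burn
    rate [kg/m2.s]; area_m2: fire area A."""
    b_fr = b_max * area_m2        # free-burn rate, Eq. (4) [kg/s]
    b_eff = min(b_vc, b_fr)       # the fire may be ventilation controlled
    no2 = b_eff * e * 46.0 / mw   # Eq. (1)
    so2 = b_eff * f * 64.0 / mw   # assumed analogue, Eq. (2)
    hcl = b_eff * d * 36.5 / mw   # assumed analogue, Eq. (3); HF, HBr as HCl
    return no2, so2, hcl
```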
solids) as well as spontaneously combustible substances (ADR class 4.2) and substances which emit
flammable gases in contact with water (ADR class 4.3),
the rule is that they must be stored separately, or separately from flammable substances (ADR class 3). If
the risk of these stored flammable solids is not determined solely by the release of toxic combustion products,
the QRA method assumes that additional safety measures are taken to prevent these scenarios. Therefore
no additional QRA scenarios are considered for this heterogeneous category (stored in relatively small
quantities).
According to the PGS-15 guideline, storage of organic peroxides (ADR class 5.2) in warehouses is
only allowed for a small quantity of the less dangerous peroxides (<1000 kg per establishment) packaged in
so-called limited quantities (LQ). Therefore organic peroxides are not considered separately from other
hazardous materials in the method either.
3.2
Flammable/non-flammable materials
Failure frequency
Table 2. Burn rate per storage height: depending on the scenario and the storage height (≤1.8 m or >1.8 m), the applied burn rate B equals the free-burn rate Bfr, the ventilation-controlled rate Bvc, or a fraction (1%, 10% or 30%) of Bfr.
4 RESULTS

This section describes the results of the risk calculations according to the revised QRA method proposed
to the Ministry of Housing, Spatial Planning and the Environment.
4.1
For several types of establishments in the Netherlands, tables with individual risk 10⁻⁶/year distances
are available for land use planning. The new safety distances for warehouses with packaged hazardous
materials will be published this year and will be based on the results given in Table 3. In this table, individual
risk 10⁻⁶/year distances are given for warehouses with different firefighting systems and storage sizes
(100 m², 300 m², 600 m² and 900–2500 m²). For protection level 1 warehouses, safety distances are only
given for hazardous materials containing less than 15% nitrogen. For protection level 2 and 3 warehouses,
the table discriminates between three different nitrogen contents (<5%, 5–10% and 10–15%).
The individual risk 10⁻⁶/year distances for protection level 1 warehouses are relatively small (except for
the firefighting system 'company fire brigade only'), since the automatic firefighting systems or firefighting
by a company fire brigade make immediate or fast intervention possible in the event of a warehouse fire. In
protection level 2 and 3 storage facilities, an initial fire will expand to larger sizes before firefighting starts.
This results in greater safety distances.
Although the conversion factor for nitrogenous substances has been reduced from 35% to 10%, the
nitrogen content is still the most dominant parameter determining the risk of warehouse fires. The contribution of HCl and SO2 may even be neglected if the
Table 3. Individual risk 10⁻⁶/year distances for warehouses with different storage sizes.

                                                  Individual risk 10⁻⁶/year distance [m]
Firefighting system                               Nitrogen content  100 m²  300 m²  600 m²  900–2500 m²

Protection level 1: firefighting system or company fire brigade
  Automatic sprinkler installation                ≤15%                30      30      40      50
  Automatic in rack sprinkler installation        ≤15%                30      30      40      50
  Automatic deluge installation                   ≤15%                30      30      40      50
  Automatic inerting gas installation             ≤15%                20      20      20      35
  Automatic high expansion foam installation
    with OUTSIDE air                              ≤15%                50      50      60      60
    with INSIDE air                               ≤15%                20      20      20      20
  Company fire brigade with manual deluge system  ≤15%                20      30      40      85
  Company fire brigade only                       ≤15%               310     550     660     720

Protection level 2: fire detection & local fire brigade < 15 min
  Flammable liquids (ADR class 3; max 100 tonnes) stored in synthetic packaging
                                                  <5%                290     360     230     290
                                                  5–10%              290     500     470     550
                                                  10–15%             340     620     660     750
  Flammable liquids (ADR class 3; max 100 tonnes) NOT stored in synthetic packaging
                                                  <5%                290     360     170     220
                                                  5–10%              290     500     380     400
                                                  10–15%             340     620     560     570
  No flammable liquids (ADR class 3)
                                                  <5%                 50     150     170     220
                                                  5–10%              130     360     380     400
                                                  10–15%             210     530     560     570

Protection level 3
                                                  <5%                 30      75      80      85
                                                  5–10%               65     150     170     180
                                                  10–15%              90     210     240     270
[Figure: non-combusted products; individual risk distance (0–450 m) for survival fractions sf = 1%, 10% and 30%, as a function of the fraction of toxic substances (1% to 100%).]
6.1 group II substances is relatively high, the non-combusted release of these substances hardly ever
dominates the risk.
5 CONCLUSIONS
ABSTRACT: Concern about the possibility of malicious acts of interference involving process plants
has greatly increased in recent years, since industrial sites may represent very attractive targets. The methods
proposed for evaluating the vulnerability of a plant to external attacks generally lack quantitative criteria,
deal mainly with security-related issues, and do not address the technical aspects of the problem in depth. The
aim of the present work was the development of a method for the quantitative evaluation of the attractiveness
of a plant with respect to the possible consequences of external attacks, based on the characteristics both of
the industrial site and of the surrounding area. A set of indices was defined to quantify these hazard factors.
The final aim of the procedure was to provide criteria and priorities for emergency planning and for protection
actions.
1
INTRODUCTION
Attacks or acts of interference involving industrial sites where relevant quantities of dangerous substances are processed or stored may result in
very severe consequences. Several Security Vulnerability Assessment (SVA) methods were developed to
analyze the problem (SFK, 2002; API-NPRA, 2003; Uth, 2005; Moore, 2006). However, the SVA
procedures deal mainly with security and target vulnerability, not with safety, and do not
take into account the evaluation of the possible consequences of acts of interference. This is a crucial point,
because the aim of an external attack is usually to obtain the most severe consequences possible: in
order to rank the attractiveness of a plant, the vulnerability of the surrounding areas cannot be neglected,
because the higher the number of people affected, the higher the attractiveness of the target.
In the following, a method to evaluate the hazard posed by an intentional act of interference is presented. A procedure based on a two-step assessment
was developed. The first part of the procedure was based on the assessment of the attractiveness of the
target. This was based on a screening of the hazard of the site; the distribution of the population and
of aggregation points for population in the surroundings of the plant was taken into account by the use
of indexes. The combination of the indexes allowed a ranking of the plant attractiveness.
2
2.1
2.2

Ii = Wi / Ti    (1)

where Wi is the quantity of material i (or of all substances in the i-th category, as defined by Annex I
of the Directive), and Ti is the corresponding threshold reported in the Directive (the thresholds may be
provided for categories of substances or for named substances in Annex I of the Directive). Obviously,
if the site falls under the obligations of the Directive, this index will be higher than 1 for at least one
substance or substance category.
This index has to be calculated for each substance or substance category present in the plant.
A simple sum of the indexes for all the substances or substance categories present provides an overall
site value: the so-called site hazard index. It is convenient to sum toxic and flammable substances
separately, thus defining flammable hazard and toxic hazard site indexes:

Isub = Σi Iifl + Σj Ijtox    (2)
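The screening step can be sketched as follows; the function name and data layout are my own, and the handling of the bin edges (e.g. a value of exactly 10) is assumed, since the recovered table gives the bins as <10, 11–50, 51–150, 151–300, 301–650 and >650:

```python
# Sketch: per-substance index I_i = W_i / T_i (Eq. 1), summed separately
# over flammables and toxics into the site hazard index I_sub (Eq. 2),
# then binned into the hazard index I_TP (Table 1).
def site_hazard_index(substances):
    """substances: list of (quantity W, threshold T, kind) tuples,
    with kind either "flammable" or "toxic"."""
    i_fl = sum(w / t for w, t, kind in substances if kind == "flammable")
    i_tox = sum(w / t for w, t, kind in substances if kind == "toxic")
    i_sub = i_fl + i_tox
    for upper, itp in [(10, 6), (50, 5), (150, 4), (300, 3), (650, 2)]:
        if i_sub <= upper:
            return i_sub, itp
    return i_sub, 1   # I_sub > 650
```

For example, a site holding only one flammable substance at 50 times its threshold gets I_sub = 50 and I_TP = 5, matching plants C–F of the case study.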
Table 1. Calculation of ITP from the site hazard index Isub.

Isub       ITP
<10         6
11–50       5
51–150      4
151–300     3
301–650     2
>650        1
Table 2. Calculation of IRes.

Inhabitants      IRes
<1000             4
1000–10000        3
10000–500000      2
>500000           1

IA       Attractiveness level
3–6      High
7–10     Medium
11–14    Low
Table 3. Calculation of ICV.

Number of vulnerability centers   ICV
<2                                 4
2–10                               3
11–50                              2
>50                                1
IVT = IRes + ICV    (3)

IA = ITP + IVT    (4)
[Table: for each type of interference (deliberate misoperation; interference by simple means; interference by major aids; arson by simple means; arson by incendiary devices; shooting, minor; shooting, major; explosives; vehicle impact; plane impact), the required level of information and the expected release rate classes (R1–R4) for atmospheric and pressurized equipment. The cell-by-cell assignments could not be recovered from the extraction.]
equipment was produced, based on several characteristics: the type of hazard associated with the substances
present (flammable, toxic or both), the physical conditions (influencing the behaviour of the substance after
the release: formation of a toxic cloud, flash vaporization, liquid spread, and so on) and the amount of
substance present in the unit (which may be considered a function of the type of device: given the equipment
volume, e.g. a tank may have a higher hold-up than a column). Table 6 reports this attractiveness index.
[Table: for each type of interference, the relevant damage vector (radiation, overpressure, impact) and the damage correlation; for deliberate misoperation and interference by simple or major aids the damage correlation is not available in the form of a mathematical expression (n.d.).]
Table 8. Characteristics of the industrial sites considered.

Name     Impact radius to be considered (m)   Ifl   Itox   Isub   ITP
Plant A  7000                                 640    50     690    1
Plant B  7000                                 599    60     659    1
Plant C  1000                                   0    50      50    5
Plant D  1000                                   0    30      30    5
Plant E  7000                                  30     0      30    5
Plant F  1000                                  27     0      27    5
[Damage correlations, continued: equipment vulnerability models (Uijt de Haag et al. 1999) for impact and overpressure; specific studies or simplified vulnerability models (Susini et al. 2006) for the remaining damage vectors.]
4 CASE STUDY

4.1 Part 1: Ranking the attractiveness

In order to validate the procedure and to better understand the results obtained by its application, the
method was applied to a case study.
In the case study, real data were used for the population. These refer to an industrial area of an Italian
region and were supplied by the Italian government. The plants considered in the area also actually exist, but their
position is completely different from that considered in the case study (data from competent authorities of
a different region of Italy were used).
In Table 8 the characteristics of the industrial sites considered are reported, while Figure 5 shows the
positions considered in the case study.
As shown by Table 8, the site hazard index is quite high for two of the plants considered. This means that
the quantities of hazardous materials present in these sites are far above the thresholds for the application of
the Seveso-II Directive.
On the other hand, plants A, E and C are quite close to populated zones (shaded areas in Figure 3), while
the other plants are farther from these areas.
In order to calculate the vulnerability index of the potential impact areas, a specific GIS extension was
Figure 4. Impact areas of plants E, A and C. It is evident that the impact area of plant E involves highly populated
areas and several vulnerability centers, while, for example, the consequences of an accident in plant C impact scarcely
populated areas.
Table 9. IRes, ICV and IVT values.

Plant  Inhabitants  Number of vulnerability centers  IRes  ICV  IVT
A      225832       4                                 2     3    5
B      23916        12                                2     2    4
C      1640         0                                 3     4    7
D      0            0                                 4     4    8
E      386538       210                               2     1    3
F      0            0                                 4     4    8
Table 10. Attractiveness ranking of the plants.

Plant  ITP  IVT  IA   Attractiveness level
A      1    5     6   High
B      1    4     5   High
C      5    7    12   Low
D      5    8    13   Low
E      5    3     8   Medium
F      5    8    13   Low
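Tables 2, 3, 9 and 10 combine through IVT = IRes + ICV (Eq. 3) and IA = ITP + IVT (Eq. 4). A minimal sketch (function name is my own; the handling of exact bin edges is assumed):

```python
# Sketch: attractiveness level from ITP, the surrounding population and
# the number of vulnerability centres (Tables 2-3, Eqs. 3-4).
def attractiveness(itp, inhabitants, centres):
    ires = (4 if inhabitants < 1000 else
            3 if inhabitants <= 10000 else
            2 if inhabitants <= 500000 else 1)
    icv = (4 if centres < 2 else
           3 if centres <= 10 else
           2 if centres <= 50 else 1)
    ivt = ires + icv                 # vulnerability index, Eq. (3)
    ia = itp + ivt                   # attractiveness index, Eq. (4)
    level = "High" if ia <= 6 else "Medium" if ia <= 10 else "Low"
    return ia, level
```

Plant A of the case study (ITP = 1, 225832 inhabitants, 4 vulnerability centres) gives IA = 6 and a High attractiveness level, consistent with Tables 9 and 10.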
4.2
Figure 5.
Table 11.

Act of interference            Device  Exp. release rate  Damage distance (m)
Deliberate misoperation        TUM     R1                 448
Interference by simple means   TUM     R1                 448
Interference by major aids     TUM     R2                 448
Arson by simple means          AT      R2                 260
Arson by incendiary devices    PV      R3                 1048
Shooting (minor)               AT      R1                 92
Shooting (major)               PV      R4                 1048
Explosives                     SF      R3                 6118
Vehicle impact                 SF      R3                 6118
Plane impact                   SF      R4
5 CONCLUSIONS

In the present study a method to estimate plant vulnerability towards external acts of interference is presented.
The method consists of a two-step assessment. In the first step, a preliminary screening is carried out using
indexes that quantify the inherent hazard of the site and the vulnerability of the potential impact area. In a
second step, a detailed analysis may be carried out in order to refine the assessment and to calculate in more
detail the impact areas of the potential scenarios that may be triggered by external acts of interference.
The methodology was applied to the analysis of a case study, based on realistic data. The results allowed
a preliminary ranking of the attractiveness of the targets, and a detailed analysis of the potential areas of
impact of scenarios that may be triggered by acts of interference. The methodology thus provides criteria
to orient more detailed analyses and to prioritize emergency planning, protection and prevention actions.
REFERENCES
American Petroleum Institute, National Petrochemical & Refinery Association 2003. Security Vulnerability Assessment Methodology for the Petroleum and Petrochemical Industries.
Moore, D.A. 2005. Application of the API/NPRA SVA methodology to transportation security issues. Journal of Hazardous Materials 130: 107–121.
Störfall-Kommission (SFK) 2002. SFK-GS-38, Report of the German Hazardous Incident Commission.
Susini, A. 2006. Analisi del rischio connesso alla proiezione di frammenti in strutture off-shore. M.D. Thesis, University of Pisa, etd-06232006-101740.
Uth, H.-S. 2005. Combating interference by unauthorised persons. Journal of Loss Prevention in the Process Industries 18: 293–300.
Uijt de Haag, P.A.M. & Ale, B.J.M. 1999. Guidelines for Quantitative Risk Assessment (Purple Book). The Hague, The Netherlands: Committee for the Prevention of Disasters.
T. Aven
University of Stavanger, Norway
ABSTRACT: Uncertainty is a key concept in risk assessment, but its meaning varies a lot between different
risk assessment schools and perspectives. This causes confusion. Various attempts have been made to structure the uncertainties, and in this paper we review and discuss some of the existing taxonomies. Some restrict
uncertainty to lack of knowledge about unknown quantities. Others distinguish between stochastic (aleatory)
uncertainty and epistemic uncertainty (lack of knowledge), and a distinction is made between knowledge uncertainty,
modelling uncertainty and limited predictability/unpredictability. More comprehensive structures also exist.
For example, the well-known Morgan and Henrion classification system distinguishes between uncertain
types of quantities, such as empirical quantities, constants, decision variables, value parameters and model
parameters, and sources of uncertainty, such as random error and statistical variation, systematic error and
subjective judgment, linguistic imprecision, variability, randomness and unpredictability, disagreement and
approximation. In the paper we introduce a structure to group the taxonomies and the uncertainty categories.
We argue that it is essential to clarify one's perspective on risk and probability to prevent confusion.
1 INTRODUCTION
2 UNCERTAINTY TAXONOMIES
A number of uncertainty taxonomies have been suggested in the literature, and in this section we give a
brief review of some of these:
A. Epistemic uncertainty vs. aleatory uncertainty
B. Knowledge uncertainty, modelling uncertainty and
limited predictability
C. Sources of uncertainty in empirical quantities
D. Metrical, structural, temporal and translatoric
uncertainty
E. Uncertainty due to lack of knowledge
Other taxonomies also exist, see e.g. NRC (1994),
Helton (1994), Ferson & Ginzburg (1996), and Levin
(2005), but these will not be presented in this paper.
2.1
2.3

Morgan & Henrion (1990) state that subjective probability distributions are often a good way to express
uncertainty, but emphasize that this holds only for a certain type of quantity used in risk analysis. They
refer to such quantities as empirical or chance quantities, representing properties or states of the world.
However, there are other types of quantities that play a role in a risk analysis model, including decision variables, value parameters and others. In case these other quantities are unknown (or not determined) at the time
of the analysis, they should be treated parametrically or by switchover, i.e. defined. See Table 1.
The only uncertain quantities found appropriate for probabilistic treatment are empirical quantities. Empirical
quantities represent measurable properties of the real-world systems being modelled, i.e. they will, at least
in principle, be measurable now or at some time in the future. The reason why empirical quantities are the
only type of quantity whose uncertainty may appropriately be represented in probabilistic terms is that
they are the only type that is both uncertain and can be said to have a true, as opposed to an appropriate or
good, value. The list of quantities is context dependent, which means that, e.g., what is a decision variable
or value parameter in one analysis can be an empirical quantity in another.
Morgan and Henrion state that uncertainty in empirical quantities can arise from a variety of different
Morgan and Henrion state that uncertainty in empirical quantities can arise from a variety of different
Table 1. Types of quantities in risk analysis and treatment of their uncertainty (Morgan & Henrion 1990).

Type of quantity              Example                                    Treatment of uncertainty
Empirical parameter or        Thermal efficiency, oxidation rate,        Probabilistic, parametric
chance variable               fuel price                                 or switchover
Defined constants             Atomic weight, π, joules per kilowatt-hr   Certain by definition
Decision variables            Plant size (utility), emission cap         Parametric or switchover
Value parameters              Discount rate, value of life,              Parametric or switchover
                              risk tolerance
Index parameters              Longitude and latitude, height,            Certain by definition
                              time period
Model domain parameters       Geographic region, time horizon,           Parametric or switchover
                              time increment
Outcome criteria              Net present value, utility                 Determined by treatment
                                                                         of its inputs
Table 2. A list of sources from which uncertainty in empirical quantities can arise (Morgan & Henrion 1990): linguistic imprecision, variability, inherent randomness, disagreement, and approximations.
Table 3.

Uncertainty class  Unknown information  Discriminator parameter  Valuation parameter  Method
Temporal           Future               Probability              Luck                 Prediction
Temporal           Past                 Historical data          Correctness          Retrodiction
Structural         Complexity           Usefulness               Confidence           Models
Metrical           Measurement          Precision                Accuracy             Statistics
Translational      Perspective          Goals/values             Understanding        Communication
Table 4.

Type of uncertainty   Sources of uncertainty
Temporal (future)     Prediction; measurement
Temporal (past)       Retrodiction; interpretation of data; measurement uncertainty
Structural            Systematic fluctuations; parameter interaction; interpretation of models; model choice uncertainty
Metrical              Empirical observations; interpretation of observations; interpretation of measurements
Translational
imposed. Structural uncertainty is a result of complexity, and depends both on the number of parameters used to describe a situation and their interaction, as
well as on the degree to which models of the complex situation are useful. Metrical uncertainty: measurement
uncertainty lies in our inability to discriminate among values within a parameter (i.e. imprecision). Translational uncertainty: this differs from the other three
and occurs after the three have been considered. People have different perspectives, goals and values, as well
as different capabilities and levels of training. These affect the interpretation of results and thereby interact with these interpretations, often contributing to the
uncertainty.
2.5
Table 5. Grouping of the uncertainty taxonomies (A–E).

A. Epistemic vs. aleatory uncertainty: epistemic uncertainty (lack of knowledge); aleatory uncertainty.

B. Knowledge uncertainty, modelling uncertainty and limited predictability/unpredictability: knowledge uncertainty (variation in data in a classical statistical setting, for example expressed by a confidence interval for a statistical parameter); probability model uncertainty (a way of describing aleatory uncertainty: a probability model, such as the exponential, is a model of the variation in lifetimes for a population of similar units); modelling uncertainty (relates to the accuracy with which the model represents the real world); limited predictability/unpredictability.

C. Sources of uncertainty in empirical quantities: uncertainty about unknown empirical quantities, with sources including expert disagreement, systematic error, random error and statistical variation, inherent randomness, approximations (the model is a simplified version of the real world), variability and linguistic imprecision.

D. Temporal, structural, metrical and translational uncertainty: temporal uncertainty in future and past states; structural uncertainty (relates to both the accuracy of models and the confidence in models); metrical uncertainty (measurement errors); translational uncertainty.

E. Uncertainty due to lack of knowledge: variability as a source of uncertainty; physical models included in the background information used to assess uncertainties (assign probabilities).
epistemic uncertainties about λ. This leads to the probability of frequency approach to risk analysis. In this
approach frequency refers to E[X] and probability to subjective probabilities.
In this context, variation in the observed data and disagreement among experts are sources of epistemic
uncertainties.
Different types of models are introduced to assess λ. Consider the following two examples:

a. The number of failures in year i, Xi, is given by a Poisson distribution with parameter c + d·i, where
c and d are constants.
b. λ = ν·p·q, where ν is the number of failure exposures, p equals the probability that barrier 1 fails
(given exposure) and q equals the probability that barrier 2 fails given that barrier 1 fails.

Model a) assumes a linear relation between the expected number of failures from one year to the next.
Model b) assumes the expected number of failures to be a function of the parameters on a lower level. Model
uncertainty is here an issue. The issue is to what degree the model is close to the correct values. In case a),
the model uncertainty relates to the deviation between the model c + d·i and the true value of E[Xi], and
between the probability distribution given by the Poisson distribution and the underlying true distribution. In case b) the
model uncertainty relates to the deviation between ν·p·q, when these values are true, and the true value of λ.
Both models are simplified descriptions of the system or activity being studied. And approximations are
sources of uncertainty.
If the assumption of the linear form c + d·i is wrong, we may refer to this as a systematic error. It
would systematically lead to poor estimates of E[Xi], even if we have accurate estimates of c and d.
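The two example models can be written out as follows; the parameter values are invented for illustration:

```python
import math

def poisson_pmf(k, lam):
    # P(X = k) for a Poisson distributed failure count with mean lam
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Model (a): a linear trend in the expected yearly failure count,
# E[X_i] = c + d*i.
c, d = 2.0, 0.5
lam_a = c + d * 3            # expected number of failures in year i = 3

# Model (b): lambda = nu * p * q, i.e. a failure requires an exposure
# and the sequential failure of two barriers.
nu, p, q = 100.0, 0.1, 0.05
lam_b = nu * p * q
```

Model uncertainty then concerns the deviation between these expressions and the true E[Xi] and λ.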
4.2 Assessing the physical quantity X

The physical quantity X, for example the number of failures of a system, is unknown at the point of assessment, and hence we have uncertainties about its true
value.
To be able to accurately predict X, we introduce models, denoted g, such that

X = g(Y),    (4.1)

where Y = (Y1, Y2, . . . , Yn) is a vector of quantities on a lower system level, for example representing
lifetimes of specific components of the system.
To describe uncertainties, we use subjective probabilities P, which are conditional on the background
knowledge (referred to as K). This uncertainty is epistemic, i.e. a result of lack of knowledge.
Let H(y), where y = (y1, y2, . . . , yn), denote the assigned probability distribution for Y, i.e. H(y) =
P(Y ≤ y | K).
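The epistemic uncertainty about X can be propagated through g by sampling Y from the assigned distribution H and evaluating the model; the series-system g and the lognormal component lifetimes below are invented for illustration:

```python
import random

def g(y):
    # Invented model (4.1): a series system whose lifetime is that of
    # its weakest component.
    return min(y)

random.seed(1)
# Sample Y = (Y1, Y2, Y3) from an assumed joint distribution H
# (independent lognormal lifetimes) and push each sample through g.
samples = [g([random.lognormvariate(0.0, 0.5) for _ in range(3)])
           for _ in range(10000)]

# Subjective probability assignment P(X < 0.5 | K), estimated by the
# Monte Carlo fraction of early failures.
p_fail_early = sum(x < 0.5 for x in samples) / len(samples)
```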
CONCLUDING REMARKS

As stated in the introduction, the intention of performing a risk analysis is to provide insight about risks
related to an activity or a given system in order to support decisions about possible solutions and measures.
Various types and sources of uncertainty are associated with the risk analysis and its results. We have
seen that the terminology varies a lot depending on the taxonomy adopted. For risk analysts and decision
makers, the situation is confusing.
A structure is required. We believe that such a structure can best be obtained by distinguishing between
Risk analysis in the frame of the ATEX Directive and the preparation
of an Explosion Protection Document
Alexis Pey, Georg Suter, Martin Glor, Pablo Lerena & Jordi Campos
Swiss Institute for the Promotion of Safety and Security, Basel, Switzerland
ABSTRACT: Directive 1999/92/EC, known as the ATEX Directive, lays down minimum requirements for the
safety and health protection of workers potentially at risk from explosive atmospheres. Specifically, its Article
8 requires the employer to ensure that a document, known as the explosion protection document, is drawn
up and kept up to date. In the explosion protection document it has to be demonstrated that the explosion risks
have been determined and assessed, and that adequate measures will consequently be taken to attain the aims of the
Directive. Notwithstanding this fact, no further reference to the way in which those risks should be analysed
can be found in the Directive. Therefore, many different approaches are used to assess explosion
risks, some of which show clear difficulties in identifying certain types of ignition sources. The aim of this work
is to make a contribution in the field of risk analysis related to the potential presence of explosive atmospheres,
especially taking into account the RASE Project (EU Project No: SMT4-CT97-2169), which has already been
published on this subject.
1 EXPLOSIVE ATMOSPHERES

1.1 Source of risk
It is well known that during the handling of flammable liquids and gases or combustible solids, many situations
may arise in which a mixture of those substances with air is generated in concentrations between the
explosion limits.
In the frame of the so-called ATEX 137 Directive (Directive 1999/92/EC), these situations are
defined as follows:
'For the purposes of this Directive, explosive atmosphere means a mixture with air, under atmospheric conditions, of flammable substances in the
form of gases, vapours, mists or dusts in which, after ignition has occurred, combustion spreads to the entire
unburned mixture.'
From this definition, it is clear that the physical space where this mixture is present is the so-called
explosive atmosphere. It is also obvious that, once this atmosphere is generated, the presence of
an ignition source may cause it to ignite, therefore causing certain effects in the space occupied
by the atmosphere and, to a certain extent, in its surroundings.
These situations are not exclusively linked to a certain type of industry, activity or equipment. Therefore, the risk of generating dangerous atmospheres
may be widely present in many different industrial fields.
1.2 Explosive atmosphere?

The ATEX 137 Directive aims to protect workers from the potential consequences that the ignition
Table 1.

1.4 Ignition effects
Considering the situations described and the properties of the flammable atmospheres generated, the risk is
caused by rather small amounts of substances, in an open system and with the presence of operators in or
next to the atmosphere.
From this it is easy to see that, in case of ignition, mainly the radiation effects may easily
overcome the operator, causing serious burns. Obviously bigger amounts of substances can play a
role if they are dispersed due to the primary explosion and get ignited as well.
But even if only the initial, rather small, flammable atmosphere generates low radiation and/or overpressure effects, the key fact to consider while assessing this risk is that a flash fire of a vapour, gas or
dust atmosphere could have grave consequences for the operators and almost no effect on the
equipment or the installation.
2 RISK FACTORS

2.1 Classical risk factors
[Table 1: risk matrix combining frequency categories (Occasional, Moderate, Frequent) with severity categories (Catastrophic, Critical, Marginal, Negligible).]
Zone 0 / Zone 20, Zone 1 / Zone 21, Zone 2 / Zone 22 (gas/dust zone pairs).
3 EX-ZONES

3.1 Definitions
Zone 21
A place in which an explosive atmosphere in the
form of a cloud of combustible dust in air is likely to
occur in normal operation occasionally.
Zone 22
A place in which an explosive atmosphere in the
form of a cloud of combustible dust in air is not likely
to occur in normal operation but, if it does occur, will
persist for a short period only.
From the definitions used to rate the Ex-Zones it can be seen that they are based on a
qualitative assessment of the probability of the presence of an explosive atmosphere.
The definitions of Zones 0 and 20, 1 and 21, and 2 and 22 are equivalent; the only difference comes from the fact
that Zones 0, 1 and 2 are defined for the presence of gases, vapours or mists, while Zones 20, 21 and 22 are
defined for the presence of dusts.
4 PROPOSAL OF A METHODOLOGY TO ASSESS THE RISK LINKED TO ATEX

4.1
Table 2. Ignition source probability (categories A, B, C, D) versus Ex-Zone classification (Zone 0/20, Zone 1/21, Zone 2/22).
Factor independence
CONCLUSIONS
ABSTRACT: In situations where chemical industries and residential areas are situated close to each other,
the population runs a safety risk associated with the accidental release of toxic gases. TNO has investigated the
possibilities of reducing this risk by integrating safety measures in the area between the chemical industry and the
residential area, the so-called buffer zone. Two measures were selected and studied in more detail,
namely the effect of physical obstacles such as buildings, walls and trees, and the effect of water curtains. The
effect of different configurations of physical obstacles was measured with wind tunnel experiments. The project
showed that physical obstacles and water curtains lead to a reduction of the concentration of the gas cloud and
reduce the number of victims in the residential area behind the buffer zone.
1
INTRODUCTION
The Netherlands is a small country with a high population density. This leads to intensive use of the
available land. In specific situations it happens that chemical industries and residential areas are situated
close to each other, causing a safety risk for the population.
In a town in the south of the Netherlands a chemical plant and a residential area are separated by only a
few hundred meters (Wiersma 2007). The area in between, the so-called buffer zone, will be developed
in the future. This provides the opportunity to take measures to prevent the dispersion of a gas cloud in
the direction of the residential area, or to dilute such a gas cloud.
The first step in the investigation was an inventory of measures that influence the dispersion of a gas
cloud.
As a next step two measures were selected and studied in more detail, namely (a) the effect of
physical obstacles such as buildings, walls and trees and (b) the effect of water curtains.
The investigation took place under the authority of the administration of this town. Representatives of the
administration, of the chemical industry and of the regional fire department participated in the project group.
2 INVENTORY OF MEASURES

- Technical feasibility
- Effectiveness
- Suitability in the area
- Costs
- Delay time of activation of the system
- Side effects (e.g. noise or aesthetic value)

3 WATER CURTAINS
4
4.1
[Table: actual vs. modelled dimensions; recovered values: 140–280 m, 1.7 km, 1 km, 200 m, 470 m, 600 m, 400–1150 m, 400 m.]
Figure 1.
Figure 2.
zone.
The residential area is modelled as a flat plane, covered with masonry sand, to prevent the build-up of an
internal boundary layer.
The buffer zone itself is grassland, modelled as a material with the same roughness as grassland.
4.3
Description of configurations
All buildings have a length of 100 meters. The distance between the rows, in the wind direction, is 40 meters. The distance between the buildings in the direction perpendicular to the wind is 25 meters.
4.4 Measurements
The following effects were evaluated:
– the concentrations for wind flow perpendicular to the buffer zone,
– the concentrations at the edges of the cloud, at distances of 140 m and 280 m,
– the concentrations for winds angular to the buffer zone.
k value as a function of distance (0–600 meters) for configuration 0 (no obstacles), configuration 1 (wall of 50 meters), configuration 2 (rows of buildings: 10, 30, 50 meters), configuration 6 (row of buildings 30 meters) and configuration 8 (wall 20 meters).
Configuration                                                                  Average concentration reduction
1: wall of 50 m height                                                         76%
2: rows of buildings of 10 m, 30 m, 50 m height                                55%
3: 3 rows of buildings of 50 m, 30 m, 10 m height                              62%
4: 1 row of trees 10 m, 2 rows of buildings 30 m, 50 m height                  59%
5: 1 row of trees 10 m, 2 rows of buildings 30 m, 10 m height                  36%
6: 1 row of buildings 30 m height (distance between buildings
   perpendicular to the wind: 25 m)                                            38%
7: wall of 20 m height                                                         58%
8: 1 row of buildings 30 m height (distance between buildings
   perpendicular to the wind: 50 m)                                            36%
9: row of trees 20 m height                                                    11%

– An unbroken wall is more effective than an unbroken row of trees (of 20 meters height).
– Increasing the distance between the buildings (perpendicular to the wind direction) from 25 to 50 meters did not influence the concentration reduction for this specific configuration. It can be expected that the concentration will eventually increase when the distance is increased further.
– In the case of rows of buildings with different heights, it is more effective to situate the highest building nearest to the source.

5.2
The effect on concentration reduction of winds angular to the buffer zone has also been studied. Results showed that when the wind direction was at an angle of 30 degrees to the buffer zone, a concentration reduction occurred (compared to the situation without obstacle).
Experiments on angular winds described in the literature (Gandemer 1975) show examples where gas clouds roll over an obstacle like a screwdriver, causing a higher concentration behind the obstacle.
Wind tunnel experiments on the studied configurations showed that for configurations 1, 6, 7 and 9 the angle of the flow has no influence on the concentration reduction. For configurations 2, 4 and 8 it did have a considerable influence: for these configurations, flows angular to the buffer zone showed a higher concentration than flows perpendicular to the buffer zone.
An increase of the concentration behind an obstacle was not observed when the first row of buildings is high enough (50 meters), nor with a row of trees.
The results of angular flow for the remaining configurations (3, 5) are not presented, because there were insufficient receptor points available on the axis of the plume behind the configuration.
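The concentration reductions reported here are averages over receptor points, relative to configuration 0 (no obstacles). A minimal sketch of that calculation, assuming dimensionless k values measured at matched receptor points; the numbers below are illustrative placeholders, not the wind tunnel data:

```python
# Illustrative calculation of the average concentration reduction relative to
# a reference configuration (configuration 0: no obstacles). The k values are
# made-up placeholders, not the experimental measurements.

def average_reduction(k_reference, k_obstacle):
    """Mean relative concentration reduction over matched receptor points."""
    if len(k_reference) != len(k_obstacle):
        raise ValueError("receptor point lists must match")
    reductions = [1.0 - ko / kr for kr, ko in zip(k_reference, k_obstacle)]
    return sum(reductions) / len(reductions)

# Hypothetical k values at receptors 200, 400 and 600 m behind the obstacle.
k_config0 = [2.5e-4, 1.5e-4, 1.0e-4]   # no obstacles (reference)
k_config1 = [0.5e-4, 0.4e-4, 0.3e-4]   # e.g. a 50 m wall

print(f"{average_reduction(k_config0, k_config1):.0%}")
```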
5.3
Table 3.

Hazard source    With buffer zone
NH3 release      1150 m
HCN release      400 m
NO2 release      600 m
5.4 Conclusion
Table 4.

Weather class             D5 (m)   F2 (m)
NH3 scenario
  Without buffer zone     625      2300
  Wall of 20 m height     400      1000
HCN scenario
  Without buffer zone     775      900
  Wall of 20 m height     400      650
NO2 scenario
  Without buffer zone     <400     900
  Wall of 20 m height     <400     700
Scenarios NH3, HCN, NO2: 1350, 600, 800; <1%, 88%, 19%, <1%, 19%, <1%.
CONCLUSION
RECOMMENDATIONS
REFERENCES

Committee for the Prevention of Disasters, 1999, Purple Book, Guidelines for quantitative risk assessment, CPR-18E, The Hague, The Netherlands.
Dandrieux, A., Dusserre, G., Ollivier, J. & Fournet, H. 2001, Effectiveness of water curtains to protect firemen in case of an accidental release of ammonia: comparison of the effectiveness for two different release rates of ammonia, Journal of Loss Prevention in the Process Industries, volume 14: 349–355.
Dandrieux, A., Dusserre, G. & Ollivier, J. 2002, Small scale field experiments of chlorine dispersion, Journal of Loss Prevention in the Process Industries, volume 15: 5–10.
Dutch Association of Cost Engineers, Prijzenboekje (price book), 24th edition, November 2006.
EPA report to Congress, 1992, Hydrogen fluoride study.
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
J. Szpytko
AGH University of Science and Technology, Cracow, Poland
ABSTRACT: The paper focuses on safety engineering as a fast-developing discipline. It describes the major objectives of safety engineering based on the system approach: understanding the structure of the safety management system, including the human factor, and all interrelationships between the system components. Major issues in the safety engineering of any business include: selection of suitable tolerance zones for key operating parameters of the system, taking the human factor into account as a part of the system, and identification of possible risks during system operation with consideration of time and cost factors. The design of complex business systems is always associated with the risk that the system fails to reach the desired safety level, that losses and damages are higher than the assumed acceptable level, and that the surrounding conditions alter so much that the presumed response measures and objects of the system are no longer adequate.
INTRODUCTION
The development of safety systems based on real circumstances (on an existing system) starts from a systematic search for real equivalents of the applied terminology, which is already known from systems analysis. The system layout should then be presented in a perceivable form (verbal definition, tabularized scheme, graph, decision diagram and/or computer presentation, etc.). The system development process incorporates the definition of components and of the interconnections among them. Some selected factors are significant, whereas others are neglected. Possible interferences between system components must be taken into account, and the sequences and frequencies of system processes and events are also subjects of analysis. The engineering problem must then be categorized, with unity of terminology permanently kept in mind.
Systematic approaches have a direct input to the mental activities of researchers in the field of safety engineering, in relatively fixed sequences that comprise the following steps:
– identification and diagnosis of a research subject which has been extracted from the entire system,
– system identification: definition of the system in real and objectively existing circumstances and environment,
– representation of the real system by means of its model (meta-system design),
– quantization of the characteristics attributable to the designed system,
– algorithmization and experimental modelling (solutions, calculations),
– analysis and interpretation of the results obtained from systematic studies,
– analysis of the implementation opportunities associated with the results obtained from systematic studies.
The above classification is only an arbitrary assumption.
So-called safety systems should meet the following requirements:
– be capable of correctly performing their functions with no conflicts with the surrounding environment,
– provide sufficient controllability and the maximum possible degree of automation and computerization of operators' activities,
– guarantee that the technical durability, economic lifetime and innovative qualities of the facilities are employed to a reasonable degree,
– make sure that environmental requirements are met,
– be secured against unauthorized and undesired access to the facilities,
– be furnished with ergonomic technical solutions, including easy control and operation facilities,
– guarantee that the facilities are ready to perform their function in a fault-free manner, with due regard to the technical condition of the facilities as well as the physical and mental status of operators,
– ensure current and permanent insensitivity to exposure to internal and external harmful factors, and tolerance of inherent faults and operators' errors,
– be equipped with technical and medical neutralizing and counteracting means, as well as measures for the elimination of the effects of critical disasters,
– meet requirements to insure the facilities against damage, and operators against third-party liability as a consequence of failures, injuries to operators and damage to the facilities and goods.
The methodological development of safety systems should give answers to the following questions: Who claims the need to initiate safe execution of the assignment with the use of the system? Whose needs related to system safety assurance must be fulfilled? Whose point of view is taken into account when the needs for system safety assurance are considered? How many viewpoints can be respected when the need for system safety assurance is considered? What is the position of system safety assurance in the hierarchy of the organization's needs? When should the fulfilment of system safety assurance needs start, and how long should it last? Which techniques for system safety assurance are applied, and which methods and expenditures are anticipated? What external circumstances are predictable during the system assurance process? Which disturbances and which conducive factors are expected? What are the constraints related to safety assurance means and methods? How can the system assurance process be made effective? What methodologically and substantively reasonable approaches are to be applied? How to
Figure 1. Protection techniques, protection devices.
The safety of transportation devices is a priority problem in exploitation, particularly for inspected devices. Requirements placed on the device maker, as well as technological progress, make it possible to improve constructions taking into consideration users' (clients') needs and expectations. The device safety concept is drifting from a passive approach towards a mixed one, taking into account active and adaptive types of solutions. The above requires a system approach to the shaping of device safety, taking into account the distinguished device subsystems, the integration of all actors engaged in the operating process (movements of people and loads, interaction with the surroundings), and its continuous and evolutionary character. Device safety is built up in all phases of the device's life and is verified in the operation phase (exploitation includes operation and maintenance).
Meeting user needs in the field of safety is in practice difficult, because the impacts have a complex character, the device unit cost is very high, and it mostly affects unit production. This is the reason why, in practice, special attention is focused on integrated undertakings which rationalise the exploitation safety of the device in particular environmental conditions, as well as on modernisation (a reengineering approach) based on such fields of activity as technology, data mining and management. To maintain the intended safety level, a complex approach is necessary.
Market globalization poses new challenges for the management of activities, in particular transportation activities. Techniques are being sought that make possible the effective management of transportation systems, taking safety and reliability into consideration. Telematics techniques offer such a chance (Szpytko, Jazwinski, Kocerba, 2007). As a result of the use of telematics in different fields of human activity, it is possible to reduce the exploitation costs of technical devices. The techniques which support telematics (e.g. software, hardware and tools) are rapidly improving. There is a growing requirement for so-called intelligent transport services (ITS, Intelligent Transport Services) and for dynamic management of
Figure 2. The transport active knowledge model: I – knowledge and skills module, IN – inputs, SA – auto-corrective module, TE – telematic unit.
transportation devices (DTM, Dynamic Traffic Management), both over large distances and in integrated automated manufacturing.
The transport system is mostly composed of three categories of agents/actors: device A, man-operator B (device operator B1, service/maintenance B2, general coordinator/management B3) and safety subsystem C (in complex systems the total number of actors is equal to N, e.g. D – surroundings). Between the actors there exist specified relations/controls (Figure 2). For example, between operator and device attributes several correlations exist: perception – information visualization, knowledge – monitoring, skills – operation realization ability, decision-making ability – corrective auto-activity, reaction to external stimulus – device safety and strength.
Example expectations of the safety subsystem are as follows:
– recognition and signalling of the technical state of the device,
– recognition and signalling of the physical and psychological predisposition of the operator,
– recognition and signalling of possible hazards coming from the surroundings (e.g. anti-collision sensors),
– passive safety (over-dimensioning of the subsystems),
– active safety: overload limiters, active and oriented energy dissipation during undesirable events, intelligence-based safety systems using sensors to estimate acting forces, and manually operated special safety
D(i, k, j)    (1)

where D denotes an activity, i the agent implementing activity D, k the category of the activity, and j the category of information accompanying the k-th category of activity D realized by the i-th agent, including the safety indicators (which describe the system safety features).
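One way to read the indexing D(i, k, j) in (1) is as a record keyed by agent, activity category and accompanying information category. A purely illustrative sketch of such an encoding; all names and example values here are ours, not the paper's:

```python
# Illustrative encoding of the D(i, k, j) indexing from Eq. (1): an activity
# record keyed by agent i, activity category k and information category j,
# carrying the safety indicators that describe the system safety features.
# All names and example values are ours, not the paper's.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Activity:
    agent: str                  # i: e.g. device A, operator B1, maintenance B2
    category: str               # k: category of the activity
    information: str            # j: category of accompanying information
    safety_indicators: tuple = field(default_factory=tuple)

d = Activity(agent="B1", category="load movement",
             information="anti-collision sensor reading",
             safety_indicators=("overload limiter state",))
print(d.agent, d.category)
```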
M_SK = f (x, y, z)
FINAL CONCLUSION
ACKNOWLEDGEMENT
The research project has been financed from the Polish
Science budget.
REFERENCES

Aven, T. & Vinnem, J.E. (eds) (2007). Risk, Reliability and Societal Safety. Taylor & Francis Group, London.
Bertalanffy, L. von (1973). General Systems Theory. Braziller, NY.
Borgon, J., Jazwinski, J., Sikorski, M. & Wazynska-Fiok, K. (1992). Niezawodnosc statkow powietrznych (Reliability of Aircraft). ITWL, Warszawa.
Cempel, C. (2006). Teoria i inzynieria systemow (Theory and Engineering of Systems). ITE, Radom.
Elsayed, A.E. (1996). Reliability Engineering. Addison Wesley Longman Inc., Reading.
Jazwinski, J. & Wazynska-Fiok, K. (1993). Bezpieczenstwo systemow (Safety of Systems). PWN, Warszawa.
Why ISO 13702 and NFPA 15 standards may lead to unsafe design
S. Medonos
Petrellus Engineering Ltd, UK
R. Raman
Lloyds Register, Australia
ABSTRACT: Many international standards specify firewater deluge rates intended to protect personnel from thermal radiation from fires during escape, and to cool equipment and structures affected by thermal radiation or direct flame impingement. ISO 13702 and NFPA 15 are popular international Standards, frequently used for the design of firewater systems in the offshore and onshore oil & gas industry and in the chemical process industry. Process fires in these industries are often pool fires and jet fires, where jet fires are high-momentum gas or multi-phase fires. Recent attempts to apply the deluge rates specified in these Standards in practical plant design have shown that the rates are not fire specific, but generic. Since fires vary widely in intensity and in their mechanism of escalation to structures, the use of generic deluge rates may lead to inadequate fire protection of personnel and plant. This Paper:
– examines potential fire situations and escalation mechanisms,
– provides a critique of existing fire protection standards with respect to deluge systems,
– identifies and suggests further development of technical information and Standards to provide design guidance, and
– proposes deluge rates for interim practical applications in the hydrocarbon process industries until new information becomes available.
1.1
Figures 1 and 2 show examples of offshore installations on fire. The following definitions are normally
used:
Pool fire: A turbulent diffusion fire burning on a
pool of liquid fuel spilt onto an open surface, the burning rate of which is determined by fuel type and pool
size.
Jet fire: A turbulent diffusion flame resulting from
the combustion of a steady release of pressurised
gaseous, multi-phase or liquid fuel: Significantly
Figure 1.
Figure 2.
Table 1 gives an overview of the equipment to be protected and the guidance that can be obtained from
current Standards and Guidelines NFPA 15, FABIG,
OTI, NORSOK and ISO 13702 (Refs. 4, 5, 6, 7, 8
respectively). In Table 1, the information missing in
these Standards and Guidelines is written in the form
of questions in italics.
3
Table 1. Deluge rates from standards and guidelines (the missing information required for correct practical application is described in the form of questions in italics).
Wellheads
To be protected mostly against
jet fires, including jet fires
from well blowouts
Jet fires are directly from
very high well pressure
NFPA 15 (2007 version) does not specifically address fire protection of wellheads.
FABIG Technical Note 8 (2005) gives information on page 128 for design guidance.
It gives a figure of 400 litres/min/well for the protection of a wellhead. Is this for
pool or jet fires, impinging/engulfing, non-impinging/non-engulfing?
The rates given in the FABIG Technical Note are based on the document Interim
Guidance Notes for the Design and Protection of Topside Structures against Fires
and Explosions, The Steel Construction Institute, Publication Number SCI-P-112,
1992, Ref. 9.
OTI 92 607 gives 400 litres/min/well, which is taken from Availability and properties
of passive and active fire protection systems, Steel Construction Institute for the HSE
OTI 92 607. Is this for impinging/engulfing or non-impinging/non-engulfing flame?
NORSOK does not specifically address fire protection of wellheads.
EN ISO 13702 gives a typical minimum of 10 litres/min/m2 or 400 litres/min/well for wellhead/manifold areas.
What type of fire is this for? These rates do not seem to be equivalent to each other, as it would implicitly mean that a wellhead may occupy an area of 40 m2.
Wellhead areas
To be protected against
jet fires and pool fires
Jet fires may be directly
from very high well pressure
but their jet flame momentum
may be reduced by the flame
impacting on equipment and
structures
Processing areas and auxiliary
equipment, and pressure
vessels
To be protected against pool
and jet fires
NFPA 15 (2007 version) does not specifically address fire protection of wellhead areas.
FABIG and OTI do not specifically address wellhead areas.
NORSOK S001 (Ref. 7) gives 20 litres/min/m2 .
What type of fire is this for?
EN ISO 13702 gives typical minimum 10 litres/min/m2 or 400 litres/min/well
for wellhead/manifold area, and 400 litres/min/m2 for blowout preventer area.
What type of fire is this for? The 10 litres/min/m2 or 400 litres/min/well do not
seem to be equivalent to each other.
Table 1. (continued)
NORSOK S001 (Ref. 7) gives 10 litres/min/m2 for pool fire and jet fire. Is this for
impinging/engulfing or non-impinging/non-engulfing flame?
EN ISO 13702 gives typical minimum 10 litres/min/m2 with the exception of pumps/
compressors where it gives 20 litres/min/m2 . What type of fire is this for?
EN ISO 13702 does not specifically address fire protection of pressure vessels, but gives a typical minimum of 10 litres/min/m2, with the exception of pumps/compressors, where it gives a typical minimum of 20 litres/min/m2. What type of fire is this for?
To be protected against
pool and jet fires
NFPA 15 (2007 version) specifies a minimum of 4.1 l/min/m2 over the wetted area
regardless of type of fire. Is this for pool fire, jet fire, impinging/engulfing,
non-impinging/non-engulfing?
FABIG Technical Note 8 (2005): same as for other equipment above.
OTI 92 607: same as for other equipment above.
NORSOK does not specifically address fire protection of horizontal structural steel.
EN ISO 13702 gives 4 litres/min/m2 .
What type of fire is this for?
NFPA 15 (2007 version) states that vertical structural steel should be protected at a rate
of 10.2 l/min/m2 regardless of type of fire. Is this for pool fire, jet fire, impinging/
engulfing, non-impinging/non-engulfing?
FABIG Technical Note 8 (2005): same as for other equipment above.
OTI 92 607: same as for other equipment above.
NORSOK does not specifically address fire protection of vertical structural steel.
EN ISO 13702 gives 10 litres/min/m2 .
What type of fire is this for?
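The equivalence question raised above for the EN ISO 13702 wellhead figures (a typical minimum of 10 litres/min/m2 or 400 litres/min/well) reduces to simple arithmetic: the two rates deliver the same flow only for one specific wellhead footprint. A small sketch of that check (the function name is ours, not from any standard):

```python
# Area at which an area-based deluge rate and a per-well deluge rate deliver
# the same water flow. Rates are in litres/min/m2 and litres/min/well.

def equivalent_area_m2(rate_per_well, rate_per_m2):
    """Wellhead area for which both ISO 13702 figures give the same flow."""
    return rate_per_well / rate_per_m2

area = equivalent_area_m2(rate_per_well=400.0, rate_per_m2=10.0)
print(area)  # 40.0: the rates match only if a wellhead occupies 40 m2
```

For any smaller footprint the per-well figure delivers more water than the area-based figure, which is why the two rates in the standard cannot both be read as generic minima.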
Table 2. Deluge rates in standards/guidelines (litres/min/m2 unless stated otherwise) for the objects to be protected (wellheads: 400 litre/min/well or 400 litre/min/m2; wellhead areas: 10 to 400 litre/min/m2; process areas and auxiliary equipment, including pressure vessels, pumps and compressors: 10 to 20; vertical structural steel; horizontal structural steel), indicating for jet fires and pool fires, for first and subsequent flame impingements, whether reduction of fire intensity, cooling of the objects to be protected and extinguishment of the fire can be achieved.
Figure 3. Computer plot representation of impingement of a gas jet flame onto an object, as obtained from Brilliant.
of its path and away from the surface that the deluge
is supposed to cool. The flame kinetic energy will be
lower for small releases than for large releases. The
kinetic energy will be higher upstream of the jet flame
than downstream of the flame. Also, most of the jet
fires will be obstructed, where:
The part of the jet flame before it impacts on a first
obstacle will have a high momentum, but a large
part of the momentum may be lost by the impact,
depending on the size and shape of the obstacle; and
The part of the jet flame downstream of the impact
will have lower momentum than the part upstream
of the impact.
It is therefore possible that deluge of a rate that will
be pushed away by the flame upstream of the impact
will be effective (i.e. not pushed away) downstream of
the impact.
Taking this into consideration, and due to the lack of other data, Table 2 (Ref. 16) has been developed qualitatively for consideration in risk assessments until quantitative information and/or calculation models become available. Table 2 takes the deluge rates from the Standards and Guidelines in Table 1 and provides information on what fire protection can or cannot be practically achieved for various fire scenarios.
Table 2 covers scenarios where the deluge is applied
into the flame to reduce the flame intensity and onto
surfaces of equipment and structures to provide cooling. The Table does not address the situation whereby
the deluge is applied into the area between the flame
and the object or person to be protected from thermal
radiation. The radiative heat flux received by an object or person falls off sharply with distance from the heat source. The effectiveness of protection against thermal radiation therefore strongly depends on how far the threatened object or person is from the fire.
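As a rough illustration of this distance dependence, a simple point-source model (a standard simplification, not taken from the Paper) estimates the received radiative flux as q = f·Q/(4·pi·d^2), where Q is the total heat release rate and f the fraction of heat radiated. The sketch below ignores atmospheric transmissivity and flame geometry, and the Q and f values are illustrative assumptions:

```python
import math

def point_source_flux(q_total_kw, radiative_fraction, distance_m):
    """Radiative heat flux (kW/m2) from the point-source model,
    ignoring atmospheric transmissivity and flame geometry."""
    return radiative_fraction * q_total_kw / (4.0 * math.pi * distance_m ** 2)

# Illustrative 50 MW fire with 30% of the heat radiated: the received flux
# drops by a factor of four for each doubling of the distance.
for d in (5.0, 10.0, 20.0, 40.0):
    flux = point_source_flux(50_000.0, 0.3, d)
    print(f"{d:5.1f} m  {flux:7.2f} kW/m2")
```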
In a review of fire research on offshore installations, Dalzell (Ref. 17) has stated that the design of deluge systems requires much more focus, with the assignment of specific roles: the cooling of exposed critical equipment, the cooling and control of pool fires through the suppression of fuel vaporisation, and the interaction with combustion in the high-level hot zone. These require totally different design philosophies, and one catch-all design cannot achieve them all.
CONCLUSIONS

REFERENCES

[1] Guidelines for the Design and Protection of Pressure Systems to Withstand Severe Fires, The Institute of Petroleum, March 2003, ISBN 0 85293 279 0.
[2] Bennet, J.F., Shirvill, L.C. and Pritchard, M.J., Efficiency of Water Spray Protection Against Jet Fires Impinging on LPG Storage Tanks, HSE Contract Research Report 137, 1997.
[3] Gosse, A.J. and Evans, J.A., The Effectiveness of Area Water Deluge in Mitigating Offshore Fires, Safety in Offshore Installations, 1999.
[4] NFPA 15, Standard for Water Spray Fixed Systems for Fire Protection, 2007 Edition.
[5] Availability and Properties of Passive and Active Fire Protection Systems, prepared by the Steel Construction Institute for the Health and Safety Executive, OTI 92 607.
[6] Protection of Piping Systems Subject to Fires and Explosions, FABIG, Technical Note 8.
[7] Technical Safety, NORSOK S001, Revision 3, January 2000.
[8] Petroleum and Natural Gas Industries. Control and Mitigation of Fires and Explosions on Offshore Production Installations. Requirements and Guidelines, EN ISO 13702, June 1999.
[9] Interim Guidance Notes for the Design and Protection of Topside Structures against Fires and Explosions, The Steel Construction Institute, Publication Number SCI-P-112, 1992.
[10] Lees, F.P., Loss Prevention in the Process Industries: Hazard Identification, Assessment and Control, Second Edition.
[11] Roberts, T.A., Directed Deluge System Designs and Determination of the Effectiveness of the Currently Recommended Minimum Deluge Rate for the Protection of LPG Tanks, Health and Safety Laboratory, Journal of Loss Prevention in the Process Industries 17 (2004): 103–109.
[12] Fire Protection Considerations for the Design and Operation of Liquefied Petroleum Gas (LPG) Storage Facilities.
[13] FABIG Newsletter 11, December 1994.
[14] Raman, R., Accounting for Dynamic Processes in Process Emergency Response Using Event Tree Modelling, paper presented at the 19th CCPS International Conference, Orlando, Florida, 2004.
[15] Roberts, T.A., Medonos, S. and Shirvill, L.C., Review of the Response of Pressurised Process Vessels and Equipment to Fire Attack, Offshore Technology Report OTO 2000 051, Health & Safety Executive, 2000.
[16] Consideration of Deluge Systems in Risk Assessment, an internal research project, Petrellus Engineering Ltd, 2007.
[17] Dalzell, G., Offshore Fire Research: A Summary, Hazards XVIII: Inherent Safety, 2006.
[18] Berge, G., Theory and Description of the Structure of the Multi-Physics Simulation System Brilliant, Petrell AS, 2006.
[19] Berge, G., Multi-Physics Simulation of Big Scale Jet Fire Experiments, Petrell AS, 2004.
R. Miles
Health and Safety Executive, Bootle, UK
ABSTRACT: Work on so-called high reliability organizations suggests that it is highly generalized qualities of behaviour like collective mindfulness, not particular choices of organizational structure, that confer very high levels of reliability on inherently vulnerable and hazardous organizational activities. Similarly, actors in such organizations seem to stress the capacity to refine and perfect a chosen way of organizing, rather than the choice of a specific way of organizing in the first place. Our aim in this study, which is part of a larger programme of work, was to examine what this means in the context of the offshore hydrocarbons industry. In particular, we propose the idea of rigour as a general concept for encapsulating the capacity to make whatever model of organization is chosen a safe and reliable one. The paper describes an attempt to characterize what rigour means in this context by analyzing a dataset of offshore incident reports, in which the diagnoses of the report writers were interpreted in terms of shortfalls in rigour. This led to a characterization of rigour in terms of multiple aspects including, for example, reflexiveness (the capacity to reflect on the limitations and consequences of the activity itself) and scepticism (the capacity to be aware of and inquisitive about potential discrepancies and problems). Some of these aspects – for example situatedness and uniformity – often appear contradictory in particular situations, however. So rigour has to be seen recursively: its various aspects are brought to bear in particular situations in a way that is itself rigorous.
INTRODUCTION
established firms and new operators following deregulation, analysis tends to be inconclusive (for example
Bier et al, 2003; Oster et al, 1992). Moreover, even if it
could somehow be determined that being a new entrant
made an operating firm more reliable or less reliable,
this knowledge could not easily be acted on. Firms cannot simply be made into new entrants or established
operators, and cannot simply have the qualities of new
or established operators imposed on them. From a regulatory standpoint, if a company is operating within
the law the regulator has no basis for telling it to change
its nature or practice.
In fact, when we look at the understandings of
experienced, knowledgeable and reflective observers,
these seem to indicate that the type of organization – including the question of whether it is a new entrant or established firm – is not central to explanations of
its reliability. In particular, in a small number of interviews that preceded this work, regulatory inspectors
pointed to a somewhat vaguer, but deeper, explanation
of reliability. This essentially concerned the capacity
of organizations to recognize the implications of whatever ways of organizing they had chosen, to refine
whatever practices they had chosen, and to follow
through as it were on their choices. It was less the particular choices they made (for example about whether
to subcontract certain activities or operate them internally), and more the thoroughness with which those
choices were then organized and managed. This is
not to say that the choices were arbitrary. But the
basic feeling was that they were less instrumental in
achieving reliability than the capacity to make those
choices work. For example, in some cases the organization owning an installation was quite different from
the organization that operated it. What mattered to
safety, in the experience of some actors at least, was
not whether ownership and operation were separated,
but whether the consequences of this separation were
fully understood and managed.
Part of our study of this problem has involved
the analysis of historical incidents and accidents, as
explained in more detail in section 3. The idea was
to analyse the way in which official sense was made
of these incidents in terms of what it said about this
notion of making choices work.
2 LITERATURE
The relevant literature is that on high reliability organizations (or HROs). This literature points to a number
of ways in which social organization solves the problem of making inherently hazardous systems acceptably reliable. It refers to redundancy and slack of
various kinds (for example Roberts, 1990; Rochlin
et al, 1987; Schulman 1993). It places considerable
stress on learning from simulated rather than concrete
METHOD
RESULTS
DISCUSSION
Table 1. Aspects of rigour.
that rigour is not necessarily a specifiable condition, but can also be a process that may not have
reached a particular end-point. It is important to find
ways of making systems achieve proper, substantive
CONCLUSION
Table 2. (Columns: quality of activity implying rigour; nature of rigour implied; shortfall in rigour; incident description.)
Historicism
During attempts to open a valve on a mothballed test header, a plug shot out of the valve body; personnel evacuated; it was then assumed that the test header isolation valve was closed and a decision was made to repressurise the line; this resulted in a second, larger hydrocarbon release from the bleed port as personnel were fitting a replacement plug. The review conducted to bring the valve back into service was inadequate and did not identify the need to replace the bleed plug; a previous incident involving corrosion of a bleed plug had prompted replacement of several plugs on operating valves, but as this valve was mothballed at the time, its plug was not replaced; following the initial release, the PTW [permit to work] was not reviewed to ensure that the new valve plug was fitted, that gas trapped in the system was isolated and vented, that new or changed hazards were identified, that adequate controls were in place, and that process steps were clearly defined and communicated; lack of communication between the control room and personnel in the field meant that people were not made aware of what actions had been taken or were about to be taken
Planfulness
Tubing supporting a BOP [blow-out preventer] stack assembly bent suddenly under the load, causing the BOP to break free and fall overboard. A CT technician attached to the BOP assembly by his safety line was pulled overboard. There were no engineering calculations; the temporary work platform precluded attachment of safety lines; there was a lack of detailed planning and no discussion of potential anomalies; the contractors, who did not know the unusual nature of the set-up until reaching the site, were not involved in the planning; there was no JSA meeting or shift hand-off meeting, and no oversight of the contractor by the operator
Reflexivity
Gas release from a corroded section of riser above the water-line, not accessible to internal or external inspection; no inspection hatches on the caisson, and the diameter of this section of gas riser too small to allow intelligent pigging. Inspection reports should highlight any latent defects in inspection and maintenance regimes; equipment should be designed to provide adequate access for inspection and maintenance; it should be recognised that, when responding to a potential gas release, the response to the initiating event can also contribute to harm to people
Scepticism
An attempt to free stuck tubing led to parting and ejection of slips that struck an individual and led to loss of well control; failure by the deceased to calculate an adequate safety factor when determining maximum pull force; the deceased acted beyond his authority; ineffective managerial oversight at all levels; the operator was entirely reliant on consultants; the operation should in any case have been shut down and regulator approval sought; neither of the operator's supervisors shut down the job in spite of the safety violations and unsafe work practices, nor reported these to their managers; the operations manager approved the activity on assumptions he failed to verify and was disengaged from day-to-day operation; he also failed to follow up an unmet request for details of the procedure; an office consultant was misinformed about the operation but failed to make any visit and failed to recognise a potential conflict of interest, with the deceased representing two firms
Situatedness
Loss of drilling fluids to a highly permeable, geologically anomalous, thick sand deposit encountered while drilling; the sand zone had not been identified in 4 wells previously drilled from the platform, and was encountered less than 100 ft from the nearest location of those wells; failure or inability to forecast the presence of the thick anomalous sand through use of sparker and shallow gas hazard surveys; lack of forewarning of the odd morphology contributed to the failure to plan for difficult circulation problems; the generic setting depth of the drive-pipe prevented emergency isolation of the thief zone, possibly because it was set by the construction department at a generic depth rather than at a depth tailored to actual well-specific requirements determined by drilling operations
(Continued)
Table 2. (continued).
Substantiveness
An offshore supply vessel was having its mud and base oil tanks cleaned; oxygen levels were checked and three people entered the tank with a suction hose; when all the base oil had been removed, the team started to remove the hose but one connection jammed; to facilitate removal, the hose end section in the engine room was removed; the pumping operation ashore was stopped and the storage tanker valves closed; a misunderstanding about the correct sequence of operations allowed the residual vacuum in the suction hose to be lost, allowing base oil remaining in the hose to flow over a running engine, which ignited; the permit to work system was not fully understood and the permit issued was inadequate for the activity.
principle there does not have to be one way of organizing that achieves it. Reflexiveness, for instance, can be institutionalized and built into activity through the formal means of controlling the activity, and this is conspicuous in the idea of doing risk assessments. But this is not the only way to achieve reflexiveness, since people can choose to reflect on the consequences of what they are about to do without institutional constraints. Of course, in certain circumstances one way of achieving it might look better than another, and sometimes we might come to a completely consensual view about this. But in general there is nothing in the aspects of rigour we have identified that indicates there is only one way, or always an optimal way, of achieving them.
ACKNOWLEDGEMENT
Many thanks are due to staff at the UK Health and
Safety Executive for their contributions to this work.
The programme of which this was a part has been
partly funded by the HSE under contract JN3369.
REFERENCES
Bier, V., Joosten, J. and Glyer, D. 2003. Effects of Deregulation on Safety. Kluwer Academic Publishers.
Bigley, G.A. and Roberts, K.H. 2001. The incident command system: high reliability organizing for complex and volatile task environments. Academy of Management Journal, 44: 1281–1300.
Grabowski, M. and Roberts, K. 1997. Risk Mitigation in Large-Scale Systems: Lessons from High Reliability Organizations. California Management Review, 39: 152–161.
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: The optimization of system safety involves a tradeoff between risk reduction and investment cost. The expected utility model provides a rational basis for risk decision making within the context of a single decision maker. Yet the costs of risk reduction and the consequences of failure are generally distributed over a number of individuals. How, then, to define optimality in terms of (different) individual preferences? This paper applies insights from welfare and financial economics to the optimization of system safety. Failures of technological systems are typically uncorrelated with market returns, so that technological risks could, at least theoretically, be financed against expected loss. It will be shown that mechanisms for risk transfer such as insurance sometimes allow us to relax assumptions that would otherwise have to be made concerning the exact shape of individual utility functions and the interpersonal comparability of utility. Moreover, they allow us to lower risk premiums, as insurance premiums are typically lower than certainty equivalents. The practical significance of these results is illustrated by a case study: the optimal design of a flood defense system.
Keywords:
1 INTRODUCTION
2
2.1
2.2
ai ⪰ aj if and only if E[U(ai)] ≥ E[U(aj)]   (2)
Consider an individual who is confronted with a probability of losing q dollars due to the failure of a technological system. He or she can reduce this probability from unity to pm by investing m dollars in risk reduction, where the probability pm of losing q dollars is a strictly decreasing function of m. Denote initial wealth by w. The optimal investment m* can now be determined by maximizing the expected utility E[U(x)] of total income x, which is a function of the investment m.
E[U(x)] = pm U(w − m − q) + (1 − pm) U(w − m)   (3)
Differentiating expected utility with respect to m then yields the optimal investment m*. This optimal investment depends, among other things, on the shape of the utility function. A risk-averse individual, who considers a risk worse than its expected loss, should for instance invest more than a risk-neutral person. Conversely, the certainty equivalent of a risk-seeking individual would be lower than the expected loss, and such an individual should invest less.
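To make the optimization concrete, the sketch below maximizes the expected utility in (3) over a grid of investments m. The failure-probability curve pm = p0·e^(−k·m), the choice of logarithmic utility for the risk-averse case, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import math

def expected_utility(m, utility, w=100.0, q=50.0, p0=0.5, k=0.1):
    # Failure probability p_m, strictly decreasing in the investment m
    # (illustrative functional form, not the paper's).
    p = p0 * math.exp(-k * m)
    # Equation (3): E[U(x)] = p_m U(w - m - q) + (1 - p_m) U(w - m)
    return p * utility(w - m - q) + (1 - p) * utility(w - m)

grid = [i * 0.01 for i in range(4000)]  # candidate investments m in [0, 40)

# Risk-neutral decision maker (linear utility) versus risk-averse (log utility).
m_neutral = max(grid, key=lambda m: expected_utility(m, lambda x: x))
m_averse = max(grid, key=lambda m: expected_utility(m, math.log))
```

With these numbers the risk-neutral optimum lies near m ≈ 9.2, and the risk-averse optimum is larger, matching the qualitative conclusion in the text.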
2.3 Optimal insurance
3
3.1
Σ(k=1..n) E[Uk(ai)] ≥ Σ(k=1..n) E[Uk(aj)]   (8)
Or
E[U1(x1) + U2(x2)] = pM U1(w1 − m1 − q1) + (1 − pM) U1(w1 − m1) + pM U2(w2 − m2 − q2) + (1 − pM) U2(w2 − m2)   (10)

E[U1(x1) + U2(x2)] = pM (U1(w1 − m1 − q1) + U2(w2 − m2 − q2)) + (1 − pM) (U1(w1 − m1) + U2(w2 − m2))   (11)
E[U1+2(x)] = pM U1+2(W − M − Q) + (1 − pM) U1+2(W − M)   (12)

where W = w1 + w2, M = m1 + m2 and Q = q1 + q2   (13)
Though convenient, the assumption that all individuals have linear and equal utility functions is rather unrealistic. After all, it implies risk-neutral behavior and a marginal utility of income that is constant across individuals. But it seems unlikely that an extra dollar in the hands of a millionaire brings the same amount of pleasure as an extra dollar in the hands of a poor man.
Optimizing the safety of technological systems is clearly not without considerable difficulty. The expected utility framework can be extended to a multi-actor setting, but only when additional (disputable) assumptions are made. In defense of the optimization of total output, one might argue that its ramifications remain limited when the unfair distributive consequences of a large number of decisions cancel out. This, however, need not be the case.
3.2
With

m1 = M q1 / (q1 + q2)   and   m2 = M q2 / (q1 + q2)   (15)
When mechanisms for risk transfer operate efficiently, and when costs are borne in proportion to
individual exposures, the practical and normative
dilemmas associated with the aggregation of individual preferences could be completely overcome.
We could then consider gains and losses under the
condition of risk neutrality.
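Equation (15) divides the total investment M in proportion to individual exposures; a minimal sketch of that sharing rule, generalized here to n individuals as an assumption beyond the two-person case in the text:

```python
def proportional_shares(M, exposures):
    # m_k = M * q_k / (q_1 + ... + q_n); reduces to equation (15) for n = 2.
    total = sum(exposures)
    return [M * q / total for q in exposures]

# Two individuals with exposures q1 = 50 and q2 = 30 sharing M = 8:
shares = proportional_shares(8.0, [50.0, 30.0])
```

The shares always sum back to M, so costs are borne exactly in proportion to exposure.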
It seems unlikely that either of the two conditions that allow us to strongly simplify the optimization of system safety will ever be completely met. While investment costs and exposures might, for instance, be distributed in a fairly proportional way, it seems unlikely that they will ever be distributed in an exactly proportional way. And perfect insurance presupposes that all consequences of failure can be compensated for, that insurance is offered at the actuarially fair premium (expected loss), and that full compensation is so frictionless that people do not even notice that they have suffered a loss. The optimization of risks to the public therefore typically involves a trade-off between practicality and scientific rigor.
∫0^T p Q e^(−rt) dt   (16)

p = e^(−(h−a)/b)   (17)

∫0^T e^(−(h−a)/b) Q e^(−rt) dt   (18)
Where M0 = fixed cost of dike heightening (euro); M = variable cost of dike heightening (euro per m); h = dike height (m); a, b = constants (m).
The optimal standard of protection (h*) can now be found by differentiating NPV with respect to h. Figure 1 shows how the net present value of total cost varies with dike height.

Figure 1. NPV in euro versus height (h) in meter.
Min [ M0 + M h + K ∫0^T e^(−(h−a)/b) Q e^(−rt) dt ]   (19)
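A numerical sketch of this minimization, using an exponential flooding probability e^(−(h−a)/b); all parameter values are illustrative assumptions, not the case-study data:

```python
import math

def total_cost(h, M0=1e6, M=1e6, a=3.0, b=0.5, Q=1e9, r=0.04, T=100.0, K=1.0):
    # Present value of expected flood damage over [0, T]:
    # K * Q * e^{-(h - a)/b} * integral_0^T e^{-rt} dt
    pv_damage = K * Q * math.exp(-(h - a) / b) * (1 - math.exp(-r * T)) / r
    # Objective of (19): fixed cost + variable heightening cost + damage.
    return M0 + M * h + pv_damage

heights = [3.0 + i * 0.01 for i in range(900)]  # candidate heights in [3, 12) m
h_star = min(heights, key=total_cost)
```

With these illustrative numbers the optimum sits near h ≈ 8.4 m, where the marginal heightening cost M equals the marginal reduction in discounted expected damage.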
Max [E[U]]   (20)

with

E[U] = −M0 − M h − ∫0^T Cp,Q e^(−rt) dt   (21)

Figure 2. NPV in euro versus height (h) in meter, for K = 100, 10, 5 and 1.
U(x) = x^γ / γ   for γ < 1   (22)

Max [ n p (w − m0 − m h − ∫0^T q e^(−rt) dt)^γ / γ + n (1 − p) (w − m0 − m h)^γ / γ ]   (23)
CONCLUSIONS
REFERENCES
Arrow, K.J. 1971. Insurance, risk and resource allocation. In K.J. Arrow (ed.) Essays in the Theory of Risk-Bearing: 134–143. Amsterdam and London: North-Holland Publishing Company.
Arrow, K.J. 1963. Social Choice and Individual Values. New Haven and London: Yale University Press.
Bernoulli, D. 1738. Exposition of a new theory on the measurement of risk (original publication: Specimen Theoriae Novae de Mensura Sortis, Commentarii Academiae Imperialis Petropolitanae, Tomus V). Econometrica 1954, 22: 22–36.
Chavas, J.P. 2004. Risk Analysis in Theory and Practice. San Diego: Elsevier Academic Press.
Cummins, J.D., Lewis, C.M. & Phillips, R.D. 1999. Pricing Excess-of-Loss Reinsurance Contracts against Catastrophic Loss. In K.A. Froot (ed.) The Financing of Catastrophe Risk: 93–141. Chicago and London: The University of Chicago Press.
Faber, M.H., Maes, M.A., Baker, J.W., Vrouwenvelder, T. & Takada, T. 2007. Principles of risk assessment of engineered systems. In 10th International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP10). Tokyo, Japan.
Froot, K.A. 2001. The market for catastrophic risk: a clinical examination. Journal of Financial Economics 60: 529–571.
Kok, M., Vrijling, J.K., Van Gelder, P.H.A.J.M. & Vogelsang, M.P. 2002. Risk of flooding and insurance in The Netherlands. In Wu (ed.) Flood Defence. New York: Science Press Ltd.
Kunreuther, H. 1996. Mitigating disaster losses through insurance. Journal of Risk and Uncertainty 12: 171–187.
Kunreuther, H. & Pauly, M. 2006. Rules rather than discretion: Lessons from Hurricane Katrina. Journal of Risk and Uncertainty 33: 101–116.
ABSTRACT: Volatile Organic Compounds (VOCs) are the main factors involved in environmental pollution and global warming in industrialized nations. Various treatment methods, involving incineration, absorption, etc., have been employed to reduce VOCs concentrations. Activated carbon, zeolite, silica gel, alumina and so on have been broadly used to adsorb VOCs in many applications. Zeolite has been widely applied in various industrial fields, generally in the form of beads or pellets. Differential Scanning Calorimetry (DSC) was utilized to analyze the thermal characteristics of our self-made Y-type zeolite. This investigation was used to assess the adsorptivity of Y-type zeolite under high-temperature situations. In view of pollution control and loss prevention, versatility and analysis of recycled zeolite are necessary and useful for various industrial and household applications.
1 INTRODUCTION
beads or pellets. Many studies have applied various types of zeolite to analyze the adsorption effect for VOCs, and many investigations have analyzed the adsorption capability of various formats of zeolite (Ahmed et al., 1997; Breck, 1974; Dyer, 1988; Ichiura et al., 2003; Yang and Lee, 1998). This study appraised the thermal characteristics of self-made Y-type zeolite in the chemical industries. Figure 1 demonstrates the self-made Y-type zeolite manufacturing flowchart in Taiwan. VOCs were assimilated in a zeolite rotor-wheel system. The zeolite rotor-wheel system is composed of three parts: the adsorption area, the desorption zone, and the recuperative thermal oxidizer (RTO). Figure 2 exhibits the cross-sectional area of the zeolite rotor-wheel. The rotor-wheel is divided into two zones, the adsorption area and the desorption zone. The zeolite rotor-wheel system is continuous equipment for VOCs handling. Here, the green zone is the adsorption area for process gases, and the red zone is the desorption area, through which hot air below the boiling point of the wastes is passed. The thermal stability of the zeolite is an important factor for the best VOCs handling. Differential scanning calorimetry (DSC) was employed to evaluate and assess the thermal stability of zeolite and activated carbon from 200 to 600 °C (after chamber furnace tests). Results showed that the zeolite is a quite stable material in the VOCs
Table 1. Heat of endothermic reaction for three tests of self-made Y-type zeolite at 4 °C min⁻¹ by DSC.

Test no.   Sample mass (mg)   ΔH (J g⁻¹)
1          13.82              209
2          10.40              190
3          10.00              148

Figure 1. The self-made Y-type zeolite manufacturing flowchart from our laboratory: SiO2 and sodium silicoaluminate admixture, alkaline solution, 1st hydrothermal synthesis, 2nd hydrothermal synthesis, drying, calcination.
2.1 Sample

Zeolite was made in five steps in the manufacturing process: mixing, gelling, forming, drying, and calcination. First, SiO2 was mixed with sodium silicoaluminate powder (mixing ratio 6) and sodium hydroxide (NaOH) solution in a stirring machine. Secondly, the zeolite was put into an extruding forming machine. Finally, the zeolite was dried at 100 °C for 8 hrs and calcined at 450 °C in an oven (10 hrs). Figure 3 delineates the sample reconstruction tests of zeolite at a heating rate (β) of 4 °C min⁻¹ by DSC. The initial reaction of Y-type zeolite was endothermic, at 30 to 200 °C; the final reaction was exothermic, at 550 to 640 °C. The heat of the endothermic reaction was calculated as displayed in Table 1: in the sample reconstruction tests at 4 °C min⁻¹ it was about 150–200 J g⁻¹. An adsorbent (such as activated carbon or zeolite) adsorbs only a few wastes from air; to adsorb a great deal of wastes, the adsorbent must be desorbed at high temperature.
2.2 DSC

3.1 Analysis of zeolite
Figure 3. DSC heat flow (W g⁻¹) versus temperature (°C) of zeolite, with isothermal time 0 hr at room temperature and 10 hrs at 200 °C, 300 °C and 400 °C.
Table 2. Heat of endothermic reaction of self-made Y-type zeolite after isothermal holding, by DSC.

Sample mass (mg)   Holding time (hr)   ΔH (J g⁻¹)
13.82              10                  209
11.00              10                  128
9.50               10                  146
8.80               10                  21
10.00              10                  104
14.30              10                  99
3.2
Figure: DSC heat flow of zeolite, with isothermal time 10 hrs at 500 °C and 600 °C.
Figure: DSC heat flow (W g⁻¹) versus temperature (°C) of activated carbon, with isothermal time 10 hrs at room temperature and at 200, 300, 400, 500 and 600 °C.
VOCs in the chemical industries include various kinds of wastes and chemicals, such as trichloroethylene (TCE), acetaldehyde, phenol, 1,2-dichloroethane (DCE), etc. An adsorption material for VOCs must maintain its thermal stability and safety characteristics up to the boiling points of the wastes and chemicals. Zeolite is a quite stable substance from room temperature to 650 °C. Activated carbon was a dangerous adsorbent that showed flaming characteristics over 500 °C. In essence, the best desorption temperature of zeolite is 500 °C.
CONCLUSIONS
Figure: DSC curve of activated carbon. Initial temperature = 30 °C; end temperature = 640 °C; heating rate = 1 °C min⁻¹; heat of decomposition = 19,200 J g⁻¹; exothermic onset temperature = 300 °C.
ACKNOWLEDGMENT
ABSTRACT: The main objective of this study is to define the Emergency Response Team location in a specific area based on the risk of plants and facilities. The Center of Gravity and Hakimi network methodologies are the two different approaches used to define the location in a network based on index values and the distances between locations. These methodologies differ in one basic concept: the possibility of defining critical locations in the network in the first case, and their boundaries in the second case. The index in this case will be the frequency of hazardous events in each facility or plant located in the network.
The two methodologies will be implemented and the results assessed. Then a sensitivity analysis will be carried out, looking at specific elements such as alternative routes and population dislocation in the case of accidents. Furthermore, real historical data and the data usually used in Brazil for hazardous events will be assessed to check their influence on the final results.
A refinery case study will be carried out to define the Emergency Response Team location in a Brazilian refinery.
INTRODUCTION
emergency team location problem in the refinery taking into account risks, distances and other important
related issues.
2 ANALYSIS METHODOLOGY
Figure 1. Network methodology: 1 Boundaries limits; 2 Critical points positions; 3 Importance index; 4 Gravity center methodology; 5 Hakimi methodology; 6 Critical analysis; 7 Conclusion.
NETWORK METHODOLOGIES

3.1 Gravity center methodology

MIN Z = Σ(i=1..n) Fi √((Gx − Xi)² + (Gy − Yi)²)

Gx = [Σ(i=1..n) Fi Xi / DEi] / [Σ(i=1..n) Fi / DEi]
Gy = [Σ(i=1..n) Fi Yi / DEi] / [Σ(i=1..n) Fi / DEi]

and

DEi = √((Gx − Xi)² + (Gy − Yi)²)
The iteration stops when the differences |Gx^k − Gx^(k−1)| and |Gy^k − Gy^(k−1)| become sufficiently small, with the initial estimates

Gx¹ = Σ Fi Xi / Σ Fi   and   Gy¹ = Σ Fi Yi / Σ Fi

and the updates

Gx^k = [Σ Fi Xi / DEi^(k−1)] / [Σ Fi / DEi^(k−1)]   and   Gy^k = [Σ Fi Yi / DEi^(k−1)] / [Σ Fi / DEi^(k−1)]
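This iterative scheme is a weighted center-of-gravity (Weiszfeld-type) iteration; a minimal sketch, with the convergence tolerance and a zero-distance guard added as assumptions:

```python
import math

def gravity_center(points, freqs, tol=1e-6, max_iter=1000):
    # Initial estimates Gx1, Gy1: the frequency-weighted centroid.
    F = sum(freqs)
    gx = sum(f * x for (x, _), f in zip(points, freqs)) / F
    gy = sum(f * y for (_, y), f in zip(points, freqs)) / F
    for _ in range(max_iter):
        # Distances DEi from the current center to every point
        # (guarded against zero to avoid division by zero).
        d = [math.hypot(gx - x, gy - y) or tol for x, y in points]
        den = sum(f / di for f, di in zip(freqs, d))
        nx = sum(f * x / di for (x, _), f, di in zip(points, freqs, d)) / den
        ny = sum(f * y / di for (_, y), f, di in zip(points, freqs, d)) / den
        # Stop when successive estimates G^k and G^(k-1) agree.
        if abs(nx - gx) < tol and abs(ny - gy) < tol:
            return nx, ny
        gx, gy = nx, ny
    return gx, gy

center = gravity_center([(0, 0), (2, 0), (0, 2), (2, 2)], [1, 1, 1, 1])
```

For symmetric, equally weighted points the center is the geometric middle; raising one point's frequency pulls the center toward it, which is exactly how the importance index shifts the emergency team location.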
Figure 2. PDF corrosion.
The above figure represents the PDF of corrosion for different types of risk resources in a refinery. Corrosion is the basic cause of pipe leakage, which in turn causes catastrophic accidents. Despite different PDF results, all resources have similar characteristics for this event. Therefore, the center point in this network will not change considerably, because the importance index varies similarly over time.
If new events, such as critical and non-critical situations, are taken into account, the importance index for pipe leakage will vary over time, forcing the center point to change over time.
Given this PDF behavior over time, a new iteration will be required in both network methodologies in order to analyze the index changes over time in center point decisions. This must be done whenever the event frequency index is no longer constant.
CASE STUDY
6.2

Points      | X       | Y       | Fi
U-1         | 1282.78 | 716.15  | 0.001024
U-3         | 1291.35 | 480.57  | 0.001286
U-4         | 1203.69 | 500.45  | 0.001548
U-5         | 1376.08 | 672.97  | 0.003317
U-2         | 1558.06 | 451.4   | 0.002447
Off site    | 497.697 | 939.599 | 0.013255
Gate 1      | 0       | 300     | 0
Substation  | 300     | 800     | 0.013255
Table 2. Gravity center iterations.

Iteration 1: GX1 = 815.7277, GY1 = 729.2447 (Erro X = 0, Erro Y = 0)
Iteration 2: GX2 = 752.4623, GY2 = 585.2872 (Erro X = 63.2654, Erro Y = 143.9575)
3.2 Hakimi methodology

The main objective of this methodology is to compare the impact of emergency team localization in terms of response capacity. It takes into account the probability of catastrophic events occurring over time and the response duration time.
The first step is to create the Safety Block Diagram in order to represent catastrophic events and simulate
Table 3. Hakimi matrix 1.

Distance    | U-1    | U-3    | U-4    | U-5     | U-2    | Off-site | Gate 1 | Substation | Total
U-1         | 0      | 0.3215 | 0.4644 | 0.3317  | 1.4682 | 7.953    | 0      | 11.9295    | 22.4683
Gate 1      | 1.6384 | 1.929  | 2.1672 | 5.6389  | 4.4046 | 13.255   | 0      | 10.604     | 39.6371
Substation  | 1.0752 | 1.6718 | 1.8576 | 3.6487  | 3.9152 | 2.651    | 0      | 0          | 14.8195
U-3         | 0.256  | 0      | 0.1548 | 0.6634  | 0.7341 | 18.557   | 0      | 17.2315    | 37.5968
U-4         | 0.3072 | 0.1286 | 0      | 0.9951  | 0.9788 | 14.5805  | 0      | 15.906     | 32.8962
U-5         | 0.1024 | 0.2572 | 0.4644 | 0       | 1.3459 | 14.5805  | 0      | 14.5805    | 31.33085
U-2         | 0.6144 | 0.3858 | 0.6192 | 1.82435 | 0      | 15.906   | 0      | 21.208     | 40.55775
Off-site    | 0.6144 | 1.8004 | 1.7028 | 3.6487  | 2.9364 | 0        | 0      | 2.651      | 13.3537
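The Total column of the Hakimi matrices (frequency-weighted distances summed per candidate row, with the minimum total indicating the preferred location) can be sketched as follows; the 3-node matrix below is illustrative, not the refinery data:

```python
def hakimi_totals(dist, freqs):
    # Entry (i, j) is freqs[j] * dist[i][j]; the "Total" column sums each
    # row, and the best candidate location minimises that total.
    totals = [sum(f * d for f, d in zip(freqs, row)) for row in dist]
    best = min(range(len(totals)), key=totals.__getitem__)
    return totals, best

# Illustrative 3-node network with unit frequencies:
dist = [[0, 1, 2],
        [1, 0, 1],
        [2, 1, 0]]
totals, best = hakimi_totals(dist, [1.0, 1.0, 1.0])
```

Here the middle node wins because it minimises the total weighted distance to all other nodes.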
Table 4. Hakimi matrix 2.

Distance    | U-1    | U-3    | U-4    | U-5     | U-2    | Off-site | Gate 1 | Substation | Total
U-1         | 0      | 0.3215 | 0.4644 | 0.3317  | 1.4682 | 7.953    | 0      | 11.9295    | 22.4683
Gate 1      | 1.6384 | 1.929  | 2.1672 | 5.6389  | 4.4046 | 13.255   | 0      | 10.604     | 39.6371
Substation  | 2.6112 | 3.6008 | 4.1796 | 7.2974  | 7.5857 | 22.5335  | 0      | 0          | 47.8082
U-3         | 0.256  | 0      | 0.1548 | 0.6634  | 0.7341 | 18.557   | 0      | 17.2315    | 37.5968
U-4         | 0.3072 | 0.1286 | 0      | 0.9951  | 0.9788 | 14.5805  | 0      | 15.906     | 32.8962
U-5         | 0.1024 | 0.2572 | 0.4644 | 0       | 1.3459 | 14.5805  | 0      | 14.5805    | 31.33085
U-2         | 0.6144 | 0.3858 | 0.6192 | 1.82435 | 0      | 15.906   | 0      | 21.208     | 40.55775
Off-site    | 2.1504 | 3.7294 | 4.0248 | 8.6242  | 6.6069 | 0        | 0      | 22.5335    | 47.6692
Table 5.
Figure 4.
CRITICAL ANALYSIS
SENSITIVITY ANALYSIS
Figure 5.
CONCLUSION
REFERENCES
A.M. Cassula, Evaluation of Distribution System Reliability Considering Generation and Transmission Impacts, Master's Dissertation, UNIFEI, Nov. 1998.
Boaventura Netto, Paulo Oswaldo, Grafos: Teoria, Modelos, Algoritmos. São Paulo: E. Blücher, 2003.
H. Schäbe
TÜV Rheinland InterTraffic, Cologne, Germany
ABSTRACT: Imprecise definitions and use of safety terms and principles result in misunderstanding and misconception of safety principles. In this article we present a systematic overview and explanation of safety principles such as fail-safe, safe-life, fail-silent, redundancy etc. For these safety principles we give a systematic approach and show their advantages, shortcomings and pitfalls. All principles are presented in detail and compared, and their premises, use and possible problems are discussed. We present an overview of possible problems that can occur when applying these safety principles erroneously. In addition, some examples of the application of the safety principles are given.
1 MOTIVATION
2
- unclear terminology
- unclear assumptions
- unclear descriptions
2.1
In addition, there are differences between the versions of the same standard in different languages. This situation has motivated the authors to consider the following questions:
- Which safety principles exist at all?
- How are they defined?
- When can they be applied, and under which conditions and assumptions?
Safety principles are widely applied in safety techniques. The best known is the fail-safe principle in railway technology.
A safety-related railway system must enter a safe state upon occurrence of a failure. This implies the following questions:
- What is a failure?
- Was the state the system was in before the failure occurred an unsafe state?
- How is a safe state characterised?
BASIC TERMINOLOGY
Process, system boundaries and environment
Safe state
Figure 1. Safety terminology: specified requirements; correct development and manufacturing gives a correct system, while incorrect development and/or a manufacturing mistake gives a system with inherent faults; a deficiency (a physical failure or wrong information) or an inherent fault leads, via fault and failure, to a defective system, and the system fails.

Figure 2. Failure and malfunction: a failure may end in a safe failure state; a malfunction occurs when a demand to the system coincides with the defect.
SAFETY PRINCIPLES

To facilitate this, the efficacy and scope of safety principles must be clear. Safety principles can be used to control e.g.
a. random failures,
b. systematic failures,
c. hardware failures,
d. software failures.
- inherent fail-safety
- reactive fail-safety
- composite fail-safety

Figure 3. Tank example with feeding pipe, maximum level, overflow duct and outlet servo valve.
Table 1.

Safety principle        | Failures | Systematic faults
Inherent fail-safety    | X        |
Reactive fail-safety    | X        |
Composite fail-safety   | X        | X1)

3.2
- The results of all these sub-systems must be identical (composite fail-safety with comparison), or a majority of the sub-systems must come to the same result (majority vote).
- Detection of a failure of a sub-system and the subsequent safety reaction must be fast enough that the probability of a failure of a second sub-system is negligible.
- A safe stopping state must exist, as with the other two fail-safe principles.
Note that the comparison or majority vote must also be carried out fail-safe and fast enough.
Composite fail-safe systems can also be adaptive. For example, a 2-out-of-3 system with majority vote can be transformed into a 2-out-of-2 system with comparison after a malfunction of one sub-system has been detected and that sub-system has been disabled. Only if a second sub-system fails too does the system have to enter the safe stopping state. This is necessary since, with two remaining sub-systems, a difference between the sub-systems can be detected, but no decision can be made as to which of them has failed.
Composite fail-safe systems with majority vote need an odd number of sub-systems to prevent deadlock situations (k-out-of-n with k ≥ (n + 1)/2). With an even number of sub-systems, a deadlock with n/2 votes against n/2 votes would leave stopping the system as the only solution.
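The degradation from 2-out-of-3 voting to 2-out-of-2 comparison described above can be sketched as follows; the function names are hypothetical, and a real voter must of course itself be implemented fail-safe and fast enough:

```python
def vote_2oo3(a, b, c):
    # 2-out-of-3 majority vote over three sub-system outputs.
    # None means "no majority": the system enters the safe stopping state.
    if a == b or a == c:
        return a
    if b == c:
        return b
    return None

def compare_2oo2(a, b):
    # Degraded mode after one sub-system has been disabled: with only two
    # sub-systems a difference can be detected but not arbitrated.
    return a if a == b else None
```

In the 2-out-of-2 mode any disagreement forces the safe stopping state, exactly because no third opinion is available to identify the failed sub-system.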
3.1.4 Detection of failures and faults
The following table compares how the different principles contribute to the detection and toleration of failures and faults. It is evident that only composite fail-safety is able to detect and tolerate failures. This is because such systems are built of several sub-systems. It also becomes clear why, for the higher safety integrity levels (SIL 3 and SIL 4), a system is advised to be constructed from several sub-systems, since failure detection is very important.
Note that, for composite fail-safe systems, fault detection does not come automatically. Fault detection is implemented only to the degree that the sub-systems are diverse. That means that composite systems with diverse sub-systems detect only the specific faults covered by the kind of diversity used.
A system is designed according to the safe-life principle if the safety-relevant function is available with high probability. Usually this is achieved using redundancy of the relevant sub-systems, so that the function is ensured even in case of failures. In addition, the system has a failure detection function in order to prevent the accumulation of failures, so that the mission can be finalized even if a failure occurs. There are also systems that can be repaired during their mission without switching them off. This holds, e.g., for many chemical plant control systems: the failed sub-system is repaired and restored to a functioning state while the other, intact sub-systems carry out the safety-relevant control function.
The safe-life principle is applied to systems that must not fail during a certain mission time. Engines of airplanes serve as an example. The safe-life principle is also applied to magnetic levitation trains, where the principle of safe levitation and guidance is applied. This principle considers magnetic levitation and guidance as safe-life functions: the rate of dangerous failures of the levitation and guidance functions must not exceed 10⁻⁶ per year.
There exist several possibilities for the implementation of the principle:
- Dimensioning (no damage). Such an approach is usually applied to buildings, in particular to bridges. These systems have only one safe state (correct function) and there is no transition into another safe (stopping) state. Therefore, their function must be warranted with a very high probability even under extreme environmental conditions.
- Redundancy. Several components or sub-systems carry out the same function; however, not all of them are necessary. An example is an airplane having more than one engine. Magnetic levitation trains have three operation control systems, four power supply networks etc.
Table 2.

System                 | Is the safe stopping state always reachable? | Applicable safety principle
Railway                | yes                                          | Fail-safe
Airplane               | no                                           | Safe-life
Nuclear power station  | yes                                          | Fail-safe
Note
applied, of course, the safe-life principle is also applicable. Such a choice does not necessarily bring a gain in safety, but usually improves availability. Using one safety principle in place of another must not corrupt the safety level.
For the example of the railway and other guided transport systems, we refer to cases where the safe stopping state does not always exist. Then specific questions arise as to whether the propulsion must always be available [5].
3.4 Redundancy
Redundancy is the presence of more means than necessary for the desired function (VDI/VDE 3542). For safety-related functions one has to distinguish which means are used to ensure the desired level of safety and which of them are present additionally. Only the latter can be declared redundant. For example, a composite fail-safe system with two sub-systems and comparison is sufficient for safe applications. However, in order to increase the availability, a third sub-system can be added and the system can be redesigned as a 2-out-of-3 system with majority vote. Hence, the third sub-system does not improve the safety level at all, but it increases system availability.
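The availability argument can be made concrete with a small sketch, assuming independent sub-systems with identical availability p (a simplifying assumption not made explicit in the text): a 2-out-of-2 comparison architecture delivers its function only while both sub-systems work, whereas a 2-out-of-3 majority voter tolerates one failed sub-system.

```python
def avail_2oo2(p):
    """Composite fail-safe comparison architecture:
    both sub-systems must work for the function to be delivered."""
    return p * p

def avail_2oo3(p):
    """2-out-of-3 majority vote: at least two of three independent
    sub-systems must work (exactly two working, or all three)."""
    return 3 * p ** 2 * (1 - p) + p ** 3
```

For p = 0.9 the sketch gives 0.81 for the 2-out-of-2 arrangement versus 0.972 for 2-out-of-3; the safety argument itself still rests on comparison or voting, as the text notes.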
Redundancy is introduced to improve reliability and availability; safety is not directly improved. When considering an entire system, however, it may turn out that improvement of reliability and availability also improves safety. The following railway example demonstrates this.
After the railway system has been brought into the safe stopping state, railway operations need to be continued in a fall-back mode. Operation in the fall-back mode is usually not on the same safety level as normal operation, because the systems used in fall-back mode are much simpler and responsibility for safety relies much more on personnel. This is acceptable if the fall-back solution is not frequently used. If the availability of the safety system used for normal operation is improved, the fall-back solution is used less frequently and, therefore, overall system safety is improved.
Redundancy: several sub-systems realise the same function; more sub-systems are present than necessary to carry out the function; no comparison; the existence of more than one sub-system improves availability and reliability.
3.5 Diversity
Diversity characterises a situation where several sub-systems carry out the same function in a different manner. Diversity can relate to the development process, the algorithm, software, hardware, etc. The diversity principle can be applied to composite fail-safe systems and to redundant systems. In most cases, comparisons and majority votes can be present only on the functional level, owing to the differences between the sub-systems.
The following kinds of diversity are distinguished:
3.6 Fail silent

4.1
Especially for the inherent fail-safe principle, the identification of all failure modes is important. The failures have to be divided into safe and dangerous ones, depending on the function of the component in the system context. It must be proven that dangerous failures are controlled. As a prerequisite, a safe stopping state must exist.

Figure 4. Classification of actuator failures (actuator works in time / actuator fails; failures safe or dangerous, detected or undetected).

4.2

4.3 Actuators

4.4 Failure detection
Table 4. Kinds of diversity and their effect on common cause failures (CCF) and common mode failures (CMF).
In the past, many systems have been developed according to the fail-safe principle. The applied method is a deterministic one, i.e. in fail-safe systems dangerous failures are excluded and thus do not occur, at least according to the applied philosophy. For systems with reactive fail-safety or composite fail-safety, probabilistic arguments are used; that means there exists a probability of unsafe system failures. When moving from inherent fail-safe systems to more complex safety principles, deterministic considerations must be replaced by probabilistic ones.
APPLICATION EXAMPLE

In this section, design principles for guided transport are derived in order to demonstrate the use of

5 CONCLUSION

REFERENCES
Bouwman, R. Phileas, Tram on Virtual Rail, Increasing Standards, Shrinking Budgets – A Bridgeable Gap?, 2nd International Rail Symposium, TÜV Rheinland, 6–7.11.2007, Cologne, Proceedings, 22 p.
DIN 40041, Zuverlässigkeit; Begriffe (Reliability; terms), 1990-12.
EN 50129, Railway applications – Communication, signalling and processing systems – Safety related electronic systems for signalling, February 2003.
EN 61511-1, Functional safety – Safety instrumented systems for the process industry sector – Part 1: Framework,
ABSTRACT: The present investigation identifies the key images that British newspapers use to represent
climate change risks. In doing so, it widens the scope of the burgeoning literature analysing textual content
of climate change media information. This is particularly important given visual information's ability to
arouse emotion and the risk literature's increasing focus on the importance of affect in shaping risk perception.
From a thematic analysis of newspaper images, three broad themes emerged: the impact of climate change,
personification of climate change and representation of climate change in graphs. In particular, the depiction
of climate change as an issue affecting domestic populations rather than just other areas of the world brings the
threat closer to home. Challenging the perception that climate change is still a long-term and future-orientated
threat, visual images concretise the risk by providing viewers with tangible examples of climate change's impact.
INTRODUCTION
course, textual and visual portrayals are often combined. The visual, in particular, has the ability to
arouse emotion making it an effective medium for the
social construction of risk messages (Joffe, 2007).
Perlmutter (1998) argues that images, amongst
other factors, arouse and stimulate affective responses
from the viewer. In his typology outlining effects of
visual images, manifest content is thought to foster an emotional connection between the viewer and
what is viewed. Investigating responses to photographs
of the Kenneth Bigley kidnapping in 2004, Iyer and Oldmeadow (2006) corroborate this theory, finding that individuals respond with greater feelings of fear when presented with visual images of the kidnapping compared to individuals presented with textual information only. Moving the viewer along affective pathways, visual images elicit particular emotional responses due to the vividness of their content.
In recent years there has been a considerable
increase in the quantity of both textual and visual
climate change information in the British press. At
a textual level, British newspapers are representing
climate change as an issue falling in and out of public consciousness (Carvalho & Burgess, 2005). The images accompanying newspaper articles, however, have been under-reported. Given the potential visual information has to arouse emotion and concretise risk messages for members of the public (Joffe, 2007), the
current investigation will explore the content of this
more recent coverage with particular focus on the types
of images being depicted and the impact this is likely
to have for public engagement.
Due to pragmatic and logistical challenges of
obtaining visual material from online newspaper
METHOD
Sample
comparison between highbrow and lowbrow viewpoints, they also represent a broad spectrum of political editorial style (Washer, 2004). Sunday newspapers
are considered to contain longer and more analytical
articles than those in daily newspapers and also offer
a feel for how coverage has shifted over the preceding
week (Washer, 2004).
As there are no online databases cataloguing visual
images in newspapers, Lexis-Nexis was used to
initially identify articles about climate change. As
an online newspaper archive, Lexis-Nexis provides
researchers with an invaluable resource for accessing the text of newspaper articles. Using the key words 'global warming' OR 'climate change' OR 'greenhouse effect' appearing in the headline or with a major mention, all articles from the six Sunday newspapers sampled were selected between 1st January 2000 and 31st December 2006. Duplicates, letters and other articles not specifically about global warming, climate change or the greenhouse effect were removed, leaving a workable sample of 300 Sunday newspaper articles included in the media analysis (208 broadsheet, 92 tabloid). As the focus of this investigation is to
examine content of visual images accompanying these
articles, microfilm copies of all newspapers sampled
were obtained. Photocopies were taken of any image
accompanying an article. Of the 300 newspaper articles sampled, 188 (approximately 60%) had one or
more accompanying image.
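The selection step described above can be sketched as a simple keyword filter. This is purely illustrative: Lexis-Nexis has its own query syntax, and the record fields below (`headline`, `major_mentions`) are hypothetical stand-ins for the archive's data.

```python
KEYWORDS = ("global warming", "climate change", "greenhouse effect")

def matches(article):
    """Keep an article if any keyword appears in the headline
    or among its major mentions (case-insensitive)."""
    text = (article["headline"] + " " + article.get("major_mentions", "")).lower()
    return any(kw in text for kw in KEYWORDS)

# Two invented records standing in for archive results.
articles = [
    {"headline": "Climate change hits home"},
    {"headline": "Sports roundup"},
]
sample = [a for a in articles if matches(a)]
```

In the study itself, duplicates, letters and off-topic matches were then removed by hand, a step a keyword filter cannot replicate.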
2.2
Coding frame
Data analysis
RESULTS
Impacts of climate change: Bringing the threat
closer to home
DISCUSSION
themes might influence public engagement with climate change risk. Given the complex empirical link
between mass media presentation and public uptake
of risk information, it is important to explore the role
visual imagery might play in mediating this relationship. The vividness of visual images, in particular,
has been highlighted as an important component in
understanding emotional reactions to graphic images
(Iyer & Oldmeadow, 2006). Photographs appear to elicit fearful emotional responses due to the graphic nature of the content presented. This has
enabled Iyer and Oldmeadow (2006) to speculate
about the role visual imagery can play in influencing individual responses to threatening events. At a
minimum, such images should at least attract attention to media reports through which information
about unknown and threatening phenomena is often
obtained.
This line of enquiry is particularly important to
consider given the strong affective component
researchers attribute to risk perception processes.
Whereas traditional risk perception theories focus
on the rational and probabilistic reasoning practices
individuals are hypothesised to employ when making judgements about unfamiliar information (Slovic,
1987), more recent research has emphasised the
important role positive and negative affect plays in
guiding judgement and decision making (Finucane, Alhakami, Slovic & Johnson, 2000). In their investigations of individual associations with climate change, Leiserowitz (2005) and Lorenzoni, Leiserowitz, de Franca Doria, Poortinga and Pidgeon (2006) highlight
the importance affective images can play in public
engagement with risk information. More specifically, this identification helps develop a conceptual
understanding of the important role emotion plays in
engagement with risk. Furthermore, given the emotive
power of visual information, it will be important to
explore the suggestion that newspaper imagery helps
shape public engagement with climate change risk.
This is particularly pertinent given identification of
visual themes does not automatically uncover how the
public engages with the issue.
5
CONCLUDING COMMENTS
The present study has identified salient images newspapers choose to visually represent the climate change
threat. In doing so, it builds upon previous media
analysis research focused on the textual content of
newspaper articles. Notwithstanding the importance of the findings these studies have uncovered, visual imagery offers researchers a novel and exciting opportunity to investigate media coverage of climate change
information. Furthermore, given the power the visual
has to evoke strong affective responses, newspaper
NEWSPAPER REFERENCES
Mail on Sunday. May 6, 2001. Stars urge a boycott of all Esso
garages.
Mail on Sunday. September 24, 2006. A future too hot to
handle.
News of the World. September 3, 2006. We've got ten years to save the planet.
T. Horlick-Jones
Cardiff School of Social Sciences, Cardiff University, UK
ABSTRACT: There exist situations in which people who perceive themselves as exposed to a technological risk feel uninformed but, at the same time, refuse to absorb new information about the said risk. With the objective of understanding this paradoxical situation, in this text we compare empirical data obtained in two different studies carried out in Spain on the perception of technological risks: the first, very specific and visible, related to petrochemical plants; and the second, more abstract and opaque, related to nuclear fusion. According to the results, some of the reasons that could explain this situation can be found in the structure of the social and institutional context in which people experience and perceive risks; a context in which acknowledging new information implies assuming new responsibilities for factors outside people's control.
INTRODUCTION
2 METHODOLOGY

2.1 Research project on communication and social perception of risk in the petrochemical area of Tarragona
Eight focus groups were created with the objective of capturing the principal discourses about the
experiences, perceptions and expectations around the
chemical and petrochemical industries of Tarragona
and their potential risks. Two types of municipalities and neighbourhoods of Tarragona were chosen according to their territorial proximity to the petrochemical plants: a) close (Bonavista, La Canonja, Constantí, La Pobla de Mafumet); and b) distant (Tarragona-Centro, Reus, Vila-seca). While in the areas in the first group the predominant socioeconomic activities are closely related to the chemical industry sector, those in the second group have a concentration of administrative and economic activities based on service industries such as mass tourism. Field work was carried out between April and June 2006. A total of 50 people participated (60% women and 40% men), with an average age of 51 and ages ranging from 16 to 92; 8% were uneducated, 36% had completed primary school, 42% had completed secondary education and 14% were university educated; participants had diverse labour situations.
For the analysis of the information we used a constant
comparative method according to grounded theory
procedures.
2.2
RESULTS
Evaluation of available information
about technologies
could be expected for a complex nuclear technological facility, they would prefer the plant to be far from their homes. And it seems that they do not need more information to reach this conclusion. Furthermore, they fear that the developers are not a reliable source of information about risks (as happens in other siting contexts with which our participants are familiar) and consequently do not have clear ideas on how to get more information on the subject. On the other hand, they feel that the decision over the nuclear fusion project has already been taken and is unlikely to be rescinded, which also stops them from demanding more information.
3.3
CONCLUSIONS
ACKNOWLEDGMENTS
The fusion related work reported in this paper
was supported by the European Fusion Development
Agreement (EFDA) (contract TW6-TRE-FESS-B1)
and the UK's Economic & Social Research Council (award RES-000-22-2664). Parts of this work, supported by the European Communities under the contract of Association between EURATOM/CIEMAT, were carried out within the framework of the European Fusion Development Agreement. The views and opinions expressed herein do not necessarily reflect those of the European Commission. The research project on perception of petrochemical risks was supported by the Spanish Ministry of Education and Science (SEJ2004-00892) and was carried out by the Universitat Rovira i Virgili, the Universitat Autònoma de Barcelona and the Universitat Pompeu Fabra.
REFERENCES
Bauer, M. 1995. Resistance to New Technology. Cambridge
University Press, Cambridge, UK.
Burger, J., Greenberg, M., Gochfeld, M., Shukla, S., Lowrie, K. and Keren, R. 2007. Factors influencing acquisition of ecological and exposure information about hazards and risks from contaminated sites. Environmental Monitoring and Assessment, 137 (1–3), 413–425.
Farré, J. and Fernández Cavia, J. 2007. Comunicació i risc petroquímic a Tarragona. De les definicions a les pràctiques institucionals. Tarragona: Universitat Rovira i Virgili.
Flynn, J. 2003. Nuclear stigma, in: Pidgeon, N., Kasperson, R. and Slovic, P. (eds.): The Social Amplification of Risk. Cambridge University Press, Cambridge, 326–352.
Frewer, L.J. 1999. Risk perception, social trust, and public participation into strategic decision-making: implications for emerging technologies. Ambio, 28, 569–574.
Frewer, L.J., Howard, C., Hedderley, D. and Shepherd, R. 1996. What determines trust in information about food-related risks? Underlying psychological constructs. Risk Analysis, 16, 473–486.
Horlick-Jones, T. 2005. Informal logics of risk: contingency and modes of practical reasoning. Journal of Risk Research, 8 (3), 253–272.
Horlick-Jones, T., Walls, J. and Kitzinger, J. 2007. Bricolage in action: learning about, making sense of, and discussing issues about genetically modified crops and food. Health, Risk & Society, 9 (1), 83–103.
Irwin, A., Simmons, P. and Walker, G. 1999. Faulty environments and risk reasoning: the local understanding of
ABSTRACT: We propose to analyze risks inside and outside technological organizations through their representations in the social imaginary by the media and fictions. We try to show how these can be interesting and useful objects of research, and what they tell us in complement to risk assessment and expert communication. After presenting the assumptions justifying why the study of fictions is interesting for analysing risks, we briefly present the disciplines and the methods used for our analyses: semiotics, semantics and storytelling. We finish with two examples: first, the imaginary representation of the nuclear power plant, how it is perceived and what its representations are, questions directly related to the social acceptability of technological organizations; second, the narrative interpretation of the collapse of Air Liberté, a commercial air transportation company. We conclude with the benefits of these approaches.
INTRODUCTION

2 ASSUMPTIONS

2.1 Imaginary of risks outside of the organization
2.2
2.2.1 No proceduralisation
There is an important difference in terms of decision making between the operator of a technological system, driven by procedures, and the manager of a technological organization, acting on strategy, investment decisions, risk management. . . . As the proceduralization of such managerial decisions is substantially impossible, role playing, visions of the world and personal stakes have a larger importance.
2.2.2 Heroisation
The hero is presented in [12] as an individual mandated to solve a problem or transform a situation, acting at the same time from his own will, motivated by his personal desires of power, wealth or accomplishment. That is exactly the way many management manuals describe the leading decision manager, highlighting individualistic values, individualistic decision making, concentration of power, charisma, and instinctive fast decisions made in action. 'Being an executive manager is like driving a race car. If you like speed and sensations, no problem; otherwise you are lost!' [29]. Some managers like to present themselves as bearers of a heroic mission that monopolizes their thoughts: 'Increase profit? It is continuously in my mind! Even during the weekend, when I go to the theatre. But it does not bother me, I am a doer. That is my mission.'
2.2.3 Mass production of fictions and imaginary frameworks
There may also be the emergence of a generation of managers who grew up in an environment and a culture fed by movies and video games telling standardized stories, produced in large quantities by mass culture production centers and supported by the standardization of semantic techniques. For example, the studies of Campbell [5],[7] are influential on many action movies.
Furthermore, managerial techniques may also influence the way organizations build their own stories [27], as storytelling is often used to make sense of and consolidate organizations.
The whole may contribute to a standardization of the imaginary frameworks used by managers to make decisions in an uncertain world.
2.2.4 Virtualization of reality
A.C. Clarke's quote, 'Any sufficiently advanced technology is indistinguishable from magic', illustrates the power of information technologies, which create cognitive bubbles capable of producing a so-called virtual reality, immersing the decision maker in a
4
4.1
but links between the bomb and the nuclear power plant
are very frequent.
and the nuclear power plant in particular, the knowledge and the techniques are represented as complex and unintelligible; the lifespan of the elements evokes the infinite; the radioactive waste, a quasi-eternal dirt, is a bad substance whose invasion we fear; the radiations are invisible; the dangers are conceived in collective terms. Moreover, the nuclear power plant is secret, and what appears secret is seldom reputed innocent. All these elements contribute to creating the effect of anguish that characterizes the nuclear.6
We can also affirm that it is not possible to separate the discourses on the nuclear power plant and the atomic bomb. We did not find any element in our corpus of fictions of the nuclear power plant in which a bond, even weak, with the bomb is not identifiable. In the imaginary, civil nuclear power and the atomic bomb form an inseparable whole. From the elements highlighted, we can make the assumption that the nuclear would be represented, in the unconscious, as a non-acceptable creation, in the same way that certain desires are prohibited because punishable. If these desires find any expression, they generate a guilt feeling and thus anguish. The nuclear power plants (and also the atomic bomb) are connected unconsciously with the prohibited desire (to become very powerful, to be able to control everything); they have a concrete existence and are thus distressing.
Consequently, we make the proposition that the anguish of the nuclear, as well as the anguish of the catastrophe and the end of the planet, reflects the mechanism that makes people feel guilty about their desires.
4.3

[Actantial diagram of the nuclear power plant fictions: Sender – scenario writers; Receiver – spectators, the international public; Subject – the hero of the drama, the person who announces the risk; Opponent – the company which manages the nuclear power plant, the management who minimizes or conceals the risk.]

6 We meet the same phenomena in the media (newspapers, Internet . . .), with a strong tendency to dramatization, moderated for a certain time by the arguments related to climate change (non-emission by the nuclear of gases contributing to the greenhouse effect).
7 We do not pretend to provide here the only interpretation of a history. There are others, like for example the ambivalent strategy of one of its main investors, the possible role of the government, objective managerial failures on account reporting. . . . However, they do not explain the strategic failure.
Figure 2. Actantial model of the collapse of Air Liberté: Sender – main shareholders; Receiver – customers, media; Subject – the company; Opponents – the competitor (the former monopoly); Helpers – employees, assets (material and immaterial); Object; Deus ex machina.
CONCLUSION: BENEFITS
OF THE APPROACHES
REFERENCES
[1] Aumont, J., Marie, Michel, L'analyse des films, Paris, Nathan, 1988.
[2] Aumont, J., Bergala, Alain, Esthétique du film, Paris, Nathan, 1988.
[3] Barthes, R., Mythologies, Paris, Le Seuil, 1957.
[4] Barthes, R., Éléments de sémiologie, Communications, 1964, 4, pp. 91–134.
ABSTRACT: In 2000 disaster struck Enschede in The Netherlands. Due to explosions at a fireworks factory,
22 people were killed. This study aims to describe the developments in the media coverage of this disaster.
Content analysis was performed on 4928 articles, derived from four distinct newspapers. After a period of
intense coverage, media attention for the disaster declined. In this first month, 772 articles were run (local paper 457, national papers 315); after three years this number had fallen to approximately 30 articles per month. This decline is best described by an inverse function. This paper focuses on the changes in the amount of news coverage, and
concentrates on some methodological issues researchers encounter when they try to analyse the news coverage
of a disaster over time.
INTRODUCTION
issues arise and win the battle for the media's and the public's attention.
In the coverage of a disaster one may expect differences between local and national newspapers. Newspapers have to write stories which are interesting to their readers. The local newspaper can be expected to pay more attention to the disaster than the national newspapers. Also, local newspapers can be expected to focus on the interests of and the personal relevance for the local readership, which may be different from those of the readership of national newspapers. One might expect more stories with a human-interest frame or an economic-consequences frame in the local press, and more stories with a responsibility or conflict frame in the national press (Anderson & Marhadour, 2007).
In this paper we analyze the media coverage of the
Enschede disaster in Dutch local and national newspapers for a three-year period after the disaster. We
focus on the changes in the amount of news coverage, and concentrate on some methodological issues
researchers encounter when they try to describe these
changes in a mathematical function. We are not aware
of any attempts to do so before. These issues are:

– Does the coverage in the local newspaper resemble that in the national newspapers, and does a similar mathematical function apply to both? Or does one have to analyse the developments in media coverage for local and national newspapers separately?
– Which mathematical function fits the observed number of articles per period best?
– To analyse the developments in coverage over time, the total period under investigation should be divided into smaller time segments. Does the length of these smaller periods affect the results?
2 METHOD

2.1 Design and sample
A content analysis of four Dutch newspapers was performed over a three year period after the disaster.
Sampled were the local newspaper in the disaster area
(TcTubantia), two national newspapers (Volkskrant,
Telegraaf) and one national/regional newspaper (AD)
from one of the major industrial regions of the country. These newspapers were selected on the basis of
their region of delivery, size of readership, and political and economic point of view. For these newspapers,
all articles were selected from electronic databases
that were published in the three years after the disaster. Selection criteria were the occurrence of keywords such as 'vuurwerkramp' (fireworks disaster) or 'Fireworks' (SE Fireworks is the name of the company that owned the plant). Analysis indicated that these keywords returned the largest number of hits in the electronic databases.
2.2 Coding
The articles were identified by newspaper, publication date and title. The 3,927 articles were coded by several students on the basis of a coding instruction on which they had been trained. The articles were read and the coders coded whether one or more of the following frames were present: conflict frame, human-interest frame, responsibility frame, and economic-consequences frame (yes, no). The choice of frames was based on previous work by Semetko & Valkenburg (2000) and De Kort & d'Haenens (2005). The conflict frame focuses on conflicts between individuals, groups or organizations. This may be reflected in descriptions of conflicts of opinion, judicial procedures, criticizing the other party or defending oneself against the critique of others. The human-interest frame presents the human, emotional aspects of an event, issue or problem. Typical are stories describing individual victims and disaster relief workers, identified with their full name. The responsibility frame presents an issue in
2.4 Analysis

Data analysis took place by means of SPSS 12. This package allows one to investigate to what extent observed data fit a large number of hypothetical mathematical functions (R²). The higher R², the better the fit. Estimates for the regression weights and a constant term are also given.

3 RESULTS
Table 1 shows the fits between the observed number
of articles and all available mathematical functions.
Table 1. Fit (R²) between the observed number of articles and the available mathematical functions, based on 39 periods of 4 weeks.

Function       Local (n = 2,679)   National (n = 1,248)
Inverse        .90                 .78
Logistic       .76                 .47
Logarithmic    .76                 .47
Cubic          .71                 .43
Exponential    .66                 .41
Growth         .66                 .41
Compound       .66                 .41
Quadratic      .62                 .37
Power          .58                 .50
S              .32                 .39
The best fit between the observed number of articles and the hypothetical mathematical functions was found for the inverse function. This function provided a very good fit for both the local newspaper (R² = .90, standardized beta .95) and the national newspapers (R² = .78, standardized beta .88). For the local newspaper, the full mathematical function describing the expected number of articles is E(Nt) = 22 + 426/t, where t is the number of the four-week period after the disaster. Similarly, for the national newspapers, the full mathematical function is E(Nt) = 4 + 255/t. These formulas predict that the local newspaper started out paying much more attention to the disaster than the national newspapers together (local E(N1) = 444, national E(N1) = 259) and that the coverage in the local newspaper eventually (E(Nevent.) = 22) remains higher than that in the national newspapers (E(Nevent.) = 4).
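The curve estimation behind these formulas can be sketched as an ordinary least-squares fit of E(Nt) = a + b/t on the transformed predictor 1/t. This is a minimal re-implementation sketch, not the SPSS 12 procedure the authors used, and the input series below is generated noise-free from the national-newspaper model purely to check that the fit recovers the parameters.

```python
def fit_inverse(ts, ns):
    """Least-squares fit of n = a + b/t, i.e. simple linear
    regression of the counts on x = 1/t; returns (a, b, R^2)."""
    xs = [1.0 / t for t in ts]
    m = len(xs)
    xbar = sum(xs) / m
    ybar = sum(ns) / m
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ns)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = ybar - b * xbar
    preds = [a + b / t for t in ts]
    ss_res = sum((y - p) ** 2 for y, p in zip(ns, preds))
    ss_tot = sum((y - ybar) ** 2 for y in ns)
    return a, b, 1.0 - ss_res / ss_tot

# Synthetic counts from the national model E(Nt) = 4 + 255/t,
# over the 39 four-week periods used in the study.
ts = list(range(1, 40))
ns = [4 + 255 / t for t in ts]
a, b, r2 = fit_inverse(ts, ns)
```

On real, noisy counts the same fit would return the R² values reported in Table 1 rather than a value near 1.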
The inverse function described the developments in the disaster coverage in the local newspaper (R² = .90) better than the coverage in the national newspapers (R² = .78). The question arose whether the higher R² for the local newspaper could be attributed to the different sample sizes or to other factors, such as differences in publication policy. To answer this question, a random sample of n = 1,248 was drawn from the articles in the local newspaper. The fit with the inverse function for this reduced sample was R² = .88, which is hardly any lower than for the full sample. The lower fit for the national newspapers can thus not be explained by a lower number of published articles.
3.3 Duration of the time periods
The analyses up to this point have been based on the number of articles published in 39 periods of four weeks. A relevant question was whether the results were affected by the chosen duration of the time periods.
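This sensitivity check amounts to re-aggregating article counts into periods of different lengths before fitting. A minimal sketch; the daily counts below are invented for illustration, since the papers' raw data are not available.

```python
def bin_counts(daily_counts, days_per_period):
    """Sum per-day article counts into consecutive
    fixed-length periods (last period may be shorter)."""
    return [sum(daily_counts[i:i + days_per_period])
            for i in range(0, len(daily_counts), days_per_period)]

daily = [3, 1, 0, 2, 4, 0, 1, 2, 0, 0, 1, 0, 0, 3]  # two illustrative weeks
weekly = bin_counts(daily, 7)        # 1-week periods -> [11, 6]
fortnightly = bin_counts(daily, 14)  # 2-week periods -> [17]
```

Each binned series would then be fitted with the inverse function and the resulting R² values compared, as in the table below.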
The fit (R²) of the inverse function for periods of different duration was as follows:

Period length   Local (n = 2,679)   National (n = 1,248)
1 week          .75                 .78
2 weeks         .88                 .76
4 weeks         .90                 .78

[Figure: number of articles in the local and national newspapers per four-week period, periods 1–39.]
Figure 2. Events in the aftermath of the fireworks disaster and number of articles in the local and national newspapers, per four-week period, 2000–2003. Annotated events: explosion destroys residential area, 22 dead, 1,000 injured; start of independent investigation; suspect arrested; fireworks decision in Council of Ministers; report of independent investigation published; internal criticism of investigation of suspect; café fire in Volendam; debate in parliament, authorities not prosecuted; board convicted; suspect convicted; suspect released after appeal.
[Table: fit (R²) of the inverse function for subsets of the articles in the local and national newspapers, based on 39 periods of 4 weeks: .90 (n = 2,679), .94 (n = 1,163), .78 (n = 1,248), .70 (n = 238), .91 (n = 1,131), .62 (n = 195), .75 (n = 1,617), .71 (n = 1,329), .53 (n = 747), .47 (n = 688); row labels lost in extraction.]

[Figure: observed number of articles and lower limit, per four-week period.]

Again, the fit was better for the local newspaper than for the national newspapers. It was observed that there was a larger increase in articles with the conflict and responsibility frame in the period following the publication of the report of the Independent Investigation Committee. This suggests that the disaster coverage with a conflict and responsibility frame is more event driven than the disaster coverage with a human-interest and an economic-consequences frame.
DISCUSSION
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: This qualitative study presents a novel perspective on risk amplification, applying it to zoonosis.
We introduce the concept of folk epidemiology to trace a narrative dimension to understandings about zoonotic
disease. We see this narrative influence as an element that is bound-up with action around actual cases of zoonosis
drawing on sensemaking approaches. We adopt a wider perspective on communication theory drawing
on communion based readings beyond the narrow transmission metaphors that limit existing approaches.
The intention is to reanimate the actor from passive receiver of transmitted risk messages into someone who
concurrently shapes and is shaped by risk. This social action is described in terms of actors using accounts
from a reservoir of dominant narratives and selecting from a repertoire of available actions. Actors perform
these operations in the context of a specific case of zoonosis. Focus groups of lay publics and expert groups
provide our empirical data. Instances of narrative and action combination centered on cases are presented. This
empirical work aims to demonstrate the underlying structure of this folk epidemiology: a three-element structure
of narrative, action and case that allows novel insights into how we humans understand the risks from animal
diseases that infect us.
INTRODUCTION
Probabilistic accounts do not capture the full complexity of risk. Human judgements about risk often
have profoundly consequential outcomes that seem
unrelated to the magnitude of the possible harm multiplied by the estimated frequency of the event. The
Social Amplification of Risk Framework (SARF),
Zoonosis
LIMITS OF SARF
lends itself favourably to ideas about folk understanding and ethnoscience (Atran and Medin 1999;
Coley, Medin et al. 1999; Rip 2006) that are presented here as complementary to SARF. Communion
encompasses more fully cultural perspectives of risk
perception (Douglas and Wildavsky 1982; Douglas
1992). We see this second perspective as a fertile area
for our research, given that it is not well represented
currently in SARF. This analytical distinction offers
some prospects for the overarching framework that
SARF aspires to, avoiding the dangers of making a
monopoly out of transmission to the exclusion of the
communal perspectives outlined above. As Morgan
notes: "Favored metaphors tend to trap us in specific modes of action" (Morgan 1997: 350). Here, the
trap is that the signal/receiver metaphor excludes the
alternative, etymologically based understandings of
communication as communal exercises, shared experiences, rituals, and, what we want to propose, folk
understandings. None of these concepts are contradictory to the cross disciplinary content of SARF
that overtly embraces Cultural Theory and wants to
develop social structures as key factors, but they are
outside the problematic central metaphor.
The electronic imagery is too passive to cope
with the complexity of human risk behaviour.
. . . . The metaphor reduces the richness of human
risk behaviour addressed by the subjectivist
paradigms to the mere noise it represents to the
technical risk assessor. (Rayner 1988:202)
AN ORGANIZING FRAMEWORK
engage with issues. These narratives are not necessarily well structured, integrated or strongly worked out.
But they pre-date actors' encounters with an issue, and
may well be modified as a consequence. The important
quality of narratives in this context is that they make
less commitment to a particular source of influences
on risk understandings than is found in psychometric risk (for example Slovic 1987) and work on the
cultural selection of risk, for example, Schwarz and
Thompson (1990). They do not have to be psychological resources within the individual, nor interpretive
resources within a particular culture, nor the explanations provided by some authority, but can be any of
these, or some mixture. Their source is less important than their availability at a given time to a given
actor.
The final aim is to represent the idea that the
particular case or situation is often instrumental in
explaining people's interpretations. It is not just an
actor's available choices, nor the available fragments
of narrative, but also the structure of the case that
matters. For example, in the UK avian influenza
outbreak at the Bernard Matthews plant, the outbreak
happened to occur in an intensive production facility.
Avian influenza was thus confounded with intensive food production. It need not have been. Other
[Figure 1. A reservoir of dominant narrative fragments (unsustainable mass production, ecological paradigm, industry-government conspiracy, happy animals, sensational media, rational science) feeding a repertoire of available enactable choices (consumption choices, travel decisions, affiliation decisions, dietary choices, protest actions, biosecurity precautions), yielding risk-related intentions.]

[Figure 2. A worked instance: the narrative fragment "factory farming is unnatural" combined with the available choice "boycott processed turkey meat" yields a meat-product boycott.]
[Figure 3. Actors in the framework: regulatory agencies, broadcast media, lay publics, advocacy groups, industrial firms.]
4 METHODS

4.1 Data collection
Since we are working within a loose theoretical framework, not a predictive theory, the empirical investigation was naturally an exercise in qualitative, inductive
analysis. Our aim is to elaborate on the framework we
have proposed, developing it for the case of zoonotic
risk both generally, and for specific cases of zoonosis.
The research design involved collecting rich textual
data about people making sense of zoonotic cases.
This data was to consist of lay focus groups and expert
groups.
The lay focus groups were intended to involve individuals who had no particular reason to have expert,
analytical or deep knowledge of zoonosis. The expert
groups were intended to involve individuals who were
closely involved in farming, food processing and regulation, and who had privileged practical or analytical
knowledge of zoonoses.
In both cases the data were collected in connection
with particular outbreaks of a zoonosis. The focus
groups were based on both exotic zoonoses and
endemic ones. These distinctions are deployed by
Defra (the UK Department for Environment, Food and
Rural Affairs) and denote whether a disease is established inside the UK or originates from abroad.
4.2 Focus groups
Focus groups are extremely well suited to the exploration of shared understandings (Kreuger 1994).
Within a focus group participants can engage with
one another to retrieve narratives through sharing partial accounts and through developing collective, emergent accounts including elements of narrative, story
and metaphor.
Focus groups were designed to incorporate a strong
element of homogeneity in order to promote sharing
of opinions. The ideal size was determined at between
5 and 8 to discourage fragmentation, or groups within
groups and to increase opportunities to share insights.
The first group was constituted of PhD students
from management related disciplines. They were
shown a brief presentation based around headlines
from the BBC website related to the Suffolk outbreak of bird flu and subsequent stories dealing with
bird flu from February 2007 to February 2008. The
presentation was intended to focus thoughts on the
group's understanding of this particular zoonosis but
was carefully crafted to convey little detail about specific actors, and to avoid judgments about the risk.
It was designed to include dates, locations, numbers of birds involved and consequences in terms of
movement restrictions and control measures.
A second focus group was composed of veterinary PhD students, each with a different animal disease
specialism.
A third group was made up of retired lay people
with no particular knowledge of zoonosis.
Further groups are still in development.
4.3 Expert groups
4.4 Analysis procedure
A reservoir of narratives
"...the restrictions that we have broken - or not particularly been in place - and add that to mass
farming and turkeys - you then get a copy of - the
consequence is then that you get bird flu in this
country..."
Importantly we acknowledge that this was an inconsistent association. On the one hand there was recognition that free range outdoor rearing brought poultry
into contact with wild birds. This was accepted as a
risk factor. At the same time there was an alternative
argument, made by the same respondents, that intensive stocking was a breeding ground for disease or
(as above) actually the cause of the disease. This type
of inconsistency, here that the disease is caused by
intensive farming and that it is at the same time naturally occurring in the wild, is a typical feature of
folk theory.
They [Folk Theories] are a form of expectations,
based in some experience, but not necessarily systematically checked. Their robustness depends from their
being generally accepted, and thus part of a repertoire
current in a group or in our culture more generally.
(Rip 2006:349)
The Defra expert group recognized the tension
between risk factors of intensive production and risk
factors from outdoor rearing yet were quite reticent about stating which practice had a higher risk
profile. With avian influenza, they argued, overhead netting and location away from wetland habitats
could offer good protection to outdoor reared birds.
With indoor rearing biosecurity should concentrate on
rodent control and facility cleaning between stocking.
This even-handed assessment was perhaps in part
due to the experts' responsibility to supervise all parts
of the industry. However, one expert respondent did
acknowledge that a gleaming, sterile indoor facility
would be the best precaution against the spread of
MRSA in pigs, but that the public would judge such
a facility as the unacceptable face of factory farming, preferring a naturalistic and consequently dirty
environment for pigs.
A clear conspiracy narrative was also voiced by
focus groups. When considering official response in
terms of both control measures and communications,
the focus groups again expressed a somewhat contradictory view that on the one hand the authorities had
done what they had to do in regards to the culling of
birds and the imposition of restrictions. On the other
hand, there was a shared feeling that the authorities
may well conceal information from the public and
expose the public to risks and that we would never
know.
A number of anthropological studies into folk biology consider primitive taxonomic categories as cases
of folk understanding (for example Diamond and
Bishop 1999). We considered to what extent zoonosis was a meaningful category by encouraging the
focus groups to identify zoonotic diseases. If there
were a general understanding of zoonosis, it would
be significant. However, the lay groups studied did
not appear to have a particular category of animal
diseases that cross the species barrier and could identify only rabies, mad cow disease and bird flu
readily. Other zoonotic diseases were recalled only
after further prompting indicating that the category
was not salient. Our contention is that lay groups
have no sophisticated taxonomic understanding about
zoonoses as a single category. By this we do not simply
mean that the word zoonosis is not generally recognized. We mean that the category is not meaningful.
The expert groups corroborated the story that zoonosis
is not generally understood as a category by the public. On several occasions experts lamented the fact that
whilst diseases crossing the species barrier are highly
significant in epidemiological terms, the public did
not recognize them as a single category.
This is, we propose, at least partly indicative of
a complex social landscape in which modern, urban
populations are distanced from livestock farming in
many instances. Experts regularly raised concerns
about this disjuncture between farm and fork. Understandings arise when the public is confronted with an
outbreak. At the point of confrontation they draw on
a reservoir of narratives and a repertoire of available
courses of action to construct their understanding. Initial findings suggest that wider narratives that already
have social currency, for example critiques of intensive production as unnatural or breeding grounds
for a variety of ills, may be drawn upon when needed in
order to understand a current event or case. As Kasperson expresses it Reference to a highly appreciated
REFERENCES
Atran, S. and D.L. Medin, Eds. (1999). Folkbiology. Cambridge, Mass., MIT Press.
Bennet, M. (2007). Personal communication at a meeting
of the project team at the National Zoonosis Centre.
D. Duckett. Liverpool.
Berg, B.L. (1989). Qualitative research methods for the social
sciences. Boston, Allyn and Bacon.
Bryman, A. and R.G. Burgess (1994). Analyzing qualitative
data. London; New York, Routledge.
Campbell, S. and G. Currie (2006). Against Beck: In
Defence of Risk Analysis. Philosophy of the Social
Sciences 36(2): 149-172.
Carey, J.W. (1989). Communication as culture: essays on
media and society. Boston, Unwin Hyman.
Coley, J.D., D.L. Medin, et al. (1999). Inductive Reasoning
in Folkbiological Thought. Folkbiology.
Diamond, J. and K.D. Bishop (1999). Ethno-ornithology of
the Ketengban People, Indonesian New Guinea. Folkbiology. S. Atran and D.L. Medin. Cambridge, Mass., MIT
Press.
Douglas, M. (1992). Risk and blame: essays in cultural
theory. London; New York, Routledge.
Douglas, M. and A. Wildavsky (1982). Risk and culture:
an essay on the selection of technological and environmental dangers. Berkeley, Calif., University of California
Press.
Easterling, D. (1997). The vulnerability of the Nevada visitor economy to a repository at Yucca Mountain. Risk
Analysis 17(5): 635-647.
Eldridge, J. and J. Reilly (2003). Risk and relativity: BSE
and the British media. The Social Amplification of Risk.
N. Pidgeon, R.E. Kasperson and P. Slovic. Cambridge,
Cambridge University.
Farndon, J. (2005). Bird flu: everything you need to know.
Thriplow, Icon Books.
Flynn, J. (2003). Nuclear stigma. The Social Amplification of Risk. N. Pidgeon, R.E. Kasperson and P. Slovic.
Cambridge, Cambridge University: 326-352.
Glaser, B.G. and A.L. Strauss (1967). The Discovery of
Grounded Theory. New York, Aldine.
Horlick-Jones, T., J. Walls, et al. (2007). The GM Debate:
Risk, Politics and Public Engagement. London, Routledge.
Hovland, C.I. (1948). Social Communication. Proceedings
of the American Philosophical Society 92(5): 371-375.
Kasperson, J.X., R.E. Kasperson, et al. (2003). The social
amplification of risk: assessing fifteen years of research
and theory. The Social Amplification of Risk.
Kasperson, R.E., O. Renn, et al. (1988). The Social Amplification of Risk: A Conceptual Framework. Risk Analysis
8(2): 177-187.
Kolata, G.B. (2000). Flu: the story of the great influenza
pandemic of 1918 and the search for the virus that caused
it, Macmillan.
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
Håvard Thevik
Risk Management Solutions, DNV Energy, Det Norske Veritas, Norway
If P&C, Norway
ABSTRACT: The purpose of this paper is to present a framework for simple and effective treatment of
uncertainties in risk assessments. In the framework, it is emphasized how uncertainties can be accounted for
when using the risk assessment results, whether it be to identify and select risk reducing measures or to express
a company's overall risk picture. The use of the framework is illustrated by applying it in a typical offshore risk
assessment case. The new approach is compared with the approach presently employed within the oil & gas
industry and benefits are highlighted. Uncertainties are only associated with physical, observable quantities.
A key element of the framework is the use of sensitivity assessment. This is a well-proven, intuitive, and simple
way of illustrating how the results of a risk assessment depend on the input quantities and parameters. Here,
sensitivity assessment is revisited in order to show how the decision-making process can be enhanced when
accounting for uncertainties. The sensitivity, combined with an evaluation of the background material, forms
the foundation of the framework for presentation of uncertainty presented in this paper. Finally, it is discussed
how the uncertainties should be handled in the decision-making process. Often, uncertainties can be large, and
comparing (point) estimates from the traditional risk assessments with numerical risk acceptance criteria is
not recommended. Rather, possible risk reducing measures should be assessed based on the ALARP principle.
A measure should be implemented as long as the cost is not in gross disproportion to the benefits gained.
INTRODUCTION
ASSESSMENT FRAMEWORK
[Figure 1. Risk description, in terms of events and consequences (A, C) together with uncertainty (U) and background knowledge (K).]
Table 1. Uncertainty factors and assessed degree of uncertainty.

Uncertainty factor                   Degree of uncertainty
1. No. of personnel in each area     Minor
2. Number of well operations         Minor
3. Ignition following riser leak     Minor
4. Occurrence of process leak        Moderate
5. Duration of process leak          Significant

Expected consequences E[C|A] by process area, leak size and personnel group:

Area             Leak size   Group              E[C|A]
Process area 1   Small       Maintenance        6.6
Process area 1   Small       Well intervention  4.2
Process area 1   Medium      Maintenance        17.6
Process area 1   Medium      Well intervention  16.0
Process area 1   Large       Maintenance        17.0
Process area 1   Large       Well intervention  14.2
Process area 2   Small       Maintenance        3.4
Process area 2   Small       Well intervention  1.4
Process area 2   Medium      Maintenance        7.2
Process area 2   Medium      Well intervention  5.2
Process area 2   Large       Maintenance        7.2
Process area 2   Large       Well intervention  6.0
DISCUSSION
CONCLUDING REMARKS
ACKNOWLEDGEMENTS
The authors would like to thank Andreas Falck, Børre
Johan Paaske and Jan Erik sland for their many useful
comments on an earlier version of this paper.
REFERENCES
Apostolakis, G.E. (2004). How useful is quantitative risk
assessment? Risk Analysis, 24(3), 515-520.
Aven, T. (2008a). A semi-quantitative approach to risk assessment, as an alternative to QRAs. Reliability Engineering
and System Safety, to appear.
Aven, T. (2008b). Risk Assessment: Assessing Uncertainties beyond Expected Values and Probabilities. New York:
Wiley.
Falck, A., Skramstad, E. & Berg, M. (2000). Use of QRA
for decision support in the design of an offshore oil production installation. Journal of Hazardous Materials, 71,
179-192.
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
G. Guidi
ENEA Italian National Agency for New Technologies, Energy and the Environment, Rome, Italy
ABSTRACT: Past industrial major accidents (Bhopal, Seveso, Chernobyl, etc.) highlighted that information
and prevention constitute an inseparable pair. International laws give more rights to the public in terms of access
to information as well as in terms of consultation. A correct communication entails advantages for the company
both in normal conditions and in case of accidents. The success of risk communication programs depends upon
their sensitivity to the perceptions of risk and the behaviour of people, competent public authorities, plants
operators and stakeholders. Good risk communication results in an outcome where there is a high level of
agreement between all the interested parties.
For many reasons, the permanent disposal of radioactive waste is one of the most difficult environmental
problems. In this paper two case studies will be discussed: Scanzano Jonico (Italy) as an example of bad risk
communication and Gyeongju (South Korea) as an example of good risk communication.
Public consent and reactions cannot be ignored or dismissed as irrational or illogical when industrial hazardous activities are operated. Access to information,
participation in decision-making and planning, communication about emergency actions and plans, and regular
repetition of up-to-date news have been asserted and
supported more and more strongly by the Seveso Directives, 82/501/EEC (Seveso I) and 96/82/EC (Seveso II),
as amended by 2003/105/EC, published over the last
years. People's right to know and need to know are the
correct basis of a risk communication program. The
community must receive, at a minimum, information,
on request or periodically, including: an explanation, in
simple terms, of the activities undertaken at the establishment; the dangerous substances used, with an indication of their main hazardous characteristics; general
information relating to the nature of major-accident
hazards, including their potential effects on the population; and adequate information on how the public
will be warned and kept informed during an emergency, as well as on the behaviour they should follow
in this event.
The requirement of Article 8 of the Seveso I Directive for members of the public to be informed of
safety measures and how they should behave in the
event of an accident was not initially welcomed by
industry in European countries that were frightened
The National Research Council defined risk communication as an interactive process of exchange of
information and opinion among individuals, groups
and institutions. It involves multiple messages about
the nature of risk and other messages, not strictly about
risk, that express concerns, opinions or reactions to
risk messages or to legal and institutional arrangements for risk management (National Research
Council 1989).
An important concept outlined in this definition is
that risk communication is an exchange of information, or an interactive process requiring the establishment of two-way communication channels.
Building trust in the communicator, raising awareness, educating, reaching agreement, motivating action are some of the possible goals of risk communication (Rowan 1991). Because of this multiplicity
of aims, different strategies of risk communication
may be suitable for different objectives. For example, stakeholder participation methods are likely to
be more appropriate for reaching agreement on a particular matter, while simple risk communication
messages are best for raising awareness.
Any organization planning a risk communication
effort needs to clarify whether the goal is to gain consensus on a course of action, or only to educate people.
In the first case if the organization involves the stakeholders in the decision process, this will facilitate the
process of building trust and gaining agreement on the
course of action (Bier 2001). One of the basic principles of risk communication is establishing trust and
credibility. A broad-based loss of trust in the leaders of major social institutions and in the institutions
themselves has occurred over the past three decades
(Kasperson et al. 1992). This general loss of trust may
also be exacerbated in particular contexts (e.g. the Three
Mile Island accident). In cases where trust is particularly low due to past history or the seriousness of an
Risk communication can be aimed at influencing peoples behaviour and perceptions about risk. The goal
might be to place a risk in the right context or to
encourage a change to less risky behaviours.
The idea of acceptable risk or safety may change
quite suddenly after a single stunning accident. Examples of a sudden loss of a safe feeling are the catastrophes at Bhopal and Chernobyl or the terrorist attack on the
World Trade Centre. Public opinion is influenced not
only by the accident itself, but maybe even more by
the attention which is paid to it by the media.
Research into risk perception has revealed that,
in addition to rationally considering the probabilities
about a risk, human beings rely on intuitive faculties
Over the last decade the awareness of the necessity for the nuclear waste programmes to become
more communicative has increased worldwide. This
reflects advances in national programmes to the phase
of site selection and the necessary involvement of
regional and local bodies as well as concerned citizens, but it is also due to the introduction of legal
requirements for public consultation, often under the
Positive experiences
One example is Finland, where a site has been proposed recently by Posiva for detailed investigation in
Negative experiences
CONCLUSIONS
REFERENCES
Bier, V.M. 2001. On the state of the art: risk communication
to the public. Reliability Engineering and System Safety,
71: 139-150.
Casal, J., Montiel H., Planas-Cuchi E., Vilchez J.A.,
Guamis J., & Sans J. 1997. Information on the risks of
chemical accidents to the civil population. The experience of Baix Llobregat. Journal of Loss Prevention in the
Process Industries, Vol. 10 (3): 169-178.
Covello, V.T., Sandman, P.M. & Slovic P. 1988. Risk Communication, Risk Statistics, and Risk comparison: a manual
for plant managers. Chemical Manufacturers Association.
Gough, J. & Hooper G. 2003. Communicating about risk
issues.
http://www.europe.canterbury.ac.nz/conferences/tech
2004/tpp/Gough%20and%20Hooper_paper.pdf.
Heath, R.L., 1995. Corporate environmental risk communication: cases and practices along the Texas gulf coast.
Communication Yearbook, 18: 255-277.
Kasperson, R.E., Golding D. & Tuler S., 1992. Social
distrust as a factor in siting hazardous facilities and
communicating risks. Journal of Social Issues, 48 (4):
161-178.
National Research Council. 1989. Improving risk communication. National Academy Press.
Nuclear Decommissioning Authority. 2007. Managing radioactive waste safely: literature review of international
experiences of community partnership.
OECD. 2003. Public information, consultation and involvement in radioactive waste management. An International
Overview of Approaches and Experiences.
Rowan, K.E., 1991. Goals, obstacles, and strategies in risk
communication: a problem-solving approach to improving communication about risks. Journal of Applied
Communication Research, 19: 300-329.
Slovic, P., 1993. Perceived risk, trust, and democracy. Risk
Analysis, 13: 675-682.
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
J.W. Saull
International Federation of Airworthiness, East Grinstead, UK
ABSTRACT: We have discovered a factor that precisely represents the information needed in decision-making,
skill acquisition and increasing order. The information entropy given by the H-factor is a function of the
learning rate or depth of experience: as experience increases, the value of the H-factor declines, demonstrating
increased order. The relative value of the information entropy H-factor at any experience depth is a direct
measure of organizational learning and the effectiveness of safety management systems. The definition of
the information entropy, H, is in terms of probabilities based on the frequency distribution of outcomes at
any given depth of experience. We suggest here a simple step-by-step methodology for applying the H-factor
to managing risk. To distinguish it from other standards and existing risk management techniques (e.g. PSA,
FMEA, bow-tie, etc.), we call this safety management and risk prediction measurement process the Risk
Management Measurement Methodology, or RM3 for short.
INTRODUCTION
INFORMATION ENTROPY
We now want to know how probable things like outcomes are, and how the arrangements that we might
have and the number of outcomes vary with experience. For each ith number of microstates, ni , representing the location of a possible outcome, the probability,
pi , of an outcome at any experience depth, i , for any
total number of, Nj , outcomes is given by the Laplace
(1814) relation:
pi = ni / Nj    (1)
(2)
(3)
(4)
and
ni! ≈ (ni / e)^ni    (5)
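Equation (5) is the crude form of Stirling's approximation, ln n! ≈ n ln n - n. A quick numerical check (the choice n = 50 is arbitrary):

```python
import math

n = 50
exact = math.lgamma(n + 1)        # ln(n!) via the log-gamma function
approx = n * math.log(n) - n      # ln((n/e)^n), i.e. Eq. (5) in log form
rel_err = (exact - approx) / exact
print(rel_err)  # a few percent; the approximation improves as n grows
```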
Provided we can now find the probability distribution of the outcomes, we can measure the entropy. In
the special case of the outcomes being described by a
continuous random variable, as for a learning curve,
we can replace the summation over discrete intervals
with an integral function:

Hj = - ∫ pi ln pi dpi = pi^2 (1/4 - 1/2 ln pi)    (6)
Σ ni ln ni = n1 ln n1 + n2 ln n2 + ... + ni ln ni    (7)
This simple-looking expression for the possible
combinations leads directly to the measure of disorder,
the information entropy, Hj, per observation interval.
In terms of probabilities based on the frequency
of microstate occupation, ni = pi Nj, and using Stirling's approximation, we find the classic result for the
information entropy:

Hj = - Σ pi ln pi    (8)
and the maximum value occurs for a uniform distribution of outcomes. Interestingly, this is of course also the
Laplace-Bayes result, when p(P) = 1/N for a uniform
risk.
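Equation (8) is simple to evaluate directly. A short sketch (the eight-outcome distributions are assumed examples) showing that the uniform distribution gives the maximum H, and that a distribution concentrated by learning gives a much lower H:

```python
import math

def h_factor(probs):
    """Information entropy H = -sum(p ln p) over outcome probabilities (Eq. 8)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

n = 8
uniform = [1 / n] * n            # maximum disorder: all outcomes equally likely
peaked = [0.93] + [0.01] * 7     # learning has concentrated the outcomes

print(h_factor(uniform))   # ln(8) ≈ 2.079, the maximum for 8 outcomes
print(h_factor(peaked))    # much lower: increased order
```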
These two classic results for, Wj and Hj , are literally
full of potential information, and are also in terms of
parameters we may actually know, or at least be able
to estimate.
The above result for the information entropy, H,
thus corresponds to the famous thermodynamic formula on Boltzmann's grave, S = k ln W, where k
corresponded to the physical proportionality between
energy and temperature in atomic and molecular systems. But there the similarity, though not the analogy, ends:
the formulae both look the same, but refer in our case
to probability distributions in HTS, not in thermodynamic systems. In our case, for the first time, we
are predicting the distribution of risk, errors, accidents, events, and decisions and how they vary with
depth of experience and learning, in a completely
new application. In fact, since H is a measure of
uncertainty, we can now define the certainty, C, as
given by, say, C = 1 - H, which is parallel to the
definition of exergy as the converse of entropy.
(10)

or solving,

ni = n0 exp(-λi)    (11)

pi = ni / Nj    (12)

pi = p0 exp(-λi)    (13)
The probability of risk also decreases with increasing depth of experience. The form of the distribution
is similar to that adopted for the fractional population
change during species extinction (Gott (1993)), where
in that case the parameter, λ, is a constant extinction
rate rather than a learning constant.
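A brief sketch of the exponential decay of equations (11) and (13). The values of p0 and the learning constant (written lam here) are assumed for illustration; the last lines show how such a constant can be recovered from the log-linear slope of observed probabilities:

```python
import math

# Sketch of the exponential learning model of Eqs. (11)-(13): outcome
# probability declines with experience depth i as p_i = p0 * exp(-lam * i).
# p0 and lam below are assumed illustration values, not values from the paper.
p0, lam = 0.2, 0.35

def p(i):
    return p0 * math.exp(-lam * i)

# Recover lam from two observed depths (log-linear slope), as one would
# when estimating a learning rate from outcome data.
i1, i2 = 2, 10
lam_est = (math.log(p(i1)) - math.log(p(i2))) / (i2 - i1)
print(round(lam_est, 6))  # 0.35, the assumed learning constant
```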
We have discovered the H-factor that precisely represents the information needed in decision-making,
skill acquisition and increasing order. It is called the
information entropy. The properties of the H-factor
are as "a fundamental measure of the predictability
of a random event, (that) enables intercomparisons
between different kinds of events". Hence, we need
it in any HTS to quantitatively assess a Safety Management System's effectiveness in reducing outcomes
and managing risk. This is the prudent and necessary
measure that management needs to create order from
disorder.
(15)
(16)
(14)
(17)
where the value of the exponent, a, illustrates, measures and determines how fast we are achieving order
and our safety goals. Here, N*, the non-dimensional
measure of the depth of experience, is just a convenient measure for the experience interval divided by
the total observed experience interval.
The information entropy is then given by the non-dimensional form of the H-factor:

(18)
[Figure 1. The Risk Management Measurement Methodology (RM3) step-by-step procedure using a double or side-by-side flow. Recovered steps: H1: plot data; fit the exponent "a" value in Equation 2 (plot the distribution on a depth graph; compare to a = 3.5); H4: SMS measure; plot H versus non-dimensional experience, N*.]

REFERENCES
Duffey, R.B. & Saull, J.W., 2002. Know the Risk, First Edition,
Boston, USA, Butterworth and Heinemann.
Duffey, R.B. & Saull, J.W. Manuscript in preparation to be
published. Managing Risk: The Human Element, West
Sussex, UK, John Wiley & Sons Limited.
Duffey, R.B. & Skjerve, A.B. 2008. Risk Trends, Indicators
and Learning Rates: A New Case Study of North Sea Oil
and Gas, Proc. ESREL 2008 and 17th SRA Europe Annual
Conference, 22–25 September, Valencia, Spain.
Gott III, J. Richard, 1993. Implications of the Copernican principle for our future prospects, Nature, Vol. 363, pp. 315–319.
Greiner, W., Neise, L. & Stocker, H., 1997. Thermodynamics and Statistical Mechanics, Springer, New York, pp.
150–151.
Jaynes, E.T. 2003. Probability Theory: The Logic of Science, First Edition, Edited by G.L. Bretthorst, Cambridge
University Press, Cambridge, UK.
Laplace, P.S., 1814. Essai philosophique sur les probabilités, extracted as Concerning Probability in J.R. Newman's The World of Mathematics, Vol. 2, p. 1327, 1956.
Pierce, J.R. 1980. An Introduction to Information Theory,
Dover, New York.
Sherwin, C.W. 1961. Basic Concepts of Physics, Holt,
Rinehart and Winston, New York, USA, 1st Edition,
pp. 308–338.
Sommerfeld, A., 1956. Thermodynamics and Statistical
Mechanics: Lectures on Theoretical Physics, Academic
Press, New York, NY.
Nørretranders, T., 1991. The User Illusion: Cutting Consciousness Down to Size, Penguin Books, London, UK, p. 290.
Woo, G., 1999. The Mathematics of Natural Catastrophes, Imperial College Press, London, UK, Chapter 6, p. 161 et seq.
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
P.-E. Labeau
Université Libre de Bruxelles, Brussels, Belgium
ABSTRACT: This paper briefly presents cultural theory as applied to risk perception and examines how it has been empirically tested. It shows that the method used so far for the tests is flawed and thus does not make it possible to (in)validate cultural theory. Therefore, we suggest a research direction that would enable an effective examination of the validity of cultural theory for risk perception.
1
INTRODUCTION
2
2.1
The origin of this theory is to be found in the cultural analysis initiated by the British anthropologist Mary Douglas. Indeed, in Purity and Danger, the examination of pollution and taboo leads Douglas to observe that social organizations select beliefs related to dangers in order to secure their own stabilization (Douglas, 1966).
The generalization of this finding constitutes the
basic hypothesis of cultural theory as it will be developed later: cultural biases and social organizations
sustain each other mutually without any bond of causal
nature. In other words, a type of social organization
is maintained by a set of cognitive and axiological
contents, and conversely (Fig. 1).
While, in Douglas' cultural theory, the notion of social organization deals only with the interpersonal relationship patterns, the concept of culture, or
CONCLUSION
REFERENCES
Brenot J., Bonnefous S. & Marris C., Testing the cultural
theory of risk, Risk Analysis 18(6), 1998, pp. 729–739.
Cohen J., Statistical power analysis for the behavioral
sciences, Hillsdale, Erlbaum, 1988.
Dake K., Myths of nature: culture and the social construction of risk, Journal of Social Issues 48(4), 1992, pp. 21–37.
Dake K., Orienting dispositions in the perception of risk:
an analysis of contemporary worldviews and cultural
biases, Journal of cross-cultural psychology 22(1), 1991,
pp. 61–82.
Douglas M. & Wildavsky A., Risk and culture, Berkeley,
University of California Press, 1982.
Douglas M., Cultural bias, London, Royal Anthropological
Institute (Occasional paper No. 35), 1979.
Douglas M., Natural symbols: explorations in cosmology,
London, Barrie & Rockliff, 1970.
Douglas M., Purity and danger: an analysis of concepts of
pollution and taboo, London, Routledge & Kegan Paul,
1966.
Gross J. & Rayner S., Measuring culture, New York,
Columbia University Press, 1985.
Marris C., Langford I. & O'Riordan T., A quantitative test of the cultural theory of risk perceptions: comparison with the psychometric paradigm, Risk Analysis 18(5), 1998, pp. 635–647.
McDaniels T. et al., Perception of ecological risk to water
environments, Risk Analysis 17(3), 1997, pp. 341–352.
Oltedal S. et al., Explaining risk perception. An evaluation of
cultural theory, Trondheim, Rotunde, 2004, available on
http://www.svt.ntnu.no/psy/Torbjorn.Rundmo/Cultural_
theory.pdf.
Thompson M. & Wildavsky A., A proposal to create a cultural theory of risk, in Kunreuther H. & Ley E. (eds.), The Risk Analysis Controversy: An Institutional Perspective, Berlin, Springer-Verlag, 1982, pp. 145–161.
Wildavsky A. & Dake K., Theories of risk perception: who
fears what and why?, Daedalus 119(4), 1990, pp. 41–60.
ABSTRACT: This paper deals with the question of how societal impacts of fatal accidents can be integrated into the management of natural hazards. In Switzerland, the well-known α-model, which weights the number of expected fatalities by the function N^α (with α > 1), is used across governmental agencies and across hazards to give additional weight to the number of potential fatalities in risk estimations. Although the use of this function has been promoted in several federal publications, it is not evident whether laypeople want public managers to overweight less frequent accidents with larger consequences against more frequent ones with smaller consequences. Here we report on a choice experiment that required respondents to decide on three risky situations. In each situation they were asked which of two roads they would protect from avalanche risk, given that they were the responsible hazard manager and had resources to protect only one of the two roads. The results of this experiment show that laypeople behave on average risk seeking (α < 1) rather than risk averse (α > 1) when they make decisions on avalanche risks involving fatalities. These results support earlier research that found experts to behave risk prone when deciding on multi-fatality risks (N > 100). In conclusion, people tend to give more weight to the probability of fatalities than to the number of expected fatalities.
INTRODUCTION
2
2.1
METHOD
Figure 1. Example of a choice task: for road A and road B, respondents saw the expected number of accidents during 20 years and the expected number of fatalities per accident, and indicated which road they would protect.
2.2 Participants
Based on a random sample from the telephone book,
we recruited respondents for a survey from two areas
of Switzerland. About half of the respondents were
selected from the mountainous region around Davos;
the other half of the respondents lived in the city of
Zurich. These samples were chosen to represent high- and low-exposed people.
To collect the data, we conducted a questionnaire
survey by mail. The questionnaire with cover letter was sent to those individuals who agreed during the phone recruitment to participate in this study. The experimental task, which we present here, was completed and returned by 471 persons. The response rate for the mountaineers was 50% (n = 227), and for the urbanites 54% (n = 244). It is crucial that participants' responses can be linked to their experiences with natural hazards. Therefore, we asked participants whether
they had ever been affected by any kind of natural
hazard. A description of the demographic characteristics of the survey sample is given in Table 1. The
samples were well balanced for gender and for origin.
Furthermore, there were expected differences between
mountain and urban areas concerning the experience
with natural hazards.
The questionnaire comprised 16 pages and contained two different choice tasks (here we only report
on one of them). Besides, we asked several questions on the perceived risk from natural hazards and
from ordinary road accidents in mountainous regions
of Switzerland. Participants provided answers on their
familiarity with the life in the mountains, past experiences with natural hazards, the severity of these
threats, and some attitudinal questions related to
behavior under risk. In addition, respondents answered
how sure they were in their decisions. Data on
socio-demographics were also collected.
Table 1. Demographic characteristics of the survey sample.

                  Urban area     Mountain area
N                 227            244
Age, M (SD)       49.0 (16.0)    48.4 (16.5)
Gender
  Males           52%            50%
  Females         48%            50%
Experience
  Yes             29%            46%
  No              71%            54%
2.3 Data analysis

Having collected many choices on these risky situations, one can calculate for each option the probability of being selected by an individual i, given the observed attributes of the option under choice (namely the frequency of accidents and the number of fatalities) and the alternative option. The analysis of these choices calls for a flexible choice model such as the conditional logit model:

Pr(A|i) = exp Vi(A) / [exp Vi(A) + exp Vi(B)]    (2)

where the value function V may take on any functional form (Greene 2003). This flexibility allows for testing different value functions and is therefore an appropriate model for our purpose.
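Equation 2 can be sketched directly (a minimal illustration of ours, not the authors' estimation code); the value function is passed in as a parameter, so alternative specifications can be swapped without changing the choice rule:

```python
import math
from typing import Callable, Tuple

# A road option is described by (accident frequency over 20 years,
# fatalities per accident); the value function maps it to a utility.
Option = Tuple[float, float]

def choice_probability(v: Callable[[Option], float],
                       a: Option, b: Option) -> float:
    """Conditional logit (Equation 2):
    Pr(A|i) = exp(V(A)) / (exp(V(A)) + exp(V(B)))."""
    va, vb = v(a), v(b)
    m = max(va, vb)                        # shift for numerical stability
    ea, eb = math.exp(va - m), math.exp(vb - m)
    return ea / (ea + eb)

# Illustrative risk-neutral value function: the value of protecting a road
# equals the expected fatalities averted, frequency * fatalities.
risk_neutral = lambda opt: opt[0] * opt[1]

# Ten 1-fatality accidents vs. one 10-fatality accident: equal expected
# fatalities, so a risk-neutral chooser is indifferent.
print(choice_probability(risk_neutral, (10, 1), (1, 10)))  # 0.5
```

Fitting a coefficient vector inside `v` by maximum likelihood over the observed choices is what the estimation in the paper amounts to.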
3 RESULTS

3.1 Qualitative results

3.2 Quantitative results

Vi = βR + γXi,  i ∈ I    (3)
where β and γ are coefficient vectors. R is a vector of risk, which is defined as the product of the accident frequency over 20 years and a weighting function W(N) of the number of fatalities per accident. Consequently, this product is equal to the disutility caused by the expected fatalities, which in the case of W(N) = N becomes the statistical mortality risk per year. Xi is a vector of personal characteristics including age, gender, education, experience with and exposure to natural hazards for each individual i out of the response population I.
In order to test the validity of the α-model, we modified the basic model of Equation 3, using the weighting function W(N) = N^α:

Vi = β(p N^α) + γXi,  i ∈ I    (4)

Table 2. Conditional logit estimates: coefficients and t statistics for the model variables, including weekly exposure and experience with natural hazards.
function as V = 0.5(p N^0.72). Inserted into Equation 2, this function predicts the probability that people choose to protect road A (with N accidents, each of which takes a single life) over road B (with a single accident that takes N lives).
Already when balancing ten accidents with one fatality each (on road A) vs. one single accident with ten fatalities (on road B), people prefer to protect road A by 90% (curve 1 in Fig. 3). Even in situations in which the number of expected fatalities to be avoided on road B is larger than on A (the risk-neutral agent would thus prefer to protect B), our decision model predicts that A is protected until the number of expected fatalities falls below a certain threshold N* (curves 2 and 3 in Fig. 3).
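Reading the fitted value function as V = 0.5·p·N^0.72 (our reading of the printed expression, with p the accident frequency and N the fatalities per accident), the reported roughly 90% preference can be reproduced as a sketch:

```python
import math

def value(p_accidents: float, n_fatalities: float) -> float:
    """Fitted value of protecting a road: V = 0.5 * p * N**0.72."""
    return 0.5 * p_accidents * n_fatalities ** 0.72

def prob_protect_a(p_a: float, n_a: float, p_b: float, n_b: float) -> float:
    """Conditional logit probability of protecting road A over road B."""
    return 1.0 / (1.0 + math.exp(value(p_b, n_b) - value(p_a, n_a)))

# Road A: ten accidents with one fatality each; road B: one accident with
# ten fatalities. Expected fatalities are equal, yet the exponent 0.72 < 1
# (risk seeking) makes the frequent-accident road the one to protect.
print(round(prob_protect_a(10, 1, 1, 10), 3))  # 0.915, i.e. the ~90% in the text
```

Raising the exponent in `value` above 1 reverses the preference, which corresponds to the risk-averse (α > 1) case that the Swiss α-model assumes.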
CONCLUSIONS
T. Horlick-Jones
Cardiff School of Social Sciences, Cardiff University, UK
J. Espluga
Universitat Autònoma de Barcelona, Spain
ABSTRACT: This paper reports on the development of a novel methodology to investigate lay perceptions of
nuclear fusion as an energy source. Our approach has concentrated on generating a learning and deliberation
process that allows groups of lay participants to engage with information about fusion in a quasi-naturalistic
fashion. In this way, we have attempted to avoid the terms of this deliberation being pre-framed according to
the technical logics of nuclear engineering and risk management. We provide some early observations on the
implementation of this methodology.
INTRODUCTION
THE METHOD
DATA GATHERING
The six discussion groups (Groups 1–6) were stratified by age band (18–25, 26–40, 40+) and socioeconomic status (AB, CD).
PRELIMINARY INSIGHTS
This section sets out some very preliminary impressions on the discussion and learning processes that we
staged, based on observational notes. It is important
to bear in mind the status of this material. These are
initial sketches that need to be checked, confirmed (or
possibly rejected) in the next steps of the data analysis
process, as described in the previous sections.
As might be expected, and in line with empirical
evidence on the lay perception of nuclear fusion, there
was a general and widespread lack of awareness and
knowledge about the technology among our research
participants.
In an effort to encourage a preliminary discussion about fusion, and with the aim of exploring lay perceptions in the light of generic and contextual information, a very short news article was given to participants. Even after reading this short news article, most participants felt that they were still not able to discuss fusion ('How am I going to talk about it if I still do not know what it is?'). It could be said that fusion was perceived as very obscure, not only in spatial or temporal terms, but also from a conceptual perspective, as it appeared to be 'far from my understanding' and 'difficult to be dealt with'.
According to the first impression of the observers,
further reactions to this short news article included
a combination of surprise and perplexity. Participants
were surprised to find out that such an important scientific project was going on and they knew nothing about
it. Some participants experienced this surprise as a
positive issue (scientific progress, hope, etc.); while others were sceptical ('is this real?') or incredulous.
As far as the learning process is concerned, the
first point to be made is thatin some groups
participants seemed to have considerable difficulties in coping with the technical knowledge. At first
As mentioned earlier, at the end of the second meetings of each group, a questionnaire was administered
to participants. The aim of the questionnaire was
to evaluate the effectiveness of the group process
as an exercise in citizen engagement. The questions
dealt with issues such as perceived representativeness,
impartiality, freedom to express personal opinions,
understanding of the process and level of interest of
the group discussions. All the evaluation items were
rated very highly by participants. They perceived that
the process was unbiased, interesting, that it allowed
DISCUSSION
Our preliminary assessment of the group-based investigation is that the methodology has worked well, in that it is showing its capacity to generate quasi-naturalistic collective discourses, and to promote a rich
discussion and learning process among our research
participants. It is also worth mentioning that the
method has demonstrated its capacity to generate a
significant amount of rich qualitative and quantitative
data.
ACKNOWLEDGMENTS
The work reported in this paper was supported by the
European Fusion Development Agreement (EFDA)
(contract TW6-TRE-FESS-B1) and the UK's Economic &
Social Research Council (award RES-000-22-2664).
Parts of this work, supported by the European Communities under the contract of Association between
EURATOM/CIEMAT, were carried out within the
framework of the European Fusion Development
Agreement. The views and opinions expressed herein
do not necessarily reflect those of the European
Commission.
Safety culture
ABSTRACT: This paper discusses the significance of identity for safety-critical behavior. Investigations of accidents and near misses find that the majority of incidents may be seen as a result of human failure. These wrongdoings are often described as 'quiet aberrations' or 'short cuts', which can be said to be standardized or institutionalized actions. In this paper the concepts of identity and identification are used in an attempt to explain how standardized 'quiet aberrations' or 'short cuts' are established and maintained in an organization. Identities are seen as products and producers of relations and conventions of belonging. We propose that identity is an important aspect of safety because it conveys models of self and others as well as models of how work should be performed.
1
INTRODUCTION
Safety management has traditionally been preoccupied with the formal aspects of organizations. In the last decades, however, the search for ways to improve safety has expanded to also include the informal aspects of organization. The concept of safety culture has been subject to much research and, most recently, the role of interpersonal trust has been thoroughly investigated in a special issue of Risk Analysis. In this paper we direct attention to a topic which has been largely neglected in both these strands of research, but which is nevertheless crucial to understanding the way safety-critical work is performed. This is the issue of group (or occupational) identity. The goal of this paper is to explore the different ways identity may influence a group's work practice, and thereby also influence safety. We argue that, in terms of safety-critical work practice, identity offers explanatory force because, firstly, identities may be seen as constituting models of selves; secondly, they involve models of others, i.e. constitute borders between 'us' and 'them'; and thirdly, they provide models of work, i.e. conventions for work performance.
WHAT IS IDENTITY?
concept in sociology, social psychology, social anthropology, and more recently in organizational studies
(Hatch & Schultz 2004). In social science, the term
has been applied and developed through studies of
human interaction by Cooley (1902), Mead (1934) and
Goffman (1959).
Group identity or collective identity is usually seen as distinct from individual identity [9]. Brewer & Gardner (1996) claim that personal identities differentiate the self from the others. Collective identity, on the other hand, reflects assimilation to a certain social group. Both personal and collective identities are dependent on interaction with other people, by emphasizing similarities and dissimilarities with other individuals. Seen as a product of interaction, the term identity has been criticized because it indicates a fixed, stable state. Due to this, some writers prefer to use the term identification rather than identity to underline the transitory character of the phenomenon. According to Weick (1995), changes in interactional patterns or changes between interactional relations imply a shift in definitions of selves. An individual may, as a member of an organization, hold a portfolio of overlapping identities that are mobilized in different interactional situations.
In a working situation, as in any other social situation, humans construct identities. This construction
process involves a notion of 'we-ness' (Hatch & Schultz 2004), which defines a certain community
of people. The constitution of identities may partly
be seen as a result of an orientation towards shared
problems (Lysgaard 1961). In an organization these identity-defined communities may coincide with formal work groups in the organizational design, such as
3 CASE DESCRIPTIONS

4 RESEARCH METHODS

5 FINDINGS
departments (mechanical department, logistic department, etc.). In recent years there have been organizational changes which have altered the organizational structure from departments to integrated teams, with a mix of occupations in each team. At least this is the case for the installations in our empirical material. Even so, our data show that the primary identity-defined communities are connected to the occupational departments, and not to the teams. For the process operators it is still the production department, with the rest of the process operators, which continues to be the community they identify with. They have a strong notion of 'we-ness', and as an occupational group the process operators generally think that they are doing the most important work on the installation: 'Without us the platform will stop' and 'we own the equipment' are statements from process operators that underline this. These expressions are strengthened further by the fact that process operators are assisting the central control room, which is defined as 'the heart' of the platform both in normal operations and in emergency situations. And to underline the importance of the control room and the role of the control room operators: the control room operators have to have thorough knowledge of every system and area on the platform, due to their work in watching over and controlling the whole oil and gas production on the installation.
The radio that process operators carry with them to communicate and report directly to the control room, when doing their observing and controlling out in the plant, is an identity mark that underlines this responsibility and identification with the community of process operators. Apart from the flagmen assisting the crane operators on deck, the process operators are the only occupational group carrying a radio out in the plant.
Process operators look upon themselves as being
strong personalities with an individualistic attitude.
These characteristics are to a great extent attributed to
the responsibility the process operators have in watching over and controlling the oil and gas production
process.
The expressions of 'we-ness' among process operators can also be looked upon as a set of shared conventions that represent attitudes and definitions of how to behave in a working situation. Due to their role in controlling and watching over the production, there is a prevailing understanding among process operators that they are observing a working process, e.g. when mechanics are changing a packing on the wellhead. An important aspect of this observing and watching over the production is that the process operators are handling gas under high pressure in their work. Most of this work is done by checking what alarms and computers have detected. But during the fieldwork there were many stories told by process operators
5.3
Offshore oil and gas production involves great potential for hazardous situations, in terms of occupational accidents, major accidents and environmental damage. On an oil and gas installation there is gas under high pressure, which represents a potential for leakage and explosions and must therefore be handled with care and knowledge. Process operators to a great extent have the responsibility to monitor and control these hazards. Our findings indicate that the process operators take this responsibility seriously, and they are fully aware of the potential crisis if hydrocarbons go astray. Due to this, the process operators emphasize the importance of knowing their designated area on the installation. The process operators have an awareness of their responsibility for the potential crises related to their work. They are handling hydrocarbons under high pressure which, if something goes wrong, can result in gas leakages, fires and explosions: hazardous situations with consequences for all the 200 to 350 workers onboard the platform, in addition to environmental damage. The awareness of and responsibility for these potential big crises to a great extent represent the process operators' interpretation of hazards.
Their focus on hazards and their responsibility are used in the interaction with the other work groups on the platform to legitimate their informal rank as superior to the others. However, this view of and attitude towards the others limits the information flow between the work groups, and may lead to bounded information regarding e.g. the state of the production systems.
5.4
5.5
The seamen's work situation is commonly compared to the platform workers' situation, in terms of shifts, salary and work load: 'The motivation decreases when you look at the big differences between our shifts and what they have on the platforms. We have one month on and one month off. They have two weeks on and one month off. I feel like a second-rate human compared to them.'
6
6.1
DISCUSSION
Models of selves and models of work
CONCLUSION
The impact of identification processes on an organization's safety level may be considered on, at least, three levels.
First, group identity serves to align the work practice among the members of the community. Models of selves are accompanied by a set of conventions about how to act in a specific situation. This means that models of selves also include models of work, i.e. models of how work should ideally be performed. These conventions may be more or less congruent with the formal structures of safety management. Identity thus influences safety by influencing behaviour, e.g. the level of compliance with procedures. Second, identity not only involves a model of oneself, but also models of others. This means that identity also influences the way different groups perceive others. These relations may direct and limit interaction and information flow between the different identity-constituted communities, i.e. between 'us' and 'them'. This aspect of identity may influence safety by either facilitating or serving as a barrier to communication and information flow regarding risks. Third, identities are crucial
ABSTRACT: The paper describes our attempt at mapping the complexity of the civil aviation system, in terms
of how changes affect safety and interactions between actors within the system. Three cases were selected to
represent three distinct levels of the transport system: the civil aviation authority case, the air traffic control/airport
operation case and the maintenance case. Through the complexity perspective, we identified several positive
system characteristics or mechanisms that contributed to maintaining and improving safety during change processes,
including a strong safety consciousness, collective coordination among individuals, and safety practices based
on flexibility and knowledge sharing. These system characteristics were strengthened by changes with a positive
perceived influence on safety, such as new technology improving flight safety. However, changes involving
efficiency, merging and relocation were perceived to influence safety negatively by creating conflicts of priorities
and reducing safety margins. The mixed effects of changes represented a common challenge across the three
case organizations.
INTRODUCTION
2 THEORIZING ON COMPLEXITY

2.1 What is complexity?
Complexity can be seen as a systemic property, permeating every level and actor of, for example, the civil aviation system, from the particular work tasks of the aviation technician to the changes implemented at a national and international political level. The following four categories of complexity reflect this view (Pettersen et al. 2008):
Technological complexity arises from increased specialization, increased interactions, and reduced
transparency that may contribute to hide critical
system connections and dependencies, making the
2.2

3 METHODOLOGY

4 RESULTS
4.2
ANALYSIS
Based on the results, we discuss the different complexity categories within our civil aviation transport case. In addition, we point to aspects our study shares with previous case studies within aviation (Snook 2000, Vaughan 1996; 2005, Woods 2005).
Technological complexity and work situation complexity are best illustrated through the aviation maintenance case, where technicians interact with specialized
(aircraft) computer systems and equipment. To handle
this complexity, the case organization utilizes practical
and experience-based knowledge and resource slack.
This flexibility means that when the organization faces
a new situation where systems or components act in
unforeseen ways, potentially hiding critical system
functions and dependencies, the right person for the
particular task is searched for throughout the organization and across organizational boundaries. In other
words, the informal coping mechanisms of seeking
individuals with specific knowledge and/or of reducing the pace of a particular operation help improve
the transparency of the interface between individuals
and technology. This reduces work situation complexity and thereby also prevents possible unintended or
unwanted side effects of the particular operation
that might adversely affect safety.
All cases show organizational complexity elements
that fall within the definition of organizations as open
systems, where individuals, technology and structures
interact within and outside the organization, seeking
ways of adapting to the environment. For example, in
all cases individuals and organizations seek adaptation
to changes in national and international regulations
and legislations, by integrating new safety protocols
in current operations (the air traffic control/airport
operation and line maintenance cases) or by implementing additional rules and regulations for inspection
(the civil aviation authority case). Similarly, new market conditions, such as increased competition and demands for profit, have led to the development of new inspection methodologies (the civil aviation authority case) and a focus on efficiency and cost-cutting (the air traffic control/airport operation and line maintenance cases). Overall, both examples illustrate how the particular organization interacts with local, national and global aspects of its environment, and seeks ways of countering the effects of changes in the environment.
In all three cases, political/economic complexity plays a major role. One example is how changes
CONCLUSION
efficiency programs. This supports a system approach to understanding changes and their implications for safety, which we believe should be continued in future research endeavours focusing on understanding and managing changes.
ACKNOWLEDGEMENTS
We wish to thank the Norwegian Research Council
for financing this work as part of the RISIT (Risk and
Safety in the Transportation Sector) program. We also
wish to thank our network within Norwegian civil aviation that contributed with enthusiasm and knowledge
in planning and accomplishing the project. Special
thanks go to Avinor and the Sola Conference for their
financial contribution to the project. Access to a questionnaire survey within the Norwegian civil aviation
transport system was made available to us by the
Accident Investigation Board Norway and Institute of
Transport Economics. Finally, we wish to thank our
research colleagues at DNV Norway and the International Research Institute of Stavanger (IRIS) for inputs
and discussions, and Preben Linde who stepped in
as a project manager in the difficult initial research
phase.
REFERENCES
Amalberti, R. 1996. La conduite de systèmes à risques [The
control of systems at risk]. Paris: Presses Universitaires
de France.
Drazin, R. & Sandelands, L. 1992. Autogenesis: A Perspective on the Process of Organizing. Organization Science
3(2): 230–249.
Goodstein, L.P. et al. 1988. Tasks, Errors and Mental Models.
London, Taylor & Francis.
Guba, E.G. & Lincoln, Y.S. 1981. Effective evaluation:
Improving the usefulness of evaluation results through
responsive and naturalistic approaches. Jossey-Bass, San
Francisco, USA.
Morgan, G. 1986. Images of Organization. Newbury Park,
CA, Sage.
Hollnagel, E. 1996. Reliability Analysis and Operator
Modelling. Reliability Engineering and System Safety 52:
327–337.
Perrow C. 1984. Normal Accidents: Living with High-Risk
Technologies. Basic Books, NY.
Pettersen, K.A. et al. 2008. Managing the social-complexity
of Socio-technical systems (paper draft).
Rasmussen, J. 1982. Human Factors in High Risk Technology. In Green. E.A (ed.), High Risk Safety Technology.
London, John Wiley and Sons.
Rasmussen, J. 1997. Risk Management in a dynamic
society: A modelling problem. Safety Science 27(2/3):
183213.
Sandelands, L. & Drazin, R. 1989. On the Language of Organization Theory. Organization Studies 10(4): 457477.
1392
Snook, S.A. 2000. Friendly fire: The Accidental Shootdown of U.S. Black Hawks over Northen Iraq. Princeton,
Princeton University Press.
Vaughan, D. 1996. The Challenger launch decision: Risky
technology, culture, and deviance at NASA. Chicago:
University of Chicago Press.
Vaughan, D. 2005. System Effects: On Slippery Slopes,
Repeating Negative Patterns, and Learning from Mistake?
In Starbuck, W.H. & Farjoun, M. (eds), Organization at
the Limit: Lessons from the Columbia Disaster. Malden,
MA, Blackwell Publishing.
Weick, K. 1977. Organization design: organizations as
self-designing systems. Organizational Dynamics 6(2):
3046.
Wolf, F.G. 2001. Operationalizing and testing normal accident theory in petrochemical plants and refinerie. Production and Operations Management 10(3): 292305.
Woods, D.D. 1988. Coping with Complexity: The Psychology of Human Behaviour in Complex Systems. In
Goodstein, L.P., et al. (ed.), Task, erors and mental
models. London, Taylor & Francis.
Woods, D.D. 2005. Creating Foresight: Lessons for Enhancing resilience from Columbia. In Starbuck, W.H. &
Farjoun, M. (eds), Organization at the Limit: Lessons
from the Columbia Disaster. Malden, MA, Blackwell
Publishing.
1393
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
ABSTRACT: The potential consequences of human exposure to electromagnetic fields have generated increasing interest, in terms of health and safety at work, among both the public and the relevant authorities. Worth highlighting is the adoption of Directive 2004/40/EC of the European Parliament and of the Council of 29 April 2004, which is being transposed into national legislation.
The Directive establishes the minimum health and safety requirements for the protection of workers from the risks that arise, or could arise, from exposure to electromagnetic fields (0 Hz to 300 GHz) during their work. In fulfilment of the Directive, employers will therefore have to evaluate and, if necessary, measure and/or calculate the electromagnetic field levels to which workers are exposed: on the one hand, by measuring the electromagnetic fields at the place of work and, on the other hand, where high field levels appear, by taking appropriate action by means of electromagnetic shielding or other measures to reduce the field level. The efficiency of such measures is verified by means of finite-element electromagnetic field simulations.
INTRODUCTION

DEVELOPMENT OF DIRECTIVE 2004/40/EC

The Council Recommendation of 1999 on limiting exposure of the population to electromagnetic fields (0 Hz–300 GHz) establishes a system of minimum restrictions and reference levels.
The Recommendation is based on the guidelines of the International Commission on Non-Ionizing Radiation Protection (ICNIRP), approved in 1998 by the Scientific Steering Committee advising the European Commission and, in 2001, by the Scientific Committee on Toxicity, Ecotoxicity and the Environment (CSTEE). It was confirmed by the Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR) in its opinion of the 29th of March 2007, and is a European reference standard in worker safety (2004/40/EC).
In 1989 the European Parliament and the Council adopted Council Directive 89/391/EEC of the 12th of June 1989, concerning the application of measures to promote improved worker health and safety. In September 1990 the European Parliament adopted a resolution on its action plan for the application of the Community Charter of workers' fundamental social rights; in this framework the Commission was asked to develop specific Directives covering the risks arising from physical agents in the workplace.
The directives pertaining to vibration and noise were developed first, followed in 2004 by Directive 2004/40/EC, concerning the minimum safety requirements regarding risks due to electromagnetic fields (the eighteenth individual Directive within the meaning of Article 16(1) of Directive 89/391/EEC). The Directive establishes minimum requirements pertaining to EM fields (0 Hz to 300 GHz), to be fulfilled by companies, both in terms of exposure limit values and in terms of action values that trigger required measures. The Directive addresses only known short-term negative effects on workers' health and safety, not long-term effects.
FUNDAMENTAL THEORY OF ELECTROMAGNETIC FIELDS

Electric field strength (E): vector magnitude corresponding to the force exerted on a charged particle, independent of its movement in space [V/m].

Magnetic field: region of space in which a point electric charge moving with velocity v experiences a force perpendicular and proportional to the velocity vector and to a field parameter called the magnetic induction B, according to the following expression:

F = q (v × B)    (1)

The magnetic induction is related to the magnetic field intensity H through

B = μH    (2)

where μ is the magnetic permeability of the propagation medium, expressed in [Wb/(A·m)]. Magnetic induction is measured in tesla [T], which corresponds to 1 [Wb/m²].
Electromagnetic field: field associated with EM radiation, described by two vectors, one electric and the other magnetic, which advance mutually perpendicular in the direction of propagation.

Specific Absorption Rate (SAR): power absorbed per unit mass of body tissue, whose average value is computed for the whole body or parts thereof [W/kg].

Contact current (IC): current between a person and an object [A]. A conducting object in an electric field can be charged by the field.

Current density (J): current flowing through a unit section perpendicular to the current direction, within a three-dimensional volume such as the human body or a part thereof [A/m²].

Power density (S): power per unit area normal to the direction of propagation [W/m²]. It is the appropriate magnitude at very high frequencies, where the penetration depth into the human body is low.
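A short numeric illustration of equations (1) and (2) may help; the sketch below is added for clarity, and the charge, velocity and field values in it are illustrative assumptions, not values from the paper:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space [Wb/(A*m)]

def lorentz_force(q, v, b):
    """Magnetic force F = q (v x B) on a point charge, eq. (1).
    v and b are 3-tuples; returns the force vector as a 3-tuple [N]."""
    return (q * (v[1] * b[2] - v[2] * b[1]),
            q * (v[2] * b[0] - v[0] * b[2]),
            q * (v[0] * b[1] - v[1] * b[0]))

def induction_from_intensity(h, mu=MU0):
    """Magnetic induction B = mu * H, eq. (2); B in tesla for H in A/m."""
    return mu * h

# an electron moving at 1e6 m/s perpendicular to a 1 T field
f = lorentz_force(-1.602e-19, (1e6, 0.0, 0.0), (0.0, 0.0, 1.0))
b = induction_from_intensity(1.0)  # 1 A/m in free space, in tesla
```

The cross product makes the force perpendicular to both v and B, as the definition above requires.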
Figure 2. Representation of EM waves (mutually perpendicular E and H vectors advancing in the direction of propagation).
Table 3 displays the emission spectrum regulated by Directive 2004/40/EC according to frequency. In that table, N is the parameter determining the width of each band, between 0.3·10^N and 3·10^N Hz (International Telecommunications Union, ITU).
In the following section, specific types of equipment in work environments generating EM radiation are listed for each frequency range (Vargas et al.):

Sources generating fields at frequencies below 3 kHz (0 Hz ≤ f < 3 kHz)
Table 1. Exposure limit values.

Range of frequencies | Current density J, mA/m² (rms) | Average SAR for entire body, W/kg | Local SAR (head and trunk), W/kg | Local SAR (extremities), W/kg | Power density S, W/m²
Up to 1 Hz | 40 | – | – | – | –
1–4 Hz | 40/f | – | – | – | –
4–1000 Hz | 10 | – | – | – | –
1000 Hz–100 kHz | f/100 | – | – | – | –
100 kHz–10 MHz | f/100 | 0.4 | 10 | 20 | –
10 MHz–10 GHz | – | 0.4 | 10 | 20 | –
10–300 GHz | – | – | – | – | 50
Table 2. Action values.

Range of frequencies | Electric field strength E, V/m | Magnetic field intensity H, A/m | Magnetic induction B, µT | Plane wave equivalent power density Seq, W/m² | Current induced in extremities IL, mA | Contact current IC, mA
0–1 Hz | – | 1.63 × 10^5 | 2 × 10^5 | – | – | 1.0
1–8 Hz | 20000 | 1.63 × 10^5/f² | 2 × 10^5/f² | – | – | 1.0
8–25 Hz | 20000 | 2 × 10^4/f | 2.5 × 10^4/f | – | – | 1.0
0.025–0.82 kHz | 500/f | 20/f | 25/f | – | – | 1.0
0.82–2.5 kHz | 610 | 24.4 | 30.7 | – | – | 1.0
2.5–65 kHz | 610 | 24.4 | 30.7 | – | – | 0.4f
65–100 kHz | 610 | 1600/f | 2000/f | – | – | 0.4f
0.1–1 MHz | 610 | 1.6/f | 2/f | – | – | 40
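For illustration, the frequency-dependent magnetic-induction column of Table 2 can be encoded as a small lookup routine. The sketch below is an illustration added here, covering only the rows reproduced above (0 Hz to 1 MHz); in each formula f is expressed in the unit of its row, as in the Directive:

```python
def b_action_uT(f_hz):
    """Magnetic induction action value B [uT] for a field frequency given
    in Hz, following the rows of Table 2 (valid from 0 Hz up to 1 MHz)."""
    if f_hz < 0:
        raise ValueError("frequency must be non-negative")
    if f_hz < 1:
        return 2e5
    if f_hz < 8:
        return 2e5 / f_hz ** 2       # f in Hz
    if f_hz < 25:
        return 2.5e4 / f_hz          # f in Hz
    f_khz = f_hz / 1e3
    if f_khz < 0.82:
        return 25 / f_khz            # f in kHz
    if f_khz < 65:
        return 30.7
    if f_khz < 100:
        return 2000 / f_khz          # f in kHz
    f_mhz = f_hz / 1e6
    if f_mhz <= 1:
        return 2 / f_mhz             # f in MHz
    raise ValueError("above 1 MHz: use the remaining rows of the Directive")

# power-frequency field (50 Hz = 0.05 kHz): action value 25/0.05 = 500 uT
b50 = b_action_uT(50.0)
```

Note that adjacent rows join continuously (for example, 2000/f at 100 kHz and 2/f at 0.1 MHz both give 20 µT), which is a useful consistency check on the reconstructed table.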
Induction stoves
Modulated radio-wave transmission antennas
Arc-welding equipment
Nautical radio-telephones
AM radio broadcasting
Heat-sealing machines
Mobile telephones
Mobile telephone base stations
Microwave ovens
Surgical diathermy machines
Anti-theft systems
Table 3. Frequency bands (ITU).

N | Band | Frequency range | Wavelength
11 | EHF (extremely high frequencies) | 30–300 GHz | 1–10 mm
10 | SHF (super high frequencies) | 3–30 GHz | 10–100 mm
9 | UHF (ultra high frequencies) | 300 MHz–3 GHz | 100 mm–1 m
8 | VHF (very high frequencies) | 30–300 MHz | 1–10 m
7 | HF (high frequencies) | 3–30 MHz | 10–100 m
6 | MF (medium frequencies) | 300 kHz–3 MHz | 100 m–1 km
5 | LF (low frequencies) | 30–300 kHz | 1–10 km
– | ELF (extremely low frequencies) | 0 Hz–30 kHz | > 10 km
Figure 3. Magnetic field generated by the current lines.

Figure 4. Field reduction by the screening structure.

Figure 5. Physical location of the current lines with respect to the screen.
MEASURES TO BE ADOPTED BY COMPANIES TO COMPLY WITH DIRECTIVE 2004/40/EC
of different equipment deployed in industrial environments, in order to minimize their combined effect on the workers. All of these simulations are tested through comparisons with actual measurements in the workplace. As an example, consider the magnetic field shown in Figure 3, which can be reduced by means of a screening structure, as shown in Figure 4. Figure 5 shows the physical location of the current lines creating the magnetic field with respect to the screen.
ABSTRACT: Electrostatic discharges are a dangerous ignition source in industrial environments. Many regulations and standards help industry design equipment and installations that avoid the presence of electrostatic charge. However, the presence of static charge in equipment or processes is often hard to detect and even more difficult to eliminate. Several strategies must be followed to avoid hazards and damage in industrial processes or products: good equipment design, a deep understanding of material properties, staff training, strict maintenance, and very careful modification of any process or equipment. In this paper, we focus on materials characterization and on-site diagnostics.
INTRODUCTION

2.1 Charge generation

Figure 1. Surface and volume conduction.

Figure 2. Volume polarization.
surfaces. Charge generation depends on relative velocity and pressure. The charging mechanism is not well understood, but it is commonly accepted that the underlying physical process is the same as in contact charging: pressure and velocity modify the total number of contact points and thus the transferred charge. Tribocharging is present in many industrial processes: extrusion lines, moulding, packaging, unrolling, conveying, etc.
Induction charging is created by high-voltage conductors. The electric field can generate static charges in floating (non-grounded) conductive elements or a permanent polarization in insulating materials.
Chemical or electrochemical charge generation occurs during electrolytic charge separation or corrosion processes; however, this charge is usually dissipated by conductive paths. Piezoelectric and photoelectric charging are particular phenomena which can induce some electrostatic charge. They are used in some specific applications (sensors, printers) but should not represent a hazard.
2.2 Charge dissipation

3 MATERIALS CHARACTERIZATION

The main intrinsic property used to characterize antistatic materials is surface or volume resistivity. Commonly, a material is considered dissipative if its surface resistivity is in the range 10^6 to 10^12 Ω. But conductivity in itself is not enough to guarantee dissipation of electrostatic charges as long as a conductive path to ground is not provided.
There are many standards describing test procedures for different applications. The situation is, in fact, quite puzzling, and it is sometimes difficult to choose the appropriate system. One of the most general standards is IEC 60093, which describes a configuration for resistivity measurements based on plane parallel electrodes, with a guard ring to distinguish between surface and volume conduction (Fig. 3). The resistance is the ratio between the applied voltage and the measured current. The experimental setup allows either the surface or the volume current to be measured. Moreover, owing to slow polarization, the current decreases continuously during the measurement, so the apparent resistance increases with time. Establishing the resistance as the value at a fixed maximum time, or as the value that remains almost constant within the precision of the measurement, is a compromise: a slow polarization current can keep decreasing for very long periods. This is also the reason why samples are short-circuited for a long period before measurement.
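The guarded-electrode measurement thus reduces to R = V/I plus a geometry factor. The sketch below illustrates this usual reduction; the electrode dimensions and the measured current are illustrative assumptions, not values from the paper or the standard:

```python
import math

def resistance(v_applied, i_measured):
    """R = V / I: ratio of applied voltage [V] to measured current [A]."""
    return v_applied / i_measured

def volume_resistivity(r_v, area_m2, thickness_m):
    """rho = R_v * A / t  [ohm*m] for a plane specimen of thickness t,
    measured with a guarded electrode of effective area A."""
    return r_v * area_m2 / thickness_m

def surface_resistivity(r_s, perimeter_m, gap_m):
    """sigma = R_s * P / g  [ohm per square] for a guard gap of width g
    and effective electrode perimeter P."""
    return r_s * perimeter_m / gap_m

# illustrative measurement: 100 V applied, 1 nA volume current through a
# 1 mm thick specimen under a 50 mm diameter guarded electrode
r_v = resistance(100.0, 1e-9)                       # 1e11 ohm
rho = volume_resistivity(r_v, math.pi * 0.025**2, 1e-3)
```

The geometry factors (effective area and perimeter) depend on the exact electrode arrangement, which the applicable standard defines.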
However, for high resistances or inhomogeneous materials (like textiles with conducting fibres), resistivity measurements are not sufficient to ensure dissipative behaviour. Charge dissipation is also quantified by means of a potential decay measurement or …
Figure 3. Guarded-electrode configuration for volume and surface resistivity measurement (applied voltage V; surface current IS; volume current IV; grounding plane).

(Figure: surface potential measurement by corona charging, with sensing aperture, sample, grounded enclosure, and an enclosure for surface-dissipation samples; surface potential [V] versus time [s] over 0–60 s, between +2000 V and –2000 V, comparing an insulating and an antistatic material.)
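For a simple ohmic dielectric, potential decay is roughly exponential with time constant τ = ρε, which links the resistivity range quoted earlier to a measurable decay time. The following is an illustrative model sketch, not the paper's data:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def decay_time(rho_ohm_m, eps_r=2.5, fraction=0.1):
    """Time [s] for the surface potential of an ohmic dielectric to fall
    to `fraction` of its initial value, assuming the exponential model
    V(t) = V0 * exp(-t / tau) with tau = rho * eps_r * EPS0."""
    tau = rho_ohm_m * eps_r * EPS0
    return -tau * math.log(fraction)

# a dissipative material (rho ~ 1e10 ohm*m, eps_r ~ 2.5) relaxes to 10%
# of its initial potential in about half a second
t10 = decay_time(1e10)
```

Real materials often show slower, non-exponential decay (the slow polarization discussed above), which is why the decay is measured rather than inferred from resistivity alone.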
INDUSTRIAL TESTING
The presence of electrostatic charges should be considered at the earliest stage of the design of industrial installations. According to the processes and associated risks (personal safety, product or process damage), some initial measures should be taken: for example, identification of ignition risks and possible charging situations (rolling, sliding, impact, powder conveying, etc.) or antistatic measures (dissipating materials, grounding points, neutralizers, etc.).
Besides, in-situ verification should be performed to check possible static charge sources. Some important points for in-situ measurements are:
– floor resistance: it is measured with special electrodes in two ways, point to point (see Fig. 8) or point to the building's electrical ground.
– electrostatic potential of people: the standard EN 1815 provides a simple setup to measure the electrostatic potential of people walking on resilient floors. Figure 9 shows an example of a measurement on an insulating floor. Potentials can easily reach thousands of volts. It is considered that …
Figure 8. Point-to-point floor resistance measurement: resistance meter, two 25 kg electrodes (10 cm), grounding plate.
Finally, and of paramount importance, staff training should be seriously considered as the major tool for avoiding problems with electrostatic charge. A good understanding of electrostatics and elimination techniques is the best way to succeed in electrostatic elimination.

Figure 11. Ionizer.

Figure 12. Passive electrode.
The train of positive and negative ions cancels the surface charge according to the polarity of the charges on the material (Fig. 11). When the surface is neutralized, a balanced flow of charge reaches the surface. The efficiency of ionizers decreases with the distance to the object.
Sometimes a passive ionizer is enough to reduce the charge to an acceptable level. In that case, the electric field of the static charge itself is used to induce a corona discharge of the opposite sign at a needle electrode. The generated ions partially discharge the surface. This technique cannot completely eliminate the surface charge. Some commercial systems are able to combine a surface potential measurement with a DC-driven corona discharge to ensure electrostatic discharge. There are also ionizers equipped with a gas flow (nozzles and guns) to increase efficiency.
CONCLUSIONS
ABSTRACT: How operations and maintenance are organized and carried out is a central factor in the secure and reliable operation of an offshore installation. In the Kristin asset on the Norwegian continental shelf, operations and maintenance are conducted by a lean, highly skilled organization. To date, operations have been characterized by safe and effective execution and high regularity. The paper explores one central aspect of the safe and effective execution observed on board, namely how the organizational model used in the asset influences safe and effective day-to-day operations. The asset uses one directed team as the central concept in its operating model, in which the operations crew is empowered and synchronizes tasks between functions. The paper discusses how this leads to shared situational awareness and self-synchronization as critical concepts and issues for safe and effective operations.
INTRODUCTION
Kristin is a new installation on the shelf, brought into production in 2005. It is a semi-submersible, producing from the Kristin reservoir over four templates. The reservoir is marked by high temperatures and high pressure (HT/HP). Its production capacity is 125,000 barrels of condensate and just over 18 million cubic metres of rich gas per day. The platform is also designed for later tie-ins, such as Tyrihans (under development).
Now, how operations and maintenance are organized is a central factor in the secure and reliable operation of an offshore installation. In the last decade, a set of interesting developments has occurred on the Norwegian continental shelf. Most large installations had their peak production in the mid- to late nineties. New installations have typically been smaller, yet located on technically more challenging fields. Developments have also been made on the organizational side. First, multi-skilled and multi-functional teams have become a central way of organizing work on the platform, exemplified in the Åsgard field development. These insights build on organizing work to enhance flexibility and innovation, as seen in lean manufacturing (Womack 1990) and self-directed work teams (Trist and Bamforth 1951; Fisher and Kimball 2000). Second, a set of technological door-openers is emerging: improved communication and the possibility of online support between onshore …
METHODS

Figure 1. Three dimensions of management and decision-making in the One Directed Team operation model.

Figure 2. Layout of work-planning areas (D&V), CCR and collaboration room (LED) offshore, with links to onshore functions indicated. The distance between D&V, CCR and LED is less than 5 meters.
3.1
The landscape is designed with workstations combined into a number of islands of three stations each down the middle of the space, and a continuous set of workstations along the walls (cf. Figure 2). The process operators, together with the SAS technician and the Security/Maritime Coordinator, are located at stations close to the collaboration room and the CCR, while the other functions are placed further along in the landscape. Contract workers on shorter assignments (such as rust-cleaning and repainting) have workstations at the far end of the office landscape. Guest stations can be found at this end as well.
All work preparation, planning and reporting takes place in the landscape, for all disciplines, including F&A and contractors. It is thus an integrated work area for the planning, coordination and reporting phases of operational tasks. Work preparation and planning are done early in the day, before personnel move out into the process facilities, where most of the work-day is spent.
In the landscape there is ongoing discussion of work orders, tasks, and assignments for the different disciplines. Operators can and will hear what the other functions are planning or discussing. As such, the landscape is a source of informal information. The operators see the landscape as a collective asset. One operator says: "It is easy to get hold of persons, it is easy to discuss and coordinate tasks, and we feel that we are connected . . . that we get information, decisions, first hand and directly . . ." The absence of physical barriers in the landscape makes it possible to bring in other operators when a discussion takes place. Operators will also join ongoing discussions if they hear that there are issues or elements on which they have relevant information. Most planning activities take place right after the 7 a.m. morning meeting in the shared coffee bar, where all the personnel on board are informed about HSE, production status, and upcoming operational issues.
The work process at Kristin is thus as follows: after a planning/preparation session in the landscape, personnel move out to the workshops, stores, and the process plant. Work orders are retrieved from SAP, and for each role or discipline there is a set of programmed/scheduled activities and a set of corrective activities. These are all part of the planned activities for the week. There is one planning meeting for the O&M crew, held every Saturday, with a planning-responsible person within each discipline. Giving leeway to other tasks, the operators then choose which work orders to start or complete.
There are many examples of how this promotes higher self-synchronization. When discussing the day's work with one technician, he described how his planned task, the scheduled refurbishment of a large valve, was moved to after lunch. This was agreed at the planning/preparation session in the morning. He then went back to SAP to find something less complex to fill his day until lunch. He pointed out that he always had a set of work orders at hand, precisely in order to be doing something useful in such situations. Accordingly, this work practice makes the organization more robust in withstanding changes and arising situations.
The operators have, given the premises of what goes on in the larger O&M crew and what is prioritized by management, a certain amount of sway over which work orders to pick up. These decisions are made at the lowest level possible. Personnel have full responsibility for the task, including its planning, execution, and reporting. This means that material must be found or ordered before execution, and work is reported in one integrated loop. Correspondingly, SAP proficiency in the asset is very high. As one informant says: "Those who want a list of the tasks they have to do today put in their hand will not apply for work on Kristin."
4 DISCUSSION
Self-directed teams are further along the continuum from low to high empowerment than the examples from lean manufacturing (Fisher and Kimball 2000; Lee and Koh 2001). As mentioned in the introduction, the self-directed team model used in the Åsgard license is one example on the NCS (Trist and Bamforth 1951; Emery and Thorsrud 1976; Qvale and Karlsen 1997). The one directed team model, however, does not use such elements as the explicit group meetings and consensus decision-making found in self-directed teams.
In the Kristin case, the generic insights on operations have been taken one step further, in an attempt to improve the understanding of the workings of these organizational principles. Following Barley (1996), Barley and Kunda (2001), and Orlikowski and Barley (2001), a more grounded research design increases the understanding of work processes and knowledge flows (Orlikowski 2002). The generic organizational capabilities suggested by Teece, Pisano et al. (1997) can also be given concrete substance in these settings. This is what Teece, Pisano et al. (1997) would call dynamic capabilities, specific to the organization. Henderson has tried to develop some examples of relevant dynamic capabilities (Baird, Henderson et al. 1997; Balasubramanian, Nochur et al. 1999). Self-synchronization through shared situational awareness and empowerment are key concepts in the dynamic capabilities he prescribes.
Other contributions have developed new insights into the type of knowledge used in technical work processes (especially Ryle 1949; Orlikowski 2002), trying to include the type and nature of transactions between actors as part of effective problem-solving. This fits nicely into the notion of shared situational awareness mentioned above.
In the one directed team operational model, shared situational awareness and empowerment are used to promote self-synchronization. This is discussed in the following.
First, how is shared situational awareness (SA) in teams promoted and managed in dynamic environments? Rousseau et al. (2004: 14-15), Artman (2000), and Patrick and James (2004) argue that there is an increasing interest in studying team cognition, based on the fact that teamwork, or working towards a shared goal, requires information sharing and coordination. Depending on the tasks at hand, awareness is distributed in the team to one member or shared between team members. Artman (2000: 113) defines team situation awareness (TSA) as "two or more agents' active construction of a situation model, which is partly shared and partly distributed and from which they can anticipate important future states in the near future". Patrick and James (2005: 68) support this perspective of TSA, because such a perspective ". . . not only emphasizes the interpretative and distributed nature of the types of knowledge that constitute the awareness of the team but also hints …"
CONCLUSION

In the operations on Kristin there have been no serious personal injuries (red or yellow incidents) to date. Absence due to illness was as low as 2.7 percent in 2006. The asset has had higher regularity (up-time) than other assets, and better than planned. In the first year of operation, non-planned shut-downs amounted to 1.27 percent of the total time available. There have been no gas leakages (levels 1 to 4) to date. As for corrective maintenance, outstanding work orders of high and medium priority amount to about 500 hours, well below the average of 1800 hours for installations on the NCS. Scheduled maintenance is more difficult to assess in this early phase of operations, but at the end of 2006 about 400 hours were outstanding.
The central feature promoting this operational excellence is the one directed team operational model. As argued, the combination of empowerment and shared situational awareness enables the operations and maintenance crews to be proactive in problem-solving. Second, this keeps transaction costs within functions, in the team, and between teams and external functions low, securing effective problem-solving. Third, with full-loop work processes and a high level of transparency about who is responsible for tasks, the number of hand-offs is reduced and motivation strengthened. By this, self-synchronization is achieved, reducing the need for coordination from management and promoting safe and reliable operation.
REFERENCES

Artmann, H. (2000). Team situation assessment and information distribution. Ergonomics 43: 1111-1128.
Baird, L., Henderson, J.C., et al. (1997). Learning from action: An analysis of the Center for Army Lessons Learned (CALL). Human Resource Management 36(4): 385.
Balasubramanian, P., Nochur, K., et al. (1999). Managing process knowledge for decision support. Decision Support Systems 27(1-2): 145-162.
Barley, S.R. (1996). Technicians in the workplace: Ethnographic evidence for bringing work into organization studies. Administrative Science Quarterly 41(3): 404.
Barley, S.R. and Kunda, G. (2001). Bringing work back in. Organization Science 12(1): 76.
Emery, F.E. and Thorsrud, E. (1976). Democracy at Work: The Report of the Norwegian Industrial Democracy Program. Leiden, Netherlands, Martinus Nijhoff.
Fisher, K. and Kimball, A. (2000). Leading Self-Directed Work Teams. New York, McGraw-Hill.
French, H.T., Matthew, M.D. and Redden, E.S. (2004). Infantry situation awareness. In Banbury, S. and Tremblay, S. (eds), A Cognitive Approach to Situation Awareness: Theory and Application. Burlington, VT, Ashgate Publishing Company.
Garbis, C. and Artmann, H. (2004). Team situation awareness as communicative practices. In Banbury, S. and Tremblay, S. (eds), A Cognitive Approach to Situation Awareness: Theory and Application. Burlington, VT, Ashgate Publishing Company.
Lee, M. and Koh, J. (2001). Is empowerment really a new concept? The International Journal of Human Resource Management 12(4): 684-695.
Orlikowski, W.J. (2002). Knowing in practice: Enacting a collective capability in distributed organizing. Organization Science 13(3): 249.
Orlikowski, W.J. and Barley, S.R. (2001). Technology and institutions: What can research on information technology and research on organizations learn from each other? MIS Quarterly 25(2): 145.
Patrick, J. and James, N. (2004). A task-oriented perspective of situation awareness. In Banbury, S. and Tremblay, S. (eds), A Cognitive Approach to Situation Awareness: Theory and Application. Burlington, VT, Ashgate Publishing Company.
Qvale, T. and Karlsen, B. (1997). Praktiske forsøk skal gi metoder for bedriftsutvikling [Practical experiments are to yield methods for enterprise development]. Bedre Bedrift 4.
Rousseau, R., Tremblay, S. and Breton, R. (2004). Defining and modeling situation awareness: A critical review. In Banbury, S. and Tremblay, S. (eds), A Cognitive Approach to Situation Awareness: Theory and Application. Burlington, VT, Ashgate Publishing Company.
Ryle, G. (1949). The Concept of Mind. London, Hutchinson.
Strauss, A. and Corbin, J. (1990). Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Newbury Park, SAGE Publications.
Teece, D.J., Pisano, G., et al. (1997). Dynamic capabilities and strategic management. Strategic Management Journal 18(7): 509-533.
Trist, E.L. and Bamforth, K. (1951). Some social and psychological consequences of the Longwall method of coal-getting. Human Relations 4: 3-38.
van Leeuwen, E.H. and Norrie, D. (1997). Holons and holarchies [intelligent manufacturing systems]. Manufacturing Engineer 76(2): 86-88.
Womack, J.P., Jones, D.T., et al. (1990). The Machine that Changed the World. New York, Macmillan.
S.A. Silva
Instituto Superior de Ciências do Trabalho e da Empresa, Lisboa, Portugal
K. Mearns
School of Psychology, University of Aberdeen, Aberdeen, UK
ABSTRACT: The psychosocial model of work-related accidents adopts an agents' safety response approach to safety climate and traces the chain of safety influences from the Organizational Safety Response (OSR) to accident indicators, such as micro-accidents (MA), through the Supervisors' Safety Response (SSR), the Co-Workers' Safety Response (CSR), the Workers' Safety Response (WSR) and perceived risk (PR). This paper analyzes the influence of leadership, role clarity, and monotonous and repetitive work (MRW) on the psychosocial model variables in a Spanish construction sample (N = 789). Leadership and role clarity appear as significant antecedents of the OSR, the SSR and the CSR, while MRW acts as an antecedent of PR and MA. Leadership also shows direct effects on MRW and PR. The results suggest that leadership not only affects the chain of safety responses that influence safety outcomes, but also directly affects the main safety properties of the work: the levels of risk and of micro-accidents.
In spite of the many studies showing the heavy economic, social and human costs of lack of safety, (e.g.,
HSE, 2005), year after year figures show high rates
of work-related accidents, even in the so-called established market economies that have enough economic
and legal resources to cope with this phenomenon
(Hmlinen et al. 2006). There are many sources of
explanations for this rather surprising fact. Many companies are unaware of the costs of accidents, and many
others, especially medium and small sized businesses,
may have limited resources for safety, including a lack
of sufficient available knowledge and material opportunities. However, the generalized lack of success on
this obvious economic matter suggests that the main
processes of organizational management should be
reviewed from a safety point of view.
Organizational behaviour, regardless of the hierarchical level of the performer, can be described as a
resultant vector composed of three main vectors: productivity, quality and safety. Although the profitable
results of the company simultaneously depend on the three vectors, traditionally many organizations
emphasize productivity, or productivity and quality, leading
2 METHOD

2.1 Sample
The sample is composed of 789 Spanish construction workers who voluntarily agreed to participate in the
study. Data were obtained during professional construction training sessions run by a Spanish occupational
foundation for the construction sector in 10 Spanish provinces. 90.6% of the respondents were male, and
9.4% were female. The largest age group of respondents was 30 to 39 years (43.1% fell into this category);
33.5% were between 18 and 30 years of age, 18.2% between 40 and 49 years of age, and 5.2% were over 50
years of age. The sample covered seven construction jobs: 21.0% bricklayers, 11.8% mates, 11.1%
moulding carpenters, 6.4% installers, 11% crane operators, 14.2% foremen, and 24.5% construction
technicians.
2.2 Measures

Table 1. Descriptive statistics (means and standard deviations) and reliabilities (coefficient alpha) for each
scale. [The scale names and the assignment of the extracted Mean, SD and Alpha values to the individual
scales are not recoverable from extraction.]
RESULTS

[Figure 1 residue: the standardized path coefficients among LEA, CLR, OSR, SSR, CSR, MRW, WSR, PR
and MA could not be reliably reassembled from extraction.]

Figure 1. Psychosocial Model of Leadership and Safety Climate. Figures on the unidirectional arrows are
standardized path coefficients. * = p < 0.05; ** = p < 0.01.
the OSR and the SSR, and to a minor degree the CSR.
LEA also affects the MRW and the PR directly and
negatively. CLR positively affects the OSR, the SSR,
the CSR and the WSR. Finally, MRW has a direct and
positive effect on PR and MA.
These results confirm the main flow of safety influences between the organization and the individual
behaviour, and the importance of leadership, role
clarity and monotonous and repetitive work in the
assessment of the psychosocial set of variables that
make up the safety climate.
The path analysis coefficients corroborate the main social process of influence between the organization
and the workers' safety response, through the supervisors' safety response and the group influences
represented by the co-workers' safety response. This result confirms previous research testing the
psychosocial model in general samples (Meliá, 1998; 2004b) and in construction samples (Meliá et al.
2007). In this construction sample, the paths linking the various safety responses are strong, as is the step
between the perceived risk and the micro-accidents. However, the paths linking the safety responses to the
perceived risk and the micro-accidents are weak, especially the path between the workers' safety response
and the perceived risk.
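In a path model of this kind, the standardized coefficient of a single-predictor path reduces to the Pearson correlation of the z-scored variables. A minimal sketch on synthetic data (the OSR/SSR/WSR chain and effect sizes below are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 789  # same sample size as the study; the data themselves are simulated

# Hypothetical standardized scores for a chain OSR -> SSR -> WSR
osr = rng.standard_normal(n)
ssr = 0.5 * osr + rng.standard_normal(n)
wsr = 0.4 * ssr + rng.standard_normal(n)

def std_path_coef(x, y):
    """Standardized coefficient of a single-predictor path:
    the OLS slope after z-scoring both variables (equals Pearson r)."""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    return float(np.linalg.lstsq(zx.reshape(-1, 1), zy, rcond=None)[0][0])

print(std_path_coef(osr, ssr))  # path OSR -> SSR
print(std_path_coef(ssr, wsr))  # path SSR -> WSR
```

With several predictors pointing at the same endogenous variable, the coefficients would instead come from a multiple regression on the standardized variables.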
Positive leadership emerges as a powerful exogenous variable directly affecting role clarity, the chain of
safety responses and monotonous and repetitive work. Positive leadership contributed to explaining a clear
definition of role, and both positive leadership and role clarity contributed to a positive safety response by
the organization, the supervisors and, to a lesser degree, the co-workers. Role clarity also directly affects the
workers' safety response.
Perceived risk is clearly affected by positive leadership, with a negative sign, and by monotonous and
repetitive work, with a positive sign.
This chain of social influences gives structure to the
different safety responses contained under a holistic safety climate perspective, and it also confirms
the expected relationship between perceived risk and
micro-accidents.
In this research, a realistic view of perceived
risk is assumed; that is, the risk that is perceived
reflects the degree of safety of the work setting
given the behaviour of the social agents involved.
From this point of view, risk represents an estimation of the perceived probability of undesired
events, such as accidents or micro-accidents, and is
a result of the safety behaviours of all the social
agents. However, perceived risk can also be understood and analyzed from a subjective point of view,
as a result of the perceived level of undesired events,
such as accidents and micro-accidents, and as an
antecedent of the social chain of safety responses.
Both views can be accepted as partially true, and
risk can be understood both as a perceptive estimation of the probability of accidents and other
undesired events given the safety conditions and the
safety responses, and as a cognitive antecedent of
these safety responses influenced by the perception
of accidents and other undesired events. However,
it is hard to assume a subjective view of perceived
risk when we focus on the relationships with positive leadership and monotonous and repetitive work.
The results provided by the tested model in this construction sample suggest that the perceived risk is
strongly affected by the presence of monotonous and repetitive work, a characteristic of the tasks and the
Meliá, J.L., Mearns, K., Silva, S. & Lima, L. 2007. Safety climate and the perceived risk of accidents in the
construction industry. Safety Science, doi: 10.1016/j.ssci.2007.11.004.
Meliá, J.L., Silva, S., Mearns, K. & Lima, L. 2006. Exploring the dimensionality of safety climate in the
construction industry. In Mondelo, P., Mattila, M., Karwowski, W. & Hale, A. (eds), Proceedings of the
Fourth International Conference on Occupational Risk Prevention.
Merton, R.K. 1957. Social theory and social structure (2nd edn). Glencoe, IL: Free Press.
MTAS. 2007. VI Encuesta Nacional de Condiciones de Trabajo.
Sobeih, T.M., Salem, O., Daraiseh, N., Genaidy, A. & Shell, R. 2006. Psychosocial factors and musculoskeletal
Safety, Reliability and Risk Analysis: Theory, Methods and Applications Martorell et al. (eds)
2009 Taylor & Francis Group, London, ISBN 978-0-415-48513-5
O.A. Engen
University of Stavanger, Norway
ABSTRACT: This paper addresses safety culture on tankers and bulk carriers and the factors that affect the
safety culture on board vessels. The empirical setting for the study is the Norwegian shipping industry. Safety
management is a challenging issue within shipping for several reasons. First of all, life and work on board a
vessel is a 24-hour activity and the crew has few possibilities of interacting with the surrounding society.
Secondly, the geographical distance between the on-shore organization and the vessel may affect both the
quality of the systems and plans developed on shore and their implementation on the vessels. The ship
management is thus identified as a key factor for a sound safety culture, along with the on-shore crewing
strategy.
INTRODUCTION

along with the need for reorganization to improve efficiency. This resulted in a cut in the crewing level. Later
in the 80s a global recession caused further structural changes: flagging-out, the use of external crewing
agencies, signing on crew from developing countries, and lower wages (Bakka & Sjøfartsdirektoratet 2004).
However, the shipping industry is today facing new manning-related challenges, as there is a global shortage
of manpower due to three main factors. First, it is less attractive nowadays to work in the shipping industry.
Second, recruitment for ship crews has been slow. This has resulted in the third situation, where the liquefied
natural gas (LNG) shipping sector is drawing crew from the tanker industry, and the tanker industry in turn
is drawing people from the dry bulk sector.
In 1894 the British Board of Trade carried out a study which showed that seafaring was one of the world's
most dangerous occupations, and it still is (Li & Shiping 2002). Regulations intended to reduce risk at sea
were introduced about 150 years ago. These regulations initially encompassed measures to rescue
shipwrecked sailors, and later requirements for life-saving equipment, seaworthiness and humane working
conditions. Traditionally, safety work has focused on technical regulations and solutions, even though
experience and accident statistics indicate that most accidents at sea were somehow related to human
performance (Bakka & Sjøfartsdirektoratet 2004). However, a few very serious accidents at sea that
[Figure: a reciprocal model of safety culture relating Person (safety climate; internal psychological factors),
Situation (context; organisational factors and external observable factors) and Behaviour (safety behaviour).]
of their actions (Dekker & Dekker 2006). An alternative view is to recognise human error not as a cause
of accidents, but as a consequence or symptom of organisational trouble deeper within the organisation,
arising from strategic or other top-level decisions. This includes resource allocation, crewing strategy
and contracting (Dekker & Dekker 2006, Reason 2001, Reason & Hobbs 2003). An organisation is a complex
system balancing different, and often conflicting, goals towards safety and production in an aggressive
and competitive environment (Rasmussen 1997), a situation that to a large extent applies to shipping. The
BBS approach towards safety often implies that more automation and tighter procedures should be added in
order to control human actions. However, the result may be that more complexity is added to the system.
This, in combination with the organisation's struggle to survive in the competitive environment, leads to the
system becoming even more prone to accidents (Perrow 1999, Dekker & Dekker 2006, Reason 2001,
Reason & Hobbs 2003). However, the concept of focusing on the human side of safety is not wrong.
After all, the technology and production systems are operated, maintained and managed by humans, and
as the final barrier against accidents and incidents they are most of the time directly involved. The
proponents of the BBS approach argue that behaviour control and modification may bring a shift in an
organisation's safety culture, also at the upper level, but this is most likely if the focus is not exclusively on
observed deficiencies at the organisation's lower levels (DeJoy 2005). DeJoy (2005) calls attention to three
apparent weaknesses of the BBS approach:
1. By focusing on human error it can lead to victim-blaming.
2. It minimises the effect of the organisational environment in which a person acts.
3. Focusing on immediate causes hinders unveiling the basic causes, which often reside in the
organisational environment.
Due to this, we will also include the organisational environment in the safety culture concept, as proposed
by Cooper (2000).
2.2
When human error is seen not only as a cause of accidents, but as a symptom and consequence of problems
deeper inside the organisation, or what Reason (2001, 2003) refers to as latent organisational factors,
emphasis is placed on weaknesses in strategic decisions made at the top level of the organisation. These
strategic decisions may reflect an underlying assumption about the best way to adapt to external factors and
to achieve internal integration, and if they are common to most shipping companies an organisational
culture may also
METHODOLOGICAL IMPLICATIONS
Cooper's (2000) framework puts forward the importance of methodological triangulation in order to grasp
all facets of the cultural concept. The internal psychological factors are most often assessed via safety
climate questionnaires. Our approach is to start with such a survey in order to gain insight into the seafarers'
perceptions and attitudes related to safety, along with self-reported work behaviour related to risk taking,
rule violation and accident reporting. The survey also includes questions related to crewing strategy, which
opens up the possibility of assessing the relationship between the organisational situation and actual
behaviour. The survey results are used to determine which organisational factors are most likely to affect the
safety culture, and to define research areas for a further qualitative study.
3.1

Table 1. Questionnaire sample. [Table content not recoverable from extraction.]

RESULTS

Results from descriptive analysis
reporting practices, (3) competence, (4) local management, and (5) work situation. With regard to the local
management, competence and work situation factors, both EFA and CFA result in final solutions consisting
of the same items, but with minor differences in factor loadings. The CFA included three more items in the
interaction factor than the EFA, and the final factor, reporting practices, resulted only from the CFA.
Four of the constructs did not pass the reliability tests. The first, top management's safety priorities, was
excluded due to low representative reliability across subpopulations. This construct also consisted of too
few items. The remaining three constructs, procedures and guidelines, responsibility and sanctions, and
working environment, were excluded due to low validity, mostly resulting from a poor theoretical
relationship among the items of each construct.
For the further analysis the results from the CFA are used. The five factors in question are presented in
Table 2 along with the number of items and explained variance. Each factor's Cronbach's alpha value and
inter-item statistics are presented in Table 3.
The alpha values range from .808 to .878, and the internal item statistics are all within the recommended
levels. The five factors are therefore considered a reliable and valid reflection of the underlying safety
culture concept.
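The item statistics referred to above can be computed directly from the item-score matrix. A minimal numpy sketch on hypothetical Likert responses (the data are invented, and the corrected item-total variant shown here is an assumption about how the reported ranges were obtained):

```python
import numpy as np

def item_total_correlations(items):
    """Corrected item-total correlation per item: Pearson r between the
    item and the summed score of the remaining items."""
    items = np.asarray(items, dtype=float)
    out = []
    for j in range(items.shape[1]):
        rest = items.sum(axis=1) - items[:, j]
        out.append(float(np.corrcoef(items[:, j], rest)[0, 1]))
    return out

def inter_item_range(items):
    """(min, max) of the pairwise correlations between items."""
    r = np.corrcoef(np.asarray(items, dtype=float), rowvar=False)
    off = r[~np.eye(r.shape[0], dtype=bool)]
    return float(off.min()), float(off.max())

# Hypothetical 1-5 responses of six seafarers to three "interaction" items
scores = [[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 3], [1, 2, 2], [4, 4, 5]]
print(item_total_correlations(scores))
print(inter_item_range(scores))
```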
Further, Table 4 presents the correlation coefficients between the factors, or safety culture dimensions. All
correlations are significant at the 0.01 level (2-tailed).
Table 2. Factors with number of items and explained variance.

Factor                 Number of items   Explained variance
Interaction            8                 35.63%
Reporting practices    5                 9.77%
Competence             4                 7.12%
Local management       3                 5.96%
Work situation         3                 5.08%

Table 3. Reliability statistics for each factor.

Factor                 Alpha   Inter-item range   Item-total range
Interaction            .878    .360–.606          .520–.724
Reporting practices    .808    .335–.761          .491–.668
Competence             .839    .497–.682          .628–.712
Local management       .866    .692–.716          .724–.774
Work situation         .817    .512–.749          .554–.739
Table 4. Correlation coefficients between the factors.

                         F2     F3     F4     F5
F1: Interaction          .352   .639   .323   .474
F2: Reporting practices         .362   .367   .494
F3: Competence                         .322   .441
F4: Local management                          .444
DISCUSSION
for 7.12% of the total variance, is in this setting comprised of activities performed on board the vessel that
are all under the control of the captain: training, drills and familiarisation when signing on. Also, the
competence dimension correlates strongly with the interaction dimension, with a correlation coefficient of
.639. This indicates that a situation where the sailors feel confident with the nature of their tasks also results
in a better interaction climate, where conflicts are more likely to be absent. As with the interaction climate,
competence will also be affected by crew stability. A crew member who is constantly signing on new
vessels and who has to interact with new crew members and leaders spends more effort adapting to the new
situation, working out how things are done on that specific vessel, the informal structure on board and so
on. When more stability is provided, more effort may be placed on upgrading competence, and the
competence will be kept within the vessel and organisation. Both the training activities and the crewing
strategy may be controlled by the ship management, and thus these safety culture dimensions are also, to a
certain degree, controllable.
The work situation dimension consists of proactive activities such as Safe Job Analysis (SJA), safety
evaluations and the possibility the crew have to prioritise safety in their daily work. So how may the
organisation affect this? For one, it may supply a sufficient crew. Today many vessels are sailing with a
smaller crew at the same time as new standards and resolutions like the ISM Code increase the amount of
paperwork to be done. Both our own observations and interviews reveal that, inter alia, check lists and SJA
are done in a mechanical manner. This may have various causes, such as an overload of work, no
understanding of the importance of those activities, lack of feedback or improper planning by the local
management.
The local management dimension accounts for 5.96% of the explained variance, and the direct effect of
local management is relatively small. However, local management is considered to have an indirect effect
on the safety climate through the managers', or senior officers', influence on the interaction climate,
competence and training activities, reporting practices and the work situation. Again we wish to stress the
importance of stability within the work group. Most captains have a sailing period of 3 months or less,
while most of the non-Norwegian ratings have a sailing period of 9 months. Most senior officers also have a
shorter sailing period than an ordinary rating. A rating thus possibly has to deal with several different
leaders during his stay, and each captain's and department manager's leadership style may vary, and is
sometimes even destructive, as shown by the following comment from a Filipino engineer: The only
problem on board is the treatment of senior officers to the lowest rank. (. . .) There are some senior officers
CONCLUSIONS
The aim of this paper was to analyse the characteristics of the safety culture on Norwegian tankers and bulk
carriers, and to identify which organisational factors may affect the safety culture on board vessels.
Statistical analysis identified five safety-related dimensions on board the vessels: interaction climate,
reporting practices, competence, local management and work situation. Within shipping the interaction
climate is characterised by unstable working conditions. Under such conditions it is difficult to achieve and
maintain a stable crew, and proper management becomes even more important. The captain also has a vital
role, as he can directly affect all the other safety-related aspects through his own leadership style. Captains,
officers and ratings normally have different employment and shift terms. This may jeopardise the
development of a sound safety culture, as the crew has little opportunity to develop common behaviour
practices and a mutual understanding
ABSTRACT: Despite increasing technological advancement and strict regulation of occupational safety,
the number of accidents and injuries at work is still high. According to many studies, the best way to reduce
these occurrences is to promote a better safety culture. To this end many models have been proposed to
identify the complex nature and structure of safety culture: some of them take into consideration only
professional aspects, others socio-cultural aspects. The paper proposes a new model, based on Bayesian
Belief Networks, for representing, analysing and improving safety culture. Thanks to the direct involvement
of employees, the model is able to provide a quantitative representation of the real anatomy of the company's
safety culture and its impact on the expected probability of safe behaviours performed by workers. To test
and prove the validity of the proposed approach, a case study in a leading agricultural tractor manufacturer
has been carried out.
1 INTRODUCTION
Notwithstanding the key role played by safety culture and safety climate in preventing or reducing the
incident rate of an organization, there is no apparent consensus on how to define the terms "safety culture"
and "safety climate" and how to describe them. Guldenmund (2000) and Choudhry et al. (2007) can be
considered references for the state of the art of definitions, theories, research and models related to safety
culture. Guldenmund (2000) reviewed the literature on safety culture and safety climate in terms of
definitions, sources, causal models and goals from Zohar (1980) to Cabrera et al. (1997) and Williamson
et al. (1997). Choudhry et al. (2007) focused their literature review on safety culture definitions and
research undertaken from 1998 onwards. Furthermore, Guldenmund (2000) gives an overview of the
number of questions, surveyed populations and dimensions of safety culture and climate research.
Even though many models take into account only technical aspects or only socio-cultural aspects, we think
that socio-technical models are more suitable for describing the culture of real organizations. Indeed, the
paper describes a model for assessing the anatomy and the effectiveness of safety culture considering both
social and technical factors. The next section briefly introduces the socio-technical model developed by
Brown et al. (2000) that has been used as the theoretical reference for our analyses on safety culture
mapping.
2.1 Brown's model
- Safety hazards: tangible factors in the work environment (e.g. heavy items that must be carried, repetitive
motion, moving objects) that may pose risks of injuries or ailments.
- Safety climate: the more appropriate description of the construct that captures employees' perceptions of
the role of safety within an organization.
- Pressure for expediency over safety: an employee's perception that an organization encourages him or her
to work around safety procedures in order to meet production quotas, keep up with the flow of incoming
work, meet important deadlines, or continue getting paychecks.
3 METHODOLOGY

4 CASE STUDY
To test and prove the functionality of the developed approach, a case study in SAME DEUTZ-FAHR Italia
has been carried out. The group is one of the
4.1 Variables

4.2
Table 1.

Macro category           Variable                                Note
Safe work behaviours     Safe work behaviours                    The criterion variable for a predictive model
System-related factors   Safety hazards                          Employee perception of the technical system factors
                         Safety climate
                         Pressure
Human factors            Management commitment                   Employee attitudes and behaviours
                         Priority of safety over other issues
                         Safety efficacy
                         Cavalier attitude
                         Communication
                         Employee involvement
                         Employee fidelity
Table 3. Cronbach's alpha for each variable.

Variable                               Cronbach's alpha
Safety hazard                          0.75
Safety climate                         0.77
Pressure                               0.72
Management commitment                  0.78
Priority of safety over other issues   0.83
Safety efficacy                        0.81
Cavalier attitude                      0.88
Communication                          0.86
Employee involvement                   0.75
Employee fidelity                      0.71
Safe behaviour                         0.83
Table 2. Examples of questionnaire items and responses (1 = Never, 2 = Sometimes, 3 = At all).

Socio-technical factor   Question                                                       Response
Hazard                   Do you lift too heavy loads?                                   2
                         Do you consider dangerous the transit of trucks/trolleys?      1
                         Are the tools/machine tools you currently use too dangerous?   2
                         Is the work environment safe?                                  2
                         ...                                                            ...
Cavalier attitude        Do you think safety rules are useful?                          2
                         Would your job be safe also without procedures?                1
                         ...                                                            ...
the selected sample can be considered statistically significant. The response rate was about 95% (123
responses).
Since each factor is relevant to several questions in the questionnaire, its value is calculated as the average
score of all relevant questions with the same weight. For example, the value of the variable Hazard, derived
from the responses in Table 2, has been calculated as the average score of the 4 questions, that is Hazard =
(2 + 1 + 2 + 2)/4 = 1.75. Because of this, the values of the 10 social and technical factors lie in the interval
between 1 and 3, whereas the value representing Safe Work Behaviour lies in the interval between 0 and 100
(percentage of working time).
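The factor-score computation described above is a plain unweighted average of the relevant item responses; a minimal sketch reproducing the worked Hazard example:

```python
# Responses to the four Hazard questions of Table 2 (scale 1-3)
hazard_items = [2, 1, 2, 2]

# Unweighted average, as in the text: Hazard = (2 + 1 + 2 + 2) / 4
hazard = sum(hazard_items) / len(hazard_items)
print(hazard)  # 1.75
```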
The internal consistency of the questionnaire with respect to the eleven factors is measured by Cronbach's
alpha (Table 3), a scale reliability statistic defined as the extent to which a measure produces similar results
over different occasions of data collection (McMillan et al. 1989).
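Cronbach's alpha can be computed directly from an n-respondents by k-items score matrix. A minimal numpy sketch on invented 1-3 responses (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha of an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-3 responses of five workers to three items of one factor
scores = [[2, 2, 3], [1, 1, 2], [3, 3, 3], [2, 1, 2], [3, 2, 3]]
print(round(cronbach_alpha(scores), 2))  # -> 0.91
```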
4.3 Network development

The collected data were processed through the software Bayesware Discoverer, which implements the K2
algorithm (Cooper et al. 1992), to find the Bayesian structure underlying the relationships among the
socio-technical factors and to estimate the conditional probabilities. The arcs in the BBN were directed
considering the experts' opinions on cause-effect relationships (it is apparent, for example, that the age of
the worker affects the safety climate and not the reverse) and Brown's model (Brown et al. 2000). The data
pre-processing (using algorithms such as K2, NPC and PC) allowed the reduction of the complexity of the
conditional probability tables (CPTs): during pre-processing it is convenient to discretise continuous
intervals in order to reduce the number of possible states. In the applied case the learning algorithm and
data processing were used
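The kind of probabilistic reasoning a learned BBN supports can be illustrated with a toy network. This is a minimal sketch, not the authors' Bayesware Discoverer model: the three nodes, their states and all CPT values below are invented assumptions.

```python
# Tiny discrete Bayesian network linking a discretised "management
# commitment" node (MC) to "safety climate" (SC) and "safe behaviour" (SB),
# with inference by exhaustive enumeration over the joint distribution.
from itertools import product

p_mc = {'low': 0.4, 'high': 0.6}                 # P(MC), illustrative
p_sc = {'low': {'low': 0.7, 'high': 0.3},        # P(SC | MC)
        'high': {'low': 0.2, 'high': 0.8}}
p_sb = {'low': {'unsafe': 0.5, 'safe': 0.5},     # P(SB | SC)
        'high': {'unsafe': 0.1, 'safe': 0.9}}

def joint(mc, sc, sb):
    """Joint probability factorised along the chain MC -> SC -> SB."""
    return p_mc[mc] * p_sc[mc][sc] * p_sb[sc][sb]

def p_safe_given_mc(mc):
    """P(SB = safe | MC = mc), summing the joint over the hidden SC node."""
    num = sum(joint(mc, sc, 'safe') for sc in ('low', 'high'))
    den = sum(joint(mc, sc, sb)
              for sc, sb in product(('low', 'high'), ('unsafe', 'safe')))
    return num / den

print(p_safe_given_mc('low'))   # expected safe-behaviour rate, low commitment
print(p_safe_given_mc('high'))  # expected safe-behaviour rate, high commitment
```

Queries of this kind, conditioning the probability of safe behaviour on the state of one factor, are the "what-if" evaluations used when comparing improvement strategies.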
Table 4. Summary of the effectiveness of strategies to improve safe work behaviours in the manufacturing
area. All values are percentages; negative values denote decreases. [The values for the simple and joint
strategies against safety hazards, safety climate, management commitment and safe/unsafe work behaviour
are not reliably recoverable from extraction.]
Table 5. Summary of the effectiveness of strategies to improve safe work behaviours in the assembling area.
All values are percentages; negative values denote decreases. [The values for the simple strategy (best
scenario), the simple strategy (10% factor improvement) and the joint strategy against pressure, priority of
safety and safe/unsafe workplace behaviour are not reliably recoverable from extraction.]
suggest the best option (Fig. 7). To achieve an improvement of 3.1% in the rate of safe behaviours it is more
efficient to mitigate safety hazards (a reduction of 8.3%) than to promote management commitment (an
improvement of 13.7%). Therefore, the analysis suggests focusing on the elimination or minimisation of
hazards.
The same approach has been applied in the assembling area, where a reduction of 4.2% in unsafe behaviours
is explored through a higher priority of safety (Fig. 8). In this case the lowest level of effort required to
achieve the same target, i.e. the most efficient action, is obtained by reducing the pressure for compliance
with production scheduling over safety.
The proposed analysis to improve safe behaviours is regarded as suitable for suggesting how and where to
focus improvement actions only in the short-to-medium term. Indeed, the effectiveness of these strategies in
the medium-to-long term is not demonstrated. We think that in the medium-to-long term it is more effective
to concentrate on improving the anatomy of the safety culture, by removing incoherent relationships and
promoting the inclusion of positive shaping factors.
CONCLUSIONS
ABSTRACT: Experience with accidents in different branches of industry, including railway companies and
the nuclear industry, has shown the importance of safe operation management. Recent events in German
nuclear power plants, such as a transformer fire, underlined the necessity of an appropriate and regularly
reviewed safety management system. Such a system has two major aims: first, to improve safety
performance through planning, control and supervision of safety-related activities in all operational phases,
and second, to foster and support a strong safety culture through the development and reinforcement of
good safety attitudes and behaviour in individuals and teams. Methods of evaluating safety culture can be
divided into two main groups: an approach mainly based on interviews with plant management and staff to
obtain direct information about the plant safety culture, and an approach that uses information derived from
plant operation, incidents and accidents. The current status of the still ongoing discussion in Germany is
presented.
INTRODUCTION