
The "culture” o fan organisation can be defined as "the way we do things

around here”. As such culture provides a context for action which binds
together the different components of an organisational system in the pursuit
of corporate goals. Successful organisations tend to heve strong cultures
which dominate and permeate the structure and associated systems. Within
these organisations nothing is too trivial or too much trouble. Every effort is
made by every member to ensure that all activities are done the "right”
way. Thus the prevailing organisational culture serves as a powerful lever in
guiding the behaviour of its members in their everday work.

The Impact of Safety Culture on Quality


An evaluation of the impact of safety culture on quality in 626 US
organisations revealed that better work methods and reduced absenteeism
had contributed to improved organisational performance, while also
impacting on product quality. Similarly, construction industry studies have
shown that projects driven by safety are more likely to be on schedule and
within budget. The safety culture of Shell, for example, was shown to have
had a significant effect on the progress and completion of a new natural gas
liquid plant at Mossmorran, Scotland. Major investments in safety in the
British steel industry not only resulted in significant reductions in accidents
with corresponding increases in productivity, but also led to increasingly
positive attitudes about quality and safety.

The Impact of Safety Culture on Reliability


The impact of safety culture on the reliability of technological systems is
thought to be indirect, via organisational structures and processes: partly
because the reliability of complex technical systems (e.g. manufacturing
plant) is dependent on the quality of their structural components and sub-
systems; and partly because of the interaction between them. Nonetheless,
reliability has been reported to improve by a factor of three, and sometimes
by as much as a factor of ten, when quality improvements are initiated. It is
likely, however, that some of these improvements are related to the use of
better monitoring and feedback systems, both of which are vital safety
culture features, and to the resulting streamlining of production processes.
The Impact of Safety Culture on Competitiveness
A good safety culture can also contribute to competitiveness in many ways.
For example, it may make the difference between winning or losing a
contract (e.g. many operating companies in the offshore oil industry only
select and award work to contractors with a positive safety culture); it may
affect people's way of thinking and lead to the development of safety
features for some products which are then used as marketing devices (e.g.
air bags in motor vehicles to protect occupants during a collision); and it
positively impacts on employees' commitment and loyalty to the
organisation, resulting in greater job satisfaction, productivity and reduced
absenteeism.
The Impact of Safety Culture on Profitability
Although a focus on safety has often been seen as non-productive
expenditure demanded by law, it can also contribute to profit by minimising
loss and adding to the capital value of an organisation. For example,
construction industry research has shown that an investment of 2.5% of
direct labour costs in an effective safety programme should, at a conservative
estimate, produce a gross saving of 6.5% (4.0% net) of direct labour cost.
Similarly, an 82% decrease in lost-time accidents which resulted from a
behavioural safety programme saved a manufacturing company an estimated
£180,000 to £360,000 in compensation costs in just one year. These figures
were considered conservative, as the estimated savings did not reflect those
associated with a 55% decrease in minor injuries. In the normal course of
events, generating this level of profit might require an extra 30% to 40% of
production capacity.
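The construction-industry figures quoted above can be sanity-checked with simple arithmetic; the annual direct labour cost used below is an assumed illustrative figure, not taken from the original research:

```python
# Illustrative check of the quoted safety ROI figures:
# invest 2.5% of direct labour cost, recover 6.5% gross (4.0% net).
direct_labour_cost = 1_000_000  # assumed annual direct labour cost, GBP

investment = direct_labour_cost * 25 // 1000    # 2.5% invested in safety
gross_saving = direct_labour_cost * 65 // 1000  # 6.5% gross saving
net_saving = gross_saving - investment          # 4.0% net saving

print(investment, gross_saving, net_saving)  # 25000 65000 40000
```

On these assumptions the programme returns £40,000 net for a £25,000 outlay, which is why the source describes the estimate as conservative.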
As the latter illustrates, the costs of accidents can be considerable. Previous
estimates by the Confederation of British Industry (CBI) in 1990 suggested
that the minimum non-recoverable cost of each accident was £1,500,
whether investigated or not. Similarly, in 1993, based on research in six
industries, the Health and Safety Executive's (HSE) Accident Prevention
Advisory Unit (APAU) estimated that only £1 in £11 lost as a result of
workplace accidents is covered by insurance. Indeed, the typical costs
associated with accidents include:
Lost production caused by:
Time away from job by injured person and co-worker(s) in attendance
Time spent by first-aider attending injured person
Possible downtime of production process
Possible damage to production process
Time and costs due to repair of plant and equipment
Increased insurance premiums
Legal costs
Medical expenses
Compensation costs to injured employees
Absenteeism
Lower morale of employees leading to poor performance and productivity
Unsatisfactory employee relations
Low levels of motivation.
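The APAU's £1-in-£11 finding cited above implies that insured costs are only the visible tip of the total loss. A minimal sketch of that arithmetic (the claim amount is an invented example):

```python
def total_accident_loss(insured_cost: int, ratio: int = 11) -> tuple[int, int]:
    """Estimate total and uninsured loss from the insured portion,
    using the HSE/APAU finding that only GBP 1 in GBP 11 of
    accident loss is covered by insurance."""
    total = insured_cost * ratio
    return total, total - insured_cost

# A hypothetical GBP 1,000 insurance claim implies roughly
# GBP 11,000 of total loss, GBP 10,000 of it uninsured.
total, uninsured = total_accident_loss(1000)
print(total, uninsured)  # 11000 10000
```

The uninsured balance is made up of exactly the kinds of items in the list above: lost production, legal costs, absenteeism and so on.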

As a whole, the available evidence indicates that an effective safety culture
is an essential element of any business strategy, as it has so many positive
effects on other areas of business performance. It also illustrates the point
that safety culture does not operate in a vacuum: it affects, and in turn is
affected by, other operational processes or organisational systems.

THE EVOLUTION OF THE CONCEPT OF SAFETY CULTURE


Traditionally, attempts to identify the most effective methods for preventing
accidents have typically addressed two fundamental issues:
Whether or not employees should be provided with the maximum protection
possible
Whether or not employees should be trained to recognise potentially
hazardous situations and take the most appropriate actions.

Implicitly recognising that the potential for an accident is always present,
the first approach is based on the fundamental belief that protecting an
individual from the potential for harm, either by statutory means or via
physical barriers, is the best way to proceed. The second approach is
predicated on the fundamental belief that, if the individual possesses the
relevant knowledge and skills, accidents will be avoided. Traditionally,
therefore, attempts to improve safety have relied on safety campaigns or
safety training. However, as a result of inquiries investigating large-scale
disasters such as Chernobyl, the King's Cross fire, Piper Alpha, Clapham
Junction, etc., more recent moves to improve workplace safety have focused
on the concept of an identifiable safety culture. Whilst incorporating all the
traditional routes to improved safety, the concept of safety culture goes much
further by focusing on the presence of good quality safety management
control systems.

Legislative Attempts to Improve Safety


Legislative approaches to improving safety have their roots in the industrial
revolution of the 18th and 19th centuries. Due to radical changes in
technology and the development of new industries, many employees were
exposed to all manner of hazards in factories and mines. During this period,
the rising number of deaths and injuries led to immense public pressure for
parliamentary regulation. Initial parliamentary reluctance, and much
opposition from factory and mine owners, led to large chunks of this early
legislation being repealed and then reintroduced as deficiencies became
apparent. Importantly, however, this legislation introduced the notion of
inspectorates for factories (1833), for mines (1842) and for the railways
(1840), albeit that the inspectorates' authority was fairly limited. Over the
next 100 years a steady stream of legislation followed that further
empowered these different inspectorates while also establishing many
important principles, such as the mandatory reporting of fatal accidents, the
provision of guards for moving machinery and the requirement to provide
first-aid facilities. In 1972 the Robens Committee investigated the many
shortcomings in safety management of the time, and made various
recommendations that subsequently formed the basis of the Health and
Safety at Work Act 1974. This Act placed the responsibility for all the
previous Health and Safety Inspectorates under the auspices of the Health
and Safety Commission (HSC) to bring about changes in safety management
practices. The central idea was that the HSC would promote proactive self-
regulatory safety management practices by influencing attitudes and
creating an optimal framework for the organisation of health and safety.
Unfortunately, this proved more difficult in practice than envisaged: partly
because of the pervading influence of traditional accident causation models;
partly because of "get out" clauses provided by such qualifiers as "as far as
is reasonably practicable"; and partly because many employers had real
difficulty in understanding what they were required to do in practice.
Moreover, legislation can only be effective if it is adequately resourced and
policed. This has not always proven possible as, traditionally, the number of
inspectors available has been relatively small compared to the number of
premises covered by the legislation. In the UK construction industry, for
example, at the beginning of the 1990s, there were only 90 or so inspectors
to police approximately 100,000 sites, not all of which had been notified to
the appropriate authorities. In practice, this meant that many companies
could openly flout the 1974 Act with little chance of prosecution. Indeed,
many of them implemented safety improvement initiatives only when forced
to do so by inspectors.
As a result of recent European directives, the legislative focus has now
firmly shifted to proactive management of safety rather than an inspection
of sites/premises approach (i.e. the Management of Health and Safety at
Work Regulations 1992 (MHSWR)). Accompanied by an Approved Code of
Practice (ACOP) issued by the HSC, these regulations came into effect in
January 1993. One of the most important features of the new regulations is
that the majority of the requirements are of an absolute nature, designated
by the term 'shall', rather than 'so far as is reasonably practicable'.
Similarly, the emphasis has switched to the process of safety management
rather than the outcomes: employers are now required to take steps to
identify and manage hazards by undertaking formal assessments of risk.
Thereafter they must plan, organise, implement, control, monitor and review
their preventative and protective measures. These measures must be
documented and fully integrated with other types of management systems
(e.g. finance, personnel, production, etc.). In some high-risk industries (e.g.
offshore energy extraction, mining and rail transport) companies are also
required to submit a 'safety case' detailing precisely how they intend to put
the regulations into effect. Although some may view the new regulations as
draconian, much of the underlying rationale is derived from management
theory and multi-disciplinary scientific research examining accident
causation factors.
ACCIDENT CAUSATION MODELS
During the 19th and early 20th centuries many safety practitioners and
factory inspectorates took the view that preventative physical measures such
as machine guarding, housekeeping and hazard inspections were the best
way to prevent accidents. This view was predicated on the belief that
accidents could be prevented by controlling physical working conditions.
However, accidents continued to increase at an alarming rate in British
factories during and after the First World War. This led to the commissioning
of government committees to examine whether accidents were caused by
physical working conditions (situational factors) or individual characteristics
(person factors). The differentiation was partly based on the heredity
versus environment debate brought about by Darwin's radical theory of
evolution, and partly because in-depth knowledge about the causes of
accidents could lead to the appropriate countermeasures being applied.

Accident Proneness Models


In 1919, at the behest of these government committees, Greenwood and
Woods from the Industrial Fatigue Research Board statistically examined
accident rates in a munitions factory. Based on the notion that all munitions
workers were exposed to the same levels of risk, they examined three
propositions to try to identify the most worthwhile preventative measures.
These were that:
Accidents were a result of pure chance, and could happen to anyone at any
time
Having already experienced an accident, a person's propensity for further
incidents would be reduced (burnt fingers hypothesis) or increased
(contagious hypothesis)
Some people were more likely to suffer an accident than others.
If the first proposition were correct, and no differences in accident rates
were found for particular types of people, prevention could be focused solely
on environmental demands and conditions. If the second proposition were
correct, remedial actions could be concentrated upon only those individuals
who had previously suffered an accident. If the third proposition were
correct, people with low accident liability could be selected for jobs, while
those who experienced multiple accidents could be asked to leave.
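The logic of testing the first (pure chance) proposition can be sketched as a Poisson goodness-of-fit check: if accidents strike at random, the spread of accident counts across workers should match a Poisson distribution, and a poor fit points towards unequal liability. The data below are invented for illustration and are not Greenwood and Woods' original figures or method:

```python
import math

# Invented accident counts: accidents per worker -> number of workers
observed = {0: 70, 1: 20, 2: 7, 3: 3}
n_workers = sum(observed.values())
mean_rate = sum(k * v for k, v in observed.items()) / n_workers

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of exactly k accidents under the pure-chance model."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# Expected worker counts if accidents were purely random (proposition 1)
expected = {k: n_workers * poisson_pmf(k, mean_rate) for k in observed}

# Pearson chi-square statistic: the larger it is, the worse the
# pure-chance model fits, favouring unequal liability (proposition 3)
chi_sq = sum((observed[k] - expected[k]) ** 2 / expected[k] for k in observed)
print(round(mean_rate, 2), round(chi_sq, 1))
```

Here the chance model under-predicts the number of multi-accident workers, so the chi-square statistic is inflated, mirroring the kind of evidence that was read as support for the third proposition.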
An analysis of accident records divided into successive three-month periods
appeared to suggest that some people were consistently more involved in
accidents than others, thereby supporting the third proposition. Despite the
obvious fact that not all people are exposed to the same levels of risk in
their work, these results and those of other studies led to the 'accident
proneness' model which dominated safety thinking and research for almost
50 years. In practice, the pervading influence of this approach meant that
most accidents were blamed solely on employees rather than the work
processes, poor management practices or a combination of all three, a
response that can still be found in some organisations. Typically,
investigations to discover the underlying causal factors were felt
unnecessary and/or too costly, with the result that little attention was paid
to how accidents actually happened. Thus many companies felt they had
little to do in the way of accident prevention other than select the right
employees and weed out or re-educate those involved in more than one
accident. Importantly, the findings of these types of study placed greater
emphasis on the fallibility of people than on the interaction between working
conditions and people, and this led to many companies inadvertently
neglecting their real safety responsibilities.
Heinrich’s Domino Theory
Despite recognition by early researchers of the role that managerial and
organisational factors played in the accident causation chain, most
practitioners focused almost exclusively on the prominence of employees'
unsafe acts. To some extent this prominence, expressed in accident
triangles to this day, reinforced the prevailing view about 'accident
proneness', a view given further expression in Heinrich's work published in
1931. Heinrich postulated that accidents were caused by either an unsafe
act, an unsafe condition, or both. Termed the 'Domino' theory, this work
provided the first sequential theory of the accident causation process. Not
only was safety behaviour demonstrated to play a greater role than
previously thought (see Figure 1.1: Heinrich's Domino Model of Accident
Causation), but it also brought the interaction between behaviour and
conditions (situation) into sharper focus for the first time.

[Figure: domino sequence — management structure; organisational errors; operational errors; tactical errors; accident/incident; injury/property damage]

Adams' Domino Theory


Building on Weaver's adaptation of Heinrich's basic model from an industrial
engineering systems perspective, in 1976 Adams changed the emphasis of
the first three dominos to reflect organisational rather than person features
(see Figure 1.3: Adams' Domino Model of Accident Causation). By doing so,
he was one of the first theorists to move away from the discredited accident
proneness approach. Importantly, Adams also implicitly recognised the
notion of a safety culture by stating that the personality of an organisation
was reflected in its stable operational patterns. Operational errors were
caused by the management structure; the organisation's objectives; the
organisation of the workflow system; and how operations were planned and
executed. In turn these operational errors caused 'tactical errors' (unsafe
acts or conditions). The essential difference here is that Adams explicitly
recognised that tactical errors were the result of higher management's
strategic errors. Thus, Adams was one of the first safety theorists to
specifically highlight the multiple interactions between organisational
structures, systems and sub-systems, and unsafe conditions and/or
employees' safety behaviour. Indeed, Adams' work is reflected in Johnson's
Management Oversight and Risk Tree (MORT), published in 1975, which is an
analytical tool set in a logical fault tree format that provides a systematic
basis for detailed accident investigations. Although the scale and complexity
of MORT has limited its practical application, it has proven to be of immense
value for building models of accident causation. This is partly because it
recognises the interactions between physical (job), procedural
(organisational) and personal elements, and partly because it has helped to
discover that a number of parallel accident sequences develop over a period
of time, prior to the various causation elements coinciding and interacting to
produce an incident. This latter point, and others, was picked up and
developed further by James Reason in 1993.

Bird and Loftus’ Domino Theory


In parallel with these developments by Adams from the perspective of
management theory and total loss control, Bird and Loftus adapted
Heinrich's Domino theory to reflect the influence of management in the
accident causation process (see Figure 1.4: Bird and Loftus' Domino Model
of Accident Causation). This model takes the view that poor management
control creates either poor personal factors (e.g. lack of appropriate
training) or poor job factors (e.g. unguarded machinery). In combination,
these two factors lead to either unsafe acts or unsafe conditions. In turn
these cause an incident, which leads to losses related to people, property or
operational processes. This model in particular has exerted a great influence
on safety practices in some industries (e.g. chemicals and mining) by virtue
of its subsequent development into an auditing tool (i.e. the International
Safety Rating System (ISRS)) and its emphasis on cost savings and financial
return.

[Figure 1.4: Bird and Loftus' Domino Model of Accident Causation — lack of management control; basic causes (poor personal or job factors); immediate causes (unsafe acts or conditions); accident/incident; loss (people, property or process)]
Although the above models have proved useful in identifying the sequence
of events in the accident causation chain, they have largely failed to specify
how and under what conditions each of the sequential elements might
interact to produce accidents. Many practitioners have continued to blame
the individual for the unsafe act, or merely identify and rectify the
immediate unsafe conditions, rather than examining how and why the unsafe
act occurred, or how the unsafe condition was created. A more recent
causation model by Professor James Reason has largely overcome these
shortcomings. Initially based on an analysis of the Chernobyl disaster in
1987, Reason likened the accident causation process to 'resident pathogens'
in the human body. Similar in concept to physiological immune systems,
Reason argued that all organisational systems carry the seeds of their own
demise in the form of these pathogens. In 1988 Reason termed these
resident pathogens 'latent' failures. In much the same way as Johnson had
identified that accident sequences develop over a period of time, Reason
suggested that the 'latent' failures lie dormant, accumulate and subsequently
combine with other latent failures which are then triggered by 'active'
failures (e.g. unsafe acts) to overcome the system's defences and cause
accidents. Reason proposed that 'active' failures were caused by poor
collective attitudes or by unintentionally choosing the 'wrong' behavioural
response in a given situation, both of which may result in a breach of the
system.
In later works, Reason recognised the limitations of his original resident
pathogen model and, in conjunction with Wreathall, identified how and
where latent and active failures might be introduced into an organisational
system. This modified model suggests that pathogens are introduced into
the system by two routes:
 Latent failures caused by organisational or managerial factors (e.g.
top-level decision-making).
 Active failures caused by individuals (e.g. psychological or
behavioural precursors).

Illustrated in Figure 1.5: Reason's Pathogen Model of Accident Causation,
Reason's model is based on the notion that all types of productive systems
incorporate five basic elements:
 High-level decision-making.
 Line management co-ordination of operational activities.
 Preconditions in the form of technology, manpower and resources.
 Productive activities that require the synchronisation of people,
materials and technology.
 Defences of some form or another to minimise the effects of
potentially hazardous circumstances.
Reason suggested that each particular element of the production model is
associated with its own particular form of latent or active failure.
Importantly, the principal pathogens emanate from the higher echelons and
are spread throughout the system by the various strands of line
management as they implement strategic decisions. These notions come
across clearly through his description of the two ways in which system
failures, or systemic pathogens, are introduced: types and tokens. 'Types'
refer to general organisational and managerial failings, whereas 'tokens' are
more specific failings relating to individuals. However, two different forms of
types exist.

Source types, which are associated with senior management's strategic
decisions.
Function types, which are associated with line management's
implementation of senior management's fallible strategic decisions.
Analogous to Adams' tactical errors, tokens also divide into condition tokens,
which are the situational (man-machine interface, workload, etc.) or
psychological (attention, attitudes, motivation, etc.) precursors of unsafe
acts; and act tokens, which are further classified on the basis of whether
they are caused by:
Slips and lapses (skill-based errors)
Mistakes (rule-based and/or knowledge-based errors)
Violations (deliberate infringements of safe working practices).

Compared to previous causation models, Reason's 1993 pathogen model is
fairly comprehensive, and makes an important contribution to safety
management in so far as it identifies and distinguishes between the types of
error that might be made, and where they might be introduced into an
organisational system. It also stresses the importance of identifying and
rooting out possible latent failures before they can be triggered by active
failures. Like Adams before him, therefore, Reason shifts the main focus of
accident prevention away from the operator's unsafe acts and more onto
the organisation's overall management system, particularly in relation to the
implementation of the organisation's strategic decisions.
Figure 1.5: Reason's Pathogen Model of Accident Causation. Adapted from
Reason, J., 'Managing the Management Risk: New Approaches to
Organisational Safety', in B. Wilpert and T. Qvale (eds), Reliability & Safety
in Hazardous Work Systems. LEA, Hove (UK). Reprinted by permission of
Psychology Press Ltd, Hove, UK.
ORGANISATIONAL CHARACTERISTICS OF A GOOD SAFETY CULTURE
In parallel with the development of the accident causation models outlined
above, researchers attempted to identify certain organisational
characteristics thought to distinguish low accident companies from high
accident companies. Conducted in the USA during the early 1960s to the
end of the 1970s across a wide variety of industries, this research
discovered the following consistent features:

Strong senior management commitment, leadership and involvement in
safety
Closer contact and better communications between all organisational levels
Greater hazard control and better housekeeping
A mature, stable workforce
Good personnel selection, job placement and promotion procedures
Good induction and follow-up safety training
Ongoing safety schemes reinforcing the importance of safety, including
'near miss' reporting.
More recent research conducted in the UK at the end of the 1980s by the
CBI revealed similar features. However, by incorporating lessons learnt from
implementing TQM initiatives they also highlighted other essential features
that included:

Accepting that the promotion of a safety culture is a long-term strategy
which requires sustained effort and interest
Adopting a formal health and safety policy, supported by adequate codes of
practice and safety standards
Stressing that health and safety is equal to other business objectives
Thoroughly investigating all accidents and near misses
Regularly auditing safety systems to provide information feedback with a
view to developing ideas for continuous improvement.
Importantly, all the above features were also identified in a report produced
in the early 1990s by the Advisory Committee on the Safety of Nuclear
Installations (ACSNI) Study Group on Human Factors, indicating broad
agreement about the specific factors that positively impact on safety
performance. Although most of the features identified allude to the presence
of organisational systems and modes of organisational behaviour, the ACSNI
group also highlighted the importance of various psychological attributes
that exert their influence on safety per se. These include perceptions about
and attitudes towards accident causation, risk and job-induced stress
caused by conflicting role demands and poor working conditions. The
prominence of these psychological factors was also highlighted in a study at
British Nuclear Fuels Ltd (BNFL), which showed that only 20% of the root
causes of accidents were attributable to inadequacies of equipment and
plant, with the remaining 80% being caused by people-based factors such
as poor managerial control, worker competencies and breaches of rules.
Based on this accumulated body of evidence the ACSNI Study Group
suggested that for practical purposes, safety culture could be defined as:

'…the product of individual and group values, attitudes, competencies, and
patterns of behaviour that determine the commitment to, and the style and
proficiency of, an organisation's health and safety programmes.
Organisations with a positive safety culture are characterised by
communications founded on mutual trust, by shared perceptions of the
importance of safety, and by confidence in the efficacy of preventative
measures.'
TOWARDS A MODEL OF SAFETY CULTURE
To a greater or lesser degree, each of the accident causation models
described above recognises the presence of an interactive or reciprocal
relationship between psychological, situational and behavioural factors.
Heinrich, for example, identified the interactive relationship between
behaviour, situations and person factors at operator levels, while his 80:20
rule implicitly recognised that the strength of someone's behaviour, or the
situation (e.g. workflow process), may exert different effects at different
moments in time. The interactive relationship between management
systems and managerial behaviour was also recognised by Weaver when he
stated that accidents were symptoms of operational error. However, Adams'
far-reaching insights recognised the mutually interactive nature of the
relationship between all three factors, and the time-related causal
relationship between high-level strategic decisions and tactical operational
errors. Similarly, Reason's pathogen model recognises that person,
situational and behavioural factors are the immediate precursors of unsafe
acts; that the strength of each may differ; and that it may take time for one
element to exert its effects on the other two elements (e.g. the temporal
relationships between latent (managerial) and active (operational) failures).
Importantly, the work carried out to identify the organisational
characteristics of a positive safety culture also emphasised the interaction
between organisational systems, modes of organisational behaviour and
people's psychological attributes. Clearly, therefore, this interactive
relationship between psychological, situational and behavioural factors is
applicable to the accident causation chain at all levels of an organisation.
Consequently, it can be cogently argued that culture is actually:
'The product of multiple goal-directed interactions between people
(psychological), jobs (behavioural) and the organisation (situational)'.
Viewed from this perspective, an organisation's prevailing safety culture is
reflected in the dynamic inter-relationships between members' perceptions
about, and attitudes towards, organisational safety goals; members' day-to-
day goal-directed safety behaviour; and the presence and quality of
organisational safety systems to support goal-directed behaviour.
Consistent with the idea that culture can best be described as 'the way we
do things around here', the potency of this interactive model for analysing
'safety culture' resides in the explicit recognition that the relative strength of
each source may be different in any given situation: e.g. the design of a
production system may exert stronger effects on someone's work-related
safety behaviour than that person's safety attitudes. Similarly, the
interactive influence of each source may not occur simultaneously: e.g. it
may take time for a change in safety behaviour to exert an influence on and
activate the relationship with the workflow system and/or work-related
safety attitudes.
Thinking of safety culture in these terms, therefore, provides an organising
framework to assist in ongoing practical assessments and analyses. As such,
given the appropriate measuring instruments, the relative influence of each
component can be determined in any given situation, so allowing either
highly focused remedial actions or forward planning to take place.
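As a hypothetical sketch of how the relative influence of each component might be assessed in practice, the three components could each be normalised to a common scale and compared; all instruments, scales and figures below are invented for illustration and are not the framework's prescribed method:

```python
from statistics import mean

# Invented measurements for one business unit
climate_items = [4, 3, 5, 4, 4]          # questionnaire items, 1-5 scale
safe_observed, total_observed = 88, 100  # behavioural sampling results
audit_passed, audit_checked = 45, 50     # safety-system audit items

# Normalise each safety-culture component to a 0-1 score
scores = {
    "safety climate (perceptions/attitudes)": mean(climate_items) / 5,
    "safety behaviour": safe_observed / total_observed,
    "safety management system": audit_passed / audit_checked,
}

# The weakest component suggests where remedial action should focus
weakest = min(scores, key=scores.get)
print(weakest)  # safety climate (perceptions/attitudes)
```

Putting the three measures on one scale is what allows the "highly focused remedial actions or forward planning" described above: here the climate score lags the other two, so attitude-directed interventions would be prioritised.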
Indeed, the merists of this interactive framework for analysing safety culture
become apparent if we separate the ACSNI Study Group’s working definition
of safety culture into its component parts. For example, “individual and
group values and attitudes” refers to members perceptions about and
attitudes towards safety goals; “patterns of behaviour” refers to members”
day-to-day goal- directed safety behaviour. Moreover, the second section
implicitly recognises the “reciprocal” relationship between each of these
elements, acknowledged in paragraph 80 of the ACSNI report which
statesˋ…the whole is more than the sum of the parts. The many separate
practices interact to give a much larger effect´. It becomes clear that
working definition of safety culture alludes to the reciprocal relationship
between an organisation’s safety management system(s) (SMS), the
prevailing safety climate (perceptions and attitudes), and daily goal-directed
safety behaviour (see Figure 1.6: Cooper’s Reciprocal Safety Culture
Model). Since each of these safety culture components can be directly
measured in their own right or in combination, it is possible to quantify
safety culture in a meaningful way at many different organisational levels,
which hitherto has been somewhat difficult. Accordingly, the organising
framework also has the potential to provide organisations with a common
frame of reference for the development of benchmarking partnerships with
other business units or organisations. This latter point may be particularly important to industries that make substantial use of specialist sub-contractors, as all parties would then be speaking the same language. Additionally, it provides a means by which the prevailing safety culture of different departments can be usefully compared.
The practical utility of the interactive framework is further enhanced by the
fact that the model can be applied to each individual component (see Figure
1.7: Cooper’s Reciprocal Safety Culture Model applied to Each Element). For example, because we can measure people’s perceptions and attitudes about the prevailing safety climate via psychometric questionnaires, it is feasible that we could discover that a work group’s level of perceived risk (i.e. person factors) is determined by their perceptions of the required work pace (i.e. job factors) and management’s commitment to safety (i.e. organisational factors). Similarly, we might discover that the implementation of a safety initiative is determined by commitment (i.e. person factors), competing goals (i.e. job factors) and the quality of communications (i.e. organisational factors). These relationships also apply to safety management systems, where person factors (e.g. safety training) will interact with job factors (e.g. man-machine interfacing) and organisational factors (e.g. allocation of resources).
In recent years, many of these relationships have been empirically
examined in a wide variety of industries by the author and found to hold
true, providing support to the notion that safety culture can be meaningfully
analysed by using the model to focus on its constituent components: i.e.
safety management systems (situational), safety climate (perceptual) and
goal-directed safety behaviour (behavioural).
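Since each of the three components can be measured directly, the kind of profile described above can be sketched in code. This is a minimal illustration only: the scales, item formats, component names and scoring scheme are assumptions chosen for demonstration, not a validated instrument.

```python
from statistics import mean

def culture_profile(climate_items, behaviour_obs, sms_audit_items):
    """Return an illustrative 0-100 score for each safety culture component.

    climate_items: questionnaire responses, each on a 1-5 scale (perceptual)
    behaviour_obs: observed behaviours, True if safe (behavioural)
    sms_audit_items: audit checklist results, True if passed (situational)
    """
    return {
        # rescale the 1-5 questionnaire mean onto 0-100
        "safety climate": (mean(climate_items) - 1) / 4 * 100,
        # percentage of observed behaviours performed safely
        "safety behaviour": mean(1 if b else 0 for b in behaviour_obs) * 100,
        # percentage of audit items passed
        "safety systems": mean(1 if a else 0 for a in sms_audit_items) * 100,
    }

profile = culture_profile([4, 5, 3, 4], [True, True, False, True], [True, False])
print(profile)
```

A profile of this kind makes the relative weakness of one component (here, the safety management system score) immediately visible, which is the practical point of analysing the components separately.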
Defining human error
Everyone makes mistakes. Human errors are a part of our everyday
experience. A human error could therefore be defined quite simply as
‘someone making a mistake’. The reality is much more complex and
before this book can proceed much further, it is necessary to produce
some clear definition of human error and the way it is manifested.
Many have tried and some have succeeded in defining human error.
Some examples from various sources follow and are listed under the
name of the author. In studying these definitions, it should be noted
that each author has a distinct purpose in mind when formulating his
definition and that the definition will be useful within that context. The
objective here is to produce a final definition which will be suitable
within the context of this book.
Swain and Guttman 1983
An error is an out of tolerance action, where the limits of tolerable
performance are defined by the system.
This is an interesting definition because it allows the system response
to determine whether an error has occurred. Thus a human error is a
deviation from normal or expected performance, the deviation being
defined by the consequence. The consequence is some measurable
characteristic of the system whose tolerable limits have been
exceeded, rather than the human action that contains the error.
However, after the error has been made, the human action within
which the error occurred can be examined to determine the cause of
the deviation. Also useful here is the concept of an out of tolerance
action, indicating that there are limitations to human performance
which can be accepted without a human error having necessarily
occurred.
Reason 1990
A generic term to encompass all those occasions in which a planned
sequence of mental or physical activities fails to achieve its intended
outcome, and when these failures cannot be attributed to some
chance agency.
Again, this definition focuses on the outcome or consequence of the
action rather than on the action itself in order to determine if an error
has occurred. In this definition it is recognized that the desired end
result may follow a pre-planned sequence of human actions, which
has to take place successfully before the result is achieved. Any one
or more of the actions in the sequence may contain an error that
causes the intended outcome not to be achieved. This closely reflects
the reality of many industrial situations. The definition is also
interesting in that it excludes random or chance events from the
category of human error. This is discussed in more detail below.
Hollnagel 1993
An erroneous action can be defined as an action which fails to
produce the expected result and/or which produces an unwanted
consequence.
Hollnagel prefers to use the term ‘erroneous action’ rather than
‘human error’. The problem, according to Hollnagel, is that human
error can be understood in different ways. Firstly, it can refer to the
cause of an event, so that after an accident occurs, it is often
reported that it was due to human error. Human error can also be a
failure of the cognitive (or thinking) processes that went into planning
an action or sequence of actions, a failure in execution of the action
or a failure to carry out the action at all. Erroneous action defines
what happened without saying anything about why it happened.
Meister 1966
A failure of a common sequence of psychological functions that are
basic to human behaviour: stimulus, organism and response. When
any element of the chain is broken, a perfect execution cannot be
achieved due to failure of perceived stimulus, inability to discriminate
among various stimuli, misinterpretation of meaning of stimuli, not
knowing what response to make to a particular stimulus, physical
inability to make the required response and responding out of
sequence.
This quite detailed definition perceives human actions as comprising
three elements:
Stimulus – the perception by the senses of external cues which carry
the information that an action should be carried out.
Organism – the way these stimuli are interpreted, the formulation of
an appropriate action and the planning of how that action should be
carried out.
Response – the execution of the planned actions.
As with Reason’s definition, this emphasizes the reality that no single
human action stands alone, but is part of a sequential process and
that human error must be understood in the context of this. This
principle will become abundantly clear as human error is examined in
the light of accident case studies. When the events that precede a
human error are found to have an influence on the probability of the
error occurring, the error is referred to as a human dependent failure.
In addition, although a human error may represent a deviation from
an intended action, not every error necessarily leads to a
consequence because of the possibility of error recovery. In fact many errors are recoverable; if they were not, the world would be a much more chaotic place than it actually is. Error recovery is an
extremely important aspect of the study of human error and will be
dealt with in more detail later in this book, as will human error
dependency.
Characterizing an error

Intention to achieve a desired result
A common element in all the above definitions is that for a human error to
occur within an action, the action must be accompanied by an intention to
achieve a desired result or outcome. This eliminates spontaneous and
involuntary actions (having no prior conscious thought or intent and
including the random errors which are discussed in more detail below) from
the category of human errors to be considered in this book. To fully
understand spontaneous and involuntary errors it is necessary to draw upon
expertise in the fields of psychology, physiology and neurology, disciplines
which are beyond the scope of this book which offers, as far as possible, a
pragmatic and engineering approach to human error. Readers interested in
delving further into these topics can refer to more specialist volumes.
Deciding whether an error has occurred
One way of deciding whether or not an error has occurred is to focus on the
actual outcome and compare this with the intended outcome. Then, it could
be said, if the intended outcome was not achieved within certain limits of
tolerability, an error has occurred. Such a definition would, however, exclude
the important class of recovered errors mentioned above. If an error occurs
but its effects are nullified by some subsequent recovery action, it would be incorrect to say that this was not a human error and that it need not be investigated. It is possible that on a subsequent occasion, the same error
might occur and the recovery action not take place or be too late or
ineffective, in which case the actual outcome would differ from the intended
outcome. If an intervening recovery action had to be carried out successfully for the intended outcome to be achieved, then the initiating error was still an error, even though the final outcome was as intended.
The subject of error recovery is considered in detail in a later chapter.
The significance of an error
An important principle to be established in the study of human error is that
the significance or severity of a human error is measured in terms of its
consequence. A human error is of little or no interest apart from its consequence. In one sense, a human error that has no consequence is not an error at all, assuming that recovery has not taken place as discussed above. There is nothing to register the occurrence of an error with no consequence, except the perception of the person making the error, assuming that person was aware of an error being made. At the same time, an error does
not have to be special or unique to cause an accident of immense
proportions. The error itself may be completely trivial, the most insignificant
slip made in a moment of absentmindedness or an off-the-cuff management
decision. The seriousness of the error or the decision depends entirely on
the consequences. This principle makes the study of human error not only
important but also challenging. If any trivial human error is potentially capable of causing such disproportionate consequences, then how can the significant error that will cause a major accident be identified among the millions of human errors which could possibly occur? Significant error
identification will be discussed later in this book.
Intention
An extremely important aspect of ‘what characterizes an error’ is the degree
of intention involved when an ‘out of tolerance action’ is committed. It is
important because later in the book a distinction is made between errors
and violations. The violation of a rule is considered as a separate category
to that of an error. A violation of a rule is always an intentional action,
carried out in full knowledge that a rule is being disobeyed, but not
necessarily in full knowledge of the consequences. If a violation is not
intentional then it is an error. It is important to make the distinction because
the method of analysis of violations (proposed later in this book) differs from
the normal methods of human error analysis which are also described. The
difficulty is that some classes of violations verge on error and are quite
difficult to differentiate. The best method of making the distinction is to
assess whether the action was intentional or not. For the purposes of this
book, a human error is by definition always considered to be unintentional.
A final definition
A final definition of human error which suits the purposes of this book yet
which takes into account the above characteristics and some of the other
definitions given above, is proposed as follows:
A human error is an unintended failure of a purposeful action, either singly
or as part of a planned sequence of actions, to achieve an intended
outcome within set limits of tolerability pertaining to either the action or
the outcome.
With this definition, a human error occurs if:
there was no intention to commit an error when carrying out the action,
the action was purposeful,
the intended outcome of the action was not achieved within set limits of
tolerability.
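The three criteria above, together with the earlier distinction between errors and violations, can be summarised as a simple classification routine. This is an illustrative sketch only: the function name and flags are assumptions, and, for brevity, it tests only the outcome, whereas the full definition also covers out-of-tolerance actions whose outcome is recovered.

```python
def classify_action(purposeful, intended_to_deviate, outcome_ok):
    """Classify an action under the working definition of human error.

    purposeful: the action was directed at an intended outcome
    intended_to_deviate: the actor knowingly broke a rule
    outcome_ok: the intended outcome was achieved within the set
        limits of tolerability
    """
    if not purposeful:
        # spontaneous/involuntary actions fall outside the scope considered here
        return "outside scope"
    if intended_to_deviate:
        # a knowing breach of a rule is a violation, analysed separately
        return "violation"
    if outcome_ok:
        # note: a recovered error (out-of-tolerance action, acceptable
        # outcome) would still count as an error; this sketch checks
        # the outcome only
        return "successful action"
    # an unintended failure of a purposeful action
    return "human error"

print(classify_action(True, False, False))   # human error
print(classify_action(True, True, False))    # violation
print(classify_action(False, False, False))  # outside scope
```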
With this definition in place, it is now possible to examine how human error
can be classified using a number of error types and taxonomies. First of all,
however, it is necessary to make an important distinction between random
errors, which are not considered in this book, and systemic errors, which are.
Random and systemic errors
Introduction
Although random errors are not the main subject of this book, it is necessary
to examine them briefly here in order to be able to distinguish them from
systemic errors. The characteristics of a random error (adopted for the
purposes of this book) are that it is unintentional, unpredictable and does
not have a systemic cause (an external factor which caused the error or
made it more likely). The source of a random error will be found within the
mental process and will therefore be difficult to identify with any certainty
and even more difficult to correct with any prediction of success. This is
discussed in more detail below.
Although random errors are by definition unpredictable they are not
necessarily improbable. It is indeed fortunate that most human errors are
not truly random events that occur unpredictably in isolation from any
external point of reference. If this were the case, then the identification and
reduction of human error might well be made impossible and there would be
little purpose in writing this book. Fortunately most human errors have
underlying systemic causes that can be identified, studied and at least
partly addressed in order to make the errors less likely. It is only this
possibility that makes a whole range of dangerous human activities
acceptable.
Error causation
Two types of human error causation can be postulated and are referred to
simply as:
internal causes leading to endogenous error,
external causes leading to exogenous error.
Endogenous errors have an internal cause such as a failure within the
cognitive (or thinking and reasoning) processes. Some writers refer to these
internal causes as ‘psychological mechanisms’. In order to explain the
occurrence of endogenous errors, it would be necessary to draw upon
insights from the psychological, physiological or neurological sciences.
By contrast, exogenous errors have an external cause or are related to a
context within which a human activity is carried out such as aspects of the
task environment which might make an error more likely. However, even
exogenous errors require internal cognitive processes to be involved. The
difference is that, in an exogenous error, some feature of the external environment has also played a part in causing the error. This could happen, for instance, where the person responding to a stimulus is presented with confusing or conflicting information. The mental interpretation and processing of this information is then made more difficult, the planned response is not appropriate, and an exogenous error results.
Conversely when an endogenous error occurs, there is at least no evidence
of an external cause although it is difficult to show this with certainty.
Although the distinction between endogenous and exogenous errors may
seem rather artificial, it is nevertheless an important concept for
understanding the nature of human error. It is important because exogenous
errors are theoretically capable of being reduced in frequency through
changes to the external environment, while endogenous errors are not.
Human performance
In practice it is a matter of judgement whether an error is exogenous or
endogenous since there will never be complete information about the cause
of an error. One way of making the judgment is to assess whether the
external environment or stimulus to action seems conducive to reasonable
performance, given the capabilities of the person undertaking the task. If it
is, then the error may well be endogenous in nature. However, if it is judged
that a reasonable person, having the requisite skills would be unable to
undertake the task successfully, then the error is almost certainly
exogenous in nature. Human performance is therefore a function of the
balance between the capability of the person carrying out the task and the
demands of the task. The achievement of good performance consists in obtaining the right balance, as illustrated in Figure 1.1.
Although it may not be possible to predict the occurrence of random errors,
it may still be possible to estimate their frequency. Many random errors
seem to occur at the extremes of human variability. As an example, we
might imagine a well-motivated person, supported by a well-designed
system, working in a comfortable (but not too comfortable) environment.
The person carries out a fairly simple but well practised routine, one which
demands a reasonable but unstressed level of attention and which retains
concentration and interest. Most of the time the task will be carried out
successfully. However, there will be rare occasions when that person may
well commit an inadvertent and inexplicable error. This can almost certainly
be classed as an endogenous or random error.
Estimating human error probability
Basic probability theory and the methods of allocating actual probability values to human errors are discussed in more detail in later chapters of this book. The reason for estimating human error probability is that it provides a
benchmark for measuring the benefits of improvements made to the
systems that support human performance. This is particularly the case in
safety critical situations such as operating a nuclear power station, driving a
train or in air traffic control. In general, quantification of human error is
feasible in the case of exogenous errors, but less so in the case of
endogenous or random errors.
One approach to quantification of human error, which will be discussed later
in the book, is to assume an average or mean probability of error for a
particular type of task such as selecting a rotary control from a group of
similar controls. The actual probability of error in a given situation can then
be assessed by examining human capability (which may or may not be
average) versus the demands of the task, as discussed in Section 1.2.3. The
demands of the task may be assessed by looking for instance at how the
group of rotary controls are laid out and how clearly they are labelled. If
they are not laid out logically or they are not clearly labelled, then the
demands of the task will be much greater and so will the error probability. Likewise, if the person making the selection is not sufficiently trained or experienced, then a higher probability of error may be expected. Although
the demands of the task may be acceptable, the scales may still become
unevenly balanced if human capability is insufficient.
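The capability-versus-demand adjustment just described can be sketched numerically. The base probability and multipliers below are invented for demonstration only; a real assessment would draw on published human error probability data rather than these assumed figures.

```python
BASE_HEP = 0.001  # assumed mean error probability for the task type

def adjusted_hep(base, demand_factors, capability_factors):
    """Scale a mean human error probability by task demands and
    performer capability. Each factor is a multiplier > 0; values
    above 1.0 make an error more likely."""
    p = base
    for multiplier in {**demand_factors, **capability_factors}.values():
        p *= multiplier
    return min(p, 1.0)  # a probability cannot exceed 1

# Poorly laid-out, unlabelled controls raise the demands of the task;
# an insufficiently trained operator lowers capability. Both shift the
# balance the same way: towards a higher error probability.
p = adjusted_hep(BASE_HEP,
                 demand_factors={"illogical layout": 5.0, "unclear labels": 3.0},
                 capability_factors={"insufficient training": 2.0})
print(round(p, 6))  # 0.03
```

The multiplicative form is one common simplification; it makes explicit that a well-designed task given to a capable person keeps every multiplier near 1.0, leaving the error probability close to its base value.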
It is a constant theme of this book that the causes of exogenous errors are
deficiencies in the systems in place to support the person carrying out the
task, or indeed the absence of such systems. Thus exogenous errors
resulting from the failure or inadequacy of systems will be referred to as system-induced or systemic error. Conversely, in accordance with the
pragmatic nature of this book, random errors are not generally considered
since their probability is indeterminate and they are less susceptible to
being corrected.
Human error and risk
The concept of residual error is important when the contribution of human
error to the risk of certain activities is considered. It is frequently stated that
the risk of an activity can never be reduced to zero, but can hopefully be
reduced to a level which is considered acceptable when weighed against the
benefits of the activity. It is also a fact that the cause of about 80 per cent of
all accidents can be attributed to human error. The fact that human error
cannot be entirely eliminated must therefore have an important bearing on
the level of residual risk of an activity where human error is a potential
accident contributor. Nevertheless, in such activities, the opportunity to reduce risk to acceptable levels by reducing the probability of systemic errors always remains. The main theme of this book is to
identify some of the more common deficiencies which are found in systems
and which make human errors more likely.
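The residual-risk argument above can be made concrete with some rough arithmetic. All of the figures below except the 80 per cent attribution cited in the text are invented for illustration.

```python
accidents_per_year = 100.0
human_error_fraction = 0.8   # ~80% of accidents attributed to human error (as cited)
systemic_share = 0.9         # assumed share of those errors with systemic causes
systemic_reduction = 0.5     # assumed effect of improving the support systems

# Accidents attributable to human error, and the subset open to reduction
human_error_accidents = accidents_per_year * human_error_fraction
reducible = human_error_accidents * systemic_share

# Residual risk after the systemic improvements take effect
residual = accidents_per_year - reducible * systemic_reduction
print(round(residual, 6))  # 64.0
```

Even under these optimistic assumptions the residual risk is far from zero, which is the point of the passage: human error cannot be eliminated, but its systemic component can be driven down.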