
Original Article

Human Resource Indicators and


Health Service Performance
Peter Hornby
Paul Forte
Centre for Health Planning and Management
Keele University
Keele, Staffs, England
Abstract
Public expectations and increasing financial pressures are requiring health services to adopt new
approaches to the management of their resources, particularly human resources. This paper examines the use of
human resource indicators (HRI) to support management-led initiatives to improve health service efficiency and
effectiveness. It does so, first, through an examination of the role of management indicators. This is followed
by a development of HRI which identifies the focus for HRI measurements; the types of indicators that support
this focus; the use of HRI and their interpretation; and, finally, options in their presentation. The paper
continues with an identification of a process for introducing the use of indicators into health services. The
paper concludes by stressing the need to link the introduction of HRI with practical efforts to enhance
management activities and purpose. Failure to do so will nullify the value of introducing HRI.
Key Words: Health service, Performance, Human resources, Indicators, Efficiency, Effectiveness

Introduction
New pressures are emerging in most countries as public expectations and demands for health care increase worldwide. Countries around the world are re-examining their approach to the provision of health care and introducing more radical solutions to the problems they face, including the recognition that health services must mobilise the resources available to them as efficiently and effectively as possible. At the centre of this resource issue are health care staff, both trained and untrained, who constitute the largest recurrent cost component of any health care service.
This need for greater efficiency and effectiveness in the use of health human resources
has in turn highlighted a requirement for improved management practice and more skilled
managers within health systems as well as a need for a practical methodology to assess
management performance(1) and particularly the management of human resources. A recent
WHO-sponsored initiative on the development of such a methodology for assessing
management performance (based on the application of human resource [HR] indicators of
performance) and the initial results of this work are reported here.
Implications for Management/HR Management
If we take as a fundamental characteristic that a health service organisation, like any other organisation, comprises groups of people working towards a common purpose or set of objectives which can be measured in some way, then, typically, managers will want to know:
whether the objectives are being achieved;
whether the service provided is as effective as it can be;
whether the processes by which the service is being delivered are as efficient as possible;
whether service delivery is improving or getting worse over time;
how the organisation compares with others in its efficiency and effectiveness.

Figure 1. Indicators for Measuring Organisational Performance
[Figure 1 shows human resource indicators relating needs, objectives, processes, inputs, outputs and outcomes, together with the associated measures of relevance, accessibility, efficiency, effectiveness and impact.]

An important point to make here is that these information needs imply that managers
are actually managing (i.e. with responsibilities for the use of resources) rather than merely
administering the organisations. However, the recognition by most governments that
effective and efficient use of resources in health is critical if a satisfactory service is to be
sustained has led to a significant shift towards the concept of managed health services away
from bureaucratic and monolithic structures.
Decentralisation of service management is one observable measure which reflects this
change in attitude and is increasingly the direction in which many health service systems are
moving. However, in a decentralised system, where more rather than fewer decision-making
points exist, there is a potential danger of loss of control, particularly where the information base is inadequate. Improving that information base through a set of HR indicators is becoming an increasingly important part of enabling the decentralisation of management.
It will become even more so with the increasing acceptance that health system
performance in terms of efficiency, effectiveness and value for money is a legitimate focus
for health services management.
Management Indicators
Ideally, HR and other management indicators are constructed from generally available
data and describe constituents of organisational activity, namely inputs, processes and
outputs (see figure 1). It is this data that managers use in monitoring and as a basis for
decision making(2).
The indicators are usually created by linking two separate pieces of data to form a ratio.
The indicators literally provide an "indication" of the relative state of key determinants of
efficiency and effectiveness in comparison to "norms" of organisational activity. These norms
may be derived from:
- external comparisons with other similar organisations;
- internal comparisons with the previous performance of the organisation;
- comparisons with some pre-determined standard.
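As a simple illustration of this construction, the following sketch links two data items into a ratio indicator and sets the result against each of the three kinds of norm listed above. It is an illustrative sketch only; the figures and field names are assumptions and do not come from the paper.

```python
def ratio_indicator(numerator, denominator):
    """Link two separate pieces of data into a single ratio indicator."""
    return numerator / denominator if denominator else None

# Hypothetical institution and peer-group data: beds and doctors in post.
institution = {"beds": 420, "doctors": 30}
peers = [
    {"beds": 300, "doctors": 25},
    {"beds": 500, "doctors": 38},
    {"beds": 380, "doctors": 27},
]

beds_per_doctor = ratio_indicator(institution["beds"], institution["doctors"])

# Norm 1: external comparison with other similar organisations (peer mean).
peer_values = [ratio_indicator(p["beds"], p["doctors"]) for p in peers]
external_norm = sum(peer_values) / len(peer_values)

# Norm 2: internal comparison with the organisation's own previous performance.
previous_value = 13.1   # assumed value from an earlier reporting period

# Norm 3: comparison with a pre-determined standard.
standard = 12.0         # assumed centrally set standard

print(f"Beds per doctor: {beds_per_doctor:.1f}")
print(f"Peer mean: {external_norm:.1f}  Previous: {previous_value}  Standard: {standard}")
```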

Figure 2. Hospital/staff performance comparison (between teaching hospitals A and B and all general hospitals in a country)
Performance in relation to targets is a relative measure, not an absolute one. The "relative norm" for performance is commonly taken to be the mean value of the performance distribution for all similar organisations. Where there are differences from these norms, the "exception reporting" provided by the indicators is an essential first step in pointing managers to where action needs to be taken. The purpose is not just to show where performance appears to be "poor" compared to similar organisations elsewhere, but also to see where "good" performance is taking place, uncover the reasons why this is so and determine how it can be applied elsewhere (3,4).
Ideally, several different indicators should be used to highlight an area of interest from
different perspectives. By using several indicators in this way, managers can begin to
understand what is happening; a single indicator is rarely sufficient. Early work on hospital
bed usage by Yates provides some excellent examples of this use of multiple indicators(5).
Indicators by themselves tend not to reveal the reasons for success or failure directly; rather
they point to issues and underlying causes and require further investigation to clarify the
detailed causal factors. In themselves the indicators cannot provide absolute certainty that
something is or is not happening; only a probability that something is occurring. A case in
point is demonstrated in figure 2.
Figure 2 compares two teaching hospitals using a box plot. Box plots of this type provide a way of showing the position of an institution in comparison to other institutions. Thus, in terms of beds, the two institutions compared here are among the biggest of all institutions, while Hospital B has one of the lowest beds-to-doctors ratios of all the institutions and substantially lower beds per doctor than Hospital A. Overall, the indicators would suggest that while neither institution is achieving a very high throughput of patients, Hospital A is achieving a more efficient use of its HR and bed resources. Clearly, more investigation would be needed to reach a definitive conclusion; the indicators have simply identified the nature of the potential management issue.
Human Resource Indicators
The aim of the WHO project is to establish a small set of indicators which will assist
managers, particularly in developing countries, in making the best use of the human
resources available to them. The indicators to be used need to cover a range of HR measures
and provide "pointers" to the likely efficiency and effectiveness with which human resources
are used. HR indicators (HRIs) form only one part of management information and must also
link with other information elements which report on performance in other areas of activity.
The project has explored a range of potentially suitable HR indicators which fit within a
framework of opportunities for managerial action. It considers, specifically, the identification of
a set of general HRIs which:
1. can point to issues and opportunities for local managers by comparison with other similar
organisations and units;
2. have relevance to local as well as higher governmental or organisational levels; and
3. encourage local performance audits by local management through comparisons of changes in
local indicators over time.
These indicators, and a methodology associated with using them, are intended to be
generally applicable to a wide range of health institutions. They are equally appropriate for non-governmental organisations (NGOs) and the private sector. Although they are primarily to be
used to compare similar institutions and organisational elements within particular segments of
the health sector (i.e. the public sector, NGOs or the private sector), cross-comparisons between
different segments of the health sector can also be made. Careful judgement is required to do
this to ensure compatibility in the circumstances of the institutions compared. It is, for instance,
to be expected that HR costs in a public sector hospital or clinic with an emergency service
requirement will differ markedly from a public sector hospital or clinic dealing with elective
work only or a private sector hospital dealing entirely with planned admissions.
We can only generalise, therefore, on what management and manager objectives are in a
particular country and the indicators they will need. In these circumstances, it is useful to define
indicators which can be grouped to characterise the general relationships between HR and other
elements of the health system. At the most general level, three major groupings of indicators are
suggested. These are indicators which relate to:
1. the HR condition;
2. the product of the health system; and
3. the connection between HR and the product of the health system, i.e. linking 1 and 2.

This paper does not seek to provide an exhaustive or definitive list of human resource
indicators. Nevertheless, a core of indicators is presented in this paper (see Tables 1-3). They
should be regarded as a basket of indicators from which those relevant to a particular

country's circumstances are chosen or added to. This may be, for example, through the
addition of entirely new indicators or in using variants of an existing indicator. The final
choice of indicators will be influenced by the specific issues that are regarded as most
relevant in a particular health service and which are also most amenable to change within it.
Table 1. Monitoring the HR condition

A. Well-managed
General description: Determined by maintained staffing levels, clear roles for staff, staff qualified to do the work, good communication and teamwork.
Indicator options:
1. Staff on duty (available): staff in post
2. Vacancies: establishment
3. Establishment: staff in post
4. % budget on staff: total budget
5. Staff reviews completed: total staff
6. Post vacancy time: staff available time

B. Properly trained
General description: Staff are qualified to do the work required of them, know what that work is and receive regular training updates.
Indicator options:
1. Number of staff receiving training: total staff
2. Planned staff mix: actual staff mix
3. Staff with job description: total staff
4. Number of job descriptions revised: total jobs
5. Total training time: total working time

C. Motivated
General description: Staff are committed, flexible, attend regularly and do more than they are required to do.
Indicator options:
1. No. of staff leaving: total staff
2. No. of outside visits: total staff
3. No. of days of uncertified absence: total staff days
4. No. of hours worked: official hours

D. Skills matched to tasks
General description: Staff doing the work they are competent to do.
Indicator options:
1. See A and B above
2. Prof. health staff: other staff
3. Skilled staff: unskilled staff

E. Staff matched to workload
General description: Sufficient staff available to discharge duties assigned.
Indicator options:
1. No. of patients: staff number
2. No. of patients: professional staff numbers
3. Overtime costs: total staff costs

F. Staff well-supported
General description: Staff operate in an acceptable working environment with satisfactory conditions of service.
Indicator options:
1. No. of times staff paid on time: total pay days
2. No. of staff with housing: total staff
3. Travel time to work: total working time

Table 2. Monitoring the product of the health system

G. Reduced morbidity
General description: Activities of staff lead to reduced morbidity.
Indicator options:
1. Immunisation: target number
2. Attended deliveries: total deliveries

H. Lowered preventable mortality
General description: Activities of staff lead to lowered mortality.
Indicator options:
1. Live birth rate: 1000 births
2. Infant mortality: 1000 children <1
3. Mortality: 1000 population

I. Less evidence of recurrent illness
General description: Treatment provided is appropriate and effective.
Indicator options:
1. Endemic caseload: total population
2. Repeated patient visits: total patients
3. Repeated treatments: total treatments

I. Increased health awareness
General description: Public take an active role in health matters and take action to promote family and community health.
Indicator options:
1. Household visits: no. of staff
2. FP clinic time: total working time
3. Staff providing HE: total staff

Table 3. Monitoring the HR connection with system product

J. Appropriate skills available
General description: Mix of staff skills corresponds to the service requirement.
Indicator options:
1. See D above
2. Patients: skilled staff

K. Appropriate caseloads
General description: Staff available in sufficient and appropriate numbers to meet service requirements.
Indicator options:
1. Skilled staff: non-skilled staff
2. Inpatients: number of staff
3. Clinic attendances: number of staff
4. Bed utilisation: number of staff

L. Meets population needs
General description: Staff provide a service valued by the public.
Indicator options:
1. Patients: skilled staff
2. Total population: skilled staff
3. Expenditure per case
Defining HR Indicators
Indicators can be developed to examine all the different elements of organisational
performance. This is important because managers need to know what is going on across all
the constituent parts of organisational activity and to understand what, if any, action is

within their power. The four main elements of performance identified in Figure 1 which
require management attention are illustrated here using indicators focused on HR aspects:

Inputs: this covers the resources introduced into the health system. Human resources
account for the majority of health service costs and are therefore the most significant
input. In making comparisons between health system units or over time it is useful to be
able to look at measures such as: relative proportions of different staff types and grades;
staff costs in relation to the total health service expenditure; numbers of staff relative to
the local population.

Processes: This looks at how the health service works as an organisation. In the HR dimension, process issues include the organisational environment in which people work and the effect this might have on their performance, as well as more direct measures of efficiency in the way HR resources are used. Thus staff turnover rates, the "actual to planned" staff ratio and the ratio of new staff recruited to new staff trained all give an indication of the quality of the organisational environment. Bed occupancy rates to staff employed, on the other hand, provide a more direct relationship between HR and other resource inputs in the health care process.

Outcomes: These are the products of the organisation. This is particularly difficult to
measure in health service systems as there is little agreement on ways of measuring health
outcomes (i.e. the change in health status for a person having been in the health care
system). Usually the best that can be managed are proxy measures such as overall
population mortality rates to staff employed.

Outputs: Outcomes are often expressed in so-called intermediate output measures such
as the number of patients treated. This data can be more easily measured, but does not
give an accurate picture of how health status is affected. Typical HR output measures
could include: the number of nurses per thousand clinic attendances; trained nurses/
midwives per 1000 live births.
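As a short illustrative sketch, the Python fragment below turns each of the four elements above into the kind of simple HR ratios described. All counts and names are assumptions introduced for illustration, not a prescribed indicator set.

```python
# Hypothetical raw counts for one health service unit.
unit = {
    "staff_costs": 1_800_000, "total_expenditure": 2_500_000,   # inputs
    "staff_in_post": 210, "planned_establishment": 240,         # processes
    "leavers_this_year": 25,
    "deaths": 180, "population": 30_000,                        # outcome proxy
    "clinic_attendances": 52_000, "nurses": 95,                 # outputs
}

indicators = {
    # Input: share of expenditure going to staff.
    "staff cost share": unit["staff_costs"] / unit["total_expenditure"],
    # Process: actual-to-planned staffing and annual turnover.
    "actual:planned staff": unit["staff_in_post"] / unit["planned_establishment"],
    "staff turnover": unit["leavers_this_year"] / unit["staff_in_post"],
    # Outcome proxy: crude mortality per 1000 population.
    "mortality per 1000": 1000 * unit["deaths"] / unit["population"],
    # Output: nurses per 1000 clinic attendances.
    "nurses per 1000 attendances": 1000 * unit["nurses"] / unit["clinic_attendances"],
}

for name, value in indicators.items():
    print(f"{name}: {value:.2f}")
```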

Peters and Waterman (6) identify the "7Ss" - strategy, structure, skills, style of
management, systems, staff, shared values - as key interrelated factors determining the
performance of an organisation. The HR elements in this (staff, skills, shared values and
structure) can be expected to play a significant role in changing organisational performance.
The most common words used to assess the impact of these related elements are "efficiency", "effectiveness" and "quality". But what do these words translate into in terms of a health service workforce? In practice, they raise questions about the extent to which the HR are:
- well-managed,
- properly trained,
- motivated,
- appropriately skilled,
- sufficient to undertake the required work,
- well-supported (working conditions);

and the extent to which the activities of the health service elements or institutions
result in:
-

reduced morbidity,
lowered preventable mortality,
less evidence of recurrent illness, and
increased health awareness;

and, finally, the extent to which the activities undertaken by the institution or element of the health service are appropriate to:
- skills available,
- case loads,
- type of population and their needs.

All of these factors provide some measure of quality of management of the HR and of the
likely performance of the workforce. The measure of management quality clearly extends
beyond that of the local manager because it necessarily must incorporate actions by those
responsible for staff training, deployment and for career development within and beyond the
local institution.
Using HR Indicators
The principle guiding the use of HRIs is that they must be used to record performance at
a large number of institutions simultaneously and at regular intervals. In doing this, the
indicators not only record current achievement but also the range that exists between different
institutions of a similar kind. Figure 3 shows such a comparison on three HRIs in current use in
Britain.
Figure 3. Set of Human Resource Indicators for a Sample of UK Districts
(Quarter 2 - 1994/95)

Even though the specific definitions of these measures are not given here (reference 7
provides a full specification), it is clear that there are substantial performance differences in
productivity between these sample districts. It will also be apparent that HRIs such as these
provide decision makers with an opportunity to:
1. determine what value is a suitable norm or standard of performance, using as a basis the
existing performance range;
2. identify what institutions are exhibiting good and bad practice in terms of their efficiency and
effectiveness;
3. develop new targets for the future which can be realistically achieved by managers in the
service;
4. assess objectively the performance of managers within the health system.
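A minimal sketch of this comparative use is given below, with invented district values: one indicator is recorded across several districts, a provisional norm is derived from the observed distribution, and the districts at the extremes are flagged for further local investigation. The district names, values and the one-standard-deviation threshold are assumptions for illustration only.

```python
from statistics import mean, stdev

# Hypothetical quarterly values of one HRI (e.g. staff per 1000 population).
districts = {"A": 4.2, "B": 6.8, "C": 5.1, "D": 5.0, "E": 9.4,
             "F": 4.9, "G": 5.3, "H": 5.6, "I": 3.1, "J": 5.2}

values = list(districts.values())
norm, spread = mean(values), stdev(values)

# Exception reporting: districts more than one standard deviation from the norm.
outliers = {d: v for d, v in districts.items() if abs(v - norm) > spread}

print(f"Provisional norm: {norm:.1f} (observed range {min(values)} to {max(values)})")
print("Districts needing further local investigation:", outliers)
```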
The way in which indicators are used depends on where the manager actually functions
within the health system. In general, the further away the manager is from direct patient care, the
more strategic the perspective tends (and needs) to be. At the local level, however, many HR
issues are tactical in nature. That is, the majority of the issues need to be addressed immediately
or in the very short term as they involve staff working directly with patients and coping with day-to-day demands. Thus, for example, a manager will want to know how many staff are available
on a given day or week and what effect that might have on meeting the needs or demands for
service.
In the short term, HRIs will provide the local manager with little new information. It is only through continued collection of HRI information, allowing comparison over time and with other similar units, that issues and opportunities for change begin to emerge. The changes can be either tactical in nature or strategic within the limits of the local strategic function.
Managers operating at higher levels in the health system have both tactical and strategic
functions. They need to assess how the local units under their control are performing, both
compared with each other and with other units elsewhere, and over time. This information can
help identify where resources might best be deployed or redeployed to improve services in the
short term or serve to identify where best practices are being carried out locally and where
others can be encouraged to do the same.
At a national level, longer-term strategy planning and resource monitoring across the health
service as a whole will continue to be important. This requires an overview of what is going on
across all local units and a need for information to be synthesised to provide a broad perspective
on training, employment, career planning and standard-setting.
Interpreting Indicators


There are, potentially, a very large number of indicators (running into thousands); the actual number
depends on the level of detail the indicators can be expected to address. However, a lot can be
done with a much smaller set of indicators. They may provide "pictures" of the situation which
are less sharply defined than with the use of large numbers of indicators but they are more
practically achievable.
The importance of HR indicators lies not so much in their technical construction or presentation but in their interpretation and application to management issues. A single indicator is not in itself very useful. It is only through comparing values of multiple indicators between health service units, and against derived or pre-set "norms", that their worth becomes apparent.
To assist in interpreting what these differences and ranges mean, it is usual to concentrate attention on "outlier" values: in other words, those health service elements whose indicator values are at the top or the bottom of the regional or national range. These may be regarded as "good" or "poor" (depending on what the indicator is measuring), but rather than consider them as absolute positions, it is helpful to take a constructive view and focus attention on the possible underlying causes of such performance. National averages can in themselves be misleading, however; it may be that the "national average" of the majority of health service units represents an unacceptable level of performance in itself.
The primary purpose of looking at units with indicators at the margins of the range is to
establish what is occurring locally which results in these "exceptional" circumstances.
Investigations into the underlying causes for the indicator value will uncover practices which
might be emulated (or avoided) elsewhere and thus improve the management of human
resources in general. However, there are many reasons why indicators might be at extreme
ends of a range and careful, more detailed analysis is always required to help establish why
this might be the case:

incorrect/ missing data/ simple clerical input errors to the indicator system
poor local management of resources
underlying structural causes outside the immediate control of local managers

For these reasons it is important not to be categorical in interpreting indicators or in attributing 'blame' for the performance of the unit concerned. Indicators cannot measure every single aspect of what is occurring within a location; they can only provide an indication of circumstances and possible conclusions which will inevitably require more detailed investigation.
With a smaller set of indicators, complex menu structures for indicator analysis are not
required. It is likely that, apart from the three major groupings proposed in this paper, all the
indicators will reside at the same menu level in the system. These can be supported by
"interpretation guides" which translate the indicators into descriptions of the possible situation.
Some examples of the type of information that might be provided in the guides are shown using a
limited number of indicators:


1. High numbers of staff per 1000 population; high numbers of staff in post in relation to
budget; high bed occupancy rates; low staff turnover rates; low overtime rates; low
population mortality rates.
Might imply an efficient and effective health service
2. High population mortality rates; low ratio of doctors to nurses; low ratio of skilled to
unskilled staff; low numbers of staff in post in relation to budget.
Might imply underfunded or underprovided service; shortage of trained (or any) staff for
available posts; implications for staff training.
3. Long time taken from qualification to taking up health service post; low actual to planned
staff ratio; high overtime rates; high staff turnover rate
Might imply inefficient HR recruitment process; potentially low morale among existing staff.
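One way such an interpretation guide could be encoded, purely as an illustrative sketch, is as a small set of rules mapping indicator patterns to narrative hints. The thresholds, field names and wording below are assumptions, not values from the paper; a real guide would be agreed with its users.

```python
def interpret(ind):
    """Translate a set of indicator values into hedged narrative hints."""
    hints = []
    if ind["staff_turnover"] > 0.20 and ind["overtime_rate"] > 0.10:
        hints.append("Possible recruitment/retention problem; check morale and "
                     "the time taken to fill vacant posts.")
    if ind["actual_to_planned_staff"] < 0.85:
        hints.append("Staffing well below plan; may imply underfunding or an "
                     "inefficient recruitment process.")
    if ind["skilled_to_unskilled"] < 1.0 and ind["mortality_per_1000"] > 12:
        hints.append("Low skill mix with high mortality; implications for "
                     "training and deployment.")
    return hints or ["No exceptional pattern; compare with similar units over time."]

# Assumed values for one unit, used only to exercise the rules above.
example = {"staff_turnover": 0.24, "overtime_rate": 0.15,
           "actual_to_planned_staff": 0.80, "skilled_to_unskilled": 1.4,
           "mortality_per_1000": 9.0}
for hint in interpret(example):
    print("-", hint)
```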
The implications of the indicators will clearly vary from level to level. Indicator statement 3
above provides a good example of this.
At local level, the manager will be aware that there are problems. The local HRIs will
initially tell him or her nothing new. From personal contacts with other managers, the manager
may conclude that this is simply a reflection of the health service condition or that his or her
institution is in some way disadvantaged. It is unlikely that the manager will assume either that it
is his or her fault or that there is much that can be done about the situation. It is not until
information about and comparisons with other similar units begin to emerge that the manager
may consider whether there is anything that could be done to improve morale and retain staff.
At local level, using HRIs from a number of similar units within the local area, the manager
could make comparisons in two directions:
1. between the units within the local area to establish if this is a problem of a particular unit and
then
2. with other similar areas elsewhere using median values of all similar localities.
The two-way comparison could lead to some short-term action within the local area to
redeploy local resources and improve local recruitment processes and/or terms and conditions of
service. At the same time, the local manager might put pressure on the region or national centre to produce more staff and/or to help make it more appealing to work in that particular area.
At regional and central levels, while there may be some short-term action to divert resources to
support particular regions and districts, there will be more attention on whether the problems
presented are endemic and need more fundamental action. It will require significant periods of
time to pass and substantive investigation before the more fundamental issues become clear.
The essential characteristic in this process is that there is a requirement for action at multiple
levels of the system. If this is not understood and implemented, the use of HRIs will collapse.
Preparing and Using Indicator Information
It is important to recognise here that what is useful and relevant at the local level may only be
useful when summarised across localities for higher levels in the health system hierarchy.
Similarly, local managers will need to know what is happening elsewhere in summary; they will


not want to be overwhelmed with data from many individual units. The information needs to be
collated, summarised where necessary and presented in an appropriately digestible form. In
terms of resources and skills, this can really only be done at higher levels in the health system.
Before the actual production of indicator reporting commences, the responsibilities for
collecting the data required, or collating existing data sources, need to be established. This
would include how frequently data is required and how often it would be expected that the
indicators themselves might be used. To help operational level staff undertake this task it is important to have the data items as tightly defined as possible to ensure that all districts are using a common reporting base.
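One simple way of holding such tight definitions, sketched below with invented items, is a small data dictionary listing each data item, its definition, unit, collection frequency and the person responsible, so that every district reports on the same basis. The entries are illustrative assumptions only.

```python
# Illustrative data dictionary for a common reporting base (assumed content).
DATA_DICTIONARY = [
    {"item": "staff_in_post",
     "definition": "Headcount of staff occupying funded posts on the last day of the period",
     "unit": "persons", "frequency": "quarterly", "collected_by": "district personnel officer"},
    {"item": "uncertified_absence_days",
     "definition": "Working days lost to absence without a medical certificate during the period",
     "unit": "days", "frequency": "monthly", "collected_by": "facility in-charge"},
    {"item": "clinic_attendances",
     "definition": "All outpatient and clinic contacts recorded in the period",
     "unit": "contacts", "frequency": "monthly", "collected_by": "records clerk"},
]

for entry in DATA_DICTIONARY:
    print(f"{entry['item']} ({entry['unit']}, {entry['frequency']}): {entry['definition']}")
```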
As noted earlier, the frequency of use of indicators is likely to vary at different
managerial levels. In addition, there will be constraints on the ability of the system as a whole
to provide information in an indicator format over short time intervals and an annual review
is probably the best that can be expected due to the logistics of data collection, its
transmission to the processing centre, data checking, preparation of indicators and their
distribution to districts. Typically, this is a process which itself takes several months. With a
reasonably small set of indicators it might be possible to prepare them on a six-monthly or
even a quarterly basis, but that will depend on the reliability of the data gathering and
indicator processing.
However, because the longer-term positions of health districts and units tend to change
slowly, making comparisons between organisations will normally be required only annually
or at best semi-annually, rather than on a more frequent basis.
It is likely that local level managers will want to make more immediate and frequent use
of the raw data forming the basis of the indicator information and they should be encouraged
to do so. They will already be gathering or recording this data for other purposes on a regular
and frequent basis (even daily for some items). For immediate, operational management
issues this data is unlikely to be needed in an indicator format (unless local managers feel it
desirable).
As a general principle, indicators should be constructed making as much use as possible of existing data sources. Data for many indicators will be based on items which are being
continuously recorded already (such as staff absences, overtime payments, number of births,
number of clinic contacts). Other indicators will rely on data collected less frequently (for example, the number of staff reviews carried out in a particular month) or on a once-off basis each year (for example, a survey of staff travel-to-work times).
Local managers will need to be responsible for deciding who should record particular
data items, and who should be involved in undertaking any special data collection which may
not be routinely recorded. Some simple checklists to support managers in this activity could
also be of use in quality control. As noted above, managers should be encouraged to review
this raw data as it becomes available to monitor any trends and take any immediate
management actions as necessary and appropriate.
Transferring data to the data analysing centre would be through preparation and physical
transfer of a copy of completed proformas to be sent to (or collected by) the processing centre
(or region where initial processing may take place) for forwarding to the national centre. If
computers are available locally, this data might be submitted electronically but should be


accompanied by hard copies of the data proforma records. The actual transportation of the
information depends on what local communications systems are available. There might be some advantage in having staff from the region collect the data and make preliminary quality checks of local data completion.
Indicators are only as good as their design and use. They are open to both criticism and
abuse, particularly if they have been developed without the involvement of a wide
constituency of future users. The more people who will have a stake in their application are
involved in establishing the indicators in the first place, the more the indicators themselves
will be perceived as useful and be properly sourced and maintained.
Presentation of Indicators
The main value of indicators lies in managers at all levels being able to make comparisons between different localities. This means that the indicators need to be presented in a format offering comparisons: locating the value of a particular health service unit in relation to others and displaying the range of values across the country, while presenting numerical information to people who are often not "number oriented". The multiplicity of information provided can also make it difficult to interpret a "compound view" of the situation.
While some of these difficulties can be alleviated by training of users, there is an
underlying issue that the way in which information is interpreted can be a country-specific
characteristic. Keeping presentation as simple as possible may be the only objective possible
for developing a generalised presentation style in these circumstances. How simple and what
form should be adopted for the presentation will depend on:

how many indicators there are and whether they can be grouped into convenient
categories;
who will be using them and for what purpose (this includes taking account of
differences in the requirements of different levels in the health service system and
different types of users [managers, planners, policy makers]);
how sophisticated users are in interpreting indicator-type information

The following presentation formats, either singly or in combination, might be suitable:

simple tables (also showing previous values for the locality and/or comparisons
with national/ other areas)
graphs including bar charts and histograms (particularly useful for comparisons
with other areas)
maps provide a good picture of geographic differences but have a restricted ability
to compare multiple indicators simply
box-plots contain a lot of information but would require a higher degree of user
training to achieve good interpretation


commentary (with or without indicator values). If without, then the information would have to be more prescriptive in terms of what the local manager must do.

Examples of some of the different forms of presentation have been shown in the diagrams in this paper; presentation options are discussed in more detail by Day (8). The most common form for sets of multiple indicators is the box plot shown in Figure 2, as it allows a large number of indicators to be shown in close proximity and, as a consequence, makes interpretation easier. As indicators have the potential to be used by planners and policy makers as well as managers, it may be necessary to produce the same indicators in different presentation formats for different audiences.
With only a small number of indicators (e.g. less than 20) computer-based indicator
presentation systems are probably unnecessary. However, for a regional /national level
where the raw data is being gathered and processed, a simple PC-based facility using
spreadsheet technology would probably be valuable to aid and speed up the analytic process.
The need for computerisation at higher levels is partly due to the increased volume of data at these levels (indicators from all or several districts): it becomes a more laborious task not only to interpret for each district, but also to detect patterns or trends across several districts in a region, or to assess trends in individual districts over time. This technology also
simplifies the preparation of high quality graphs/summary tables or analyses for individual
districts and simplifies the use of standard report formats. A specific computer software
program for producing performance indicator information displays has been developed by the
European Regional Office of the World Health Organization(9). It allows users to incorporate
their own country-specific basket of indicators.
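As an illustration of the box-plot style of presentation, the short sketch below uses matplotlib (an assumed tool choice, not the WHO software cited above) to locate one hypothetical institution within the range of an indicator across districts. The indicator and all values are invented for illustration.

```python
import matplotlib.pyplot as plt

# Hypothetical values of one indicator (beds per doctor) across a region's districts.
all_districts = [9.8, 11.2, 12.5, 13.0, 13.4, 14.1, 15.0, 16.2, 18.7, 21.3]
hospital_a = 19.5  # the unit whose position we want to show within the range

fig, ax = plt.subplots(figsize=(5, 3))
ax.boxplot(all_districts, vert=False, widths=0.5)   # distribution across districts
ax.plot(hospital_a, 1, "ro", label="Hospital A")    # this institution's value
ax.set_xlabel("Beds per doctor")
ax.set_yticks([])
ax.legend()
fig.tight_layout()
fig.savefig("beds_per_doctor_boxplot.png")          # or plt.show() when used interactively
```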

What Training and Education is Needed?


The introduction of indicators from a higher level in an organisation could potentially be
seen as threatening to the local managers so it is imperative that their introduction is handled
sensitively. Part of this lies in educating people, including those who administer the system from higher levels of the organisation, so that everyone has a role to play in interpreting and using the indicators. This approach gives a more positive image to the indicator concept and encourages people to be proactive in their response to the indicators rather than taking a reactive position. This can be seen on at least two levels:

immediate training on the interpretation and use of the indicator set; and
addressing wider issues of management in which the indicators play a part.

On the first of these issues, local users need to be introduced to the following concepts:

basic management principles and how indicators support this
definitions of the indicators and the data sources they are based on
how they can be interpreted
their use in comparing performance over time and across districts

A mixture of workshops and distance learning packs could help here, with each general
distribution of the indicator set including some form of commentary on what they seem to be
showing across the country/ region and for the specific district. However, it is important to
leave scope for local management initiatives and restrict prescriptive advice on local
management action.
With respect to the second issue, indicator workshops could start to address wider
management concerns such as how to:

implement management action on the basis of the indicator information;
improve data collection/accuracy;
construct additional local indicators as required;
involve local staff in their use (cascade training where appropriate);
promote feedback and interact with different levels of the administration.

There are advantages in managers developing local indicators where additional data exists and there is a perceived need; such data might be recorded more frequently than the data for the national set and used specifically to address one particular issue. When that issue has been resolved, the local indicator could be discontinued. If local indicators are to be developed, they should be used in conjunction with any established "national" set and not be seen to replace or downgrade it. The idea of local indicators is to aid decentralisation by empowering local managers, within clear national frameworks, to develop their local resource management skills. This is one way of enhancing their capability.
Training and education, apart from addressing basic data collection, processing and
interpretation skills, should also focus on encouraging local managers to use the idea of
indicators on their own initiative locally where a management issue or specific data might
exist.
Suggested Steps for Establishing Indicators
Before an HRI system is established in a country there needs to be preliminary work
undertaken to establish a range of indicators appropriate to national circumstances and needs.
This work would aim to:

clarify the main purposes for wanting to use indicators and identify the desired
outcomes;
specify the set of indicators to be used and define data requirements;
set out the process for gathering data and constructing and distributing indicators
across the country;
decide how the indicators would be used in the management process to increase
efficiency and effectiveness.


Based on the UK experience, the necessary conditions for the development of a successful management indicator system (i.e. one which is properly maintained, has credibility, is used regularly and developed locally) are that there is:

support from the central authority (e.g. Ministry) as well as local management levels
a logical framework of indicators
a focus on relative rather than absolute performance
an ability to make cross-organisational comparisons
an efficient presentation and distribution system
good training and education support for users

Clearly, there will be different objectives and different administrative/managerial arrangements from country to country. These will require local adaptation of any basic framework for the development and introduction of indicators. However, the following might be regarded as a set of general steps applicable in setting up an indicator project in any country:
1. Establish the reasons for introducing indicators and the objectives to be achieved. This is fundamental; questions need to be asked to ascertain who wants the information and why it is required, e.g. at what administrative level the indicators are to be used, how to incorporate the activity of NGOs where this is important, what the timescale is for turning data around and how it fits in relation to other planning/budgeting cycles.
2. Initial appraisal of the existing administrative/managerial framework to confirm lines of accountability and responsibility. It is not always clear, even when the issue has been explicitly addressed as part of the process of setting up the indicators in the first place, who controls what in an organisation, especially one as diverse and complex as the health service, where different professional groups co-exist and line-management structures are not always clearly defined. Nevertheless, a critical early requirement for indicator design will be to determine the role these indicators are to fulfill.
3. Establish the managerial levels at which the indicators are to be used. Closely linked to (1) and (2), this is important to establish early in the process as, to work best, indicators need to be timely and relevant to the managers making use of them. Establishing this will help to determine which indicators are appropriate for given management levels and how frequently they need to be collected or disseminated.
4. Describe required indicators. This is where actual indicators are selected and defined. Tables 1-3 provide a core of possible indicators from which to select and build a set appropriate to the opportunities for action in the health system.
5. Identify existing/required data sources. There is inevitably going to be a compromise between having the ideal data for a particular indicator and making do with what is already available or easy/straightforward to measure. In general, the basic set has been established in the light of experience of using data systems in developing countries.
6. Establish data collection and processing procedures. If the indicators do require new
data then getting this has to be considered in the light of existing mechanisms for data
collection. If data processing is required, a decision also has to be made at what
organisational level this is to be done. Additionally, protocols for data collection need
to be set out for field staff.
7. Develop an indicator distribution network. Establish the timescale for gathering and
processing data and the development of the indicator sets and how this fits into any
existing schedules for disseminating information or for local planning/ budgeting/
review cycles if they exist. It is also necessary to consider the medium by which the
indicators are to be distributed, who they are to be given to, and what actions are
required by recipients.
8. Train and educate in the use of indicators. This is a vital component required from an
early stage. The process might proceed along the following lines:

explanation of why indicators are being introduced ("what is in it for managers") and what they are being asked to do;
how they can/should interpret the indicators;
how they might develop their own indicators locally;
development of a reward system for local initiative in their use.
9. Design monitoring arrangements for data quality/ feedback on use of indicators/
framework for adjusting/ developing indicator set. Some "indicators of the indicators"
will be helpful at higher management levels to assess if the introduction of the
indicators is creating a beneficial effect and whether (over time) any changes need to
be made to it. This overview is important even where management decentralisation is
an objective.
In addition to indicators developed as ratios, there is also a role for simple checklists. Compliance or otherwise with these simple lists also provides management information and can be used to monitor performance where more sophisticated data collection is not possible.
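A checklist of this kind can be handled with very little machinery. The sketch below, with items invented purely for illustration, reports overall compliance as a crude indicator alongside the individual items.

```python
# Hypothetical yes/no checklist for one facility in one reporting period.
checklist = {
    "duty roster displayed and up to date": True,
    "all posts have a written job description": False,
    "staff reviews completed this quarter": True,
    "essential records submitted on time": True,
}

# Overall compliance can itself be reported as a simple indicator.
compliance = sum(checklist.values()) / len(checklist)
print(f"Checklist compliance: {compliance:.0%}")
for item, done in checklist.items():
    print(("[x] " if done else "[ ] ") + item)
```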
Conclusions
Indicators should be seen as one part of a wider approach to improving HR and health
service performance which includes:
1. identifying and supporting desired improvements in the service performance and
2. creating an environment in which achieving these improvements is seen to be desirable
and worthwhile to managers in the service.
The HRIs cannot lead this process alone; they simply identify needs and opportunities and help to stimulate progress towards meeting service objectives. As a consequence, the process as a whole has to start with a clear and specific intention to improve management in the health system. The implication of this statement, then, is that the introduction of HRIs will not achieve benefits unless decision makers in the health sector have a clear set of objectives for improving HR performance and have the means to cause the changes needed to occur.
The use of indicators will only be sustained if managers in the health system see some personal and professional benefit from their use. For this to be the case, the use of management indicators in general and HR indicators in particular must be accompanied or preceded by:
1. An organisational culture that encourages managers and staff to take the initiative in
improving performance and accepting the attendant risks.
2. A career and reward system that rewards managers for reaching higher levels of
achievement.
3. A specification of health service objectives that includes targets for managerial efficiency
and effectiveness.
4. Sufficient control of resources within the health system to enable managers to achieve their performance targets.
5. An information system that can sustain rapid movement and processing of data between
field managers and central processing.
Indicators by themselves cannot produce change. This only comes about through action taken by managers. For indicators to support managerial action, their design and selection must be tailored to the way the managers who will use them work and, furthermore, to the type of control those managers have over the system in which they work.
The way in which indicators will be used will vary significantly among users and will
reflect individual interest, existing management processes and the organisational pressures
and rewards for good management. They will also reflect the power and responsibility which
managers have over the resources at their disposal. The selection and use of indicators cannot
be divorced from the managerial environment in which they are applied. The findings of a
UK survey (10) on what local managers found to be the most valued features of the national
NHS performance indicator set were:

Ability to make comparisons
Highlight areas of interest
Wide range of indicators
Ease of use
Standardised data

On the one hand, these value rankings reflect the wide range of issues local managers of
different types are expected to control; on the other hand, they importantly provide a basis for
resource bargaining which is a feature of the UK health system.
It is apparent from the study that the more widely the indicator information can be disseminated, the better the chance of its being seen as a useful tool and acted upon. It is important to give all of those involved a stake in the indicator process.


If they are to make a useful contribution, the introduction of indicators must be part of a controlled process of management change in which the benefits of use are apparent to all the users and are in step with the capacity of management to act.
The introduction of indicators is only meaningful if combined with other management development activities and must be tempered by the state of management development that exists. This implies that the development of indicators will not be a single event but should be an evolving process linked to management growth.
Acknowledgements
The authors wish to acknowledge the support provided by the Human Resources Division of the World Health Organization in Geneva in undertaking this work.
References
1. Sapirie SA, Orzeszyna S. Selecting and defining national health indicators. Strengthening Country Health Information Unit. Geneva: World Health Organization, 1995.
2. Mullen P. Performance indicators: is anything new? Hospital and Health Services Review 1985; 165-167.
3. Harley M. The measurement of health services activity and standards. In: Holland W, ed. Oxford textbook of public health. Vol. 3: Applications in public health. Oxford: Oxford University Press, 1991.
4. NHS Management Executive. Health Service Indicators Handbook. Leeds: NHSME, 1994.
5. Yates J. Hospital Beds: a problem for diagnosis and management? London: William Heinemann Medical Books, 1982.
6. Peters TJ, Waterman RH. In search of excellence: lessons from America's best-run companies. New York: Harper & Row, 1982.
7. Harper J. Measuring Unit Labour Costs in the NHS: a guide. HP14 4AJ, England: The Unit Labour Cost Project, 1995.
8. Day C. Taking action with indicators. London: HMSO, 1989.
9. Prokhorskas R. Data presentation system for health service indicators. Geneva: WHO, Human Resource Indicator Workshop, 1996.
10. Jenkins L, Bardsley M. How Did We Do? London: CASPE Research, 1988.
