EBook - OMBO401-Work System Design

UNIT 1 INTRODUCTION TO WORK SYSTEM DESIGN

Objectives
After going through this unit, you will be able to:
 recognize the work system in an organization and its essential elements;
 calculate the RPN (Risk Priority Number) for problem solving;
 state various work methods and the problem-solving approach to them;
 identify the relationship between work design and productivity;
 differentiate between various models and compare them.
Structure
1.1 Introduction
1.2 Definition of work system
1.3 Special Cases
1.4 Work System Framework
1.5 Work System Life Cycle Model
1.6 Work System Method
1.7 A problem-solving approach
1.8 Work Design & Productivity
1.9 Productivity Models
1.10 Models of National Economy
1.11 The Work Place Design
1.12 Keywords
1.13 Summary

1.1 INTRODUCTION

Among the goals of economic policy is a rising standard of living, and it is generally understood that
the means to that end is rising productivity. Productivity relates the quantity of goods and services
produced, and the income generated by that production, to the amount of labor (e.g., hours
worked or number of workers) required to produce it. The most commonly used measure of the living
standard of a nation is simply the ratio of that income to the total population, without regard to how the
income is actually distributed. If a relatively small share of a nation’s population works, there will be
a large difference between the level of productivity and that measure of the national standard of
living.
Productivity varies over time, and it varies across countries as well. The link between productivity
and living standards is not a direct one; therefore, countries with a high level of productivity may not
necessarily have the highest standard of living. Gross domestic product (GDP) per capita can rise in
the absence of an increase in productivity if (1) employees increase the number of hours they work
(hours per employee); (2) the share of the labor force that is employed rises (i.e., the unemployment
rate drops); or (3) the share of the population that is in the labor force rises (presuming that the
share of any new jobseekers who get jobs is at least as large as the share of those already in the
labor force who have jobs).

The standard measure of the production of goods and services for a nation is gross domestic product
(GDP). GDP measures the total value of goods and services produced within a nation’s borders.
Productivity is a measure of how much work is required to produce it. The most basic unit of labor is
the hour, thus productivity can be measured as GDP divided by the total number of hours worked.
Productivity may also be measured as the average contribution of each employee to total
production, or simply GDP divided by employment. The broadest measure of the living standard of a
nation is GDP divided by the total population. Per capita GDP says nothing about how those
national resources are distributed.
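The three measures described above (GDP per hour worked, GDP per worker, and GDP per capita) can be sketched in a few lines of code. This is a minimal illustration; all figures below are hypothetical, chosen only to show how the ratios relate to one another.

```python
# Illustrative sketch of the productivity and living-standard measures
# described above. All numbers are hypothetical, not real national data.

gdp = 2_000_000.0          # total value of goods and services produced
hours_worked = 160_000.0   # total hours worked in the economy
employment = 80.0          # number of employed persons (in thousands, say)
population = 200.0         # total population (same unit as employment)

productivity_per_hour = gdp / hours_worked   # GDP per hour worked
productivity_per_worker = gdp / employment   # average contribution per employee
gdp_per_capita = gdp / population            # broadest living-standard measure

# If a relatively small share of the population works, GDP per capita
# falls well below GDP per worker:
employment_share = employment / population   # 0.4 in this example
assert gdp_per_capita == productivity_per_worker * employment_share
```

The final assertion makes the text's point explicit: per capita GDP equals output per worker scaled by the share of the population that works, so it can rise or fall even when productivity itself is unchanged.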

1.2 DEFINITION OF WORK SYSTEM

A work system is a system in which human participants and/or machines perform processes and
activities using information, technology, and other resources to produce products/services for
internal or external customers. Typical business organizations contain work systems that procure
materials from suppliers, produce products, deliver products to customers, find customers, create
financial reports, hire employees, coordinate work across departments, and perform many other
functions.

1.3 SPECIAL CASES

The work system concept is like a common denominator for many of the types of systems that
operate within or across organizations. Operational information systems, service systems, projects,
supply chains, and ecommerce web sites can all be viewed as special cases of work systems.
An information system is a work system whose processes and activities are devoted to processing
information. A service system is a work system that produces services for its customers. A project is
a work system designed to produce products and then go out of existence. A supply chain is an inter-
organizational work system devoted to procuring materials and other inputs required to produce a
firm’s products. An ecommerce web site can be viewed as a work system in which a buyer uses a
seller’s web site to obtain product information and perform purchase transactions. The relationship
between work systems in general and the special cases implies that the same basic concepts apply
to all of the special cases, which also have their own specialized vocabulary. In turn, this implies that
much of the body of knowledge for the current information systems discipline can be organized
around a work system core.

Many specific information systems exist to produce products/services that are of direct value to
customers, e.g., information services and Internet search. Other specific information systems exist
to support other work systems, e.g., an information system that helps sales people do their work.
Many different degrees of overlap are possible between an information system and a work system
that it supports. For example, an information system might provide information for a non-
overlapping work system, as happens when a commercial marketing survey provides information to
a firm’s marketing managers. In other cases, an information system may be an integral part of a
work system, as happens in highly automated manufacturing and in ecommerce web sites. In these
situations, participants in the work system are also participants in the information system, the work
system cannot operate properly without the information system, and the information system has
little significance outside of the work system.

1.4 WORK SYSTEM FRAMEWORK


The work system approach for understanding systems includes both a static view of a current (or
proposed) system in operation and a dynamic view of how a system evolves over time through
planned change and unplanned adaptations. The static view is summarized by the work system
framework, which identifies the basic elements for understanding and evaluating a work system. The
work system itself consists of four elements: the processes and activities, participants, information,
and technologies. Five other elements must be included in even a rudimentary understanding of a
work system’s operation, context, and significance. Those elements are the products and services
produced, customers, environment, infrastructure, and strategies. This framework is prescriptive
enough to be useful in describing the system being studied, identifying problems and opportunities,
describing possible changes, and tracing how those changes might affect other parts of the work
system.

The elements of a work system


(Source: Journal of the Association for Information Systems, Vol 2, No 4, 2013)

This slightly updated version of the work system framework replaces “work practices” with
“processes and activities.”

The definitions of the nine elements of a work system are as follows:
Processes and activities: include everything that happens within the work system. The term
“processes and activities” is used instead of the term “business process” because many work
systems do not contain highly structured business processes involving a prescribed sequence
of steps, each of which is triggered in a pre-defined manner. Rather, the sequence and
details of work in some work systems depend on the skills, experience, and judgment of the
work system participants. In effect, “business process” is but one of several different
perspectives for analyzing the activities within a work system. Other perspectives with their
own valuable concepts and terminology include decision-making, communication,
coordination, control, and information processing.
Participants: are people who perform the work. Some may use computers and IT
extensively, whereas others may use little or no technology. When analyzing a work system,
the more encompassing role of work system participant is more important than the more
limited role of technology user (whether or not particular participants happen to be
technology users).

Information: includes codified and non-codified information used and created as participants
perform their work. Information may or may not be computerized. Data not related to the
work system is not directly relevant, making the distinction between data and information
secondary when describing or analyzing a work system. Codified knowledge recorded in
documents, software, and business rules can be viewed as a special case of information.

Technologies: include tools (such as cell phones, projectors, spreadsheet software, and
automobiles) and techniques (such as management by objectives, optimization, and remote
tracking) that work system participants use while doing their work.

Products and services: are the combination of physical things, information, and services that
the work system produces. This may include physical products, information products,
services, intangibles such as enjoyment and peace of mind, and social products such as
arrangements, agreements, and organizations.

Customers: are people who receive direct benefit from products and services the work
system produces. They include external customers who receive the organization’s products
and/or services and internal customers who are employees or contractors working inside the
organization.

Environment: includes the organizational, cultural, competitive, technical, and regulatory
environment within which the work system operates. These factors affect system
performance even though the system does not rely on them directly to operate. The
organization’s general norms of behavior are part of its culture, whereas more specific
behavioral norms and expectations about specific activities within the work system are
considered part of its processes and activities.
Infrastructure: includes human, informational, and technical resources that the work system
relies on even though these resources exist and are managed outside of it and are shared
with other work systems. For example, technical infrastructure includes computer networks,
programming languages, and other technologies shared by other work systems and often
hidden or invisible to work system participants.

Strategies: include the strategies of the work system and of the department(s) and
enterprise(s) within which the work system exists. Strategies at the department and
enterprise level may help in explaining why the work system operates as it does and
whether it is operating properly.

1.5 WORK SYSTEM LIFE CYCLE MODEL

The dynamic view of a work system starts with the work system life cycle (WSLC) model, which
shows how a work system may evolve through multiple iterations of four phases: operation and
maintenance, initiation, development, and implementation. The names of the phases were chosen
to describe both computerized and non-computerized systems, and to apply regardless of whether
application software is acquired, built from scratch, or not used at all.

Work System Life Cycle Model


(Source: KengSiau, Roger Chiang, Bill C. Hardgrave - 2010 - Business & Economics)
This model encompasses both planned and unplanned change. Planned change occurs through a full
iteration encompassing the four phases, i.e., starting with an operation and maintenance phase,
flowing through initiation, development, and implementation, and arriving at a new operation and
maintenance phase. Unplanned change occurs through fixes, adaptations, and experimentation that
can occur within any phase. The phases include the following activities:

Operation and maintenance


 Operation of the work system and monitoring of its performance
 Maintenance of the work system (which often includes at least part of information systems
that support it) by identifying small flaws and eliminating or minimizing them through fixes,
adaptations, or workarounds.
 On-going improvement of processes and activities through analysis, experimentation, and
adaptation

Initiation
 Vision for the new or revised work system
 Operational goals
 Allocation of resources and clarification of time frames
 Economic, organizational, and technical feasibility of planned changes

Development
 Detailed requirements for the new or revised work system (including requirements for
information systems that support it)
 As necessary, creation, acquisition, configuration, and modification of procedures,
documentation, training material, software, and hardware
 Debugging and testing of hardware, software, and documentation

Implementation
 Implementation approach and plan (pilot? phased? big bang?)
 Change management efforts about rationale and positive or negative impacts of changes
 Training on details of the new or revised information system and work system
 Conversion to the new or revised work system
 Acceptance testing

As an example of the iterative nature of a work system’s life cycle, consider the sales system in a
software start-up. The first sales system is the CEO selling directly. At some point the CEO can’t
do it alone, several salespeople are hired and trained, and marketing materials are produced
that can be used by someone less expert than the CEO. As the firm grows, the sales system
becomes regionalized and an initial version of sales tracking software is developed and used.
Later, the firm changes its sales system again to accommodate needs to track and control a
larger sales force and predict sales several quarters in advance. A subsequent iteration might
involve the acquisition and configuration of CRM software. The first version of the work system
starts with an initiation phase. Each subsequent iteration involves deciding that the current sales
system is insufficient; initiating a project that may or may not involve significant changes in
software; developing the resources such as procedures, training materials, and software that are
needed to support the new version of the work system; and finally, implementing the new work
system.

The pictorial representation of the work system life cycle model places the four phases at the
vertices of a rectangle. Forward and backward arrows between each successive pair of phases
indicate the planned sequence of phases and allow the possibility of returning to a previous
phase if necessary. To encompass both planned and unplanned change, each phase has an
inward facing arrow to denote unanticipated opportunities and unanticipated adaptations,
thereby recognizing the importance of diffusion of innovation, experimentation, adaptation,
emergent change, and path dependence.

The work system life cycle model is iterative and includes both planned and unplanned change.
It is fundamentally different from the frequently cited system development life cycle (SDLC),
which describes projects that attempt to produce software or produce changes in a work
system. Current versions of the SDLC may contain iterations but they are basically iterations
within a project. More important, the system in the SDLC is basically a technical artifact that is
being programmed. In contrast, the system in the WSLC is a work system that evolves over time
through multiple iterations. That evolution occurs through a combination of defined projects
and incremental changes resulting from small adaptations and experimentation. In contrast with
control-oriented versions of the SDLC, the WSLC treats unplanned changes as part of a work
system’s natural evolution.
1.6 WORK SYSTEM METHOD
The work system method (WSM) was developed as a semi-formal systems analysis and design
method that business professionals (and/or IT professionals) can use for understanding and
analyzing a work system at whatever level of depth is appropriate for their particular concerns. It
evolved iteratively starting around 1997. At each stage, the then-current version was tested by
evaluating the areas of success and the difficulties experienced by MBA and EMBA students trying to
use it for a practical purpose. A version called “work-centered analysis” that was presented in a
textbook has been used by a number of universities as part of the basic explanation of systems in
organizations, to help students focus on business issues, and to help student teams
communicate. Results from analyses of real-world systems by typical employed MBA and EMBA
students indicate that a systems analysis method for business professionals should be much more
prescriptive than approaches such as Checkland’s soft system methodology, but less complex than
high-precision notations and diagramming tools for IT professionals. While not a straitjacket, it must
be at least somewhat procedural and must provide vocabulary and analysis concepts while at the
same time encouraging the user to perform the analysis at whatever level of detail is appropriate for
the task at hand.

1.7 A PROBLEM-SOLVING APPROACH


The latest version of the work system method is organized around a general problem-solving outline
that includes:

 Identify the problem or opportunity
 Identify the work system that has that problem or opportunity (plus relevant constraints and
other considerations)
 Use the work system framework to summarize the work system
 Gather relevant data
 Analyze using design characteristics, measures of performance, and work system principles
 Identify possibilities for improvement
 Decide what to recommend
 Justify the recommendation using relevant metrics and work system principles
For business professionals
In contrast to systems analysis and design methods for IT professionals who need to produce a
rigorous, totally consistent definition of a computerized system, the work system method:
 encourages the user to decide how deep to go
 makes explicit use of the work system framework and work system life cycle model
 makes explicit use of work system principles
 makes explicit use of characteristics and metrics for the work system and its elements
 includes work system participants as part of the system (not just users of the software)
 includes codified and non-codified information
 includes IT and non-IT technologies
 suggests that recommendations specify which work system improvements rely on IS
changes, which recommended work system changes don’t rely on IS changes, and which
recommended IS changes won’t affect the work system’s operational form.

1.8 WORK DESIGN & PRODUCTIVITY


Organizing a work space goes way back to the industrial revolution when managers were trying
to find out how to make their workers more productive. Initially, the goal was to reduce injuries,
but over the years we’ve learned that carefully organizing your workspace can improve
productivity (Resnick and Zanotti, 1998). More than just improving productivity, when you
personalize your work space, studies have shown you enjoy work more, you work harder, and
you have a greater sense of “organizational well-being” (Wells, 2000). In this section we focus on
three elements that will help you decide how to design your work space for productivity: pain
reduction, stress reduction, and personalization.

Pain Reduction
It seems kind of obvious — if you’re in pain, you won’t be as productive. But you may not even
realize that your work space may be the source of your pain. Back pain, headaches, sore wrists
and forearms are just a few symptoms that can crop up if you don’t have your workspace set up
properly.

Back pain, for example, can be caused by poor posture or a cheap office chair. Think about how
much time you spend in that chair. Likely, it’s more than 20 hours per week. It’s worth the
investment to get a chair that properly supports your body. If you work from home or your
employees do a lot of brainstorming, consider including a lounge chair space and a table with
comfy chairs for collaboration.

On the other hand, if you’re standing most of the day, you may need to consider a floor mat or
thick rug. The following office has the perfect location for pacing without strain on the knees or
lounging to gather inspiration.

A showcase of pain reduction


Headaches can be caused by issues with lighting or your monitors. Too much light can cause
eyestrain and some individuals are allergic to incandescent lights found in most offices. You can
also reduce headaches by increasing the refresh rate on your screen, which is easier on your
eyes (ask your tech department if you don’t know how to adjust this yourself).

Sore wrists and forearms are often caused by typing too much. Since typing less is probably not
an option, you can try an ergonomic keyboard. It takes a little getting used to but can
dramatically improve the comfort of typing. You may also consider an updated mouse as older
models take a lot more pressure to click on the buttons.

Stress Reduction
Stress is another major factor that is often easily reduced by designing your work space for
productivity. In crowded offices, a key contributor to stress is a lack of personal space. Look for
ways to create space like putting up dividers between people or arranging desks so that your
back is to the back of someone instead of facing them. Below is an example of an excellent
setup from offices in which employees have a private space without dividers.
Space provision for stress reduction
Personalization
Being able to personalize your work space to make it feel more like your own area with your
own individual touch has been a proven method for improving productivity. While some
managers frown upon too much decorating, the fact remains that people perform better when
they feel they have created their own space. A personalized workspace reduces stress and
makes the staff feel more connected to what they are doing and the organization as a whole.

The productivity formula is relatively simple: recruit quality workers, sustain a high level of
employee satisfaction, and efficiency will improve. A great deal of research supports this
equation and points to the role that design plays in augmenting the variables. Workers
repeatedly cite design-related factors when explaining their reasons for choosing a job or staying
with a certain company, and managers recognize that employee satisfaction and productivity
rise in aesthetically appealing workplaces. For executives, the task of redesigning office space to
reflect evolving needs might seem daunting, but interiors experts can make the transition
smooth and graceful. Below is a typical example of manufacturing workplace system design.
Example of manufacturing workplace system design
(Source: American Society of Interior Designers - ASID)

1.9 PRODUCTIVITY MODELS


The principle of comparing productivity models is to identify the characteristics that are present in
the models and to understand their differences. This task is alleviated by the fact that such
characteristics can unmistakably be identified by their measurement formula. Based on the model
comparison, it is possible to identify the models that are suited for measuring productivity. A
criterion of this solution is the production theory and the production function. It is essential that the
model is able to describe the production function.
Dimensions of productivity model comparisons (Saari 2006b)

The principle of model comparison becomes evident in the figure. There are two dimensions in the
comparison. Horizontal model comparison refers to a comparison between business models. Vertical
model comparison refers to a comparison between economic levels of activity or between the levels
of business, industry and national economy.

At all three levels of economy, that is, that of business, industry and national economy, a uniform
understanding prevails of the phenomenon of productivity and of how it should be modeled and
measured. The comparison reveals some differences that can mainly be seen to result from
differences in measuring accuracy. It has been possible to develop the productivity model of
business to be more accurate than that of national economy for the simple reason that in business
the measuring data are much more accurate (Saari 2006b).

Business Models
There are several different models available for measuring productivity. Comparing the models
systematically has proved most problematic. In terms of pure mathematics, it has not been possible
to establish the different and similar characteristics of them so as to be able to understand each
model as such and in relation to another model. This kind of comparison is possible using the
productivity model which is a model with adjustable characteristics. An adjustable model can be set
with the characteristics of the model under review after which both differences and similarities are
identifiable.

A characteristic of the productivity measurement models that surpasses all the others is the ability
to describe the production function. If the model can describe the production function, it is
applicable to total productivity measurements. On the other hand, if it cannot describe the
production function or if it can do so only partly, the model is not suitable for its task. The
productivity models based on the production function form rather a coherent entity in which
differences in models are fairly small. The differences play an insignificant role, and the solutions
that are optional can be recommended for good reasons. Productivity measurement models can
differ in characteristics from one another in six ways.

 First, it is necessary to examine and clarify the differences in the names of the concepts.
Model developers have given different names to the same concepts, causing a lot of
confusion. It goes without saying that differences in names do not affect the logic of
modeling.

 Model variables can differ; hence, the basic logic of the model is different. It is a question of
which variables are used for the measurement. The most important characteristic of a
model is its ability to describe the production function. This requirement is fulfilled in case
the model has the production function variables of productivity and volume. Only the
models that meet this criterion are worth a closer comparison. (Saari 2006b)

 Calculation order of the variables can differ. Calculation is based on the principle of ceteris
paribus, stating that when calculating the impacts of change in one variable, all other
variables are held constant. The order of calculating the variables has some effect on the
calculation results; yet, the difference is not significant.

 Theoretical framework of the model can be either cost theory or production theory. In a
model based on the production theory, the volume of activity is measured by input volume.
In a model based on the cost theory, the volume of activity is measured by output volume.

 Accounting technique, i.e. how measurement results are produced, can differ. In calculation,
three techniques apply: ratio accounting, variance accounting and accounting form.
Differences in the accounting technique do not imply differences in accounting results but
differences in clarity and intelligibility. Variance accounting gives the user most possibilities
for an analysis.

 Adjustability of the model. There are two kinds of models, fixed and adjustable. In an
adjustable model, characteristics can be changed, and it can therefore be used to examine
the characteristics of the other models. A fixed model cannot be changed; it holds constant
the characteristics that the developer has built into it.

Based on the variables used in the productivity model suggested for measuring business, such
models can be grouped into three categories as follows:
 Productivity index models
 PPPV models
 PPPR models
In 1955, Davis published a book titled Productivity Accounting in which he presented a
productivity index model. Based on Davis’ model several versions have been developed, yet, the
basic solution is always the same (Kendrick & Creamer 1965, Craig & Harris 1973, Hines 1976,
Mundel 1983, Sumanth 1979). The only variable in the index model is productivity, which implies
that the model cannot be used for describing the production function. Therefore, the model is
not introduced in more detail here.

PPPV is the abbreviation for the following variables, profitability being expressed as a function of
them: Profitability = f (Productivity, Prices, and Volume)
The model is linked to the profit and loss statement so that profitability is expressed as a
function of productivity, volume, and unit prices. Productivity and volume are the variables of a
production function, and using them makes it possible to describe the real process. A change in
unit prices describes a change of production income distribution.
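The PPPV idea can be illustrated with a small, hypothetical sketch. The code below uses one output and one input with invented period figures, and applies the ceteris paribus principle described later in this section: period-2 values are substituted one group at a time, and each step's profit change is attributed to that group. Note this simplification stops at the real-process vs. price split; full PPPV models further decompose the real-process effect into separate productivity and volume components.

```python
# Minimal hypothetical sketch of a PPPV-style profitability decomposition
# (Profitability = f(Productivity, Prices, Volume)), with one output and
# one input. All figures are invented for illustration only.

period1 = {"out_qty": 100.0, "out_price": 10.0, "in_qty": 50.0, "in_price": 8.0}
period2 = {"out_qty": 120.0, "out_price": 11.0, "in_qty": 55.0, "in_price": 9.0}

def profit(d):
    """Profit = output revenue minus input cost."""
    return d["out_qty"] * d["out_price"] - d["in_qty"] * d["in_price"]

# Productivity in the production-function sense: output per unit of input.
productivity1 = period1["out_qty"] / period1["in_qty"]   # 2.0
productivity2 = period2["out_qty"] / period2["in_qty"]   # ~2.18: the real process improved

# Ceteris paribus: substitute period-2 quantities first while holding
# period-1 prices fixed, attributing that step to the real process.
mixed = {"out_qty": period2["out_qty"], "out_price": period1["out_price"],
         "in_qty": period2["in_qty"], "in_price": period1["in_price"]}

real_process_effect = profit(mixed) - profit(period1)  # quantities changed, prices fixed
price_effect = profit(period2) - profit(mixed)         # prices changed (income distribution)
total_change = profit(period2) - profit(period1)

# The two effects account exactly for the total profitability change.
assert abs(real_process_effect + price_effect - total_change) < 1e-9
```

As the text notes, changing the substitution order (prices first, then quantities) would shift some of the change between the two effects, which is why calculation order matters, although the difference is usually not significant.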
PPPR is the abbreviation for the following function: Profitability = f (Productivity, Price Recovery)

In this model, the variables of profitability are productivity and price recovery. Only the
productivity is a variable of the production function. The model lacks the variable of volume, and
for this reason, the model cannot describe the production function. The American models of
REALST (Loggerenberg & Cucchiaro 1982, Pineda 1990) and APQC (Kendrick 1984, Brayton 1983,
Genesca & Grifell 1992, Pineda 1990) belong to this category of models, but since they do not
apply to describing the production function (Saari 2000) they are not reviewed here more
closely.

Comparative Summary of Models

| Choices | Saari | Kurosawa | Gollop | C&T |
|---|---|---|---|---|
| Variables used in the model | Distribution, Productivity, Volume | Distribution, Productivity, Volume | Distribution, Productivity, Volume | Distribution, Productivity, Volume |
| Theory alternatives (1. Production function, 2. Cost function) | Production function | Production function | Cost function | Cost function |
| Calculation order of variables | 1. Distribution, 2. Productivity, 3. Volume | 1. Volume, 2. Productivity, 3. Distribution | 1. Volume, 2. Productivity, 3. Distribution | 1. Volume, 2. Productivity, 3. Distribution |
| Accounting technique alternatives (1. Variance accounting, 2. Ratio accounting, 3. Accounting form) | All changes: variance accounting | All changes: accounting form | Distribution: variance accounting; Productivity: ratio accounting; Volume: accounting form | All changes: accounting form |
| Adjustability alternatives (1. Adjustable, 2. Fixed) | Adjustable | Fixed | Fixed | Fixed |
PPPV models measure profitability as a function of productivity, volume and income distribution
(unit prices).
Such models are:
 Japanese: Kurosawa (1975)
 French: Courbois & Temple (1975)
 Finnish: Saari (1976, 2000, 2004, 2006a, 2006b)
 American: Gollop (1979)

The table presents the characteristics of the PPPV models. All four models use the same
variables by which a change in profitability is written into formulas to be used for measurement.
These variables are income distribution (prices), productivity and volume. A conclusion is that
the basic logic of measurement is the same in all models. The method of implementing the
measurements varies to a degree, with the result that the models do not produce identical
results from the same calculation material.

Even if the production function variables of productivity and volume are in the model, in
practice the calculation can also be carried out in compliance with the cost function. This is the
case in the C&T and Gollop models. Calculating methods differ in the use of either output
volume or input volume for measuring the volume of activity. The former solution complies with
the cost function and the latter with the production function. It is obvious that the calculation
produces different results from the same material. A recommendation is to apply calculation in
accordance with the production function. According to the definition of the production function
used in the productivity models Saari and Kurosawa, productivity means the quantity and quality
of output per one unit of input.
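This definition can be illustrated with a short sketch. The quantities below are hypothetical and serve only to show how a productivity index and its change between two periods are computed; the quality dimension of the definition is not captured by a pure quantity ratio.

```python
# Productivity as output quantity per one unit of input quantity,
# following the production-function definition above (hypothetical figures).

def productivity(output_qty: float, input_qty: float) -> float:
    """Return the quantity of output produced per one unit of input."""
    return output_qty / input_qty

# Two measurement periods of a hypothetical process
p1 = productivity(output_qty=1000, input_qty=500)  # 2.0 units out per unit in
p2 = productivity(output_qty=1200, input_qty=550)  # about 2.18

change = p2 / p1 - 1  # relative productivity change between the periods
print(f"period 1: {p1:.2f}, period 2: {p2:.2f}, change: {change:.1%}")
```

Here output grows faster than input, so the sketch reports roughly a 9.1 percent productivity improvement between the two periods.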

The models differ from one another significantly in their calculation techniques. These differences do not cause differences in the calculation results; rather, they are a question of clarity and intelligibility between the models. The comparison shows that the Courbois & Temple and Kurosawa models are based purely on calculation formulas. The calculation is based on the aggregates of the profit and loss account. Consequently, it is not well suited to analysis. The Saari productivity model is based purely on variance accounting, known from standard cost accounting. The variance accounting is applied to elementary variables, that is, to the quantities and prices of the different products and inputs. Variance accounting gives the user the most possibilities for analysis. The Gollop model is a mixed model in its calculation technique: every variable is calculated using a different technique. (Saari 2006b)

The Saari productivity model is the only model with alterable characteristics; hence, it is an adjustable model. Comparison with the other models has been made feasible by exploiting this characteristic.

1.10 MODELS OF NATIONAL ECONOMY

In order to measure the productivity of a nation or an industry, it is necessary to operationalize the same concept of productivity as in business; yet the object of modeling is substantially wider and the information more aggregate. The calculations of the total productivity of a nation or an industry are based on the time series of the SNA (System of National Accounts), formulated and developed over half a century. National accounting is a system based on the recommendations of the UN (SNA 93) to measure the total production and total income of a nation and how they are used.

Measurement of productivity is at its most accurate in business because of the availability of all the elementary data on the quantities and prices of the inputs and the output in production. The more comprehensive the entity we want to analyze by measurements, the more data need to be aggregated. In productivity measurement, combining and aggregating the data always reduces measurement accuracy.

Output Measurement

Conceptually speaking, the amount of total production means the same in the national economy as in business, but for practical reasons the modeling of the concept differs. In the national economy, total production is measured as the sum of value added, whereas in business it is measured by the total output value. When output is calculated by value added, all purchased inputs (energy, materials, etc.) and their productivity impacts are excluded from the examination. Consequently, the production function of the national economy is written as follows:

Value Added = Output = f (Capital, Labor)

In business, production is measured by the gross value of production, and in addition to the producer's own inputs (capital and labor), the productivity analysis comprises all purchased inputs such as raw materials, energy, outsourcing services, supplies, components, etc. Accordingly, it is possible to measure total productivity in business, which implies that absolutely all inputs are considered. Productivity measurement in business gives a more accurate result because it analyzes all the inputs used in production. (Saari 2006b)
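The difference between the two output concepts can be sketched numerically. All the figures below are hypothetical and chosen only to make the contrast visible:

```python
# Hypothetical comparison of the two output concepts described above:
# national accounting uses value added, business measurement uses gross output.

gross_output = 1_000_000     # total value of production
purchased_inputs = 400_000   # energy, materials, outsourced services, etc.
capital_and_labor = 450_000  # the producer's own inputs

# National-economy style: purchased inputs are netted out of the output,
# so their productivity impacts disappear from the examination.
value_added = gross_output - purchased_inputs                 # 600,000
labor_capital_productivity = value_added / capital_and_labor  # about 1.33

# Business style: all inputs, including purchases, enter the calculation,
# so improvements in the use of purchased inputs are also visible.
total_inputs = capital_and_labor + purchased_inputs           # 850,000
total_productivity = gross_output / total_inputs              # about 1.18

print(value_added, round(labor_capital_productivity, 2), round(total_productivity, 2))
```

A change in, say, material efficiency would move `total_productivity` but could leave the value-added ratio largely untouched, which is exactly the limitation of the value-added measure noted in the text.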

The productivity measurement based on national accounting has been under development
recently. The method is known as KLEMS, and it takes all production inputs into consideration.
KLEMS is an abbreviation for K = capital, L = labour, E = energy, M = materials, and S = services. In
principle, all inputs are treated the same way. As for the capital input this means that it is
measured by capital services, not by the capital stock.

Combination or Aggregation Problem

The problem of aggregating or combining the output and inputs is purely measurement-technical, and it is caused by the fixed grouping of the items. In national accounting, data need to be fed under fixed items, resulting in large output and input items which are not homogeneous, as the measurements assume, but include qualitative changes. There is no fixed grouping of items in the business production model, neither for inputs nor for products; both inputs and products are present in the calculations by their own names, representing the elementary prices and quantities of the calculation material. (Saari 2006b)

Problem of the Relative Prices

For productivity analyses, the value of the total production of the national economy, GNP, is calculated with fixed prices. The fixed-price calculation principle means that the prices by which quantities are evaluated are held fixed or unchanged for a given period. In the calculation complying with national accounting, a fixed-price GNP is obtained by applying so-called basic-year prices. Since the basic year is usually changed every fifth year, the evaluation of the output and input quantities remains unchanged for five years. When the new basic-year prices are introduced, relative prices change in relation to the prices of the previous basic year, which has an impact on measured productivity.

Old basic-year prices entail inaccuracy in the production measurement. For reasons of market economy, the relative values of output and inputs alter, while the relative prices of the basic year do not react to these changes in any way. Structural changes like this will be wrongly evaluated. Short life-cycle products will have no basis of evaluation, because they are born and die between the two basic years. Good productivity obtained through substitution between inputs is ignored if old, long-fixed prices are used. In business models this problem does not exist, because the correct prices are available all the time. (Saari 2006b)
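The fixed-price principle described above can be sketched as follows: quantities from each year are valued at the same basic-year prices, so only quantity changes, not price changes, show up in the series. The product names and figures are hypothetical:

```python
# Fixed-price (constant-price) output: every year's quantities are valued
# at the prices of the basic year, so the series reflects quantity changes
# only (hypothetical products and figures).

basic_year_prices = {"grain": 10.0, "steel": 50.0}  # frozen for ~5 years

quantities = {
    2001: {"grain": 100, "steel": 20},
    2002: {"grain": 110, "steel": 22},
    # A short life-cycle product introduced after the basic year has no
    # basic-year price, so it cannot be valued at all: the weakness noted
    # in the text above.
}

def fixed_price_output(qty: dict, prices: dict) -> float:
    """Value a year's quantities at the (fixed) basic-year prices."""
    return sum(prices[item] * q for item, q in qty.items())

for year, qty in quantities.items():
    print(year, fixed_price_output(qty, basic_year_prices))
```

If market prices shifted between 2001 and 2002, that shift is invisible here by construction; the series moves only because the quantities moved.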

1.11 THE WORK PLACE DESIGN


The workplace design – the physical environment of the enterprise – is an important characteristic of the enterprise. It provides accommodation for work processes and employees; it is a meeting place; it communicates to employees, potential customers, investors, stakeholders, and other parts of the environment; it can be good or poor as a working environment; and it is located somewhere.

Workplace design can be used and is being used as a strategic instrument for changing
workplace functioning. There is, however, no “architectural determinism”: Although design is
important, design alone cannot determine workplace culture and workplace functioning. Similar
spatial solutions may be deployed in quite different ways.
Workplace design will always be local and needs to have both top-leader support and employee
participation to achieve a fitting design, and to get support for necessary changes.

It is an aspiration of the Guide to empower enterprises and employees, to make them competent customers of services from professional providers, and to make professional providers even more professional. The main target groups for the guide are:
• Enterprises in private and public sector, including employers, employees, facilities
managers, HR personnel, and trade union representatives
• Providers of products and services for workplace design, including interior architects,
architects, facilities owners, workplace consultants, and ICT-suppliers
• Public authorities, including urban planners, and health and safety inspectors

History of Work System Design

The foundations of the science of ergonomics appear to have been laid within the context of the
culture of Ancient Greece. A good deal of evidence indicates that Greek civilization in the 5th
century BC used ergonomic principles in the design of their tools, jobs, and workplaces. One
outstanding example of this can be found in the description Hippocrates gave of how a surgeon's
workplace should be designed and how the tools he uses should be arranged. The archaeological
record also shows that the early Egyptian dynasties made tools and household equipment that
illustrated ergonomic principles. It is therefore questionable whether the claim by Marmaris et al. regarding the origin of ergonomics can be justified.

In the late 19th century, Frederick Winslow Taylor pioneered the "scientific management" method,
which proposed a way to find the optimum method of carrying out a given task. Taylor found
that he could, for example, triple the amount of coal that workers were shoveling by
incrementally reducing the size and weight of coal shovels until the fastest shoveling rate was
reached. Frank and Lillian Gilbreth expanded Taylor's methods in the early 1900s to develop the
"time and motion study". They aimed to improve efficiency by eliminating unnecessary steps
and actions. By applying this approach, the Gilbreths reduced the number of motions
in bricklaying from 18 to 4.5, allowing bricklayers to increase their productivity from 120 to 350
bricks per hour.

Examples of Work Standards

Hot Work Permit: Hot work permits are used when heat or sparks are generated by work such as
welding, burning, cutting, riveting, grinding, drilling, and where work involves the use of pneumatic
hammers and chippers, non-explosion proof electrical equipment (lights, tools, and heaters), and
internal combustion engines.

Three types of hazardous situations need to be considered when performing hot work:
 The presence of flammable materials in the equipment.
 The presence of combustible materials that burn or give off flammable vapors when heated;
and
 The presence of flammable gas in the atmosphere, or gas entering from an adjacent area,
such as sewers that have not been properly protected. (Portable detectors for combustible
gases can be placed in the area to warn workers of the entry of these gases.)

Cold Work Permit: Cold work permits are used in hazardous maintenance work that does not
involve “hot work”. Cold work permits are issued when there is no reasonable source of ignition,
and when all contact with harmful substances has been eliminated or appropriate precautions
taken.
Confined Space Entry Permit: Confined space entry permits are used when entering any confined space such as a tank, vessel, tower, pit or sewer. The permit should be used in conjunction with a "Code of Practice" which describes all important safety aspects of the operation.

Some employers use special permits to cover specific hazards such as:
 extremely hazardous conditions
 radioactive materials
 PCBs and other dangerous chemicals
 excavations
 power supplies

1.12 KEYWORDS

Work Design: The application of socio-technical systems principles and techniques to the humanization of work.

Work System: A system in which human participants and/or machines perform work using information, technology, and other resources to produce products and/or services for internal or external customers.

Productivity: An economic measure of output per unit of input. Inputs include labor and capital, while output is typically measured in revenues and other GDP components such as business inventories.

Efficiency: The extent to which time, effort or cost is well used for the intended task or purpose.

Work System Design: The design of a system in which human participants and/or machines perform processes and activities using information, technology, and other resources to produce products/services for internal or external customers.

Work System Life Cycle (WSLC): The model giving the dynamic view of a work system, showing how the system may evolve through multiple iterations of four phases: operation and maintenance, initiation, development, and implementation.
1.13 SUMMARY
A work system is a system in which human participants and/or machines perform processes and
activities using information, technology, and other resources to produce products/services for
internal or external customers. Operational information systems, service systems, projects, supply chains, and e-commerce web sites can all be viewed as special cases of work systems.

The work system approach for understanding systems includes both a static view of a current (or
proposed) system in operation and a dynamic view of how a system evolves over time through
planned change and unplanned adaptations. The dynamic view of a work system starts with the
work system life cycle (WSLC) model. This model encompasses both planned and unplanned change.
Planned change occurs through a full iteration encompassing the four phases, i.e., starting with an
operation and maintenance phase, flowing through initiation, development, and implementation,
and arriving at a new operation and maintenance phase. Unplanned change occurs through fixes,
adaptations, and experimentation that can occur within any phase. The work system method (WSM)
is a systems analysis and design method that business professionals can use for understanding and
analyzing a work system at whatever level of depth is appropriate for their concerns. The work
system method will be different for different industries.

There are nine elements of a work system, namely: processes and activities, participants, information, technology, products and services, customers, environment, infrastructure, and strategy. Pain reduction, stress reduction, and personalization are the three important aspects of the relationship between an organization's productivity enhancement and its employees.

From a different point of view, examples of work standards include the hot work permit, the cold work permit, and the confined space entry permit.
UNIT 2 PROBLEM SOLVING TOOLS
Objectives
After going through this unit, you will be able to:
 Recognize the importance of problem-solving tools in the business environment
 Calculate the RPN (Risk Priority Number) for problem solving
 State different Quantitative and Qualitative tools for problem solving
 Analyze the relationship between machine and operator
 Apply different charts in practice to study the relationship between various entities involved
in process.
Structure
2.1 Introduction
2.2 Exploratory Tools
2.3 Recording and Analysis Tools
2.4 Quantitative tools
2.5 Worker Machine Relationship
2.6 Keywords
2.7 Summary

2.1 INTRODUCTION
Problem solving is a fixture in life. You must be able to solve problems. Problems pop up every day; sometimes they are small and sometimes they are large. The same can be said of business. Businesses have plenty of problems, and it is up to the employees to find ways to solve them. Sometimes, however, simple problem-solving techniques are not going to work, because some problems require more advanced problem-solving skills and techniques applied in a scientific way.

While most people associate lean with tools and principles such as value stream mapping, one-piece
flow, Kanban, 5-S, Total Productive Maintenance and kaizen events, few people think about the
more mundane aspects of lean. Problem solving is one of the keys to a successful lean
implementation because it empowers all of those involved.

Lean manufacturing has a unique way of solving problems. It does not just look at the effect of the
problem and try to cover it with a Band-Aid. Rather, the root cause of the problem is identified and
the root cause, as well as all contributing factors, is eliminated from the system, process, or
infrastructure in order to permanently solve the problem. What is the difference between these two approaches? Simple: when you find and rectify the root causes, the problem is solved for good. Even other problems arising from the same root causes are eliminated in the process.
It is very clear now that we must find out the root causes of the problems before we think about
rectifying them in lean manufacturing environments. So, how should we do this? What are the tools
available to perform these tasks? Let’s look at what problem solving is about. We’ll begin by asking
the question: “What is a problem?” A good definition of a problem is a variation from a recognized
standard. In other words, you need to know how things should be before you can recognize a
possible cause for them not being that way. After a problem has been recognized, a formal problem-
solving process should be applied.

2.2 EXPLORATORY TOOLS

Problem-solving often involves decision-making, and decision-making is especially important for


management and leadership. There are processes and techniques to improve decision-making and
the quality of decisions. Decision-making is more natural to certain personalities, so these people
should focus more on improving the quality of their decisions. People that are less natural decision-
makers are often able to make quality assessments, but then need to be more decisive in acting
upon the assessments made. Problem-solving and decision-making are closely linked, and each
requires creativity in identifying and developing options, for which the brainstorming technique is
particularly useful.
High-performance work teams typically use the following problem-solving tools.
 Plan, Do, Check, Act (PDCA)
 5-Why Analysis
 Simplified Failure Modes and Effects Analysis (SFMEA)
The Deming PDCA cycle provides effective guidelines for successful problem solving. The cycle
includes:
PDCA – Plan, Do, Check, Act
Plan
Clearly Define the Problem (P1): “A problem clearly stated is a problem half solved”. Although it
seems like a trivial step, the team should not take this step lightly. It is important to begin this
problem-solving journey with a clear, concise problem statement. If this is not done properly, it
could lead to one of the following: excessive time spent in cause identification due to a broad problem statement, predisposing the team to a particular solution, or problem solving turning into solution implementation rather than root-cause identification and remedy.
Collect Evidence of Problem (P2): This activity focuses on obtaining information/data to clearly
demonstrate that the problem does exist. In the case of team problem solving, this should be a quick
exercise since the reliability engineering function must have been looking at data in order to create
the team. The output of this activity will be a list of evidence statements (or graphs) to illustrate that
the problem exists, its size and the chronic nature of it.

Identification of Impacts or Opportunities (P3): This part of the Plan segment focuses on identifying
the benefits if this problem solving is successful. This activity needs to be thought of in two different
perspectives because Project Team work can take the form of control work, e.g. fixing a problem
that stands in the way of expected results or pure improvement (attempting to take results to a new
level of performance). In each case the output of this activity will be a list of statements. The impact
statements and opportunity statements should be stated in terms of loss of dollars, time, “product”,
rework, processing time and/or morale.

Measurement of Problem (P4): Before problem solving proceeds, it is important for the team to do a
quick check on the issue of how valid or reliable the data is on which the team is making the decision
to tackle the problem. For the parameter(s) that are being used as evidence of the problem, is there
any information known by the team that would question the validity, accuracy or reliability of the
data? This question should be examined whether we are relying on an instrument, a recorder or
people to record information or data. If the team suspects that there are significant issues that "cloud" the data, then these measurement problems need to be addressed and fixed, and new measures obtained, before proceeding with the other segments of PDCA.
Measure(s) of Effectiveness (P5): At this point, the team needs to identify how they intend to
measure success of their problem-solving efforts. This is one of the most important steps in PDCA
and one that certainly differentiates it from traditional problem solving. The strategy is to agree on
what and how, to obtain the benchmark “before” reading, perform the PDCA activities and re-
measure or obtain the “after” measure. At that point, the team will need to decide whether they
need to recycle through PDCA in order to achieve their pre-stated objective.

Do
Generate Possible Causes (D1): To avoid falling into the mode of solution implementation or trial
and error problem solving, the team needs to start with a “blank slate” and from a fresh perspective
lay out all possible causes of the problem. From this point, the team can use data and its collective
knowledge and experience to sort through the most feasible or likely major causes. Proceeding in
this manner will help ensure that the team will ultimately get at root causes of problems and won’t
stop at the treatment of other symptoms. The best tool to facilitate this thinking is the Cause and
Effect Diagram done by those people most knowledgeable and closest to the problem.

Broke-Need-Fixing Causes Identified, Worked On (D2): Before proceeding to carry out either an
Action Plan (for Cause Remedies) or an Experimental Test Plan, there are often parts of the process
that are “broke”. This could take on many different forms.

Write Experimental Test or Action Plan (D3/4): Depending upon the type of problem being worked
on, the PDCA strategy will take one of two different directions at this point. The direction is based on
whether it is a “data-based” problem or “data-limited” problem. Shown in the table below is the
distinction between these two strategies and in particular, the difference between an Action Plan
and Experimental Test Plan. Note that in some cases, it will be necessary to use a combination of
Action Plans and Experimental Test Plans. That is, for some cause areas an Action Plan is appropriate
and for other causes within the same problem, carrying out an Experimental Test Plan is the best
route.

Action Plan vs. Experimental Test Plan

Write Action Plan (for Cause Remedies)
 When to use: the suspected cause(s) cannot be changed or undone easily once they are made; the dependent variable (other than the Measure of Effectiveness) is not obvious; there is a lack of data to study causes.
 What: brainstorm solutions to the major causes; identify solution "areas" for the major causes; write an Action Plan describing the What, Who and How of the solutions.

Write Experimental Test Plan
 When to use: the suspected cause(s) can "operate" at two or more levels; the levels can be deliberately and easily altered; the effects can be measured through dependent variables.
 What: write an Experimental Design Test Plan to test and verify all major causes, using other techniques or experimental design techniques.

(Source: Reliable Plant by Keith Mobley, 2008)

Write Action Plan for Cause Remedies (D3): In order to get to the point of writing the Action
Plan, the team needs to brainstorm possible solutions or remedies for each of the “cause
areas” and reach consensus on the prioritized solutions. This work can be carried out as a
team or split into sub-teams. Either way, the entire team will have to reach agreement on
proposed remedies and agree to the Action Plan. The Action Plan will be implemented in the
Check segment.

Write Experimental Test Plan (D4): The Experimental Test Plan is a document which shows
the experimental test(s) to be carried out. This will verify whether a root cause that has been
identified really does impact the dependent variable of interest. Sometimes this can be one
test that will test all causes at once or it could be a series of tests.

Note: If there is a suspicion that there is an interaction between causes, those causes should
be included in the same test.
The Experimental Test Plan should reflect:
 Time/length of test
 How the cause factors will be altered during the trials
 Dependent variable (variable interested in affecting) of interest
 Any noise variables that must be tracked
 Items to be kept constant
Everyone involved in the Experimental Test Plan(s) should be informed before the test is
run. This should include:
 Purpose of the test
 Experimental Test Plan (details)
 How they will be involved
 Key factors to ensure good results
When solutions have been worked up, the team should coordinate trial implementation
of the solutions and the “switch on/off” data analysis technique.

Resources Identified (D5): Once the Experimental Test Plan or the Action Plan is written, it
will be fairly obvious to the team what resources are needed to conduct the work. For
resources not on the team, the team should construct a list of who is needed, for what
reason, the time frame and the approximate amount of time that will be needed. This
information will be given to the Management Team.
Revised PDCA Timetable (D6): At this point, the team has a much better feel for what is to be
involved in the remainder of its PDCA activities. They should adjust the rough timetables
that had been projected in the Plan segment. This information should be updated on the
team Plan, as well as taken to the Management Team.

Management Team Review/Approval (D7): The team has reached a critical point in the PDCA
cycle. The activities they are about to carry out will have obvious impact and consequences
to the department. For this reason, it is crucial to make a presentation to the Management
Team before proceeding. This can be done by the team leader or the entire team. The
content/purpose of this presentation is:
 Present team outputs to date
 Explain logic leading up to the work completed to date
 Present and get Management Team approval for
− Measure of Effectiveness with “before” measure
− Priority causes
− Action Plan (for Cause Remedies) or Experimental Test Plan
− Revised PDCA timetable
Check

Carry out Experimental Test or Action Plan (C1/C2): Depending upon the nature of the problem, the
team will be carrying out either of these steps:
 Conduct Experimental Test Plan(s) to test and verify root causes or
 Work through the details of the appropriate solutions for each cause area. Then,
through data, verify to see if those solutions were effective.

Carry out Action Plan (C1): In the case of Action Plans, where solutions have been worked up and
agreed to by the team, the “switch on/switch off” techniques will need to be used to verify that the
solutions are appropriate and effective. To follow this strategy, the team needs to identify the
dependent variable – the variable that the team is trying to impact through changes in cause factors.

Carry out Experimental Test Plan (C2): During the Check segment, the Experimental Tests to check
all of the major prioritized causes are to be conducted, data analyzed, and conclusions drawn and
agreed to by the team.
Analyze Data from Experimental or Action Plan (C3): Typically, one person from the team is assigned
the responsibility to perform the analysis of the data from the Test Plan. When necessary, this
person should use the department or plant resource available to give guidance on the proper data
analysis tools and/or the interpretation of outputs. The specific tools that should be used will
depend upon the nature of the Test Plan.

Decisions-Back to Do Stage or Proceed (C4): After reviewing the data analysis conclusions about the
suspected causes or solutions that were tested, the team needs to make a critical decision of what
action to take based on this information.

Implementation Plan to Make Change Permanent (C5): The data analysis step could have been
performed in either of the following contexts:
 After the Action Plan (solutions) was carried out, data analysis was performed to see
if the dependent variable was impacted. If the conclusions were favorable, the team
could then go on to develop the Implementation Plan.
 The Experimental Test Plan was conducted; data was analyzed to verify causes. If the
conclusions were favorable (significant causes identified), the team must then
develop solutions to overcome those causes before proceeding to develop the
Implementation Plan. (e.g., It was just discovered through the Test Plan that
technician differences contribute to measurement error.)

Force Field on Implementation (C6): Once the Implementation Plan is written, the team
should do a Force Field Analysis on factors pulling for and factors pulling against a successful
implementation – success in the sense that the results seen in the test situation will be
realized on a permanent basis once the solutions are implemented.

Management Team Review/Approval (C7): The team has reached a very critical point in the
PDCA cycle and needs to meet with the Management Team before proceeding. This meeting
is extremely important, because the team will be going forward with permanent changes to
be made in operations. The Management Team not only needs to approve these changes
but also the way in which they will be implemented.

Act
Carry out Implementation Plan (A1): If the team has written a complete, clear and well thought
through Implementation Plan, it will be very obvious what work needs to be done, by whom and
when to carry out the Act segment of the PDCA cycle. The team should give significant attention to
assure communications and training is carried out thoroughly, so department members will know
what is changing, why the change is being made and what they need to do specifically to make
implementation a success.

Post-Measure of Effectiveness (A2): After all changes have been made and sufficient time has passed
for the results of these changes to have an effect, the team needs to go out and gather data on all of
the Measures of Effectiveness. The data then needs to be analyzed to see if a significant shift has
occurred.

Analyze Results vs. Team Objectives (A3): In the previous step, the team looked at whether the
Measure(s) of Effectiveness had been impacted in any significant way by the permanent
implementation of the changes. The team cannot stop here. If the answer to that question is
favorable, then the team needs to verify if the amount of improvement was large enough to meet
the team objective.

Team Feedback Gathered (A4): Once the team decision has been made that the PDCA cycle has been
successfully completed (based on Measure of Effectiveness change), the team needs to present this
information to the Management Team. Before this is done, the team leader needs to gather
feedback from the team. This feedback will be in the form of a questionnaire that all team members
(including the team leader) should fill out. The results will be tallied by the team leader and recorded
on form A3.

Management Team Close-out Meeting (A5): Before disbanding, the team needs to conduct a close-
out meeting with the Management Team. The major areas to be covered in this meeting are:
 Wrap up any implementation loose ends
 Review Measure of Effectiveness results, compare to team objective
 Ensure team documentation is complete and in order
 Share team member feedback on team experiences (standardized forms and
informal discussion)

5-Why Problem Solving


When you have a problem, go to the place where the problem occurred and ask the question
“Why” five times. In this way, you will find the root causes of the problem and you can start
treating them and rectifying the problem.
5-Why analysis is a technique that doesn’t involve data segmentation, hypothesis testing,
regression or other advanced statistical tools, and in many cases can be completed without a
data collection plan. By repeatedly asking the question “Why” at least five times, you can peel
away the layers of symptoms which can lead to the root cause of a problem.
Here is a simple example of applying the 5-Why analysis to determine the root cause of a
problem. Let’s suppose that you received many customer returns for a particular product. Let’s
attack this problem using the five whys:

Question: Why are the customers returning the product?


Answer: 90 percent of the returns are for dents in the control panel.
Question: Why are there dents in the control panel?
Answer: The control panels are inspected as part of the shipping process. Thus, they must be
damaged during shipping.
Question: Why are they damaged in shipment?
Answer: Because they are not packed to the packaging specification.
Question: Why are they not being packed per the packaging spec?
Answer: Because shipping does not have the packaging spec.
Question: Why doesn’t shipping have the packaging spec?
Answer: Because it is not part of the normal product release process to furnish shipping with
any specifications.

Simplified Failure Modes and Effects Analysis


SFMEA is a top-down method of analyzing a design and is widely used in industry. In the U.S.,
automotive companies such as Chrysler, Ford and General Motors require that this type of
analysis be carried out. There are many different company and industry standards, but one of
the most widely used is that of the Automotive Industry Action Group (AIAG). Using this standard, you
start by considering each component or functional block in the system and how it can fail,
referred to as failure modes. You then determine the effect of each failure mode, and the
severity on the function of the system. Then you determine the likelihood of occurrence and of
detecting the failure. The procedure is to calculate the Risk Priority Number, or RPN, using the
formula:
RPN = Severity × Occurrence × Detection
The second stage is to consider corrective actions which can reduce the severity or occurrence
or increase detection. Typically, you start with the higher RPN values, which indicate the most
severe problems, and work downwards. The RPN is then recalculated after the corrective actions
have been determined. The intention is to get the RPN to the lowest value.
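The RPN calculation and prioritization described above can be sketched in a few lines of Python. The failure modes and their 1-10 ratings below are hypothetical, invented purely for illustration:

```python
# Hypothetical failure modes with AIAG-style 1-10 ratings (10 = most severe,
# most frequent, hardest to detect); names and numbers are invented.
failure_modes = [
    {"mode": "connector corrosion", "severity": 7, "occurrence": 4, "detection": 5},
    {"mode": "solder joint crack",  "severity": 8, "occurrence": 3, "detection": 7},
    {"mode": "label misprint",      "severity": 2, "occurrence": 5, "detection": 2},
]

def rpn(fm):
    """Risk Priority Number = Severity x Occurrence x Detection."""
    return fm["severity"] * fm["occurrence"] * fm["detection"]

# Address the highest RPN first, then work downwards.
for fm in sorted(failure_modes, key=rpn, reverse=True):
    print(f"{fm['mode']}: RPN = {rpn(fm)}")
```

After corrective actions are chosen, the ratings would be re-estimated and the same calculation repeated to confirm that each RPN has fallen.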

Conclusion
These three tools can be effectively utilized by natural work teams to resolve most problems
that could confront them as part of their day-to-day activities. None require special skills.
Instead, they rely on native knowledge, common sense, and logic. The combined knowledge,
experience and skills of the team is more than adequate for success.

Daily problem-solving tips in a lean organization:


 Keep what may seem like 'little problems' from adding up and becoming big problems in
the future. The only way to work on tomorrow's problems is to work on today's
problems while they are still small.
 Use visual management and standard work tools to catch problems before they start
adding up. Build the skills, tools and systems needed to deal with those problems as
soon as possible.
 Start using 5-Why analysis. Continue asking "Why?" at different stages to dig deeper
into the root cause of a problem.
 Use Plan-Do-Check-Act, or PDCA. Without fully understanding the cause of what is
happening in a situation, an organization will not have the control in its processes to
sustain lean.
 Understand that the small problems are a valuable contribution to future results.

2.3 RECORDING & ANALYSIS TOOLS


A Cause-and-Effect Diagram is a tool that helps identify, sort, and display possible causes of a specific
problem or quality characteristic. It graphically illustrates the relationship between a given outcome
and all the factors that influence the outcome. This type of diagram is sometimes called an
"Ishikawa diagram" because it was invented by Kaoru Ishikawa, or a "fishbone diagram" because of
the way it looks.

When should a team use a Cause-And-Effect Diagram?


Constructing a Cause-and-Effect Diagram can help your team when you need to
 Identify the possible root causes, the basic reasons, for a specific effect, problem, or
condition.
 Sort out and relate some of the interactions among the factors affecting a particular
process or effect.
 Analyze existing problems so that corrective action can be taken

How do we develop a Cause-and-Effect Diagram?


When you develop a Cause-and-Effect Diagram, you are constructing a structured, pictorial
display of a list of causes organized to show their relationship to a specific effect. The steps
for constructing and analyzing a Cause-and-Effect Diagram are outlined below.

Step 1 - Identify and clearly define the outcome or EFFECT to be analyzed


 Decide on the effect to be examined. Effects are stated as particular quality
characteristics, problems resulting from work, planning objectives, and the like.
 Use Operational Definitions. Develop an Operational Definition of the effect to ensure
that it is clearly understood.
 Remember, an effect may be positive (an objective) or negative (a problem), depending
upon the issue that’s being discussed.

Step 2 - Using a chart pack positioned so that everyone can see it, draw the SPINE and
create the EFFECT box.
 Draw a horizontal arrow pointing to the right. This is the spine.
 To the right of the arrow, write a brief description of the effect or outcome which results
from the process.
EXAMPLE: The EFFECT is Poor Gas Mileage
 Draw a box around the description of the effect.

Step 3 - Identify the main CAUSES contributing to the effect being studied.
These are the labels for the major branches of your diagram and become categories under
which to list the many causes related to those categories.
 Establish the main causes, or categories, under which other possible causes will be
listed. You should use category labels that make sense for the diagram you are
creating. Here are some commonly used categories:
> 3Ms and P - methods, materials, machinery, and people
> 4Ps - policies, procedures, people, and plant
> Environment - a potentially significant fifth category
 Write the main categories your team has selected to the left of the effect box, some
above the spine and some below it.
 Draw a box around each category label and use a diagonal line to form a branch
connecting the box to the spine.
Step 4 - For each major branch, identify other specific factors which may be the
CAUSES of the EFFECT
 Identify as many causes or factors as possible and attach them as sub-branches of
the major branches.
 EXAMPLE: The possible causes for Poor Gas Mileage are listed under the
appropriate categories in fig 2.1.
 Fill in detail for each cause. If a minor cause applies to more than one major cause,
list it under both.
Step 5 - Identify increasingly more detailed levels of causes and continue organizing them
under related causes or categories. You can do this by asking a series of why questions.
EXAMPLE: We’ll use a series of why questions to fill in the detailed levels for one of the
causes listed under each of the main categories.
Q: Why was the driver using the wrong gear?
A: The driver couldn't hear the engine.
Q: Why couldn't the driver hear the engine?
A: The radio was too loud. Or A: Poor hearing

Fig 2.1: Cause & Effect Diagram for Poor Gas Mileage


Step 6 - Analyze the diagram. Analysis helps you identify causes that warrant further
investigation. Since Cause-and-Effect Diagrams identify only possible causes, you may
want to use a Pareto Chart to help your team determine the cause to focus on first.

 Look at the “balance” of your diagram, checking for comparable levels of detail for most
of the categories.
> A thick cluster of items in one area may indicate a need for further study.
> A main category having only a few specific causes may indicate a need for further
identification of causes.
> If several major branches have only a few sub-branches, you may need to combine
them under a single category.
 Look for causes that appear repeatedly. These may represent root causes.
 Look for what you can measure in each cause so you can quantify the effects of any
changes you make.
 Most importantly, identify and circle the causes that you can act on.

CHECKSHEET
A check sheet is a simple form you can use to collect data in an organized manner and easily
convert it into readily useful information. With a check sheet, you can:
 Collect data with minimal effort.
 Convert raw data into useful information.
 Translate opinions about what is happening into facts about what is happening.
In other words, “I think the problem is . . .” becomes “The data says the problem is . . .”

How to use it:


 Clearly identify what is being observed. The events being observed should be clearly
labeled. Everyone has to be looking for the same thing.
 Keep the data collection process as easy as possible. Collecting data should not become a
job in and of itself. Simple check marks are easiest.
 Group the data. Collected data should be grouped in a way that makes the data valuable
and reliable. Similar problems must be in similar groups.
 Be creative. Try to create a format that will give you the most information with the least
amount of effort.
Check Sheet for Defective Items (Date: 23/11/2012)
Work Station   Tally of Defective Items   Total Defects
A              |||| ||||                  10
B              |||| ||                    7
C              ||                         2
D              |||                        3
E              |||| |                     6
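The tallying a check sheet captures can be mirrored in a few lines of Python; the observations below are constructed to reproduce the counts in the table above:

```python
from collections import Counter

# One entry per defective item, recorded by work station; the counts
# reproduce the check sheet above (A=10, B=7, C=2, D=3, E=6).
observations = list("A" * 10 + "B" * 7 + "C" * 2 + "D" * 3 + "E" * 6)

totals = Counter(observations)
for station in sorted(totals):
    # Print a simple tally bar and the total for each work station.
    print(f"{station}: {'|' * totals[station]}  ({totals[station]})")
print("Grand total:", sum(totals.values()))
```

Grouping the raw marks this way turns "I think station A is the problem" into "the data says station A produced 10 of 28 defects."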

Pareto Analysis
Imagine that you've just stepped into a new role as head of department. Unsurprisingly,
you've inherited a whole host of problems that need your attention.

Ideally, you want to focus your attention on fixing the most important problems. But how do
you decide which problems you need to deal with first? And are some problems caused by
the same underlying issue?

Pareto Analysis is a simple technique for prioritizing possible changes by identifying the
problems that will be resolved by making these changes. By using this approach, you can
prioritize the individual changes that will most improve the situation.

Pareto Analysis uses the Pareto Principle – also known as the "80/20 Rule" – which is the
idea that 20% of causes generate 80% of results. With this tool, we're trying to find the 20%
of work that will generate 80% of the results that doing all the work would deliver.

How to Use the Tool

Step 1: Identify and List Problems


Firstly, write a list of all the problems that you need to resolve. Where possible, talk to
clients and team members to get their input, and draw on surveys, helpdesk logs and
suchlike, where these are available.

Step 2: Identify the Root Cause of Each Problem


For each problem, identify its fundamental cause. (Techniques such as Brainstorming,
the 5 Whys, Cause and Effect Analysis, and Root Cause Analysis will help with this.)

Step 3: Score Problems


Now you need to score each problem. The scoring method you use depends on the sort
of problem you're trying to solve.
For example, if you're trying to improve profits, you might score problems on the basis
of how much they are costing you. Alternatively, if you're trying to improve customer
satisfaction, you might score them based on the number of complaints eliminated by
solving the problem.

Step 4: Group Problems Together by Root Cause


Next, group problems together by cause- For example, if three of your problems are
caused by lack of staff, put these in the same group.

Step 5: Add up the Scores for Each Group


You can now add up the scores for each cause group. The group with the top score is
your highest priority, and the group with the lowest score is your lowest priority.

Step 6: Act
Now you need to deal with the causes of your problems, dealing with your top-priority
problem, or group of problems, first.
Keep in mind that low scoring problems may not even be worth bothering with - solving
these problems may cost you more than the solutions are worth.

Note: While this approach is great for identifying the most important root cause to deal
with, it doesn't consider the cost of doing so. Where costs are significant, you'll need to use
techniques such as Cost/Benefit Analysis, and use IRRs and NPVs to determine which
changes you should implement.

Pareto Analysis Example


Jack has taken over a failing service center, with a host of problems that need resolving. His
objective is to increase overall customer satisfaction.
He decides to score each problem by the number of complaints that the center has received
for each one. (In the table below, the second column shows the problems he has listed in
step 1 above, the third column shows the underlying causes identified in step 2, and the
fourth column shows the number of complaints about each problem identified in step 3.)
Pareto Analysis for Service Center
#   Problem (Step 1)                                            Cause (Step 2)                       Score (Step 3)
1   Phones aren't answered quickly enough.                      Too few service center staff.        15
2   Staff seem distracted and under pressure.                   Too few service center staff.        6
3   Engineers don't appear to be well organized. They           Poor organization and preparation.   4
    need second visits to bring extra parts.
4   Engineers don't know what time they'll arrive. This         Poor organization and preparation.   2
    means that customers may have to be in all day
    for an engineer to visit.
5   Service center staff don't always seem to know              Lack of training.                    30
    what they're doing.
6   When engineers visit, the customer finds that the           Lack of training.                    21
    problem could have been solved over the phone.

Jack then groups problems together (steps 4 and 5). He scores each group by the number of
complaints, and orders the list as follows:
Lack of training (items 5 and 6) – 51 complaints.
Too few service center staff (items 1 and 2) – 21 complaints.
Poor organization and preparation (items 3 and 4) – 6 complaints.
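Steps 4 and 5 (grouping by root cause and totaling the scores) can be sketched in Python using Jack's numbers from the table above (problem wordings abbreviated):

```python
# Jack's problems from the table above (wordings abbreviated), each with
# its root cause and complaint score.
problems = [
    ("Phones aren't answered quickly enough",        "Too few service center staff", 15),
    ("Staff seem distracted and under pressure",     "Too few service center staff",  6),
    ("Engineers need second visits for extra parts", "Poor organization and preparation", 4),
    ("Engineers can't say when they'll arrive",      "Poor organization and preparation", 2),
    ("Staff don't always know what they're doing",   "Lack of training", 30),
    ("Problems solvable by phone still get visits",  "Lack of training", 21),
]

# Steps 4 and 5: group by cause and add up the scores per group.
group_scores = {}
for _problem, cause, score in problems:
    group_scores[cause] = group_scores.get(cause, 0) + score

# Highest total first: this ordering is what the Pareto chart displays.
for cause, total in sorted(group_scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{cause}: {total} complaints")
```

The highest-scoring group, lack of training with 51 complaints, is the top priority, matching the grouped list above.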

[Bar chart of complaints by cause: Lack of Training (51), Too Few Service Center Staff (21), Poor Organization & Preparation (6)]

Pareto Diagram from Service Centre Failure


As you can see from the Pareto diagram above, Jack will get the biggest benefits by providing staff with
more training. Once this is done, it may be worth looking at increasing the number of staff in
the call center. It's possible, however, that this won't be necessary: the number of
complaints may decline, and training should help people to be more productive.

By carrying out a Pareto Analysis, Jack is able to focus on training as an issue, rather than
spreading his effort over training, taking on new staff members, and possibly installing a new
computer system to help engineers be more prepared.

2.4 QUANTITATIVE TOOLS


Decision making is a human process: because decisions are made under conditions of uncertainty,
they require human judgment. Sometimes, that judgment can be based upon our "gut feeling,"
which ideally arises from learning from experience. For most simple decisions, this "gut
feeling" is adequate. However, with increasing uncertainty and/or a growing number of
independent variables, decisions become more complex and our intuitive judgments become less
reliable. At that point, we require reliable methods and tools to help us make wiser choices
between alternate courses of action.

Decision Making Matrix


A decision-making matrix (table 2.4) can be an effective way to choose between, or to rank
competing alternatives.
The process for using the decision-making matrix can be as follows:
 Identify viable alternatives.
 Identify criteria that are to be used for evaluating the alternatives.
 Assign relative weight (1-10, with 10 being the most important) to each criterion.
Note that more than one criterion may have the same weight.

 Score each alternative for each criterion (again 1-10, with 10 being the best.)

 For each alternative, multiply the score with the corresponding criteria weight and
add these multiples in the last column. This is the total score for each alternative.

 Choose the preferred alternative based on the total score.


Decision Making Matrix
Alternative   Criteria (each with a relative weight, 1-10):  A   B   C   D   E   Total Score / Rank
1             (score for each criterion, 1-10)               …   …   …   …   …   Sum of (Weight × Score)
2
3
4
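The scoring procedure above can be sketched in Python. The criteria weights, alternatives, and scores below are hypothetical, invented only to show the arithmetic:

```python
# Hypothetical criteria weights and alternative scores (1-10 scales, as in
# the procedure above; names and numbers are invented for illustration).
weights = {"A": 9, "B": 7, "C": 5, "D": 3, "E": 2}

alternatives = {
    "Alt 1": {"A": 8, "B": 6, "C": 7, "D": 5, "E": 9},
    "Alt 2": {"A": 6, "B": 9, "C": 5, "D": 8, "E": 4},
    "Alt 3": {"A": 9, "B": 5, "C": 8, "D": 4, "E": 6},
}

def total_score(scores):
    """Sum over criteria of (criterion weight x alternative's score)."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank alternatives by total score; the top-ranked one is preferred.
ranked = sorted(alternatives, key=lambda a: total_score(alternatives[a]), reverse=True)
for alt in ranked:
    print(alt, "->", total_score(alternatives[alt]))
```

Note that a small change in a single weight can reorder the ranking, which is exactly where the sensitivity analysis described next becomes useful.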

Sensitivity Analysis
Sensitivity analysis answers the question: “how sensitive is the end result to changes in various
factors affecting it?” Accordingly, sensitivity analysis can help us to decide between alternate
courses of action based on those factors. For example, one set of data may suggest the validity
of a particular decision but, because of the high sensitivity to changes in one or more factors,
another decision may become more appealing if those factors are considered in the decision-
making process. Sensitivity analysis can be used effectively in combination with other
quantitative methods when input data is questionable.

Monte Carlo Simulation


The Monte Carlo Simulation was developed by John Von Neumann on the Manhattan Project
during World War II. It provides a range of values and their probabilities for achieving the result.
This is useful when we are deciding under conditions of uncertainty because it provides a
probability associated with the desired result.
It is based on a simulation model of the form:
y = f(x1, x2, x3, …), where:
y is the dependent variable, the result, and
x1, x2, x3, etc. are the independent variables, or factors affecting the result.
For each independent variable, we can establish a range of possible values and the probability
within that range. Then, we can calculate the range, and the probability distribution within the
range, for the end result, by entering the ranges and the probabilities for each independent
variable and choosing one value and its probability per iteration, for each independent variable,
literally thousands of times.
In simpler terms, we can calculate the probability of completing a project prior to a certain date
by using the schedule network as the simulation model and entering activity durations as a
range of values and probabilities. Similar results can be obtained for project cost estimates.
Because of the large number of iterations, independent variables, their values and the value
probabilities, Monte Carlo simulation is virtually impossible without computers with
substantial CPUs. These have become available only in the last 10-15 years, and we are seeing
more and more use of this method of decision making.
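The project-schedule use case described above can be sketched with Python's standard random module. The three sequential activities, their triangular duration ranges, and the 18-day deadline are all invented for illustration:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical example: three sequential project activities, each with an
# (optimistic, most likely, pessimistic) duration in days.
activities = [(3, 5, 9), (2, 4, 7), (4, 6, 12)]
deadline = 18

trials = 100_000
hits = 0
for _ in range(trials):
    # random.triangular(low, high, mode) draws one duration per activity.
    duration = sum(random.triangular(lo, hi, mode) for lo, mode, hi in activities)
    if duration <= deadline:
        hits += 1

print(f"P(project finishes within {deadline} days) is roughly {hits / trials:.0%}")
```

Each trial is one pass through the schedule network with randomly drawn durations; the fraction of trials that meet the deadline approximates the probability of finishing on time.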

2.5 WORKER & MACHINE RELATIONSHIPS


Multiple activity charts
Chart on which activities of workers, product and machines are recorded on a common time scale to
show their relationships (a variety of charts with different names, but all do the same thing):
Worker-machine process chart (man-machine chart): seeking the most effective relationship
between operator(s) and machine(s), i.e. minimum total % idle time.
a) worker/operator on left
b) machine on right
c) common vertical time scale
d) solid line (dark) = productive time
e) break in line (white) = idle time
f) broken/dotted line (shaded) = loading/unloading
g) summary
Gang process chart (multi-man chart): multiple activity chart applied to a group of workers,
seeking most effective relationship between several workers, same principles as above
Worker-machine relationships can be of three types:
 Synchronous servicing
 Random (asynchronous) servicing
 Combination of both - ‘real-life’

Synchronous servicing: the case with a fixed machine cycle time in which the worker loads/unloads
the machine at regular intervals (both worker and machine are utilized simultaneously).
Ideally, several machines can be serviced by one operator (machine coupling).
In an ideal case:
N = (R + m) / R
Where: N = number of machines that can be serviced by one operator
R = loading/unloading time per machine = 1
m = machine cycle time (automatic run time) = 2
Operator vs Machine Load (R = 1, m = 2): the operator loads machine #1, then machine #2,
then machine #3; by the time machine #3 has been loaded, machine #1 has completed its
automatic cycle, so the operator returns to it and the cycle repeats with no idle time.
(Source: Automation and alienation: Jon M. Shepard: Business & Economics, 1975)

In real life, the operator will be able to service fewer machines because of w = worker (walk)
time:
N < (R + m) / (R + w)
Also, N is typically non-integer; then a decision (typically an economic one, lowest unit cost)
must be made regarding who (worker vs. machine) will be idle.
EX: Consider a walk time of 0.1 min (with R = 1.0 and m = 2.0). Also, the operator earns
$10.00/hr and the machine costs $20.00/hr to run. Then N = (R + m)/(R + w) = 3/1.1 = 2.7
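The economic choice in the example (assign two or three machines per operator) can be worked through as follows. This is a minimal sketch of one common unit-cost formulation, assuming machine-paced cycles when the operator has slack and operator-paced cycles otherwise:

```python
# Figures from the example above: R = load/unload time, m = machine cycle,
# w = walk time (all in minutes); operator earns $10/hr, machine costs $20/hr.
R, m, w = 1.0, 2.0, 0.1
op_rate, mach_rate = 10.0, 20.0          # $/hr

n_ideal = (R + m) / (R + w)              # 3 / 1.1 = 2.7, so compare N = 2 and N = 3

def unit_cost(n):
    """Cost per unit when one operator tends n machines (simple model:
    machine-paced if n <= n_ideal, operator-paced otherwise)."""
    if n <= n_ideal:
        cycle = R + m                    # machines govern; operator is idle part of the time
    else:
        cycle = n * (R + w)              # operator governs; machines wait to be served
    cost_per_min = (op_rate + n * mach_rate) / 60
    return cost_per_min * cycle / n      # n units come off the machines per cycle

for n in (2, 3):
    print(f"N = {n}: unit cost = ${unit_cost(n):.3f}")
print("Lower-cost assignment: N =", min((2, 3), key=unit_cost))
```

Under this model the two-machine assignment works out cheaper (about $1.25 vs. $1.28 per unit), i.e. with these rates it is better to let the operator be idle than the machines.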

2.6 KEYWORDS
Problem solving tools-Its purpose is to identify, correct and eliminate recurring problems, and it is
useful in product and process improvement. It establishes a permanent corrective action based on
statistical analysis of the problem (when appropriate) and focuses on the origin of the problem by
determining its root causes.

Monte Carlo simulation-A problem-solving technique used to approximate the probability of certain
outcomes by running multiple trial runs, called simulations, using random variables.

Pareto analysis- Pareto Analysis is a simple technique for prioritizing possible changes by identifying
the problems that will be resolved by making these changes.

Why-Why analysis-It is a method of questioning that leads to the identification of the root cause(s)
of a problem

Cause and effect analysis-This diagram is a causal diagram created by Kaoru Ishikawa (1968) that
shows the causes of a specific event. Common uses of the Ishikawa diagram are product design and
quality defect prevention, to identify potential factors causing an overall effect.
PDCA cycle-PDCA (plan–do–check–act or plan–do–check–adjust) is an iterative four-step
management method used in business for the control and continuous improvement of processes
and products.

Multi activity chart- Multiple Activity Charts illustrate parallel activities and time relationships
between two or more 'resources' i.e. workers, plant/equipment or materials. They are useful where
the interactions between workers, plant/equipment and materials repeat in periodic cycles, e.g.
concreting operations on construction sites.

2.7 SUMMARY

Cause-focused brainstorming and decision-making incorporate both data and experience to identify
and resolve issues. The C&E Matrix uses the voice of the customer (and experience and collected
data) to determine where we should focus our improvement efforts.

The Criteria-based Decision Matrix allows team members to debate and quantify the importance of
selected criteria and identify the best possible option after reviewing how each alternative
addresses the issues and concerns. The 5 Whys can be used individually or as a part of the fishbone
diagram.
UNIT 3 OPERATION ANALYSES
Objectives
After going through this unit,you will be able to:
 schedule the two jobs in an appropriate sequence.
 analyze the product design for best suitability according to market.
 state application of various tools and techniques used in operations.
 design the specifications and tolerances for a product or process.
 recognize the critical factors in material handling.
Structure
3.1 Introduction
3.2 Operations purpose
3.3 Part/Product Design
3.4 Specification and Tolerance
3.5 Manufacturing and Process Sequencing
3.6 Setup and Tools
3.7 Material Handling
3.8 Work Design
3.9 Keywords
3.10 Summary

3.1 INTRODUCTION

An operation analysis is a procedure used to determine the efficiency of various aspects of a


business operation. Most operation analysis reports include a scrutiny of a company's
production methods, material costs, equipment implementation and workplace conditions.
Professional consultants are often brought in from outside a company to perform an unbiased
operational analysis. Performing an analysis of operations provides a company with hard data
concerning waste issues and operational risks. Many companies use the information from an
operation analysis to decide on what changes need to be made to improve operations.
The process of operation analysis typically begins with a period of observation. In this initial
stage, the person or group performing the analysis watches and takes detailed notes on all
day-to-day operations of the business.
customer service may be timed or tracked during the observational period to produce statistical
information for the report. Employees are commonly asked to perform tasks as they normally
would and try to ignore the presence of the evaluators. On-site observation may last a day or
several weeks, depending on the size of the company.

Companies and organizations making products and delivering them for profit or not for profit rely
on a handful of processes to get their products manufactured properly and delivered on time.
Each of these processes acts as an operation for the company, and to the company each is
essential. That is why managers find operations management so appealing. We begin this section
by looking at what operations are. Operations strategy provides an overall direction that serves
as the framework for carrying out all the organization's functions.

Have you ever imagined a car without a gear or the steering wheel? What matters most to you is
driving the vehicle from one location to another for whatever purpose you wish, but that is only
made possible with each and every part of the car working together and attached.

3.2 OPERATIONS PURPOSE


Operations are the central functions of all organizations whether producing goods or services, or
in the private, public, or voluntary sectors. The function of the production and operations
department(s) in a company is to take inputs and fashion them into outputs for customer use.
Inputs can be concrete physical objects, data driven, or service driven. The outputs can be
intended for consumer use or for business use. The goal of production and operations is to
create a product in the most economic and efficient way possible. We can take here the
example of production operations in a manufacturing company; the components are mentioned
below.

Creation
The foundation of every production and operations department is the creation of goods or
services. Traditionally, production included the physical assembly of goods, but production
can also include data-based goods such as websites, analysis services and order processing
services.
Customer Service
In many companies, the production and operations department contains the customer-facing
customer service department that addresses the needs of the customer after the purchase
of goods or services. The support function usually is served through phone, online or mail-
based support.
Profit
The main function of the production and operations department is to produce a product or
service that creates profit and revenue for the company. Actualization of profit requires
close monitoring of expenses, production methodology and cost of inputs.
Evaluation
Every production and operations department must function as a self-evaluating entity that
monitors the quality, quantity, and cost of goods produced. Analysis usually takes the form
of statistical metrics, production evaluation and routine reporting.
Tasks
Common task functions in a production and operation department include forecasting,
scheduling, purchasing, design, maintenance, people management, flow analysis, reporting,
assembly, and testing.
Fulfillment
Production and operations departments typically function as a fulfillment entity that
ensures the timely delivery of the output from production to customers. Traditionally
fulfillment is shipping and mailing based function but can be electronically based in a data-
driven product.
Analysis
Standard analysis functions in a production and operations department include critical path
analysis, stock control analysis, utilization analysis, capacity analysis, just-in-time analysis of
inputs, quality metrics analysis and break-even analysis.

3.3 PART DESIGN


You have heard this often, usually as "Keep it simple!" But for good designers, just keeping it
simple is not enough. If you only just keep things simple, you will still have complicated designs.
You must simplify, simplify, simplify. So, what makes a design simple? Can your intuition alone
judge simplicity? Will you know it when you see it? The less thought and the less knowledge a
device requires, the simpler it is. This applies equally to its production, testing and use. Use these
criteria - how much thought, how much knowledge it requires - to judge your designs, best done
by comparing one solution to another. Of course, it may take lots of thought and knowledge to get
to a design requiring little of either; that is design. Product design is the process of creating a
new product to be sold by a business to its customers.

It is the efficient and effective generation and development of ideas through a process that
leads to new products. In a systematic approach, product designers conceptualize
and evaluate ideas, turning them into tangible products. The product designer's role is to
combine art, science, and technology to create new products that other people can use. Their
evolving role has been facilitated by digital tools that now allow designers to communicate,
visualize, and analyze ideas in a way that would have taken greater manpower in the past.

Product design is sometimes confused with industrial design and has recently become a broad
term inclusive of service, software, and physical product design. Industrial design is concerned
with bringing artistic form and usability, usually associated with craft design, together to mass
produce goods.

Mechanical part design constitutes a vast majority of industrial product-related design and
manufacturing work throughout the different branches of engineering. Often, if not always, it is
the first step in product manufacturing, often coupled with and done simultaneously with
preliminary product-related calculations. For example, when designing a mechanical gearbox
(gear-chain) system, the whole design work consists of a simultaneous sizing and material
selection procedure together with the mechanical design of each gear and axle part in 2D draft as
well as 3D modeling.

3.4 SPECIFICATIONS & TOLERANCE


Specifications
Specifications are an important part of the design and construction process. Specifications
are a valuable tool during the design stage, are part of the contract documentation, and
have a key role in the efficiency of project fulfillment.

A specification is:
 a tool for conveying the required quality of products (both fabric and services) and
their installation for a particular project; and
 a means of drawing together all the relevant information and standards that apply
to the work to be constructed.

The specification is a set of instructions from owner/client to constructors, prepared by


designers, and should be written in a form that is easily understood. Specifications must be
arranged in a clear structure so specific information is easy to find. The language must be
clear, concise, complete, and correct. The writing style should be unambiguous with minimal
legal phraseology or stilted formal terms. The content must be current and authoritative.

The best guarantee that the project specification will meet these high standards is to base it
on a sound master specification.

A specification is an explicit set of requirements to be satisfied by a material, product, or


service. Should a material, product or service fail to meet one or more of the applicable
specifications, it may be referred to as being out of specification.

Under mass production conditions, mechanical parts cannot be manufactured to exact


dimensions or geometric perfection. A tolerance specifies the range of imperfections in size
and shape that can be permitted for a part to be acceptable for assembly and use.
Dimensional and geometric tolerances describe all the geometric ways that variations in
sizes and shapes are allowed and yet permit each part or assembly to function as intended.
National and international standards (ASME Y14.5, ISO 1101) have been established to
ensure proper communication of geometric dimensions and tolerances (GD&T).
But these standards are based on ad-hoc conventions collected from years of engineering
practice rather than on mathematical principles. This leads to two major problems: (1)
miscommunication and misinterpretation of design specifications by manufacturers,
inspection departments, or suppliers, and (2) unavailability of full three-dimensional analysis
of tolerance stackups involving all types of dimensional and geometric variations.
Comprehensive 3D analysis of stack-ups is only possible if a mathematical model of such
variations exists. The attempt to “retrofit” an “official” math model to the tolerance
standard has not gone far enough. Researchers have proposed replacing the standard
completely, a proposition unacceptable to industry because the valuable empirical
knowledge contained in the current standard would be lost.

Tolerances
As products have become increasingly sophisticated and geometrically more complex, the
need to better specify regions of dimensional acceptability has become more apparent.
Traditional tolerance schemes are limited to translational (linear) accuracies appended to a
variety of features. In many cases bilateral, unilateral, and limiting tolerance
specifications are adequate for specifying both location and feature size.
Unfortunately, adding a translation zone or region to all features is not always the best
alternative. In many cases in order to reduce the acceptable region, the tolerance
specification must be reduced to ensure that mating components fit properly.

The result can be a higher than necessary manufacturing cost. In this unit, we will
introduce geometric tolerances, how they are used, and how they are interpreted.

In 1973, the American National Standards Institute, Inc. (ANSI) introduced a system for
specifying dimensional control called Geometric Dimensioning and Tolerance (GD&T). The
system was intended to provide a standard for specifying and interpreting engineering
drawings and was referred to as ANSI Y14.5 – 1973. In 1982, the standard was further
enhanced, and a new standard, ANSI Y14.5 – 1982, was born. In 1994, the standard evolved
further to include formal mathematical definitions of geometric dimensioning and
tolerancing and became ASME Y14.5 – 1994. A diameter symbol may be placed in front of the
tolerance value to denote that the tolerance is applied to the whole diameter; the meaning
of the modifier will be discussed later and is omitted here.

Following the modifier there may be from zero to several datums (Figure 3-1 and Figure 3-2). For
tolerances such as straightness, flatness, roundness, and cylindricity, the tolerance is
internal; no external reference feature is needed. In this case, no datum is used in the
feature control frame.

A datum is a plane, surface, point(s), line, axis, or other information source on an object.
Datums are assumed to be exact, and from them, dimensions like the reference location
dimensions in the conventional drawing system can be established. Datums are used for
geometric dimensioning and frequently imply fixturing location information. The correct use
of Datums can significantly affect the manufacturing cost of a part. Figure 3-3 illustrates the
use of Datums and the corresponding fixturing surfaces. The 3-2-1 principle is a way to
guarantee the work piece is perfectly located in the three-dimensional space. Normally
three locators (points) are used to locate the primary datum surface, two for secondary
surface, and one for tertiary surface. After the work piece is located, clamps are used on the
opposing side of the locators to immobilize the work piece.
(Geometric tolerance symbols)

(3-2-1 principle in datum specification)

Source: ASME Y14.5M-1994 GD&T (ISO 1101, geometric tolerance; ISO 5458, positional
tolerance; ISO 5459, Datums; and others)
Symbol modifiers
Symbolic modifiers are used to clarify implied tolerances (Figure 3-1). There are three
modifiers that are applied directly to the tolerance value: Maximum Material Condition
(MMC), Regardless of Feature Size (RFS), and Least Material Condition (LMC). RFS is the
default: if no modifier symbol is given, RFS applies. MMC can be used
to constrain the tolerance of the produced dimension and the maximum designed
dimension. It is used to maintain clearance and fit. It can be defined as the condition of
a part feature where the maximum amount of material is contained. For example,
maximum shaft size and minimum hole size are illustrated with MMC as shown in Figure
3-5. LMC specifies the opposite of the maximum material condition. It is used for
maintaining interference fits and, in special cases, to restrict the minimum material to
eliminate vibration in rotating components. MMC and LMC can be applied only when
both of the following conditions hold:
 Two or more features are interrelated with respect to the location or form (e.g.,
two holes). At least one of the features must refer to size.

 MMC or LMC must directly reference a size feature.


M - Maximum Material Condition (MMC): used for assembly
Ø - Diametrical tolerance zone
L - Least Material Condition (LMC): less frequently used
P - Projected tolerance zone
RFS - Regardless of Feature Size (implied unless specified)

Modifiers and applicability


Maximum material diameter and least material diameter
When MMC or LMC is used to specify the tolerance of a hole or shaft, it implies that the tolerance
specified is constrained by the maximum or least material condition as well as some other
dimensional feature(s). For MMC, the tolerance may increase when the actual produced feature size
is larger (for a hole) or smaller (for a shaft). Because the increase in the tolerance is
compensated by the deviation of the produced size, the combined size error and geometric
tolerance error still leave the hole no smaller than the anticipated smallest hole. Figure 3-6
illustrates the allowed tolerance for a given produced hole size. The allowed tolerance is the
actual acceptable tolerance limit; it varies as the size of the produced hole changes. The
specified tolerance is the value stated in the feature control frame.

Allowed tolerances under the produced hole size


Source: ASME Y14.5M-1994 GD&T (ISO 1101, geometric tolerance; ISO 5458 positional tolerance;
ISO 5459 Datums; and others)
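The "bonus tolerance" behavior under MMC can be sketched numerically. The sizes and tolerance values below are invented for illustration; they are not figures from the standard:

```python
# Sketch of bonus tolerance under MMC for a hole (illustrative values only).
# For a hole, MMC is the smallest permitted size; as the produced hole grows
# beyond MMC, the allowed geometric tolerance grows by the same amount.
def allowed_tolerance(specified_tol, mmc_size, produced_size):
    bonus = produced_size - mmc_size  # departure from MMC (>= 0 for an in-spec hole)
    return specified_tol + max(0.0, bonus)

# Hypothetical hole: MMC (minimum) size 10.00 mm, specified positional tolerance 0.10 mm
for size in (10.00, 10.02, 10.05):
    print(size, round(allowed_tolerance(0.10, 10.00, size), 3))
```

At MMC the allowed tolerance equals the specified 0.10 mm; a hole produced 0.05 mm larger gains 0.05 mm of bonus tolerance, for 0.15 mm in total.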

3.5 MANUFACTURING SEQUENCE & PROCESS SEQUENCING

The theory of sequencing and scheduling is the study of allocating resources over time to perform a
collection of tasks. Such problems occur under widely varying circumstances. Various interpretations
are possible. Tasks and resources can stand for jobs and machines in a manufacturing system,
patients and hospital equipment, class and teachers, ships and dockyards, programs and computers,
or cities and (traveling) salesmen.

Sequencing and scheduling are concerned with the optimal allocation of resources to activities over
a period of time, which could be finite or infinite. Of obvious practical importance, the field has
been the subject of extensive research since the early 1950s, and an impressive amount of literature has been
created. The terminology ‘flow shop’ is used to describe a serial production system in which all jobs
flow along the same route. A more general case arises when some jobs are not processed on
some machines: these jobs simply pass through those machines without spending any time on
them. To generalize the situation, we can assume that all the jobs must flow through all the
machines but have zero processing time on the machines that are not in their routing matrix.
The static flow-shop sequencing problem is that of determining the best sequence of jobs on
each machine in the flow shop. The class of shops in which all the jobs have the same sequence
on all the machines is called a 'permutation' flow shop. In this case the problem reduces to
sequencing the jobs on the first machine only, because of the extra constraint of an identical
job sequence at each machine. Ironically, this problem is somewhat harder to address than the
more general case, even though it might seem to be a small part (a subproblem) of the general
case. Various
objectives can be used to determine the quality of the sequence, but most of the research considers
the minimization of make span (i.e., the total completion time of the entire list of jobs) as the
primary objective. Other objectives that can be found in the literature of flow shops are flow time
related (e.g., minimal mean flow time), due-date related (e.g., minimal maximum lateness), and cost
related (e.g., minimal total production cost).

While the Gantt charts are useful for tracking job loading, they do not have the sophistication to
help management determine what job order priorities should be. Sequencing is a process that
determines the priorities job orders should have in the manufacturing process. Sequencing results in
priority rules for job orders.

What are priority rules?

The basic function of priority rules is to provide direction for developing the sequence in which jobs
should be performed. This assists management in ranking job loading decisions for manufacturing
centers. There are several priority rules which can be applied to job loading. The most widely used
priority rules are:
DD - Due Date: the job having the earliest due date has the highest priority.
FCFS - First Come, First Served: the first job reaching the production center is processed first.
LPT - Longest Processing Time: jobs having the longest processing time have the highest priority.
PCO - Preferred Customer Order: a job from a preferred customer receives the highest priority.
SPT - Shortest Processing Time: the job having the shortest processing time has the highest priority.
EXAMPLE: Using the data contained in the table Job Processing Data, it is necessary to schedule
orders according to the priority rules of Due Date (DD), First Come, First Served (FCFS), Longest
Processing Time (LPT), Preferred Customer Order (PCO), and Shortest Processing Time (SPT).
Job Processing Data

Job   Preferred Customer Status (1 = highest)   Processing Time (days)   Due Date (days)
A     3                                          7                        9
B     4                                          4                        6
C     2                                          2                        4
D     5                                          8                        10
E     1                                          3                        5
(Source: Jae K. Shim, Joel G. Siegel, Abraham J. Simon, Business and Economics, 2004)
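Applying these rules to the table is mechanical. A minimal Python sketch (the FCFS arrival order A through E is an assumption for illustration, since the table does not give arrival times):

```python
# (preferred-customer status, processing time in days, due date in days)
# from the Job Processing Data table
jobs = {"A": (3, 7, 9), "B": (4, 4, 6), "C": (2, 2, 4), "D": (5, 8, 10), "E": (1, 3, 5)}

dd   = sorted(jobs, key=lambda j: jobs[j][2])    # earliest due date first
lpt  = sorted(jobs, key=lambda j: -jobs[j][1])   # longest processing time first
pco  = sorted(jobs, key=lambda j: jobs[j][0])    # preferred-customer status, 1 = highest
spt  = sorted(jobs, key=lambda j: jobs[j][1])    # shortest processing time first
fcfs = list(jobs)                                # assumed arrival order A..E

print("DD :", dd)    # ['C', 'E', 'B', 'A', 'D']
print("LPT:", lpt)   # ['D', 'A', 'B', 'E', 'C']
print("PCO:", pco)   # ['E', 'C', 'A', 'B', 'D']
print("SPT:", spt)   # ['C', 'E', 'B', 'A', 'D']
```

Note that with this particular data the DD and SPT sequences coincide; that is a property of the numbers, not of the rules.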

Priority Rules and Job Sequencing Outcomes


The critical ratio method assigns a priority that is a continually updated ratio between the time
remaining until due date and the required job processing time. When used in conjunction with other
jobs waiting to be processed, it is a relative measure of critical job order priority.

The critical ratio gives the highest priority to jobs that must be done to maintain a predetermined
shipping schedule. Jobs that are falling behind a shipping schedule receive a ratio of less than 1,
while a job receiving a critical ratio greater than 1 is ahead of schedule and is less critical. A job
receiving a critical ratio score of 1.0 is precisely on schedule.

The critical ratio is calculated by dividing the remaining time until the date due by the remaining
process time using the following formula:
critical ratio = remaining time / remaining process time = (due date - today's date) / days of
remaining process time
EXAMPLE: On day 16, four jobs, A, B, C, and D, are on order for Ferguson's Kitchen Installation
Service:

Jobs on Order
Job Due Date Days of Remaining Process Time
A 27 8
B 34 16
C 29 15
D 30 14
Using this data, the critical ratios and priority order are computed.

Critical Ratios and Priority


Job Critical Ratio Priority
A (27-16)/8=1.38 4
B (34-16)/16=1.13 3
C (29-16)/15=0.87 1
D (30-16)/14=1 2
(Source: Jae K. Shim, Joel G. Siegel, Abraham J. Simon, Business and Economics, 2004)

Job C has a critical ratio of less than one, indicating it has fallen behind schedule; therefore, it
gets the highest priority. Job D is exactly on schedule, while jobs B and A have progressively
higher critical ratios, indicating they have some slack time; this gives them correspondingly
lower priorities.
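The critical ratios and the resulting priority order for the Ferguson example can be reproduced directly from the formula:

```python
# Day-16 jobs: {job: (due date, days of remaining process time)}
jobs = {"A": (27, 8), "B": (34, 16), "C": (29, 15), "D": (30, 14)}
today = 16

ratios = {j: (due - today) / proc for j, (due, proc) in jobs.items()}
# The smallest critical ratio is furthest behind schedule -> highest priority
priority = sorted(ratios, key=ratios.get)

for j in priority:
    print(j, round(ratios[j], 3))
# C 0.867, D 1.0, B 1.125, A 1.375
```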

Johnson's rule for scheduling N jobs in two production centers


Johnson's rule provides an optimum prioritization based on minimum processing time when N
jobs must be sequentially processed in two production centers. The net result of utilizing
Johnson's rule is a minimization of total idle time at a production center.

The procedure for using Johnson's rule is the following:

1. List the processing times for all orders at each processing center.
2. Find the job having the shortest processing time. If that time occurs at the first processing
center, schedule the job first; if it occurs at the second processing center, schedule it last.
3. Once a job is scheduled, remove it from further consideration.
4. Schedule the remaining jobs by repeating rules 2 and 3.
Example
Five job orders must be sequentially processed through two processing centers. The orders
need to be sequenced to achieve minimum idle time.

Processing time for jobs (Hrs)


Processing Time for Jobs in Hours
Job Processing Centre 1 Processing Centre 2
A 11 3
B 6 8
C 9 4
D 2 9
E 10 6
(Source: Jae K. Shim, Joel G. Siegel, Abraham J. Simon, Business and Economics, 2004)

Now it is necessary to sequence the jobs starting with the smallest processing time. The smallest job
is job D in Processing Center 1. Since it is in Processing Center 1, it
is sequenced first and then eliminated from further consideration.

The second smallest processing time is A in Processing Center 2. It is placed last, since it is at
Processing Center 2, and eliminated from further consideration.

D _ _ _ A
The next smallest processing time is job C in Processing Center 2. It is placed next to last.

D _ _ C A

For the next smallest processing time, there is a tie between job B in Processing Center 1 and job E
in Processing Center 2. B is placed in the next earliest position after job D, and job E is placed
directly after B.

D B E C A
The resulting sequential processing times are:
Table 3.5 Sequential processing times

Job sequence          D    B    E    C    A
Processing Centre 1   2    6    10   9    11
Processing Centre 2   9    8    6    4    3

In Processing Center 2 the five jobs are completed in 41 hours, and there are 11 hours of idle time.
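The four-step procedure can be sketched as a short function. With the example's data it reproduces the D-B-E-C-A sequence and the 41-hour completion time; note that the tie between jobs B and E is broken in favor of Processing Center 1 here, matching the choice made in the worked example:

```python
def johnsons_rule(jobs):
    """jobs: {name: (time at center 1, time at center 2)} -> processing sequence."""
    front, back, remaining = [], [], dict(jobs)
    while remaining:
        # Smallest processing time over all remaining jobs and both centers;
        # the (time, center) key breaks ties in favor of center 1.
        job, center = min(((j, c) for j in remaining for c in (0, 1)),
                          key=lambda jc: (remaining[jc[0]][jc[1]], jc[1]))
        if center == 0:
            front.append(job)    # shortest time at center 1 -> schedule as early as possible
        else:
            back.insert(0, job)  # shortest time at center 2 -> schedule as late as possible
        del remaining[job]
    return front + back

def completion_time(seq, jobs):
    t1 = t2 = 0
    for j in seq:
        t1 += jobs[j][0]               # center 1 works continuously
        t2 = max(t2, t1) + jobs[j][1]  # center 2 must wait for the job to arrive
    return t2

jobs = {"A": (11, 3), "B": (6, 8), "C": (9, 4), "D": (2, 9), "E": (10, 6)}
seq = johnsons_rule(jobs)
print(seq, completion_time(seq, jobs))  # ['D', 'B', 'E', 'C', 'A'] 41
```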

3.6 SETUP & TOOLS


SMED
SMED is the term used to represent the Single Minute Exchange of Die or setup time that can be
counted in a single digit of minutes. SMED is often used interchangeably with “quick
changeover”. SMED and quick changeover are the practice of reducing the time it takes to
change a line or machine from running one product to the next. The need for SMED and quick
changeover programs is more popular now than ever due to increased demand for product
variability, reduced product life cycles and the need to significantly reduce inventories.
KANBAN
Set up your Kanban system. Kanban is the Japanese word for “card,” “ticket,” or “sign,” and is a
tool for managing the flow and production of materials in a Toyota-style “pull” production
system. Closely related is the andon, a visual control device in a production area that alerts
workers to defects, equipment abnormalities, or other problems using signals such as lights and
audible alarms.
KAIZEN
Kaizen is the lean manufacturing term for continuous improvement and was originally used to
describe a key element of the Toyota Production System. In use, Kaizen describes an
environment where companies and individuals proactively work to improve the manufacturing
process.
OVERALL EQUIPMENT EFFECTIVENESS (OEE)
OEE is an abbreviation for the manufacturing metric Overall Equipment Effectiveness. OEE
considers the various subcomponents of the manufacturing process – Availability, Performance
and Quality. After the various factors are considered the result is expressed as a percentage. This
percentage can be viewed as a snapshot of the current production efficiency for a machine, line,
or cell.
OEE= Availability x Performance x Quality
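The three factors can be computed from basic production counts. A minimal sketch, using invented shift numbers purely for illustration:

```python
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    availability = run_time / planned_time                  # fraction of planned time actually running
    performance = (ideal_cycle_time * total_count) / run_time  # actual output vs. ideal pace
    quality = good_count / total_count                      # fraction of good parts
    return availability * performance * quality

# Hypothetical shift: 480 min planned, 432 min actually running,
# ideal cycle 1 min/part, 410 parts made, 402 of them good.
score = oee(480, 432, 1.0, 410, 402)
print(round(score, 4))  # 0.8375, i.e. about 84%
```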

3.7 MATERIAL HANDLING


Hazard Identification
The first element needed when assessing a material handling task is to identify the hazards and
analyze the risks involved. This can be done by using tools to help you identify problems. OSHA
300 forms or workers’ compensation claims can be reviewed to show past injuries. Checklists
can also be used to identify risks. A material handling checklist might include the following
factors to be considered:
 Weight of load being lifted
 Distance material is moved
 Object itself - Is it easy to grasp? Does it have handles?
 Personal Protective Equipment used – Does it fit properly?
 Adjustability of work surfaces
 Awkward postures
 Frequency of lift
By identifying hazards, problems can be resolved, and efforts can be directed where they are
needed most. Potential problems can also be identified and handled prior to becoming worse.
During the assessment step of the ergonomics program, it is important to involve employees.
After all, they are one of the beneficiaries of a safe workplace. Obtaining worker input allows for
a feeling of importance in the ergonomics process, and enhanced worker motivation. Employees
can participate in the hazard analysis step by completing surveys regarding discomfort felt from
performing the job. Once the risk factors and hazards have been identified, methods to reduce
or eliminate these risks can be developed. These methods are known as controls. Development
of controls is the second element of an ergonomics program.

Development of Controls
Controls can be one of three types: engineering, administrative, or personal protective
equipment. Each has its positive and negative points. When choosing an appropriate control for
a task, the activity must be analyzed as discussed previously and the problems causing the injury
risk can be alleviated using one of the three types of controls.

Engineering Controls
Engineering controls reduce or eliminate hazardous conditions by designing the job to take
account for the employee. The design of the job matches it to the employee, so the job demands
do not pose stress to the worker. Jobs can be redesigned to reduce the physical demands
needed while performing the task. Suggested Engineering controls to be used with a material
handling task include the following:
 Reduced weight lifted
 Decreased distance traveled
 Addition of handles to boxes
 Adjustable work surfaces
 Lifting/Carrying Aids- Hoists, Carts, Conveyors
Engineering controls, while usually more expensive than other controls, are the preferred
approach to preventing and controlling injuries due to material handling tasks. These permanent
design changes allow for increased worker safety while performing the task.

Administrative Controls
Administrative controls deal with work practices and policies. Changes in job rules and
procedures such as rest breaks, or worker rotation schedules are examples of administrative
controls. Administrative controls can be helpful as temporary fixes until engineering controls can
be established.

Personal Protective Equipment


Wrist supports, back belts, and gloves are all examples of equipment that serves to protect
employees. These devices may reduce the duration or intensity of exposure to injury risk factors.
NIOSH has concluded, however, that insufficient evidence exists to prove that back belts are
effective in preventing injuries while performing manual material handling tasks.

3.8 WORK DESIGN


Departments are encouraged to purchase adjustable equipment for the reasonable accommodation
of users. Some users may have special needs, such as left- handedness, color blindness, vision
impairment, etc. The goal should be flexibility to accommodate the user population so that
personnel may interface effectively with equipment. Equipment should be sized to fit the individual
user.
Purpose
Ergonomic furniture should be designed to facilitate task performance, minimize fatigue and
injury by fitting equipment to the body size, strength, and range of motion of the user. Office
furnishings, which are generally available, have adjustable components that enable the user to
modify the workstation to accommodate different physical dimensions and the requirements of
the job. Ergonomically designed furniture can reduce pain and injury, increase productivity,
improve morale, and decrease complaints. The purchase of equipment should be task specific to
eliminate:
 static or awkward posture,
 repetitive motion,
 poor access or inadequate clearance and excessive reach,
 displays that are difficult to read and understand, and
 controls that are confusing to operate or require too much force.
Therefore, furniture that is selected should be suitable for the types of tasks performed and be
adaptable to multi-purpose use. Office workstations must be designed carefully to meet the
needs of the staff and to accomplish the goals of the facility.

Design objectives should support humans to achieve the operational objectives for which they
are responsible. There are three goals to consider in human-centered design.
 Enhance human abilities
 Overcome human limitations
 Foster user acceptance.
To achieve these objectives, there are several key elements of ergonomics in the office to
consider.

Equipment - video display terminals


Software design - system design and screen design for greater usability
Workstation design - chairs, work surfaces and accessories
Environment - space planning, use of colors, lighting, acoustics, air quality and thermal factors
Training - preparing workers to deal with technology

Recommendations
To give departments guidance in selecting office furniture and setting up workstations, the
following guidelines are from the American National Standards Institute and the Environmental
Health and Safety Center. Included are diagrams and a checklist to guide you through the
process.
Keyboard Tray Adjustment Procedure
ERGONOMIC CHAIR CHECKLIST

1. Chair has wheels or castors suitable for the floor surface Yes No

2. Chair swivels Yes No

3. Backrest is adjustable for both height and angle Yes No

4. Backrest supports the inward curve of the lower back Yes No

5. Chair height is appropriate for the individual and the work surface height Yes No
6. Chair is adjusted so there is no pressure on the backs of the legs, and feet are flat on Yes No
the floor or on a foot rest

7. Chair is adjustable from the sitting position Yes No

8. Chair upholstery is a breathable fabric Yes No

9. Footrests are used if feet do not rest flat on the floor Yes No

KEYBOARD AND MONITOR CHECKLIST

1. Top surface of the keyboard space bar is no higher than 2.5 inches above the work surface Yes No

2. During keyboard use, the elbow forms an angle of 90-100 degrees with the upper arm almost Yes No
vertical, the wrist is relaxed and not bent, and wrist rests are available

3. If used primarily for text entry, keyboard is directly in front of the operator Yes No

4. If used primarily for data entry, keyboard is directly in front of the keying hand Yes No

5. Top of screen is at eye level or slightly lower Yes No

6. Viewing distance is 18-24 inches Yes No

7. Screen is free of glare or shadows Yes No

8. Images on the screen are sharp, easy to read and do not flicker Yes No

3.9 KEYWORDS

KAIZEN - Philosophy or practices that focus upon continuous improvement of processes in
manufacturing, engineering, and business management.
KANBAN - A scheduling system for lean and just-in-time (JIT) production. Kanban is a system to
control the logistical chain from a production point of view and is not an inventory control system.
SMED - Single-Minute Exchange of Die, one of the many lean production methods for reducing
waste in a manufacturing process. It provides a rapid and efficient way of converting a
manufacturing process from running the current product to running the next product.
OEE - Overall Equipment Effectiveness, a hierarchy of metrics developed by Seiichi Nakajima in
the 1960s to evaluate how effectively a manufacturing operation is utilized.
Material Handling - The movement, storage, control, and protection of materials, goods, and
products throughout the processes of manufacturing, distribution, consumption, and disposal.
Gantt Charts - A Gantt chart is a type of bar chart, developed by Henry Gantt in the 1910s, that
illustrates a project schedule, showing the start and finish dates of the terminal elements and
summary elements of a project.
Specification or Tolerance - A specification (often abbreviated as spec) is an explicit set of
requirements to be satisfied by a material, design, product, or service.

3.10 SUMMARY
Operational analysis is conducted in order to understand and develop operational processes that
contribute to overall performance and the furthering of company strategy.
Ideally, a thorough operational analysis should seek to examine a number of functional areas
including strategic planning, customer and business results, financial performance, and quality of
innovation. The objective of this process should principally be to reassess existing processes and
determine how objectives might be better met and how costs could be saved. Operational analysis is
an important part of a business’s self-assessment. The tools like SMED, KAIZEN, KANBAN and OEE
are effective tools to enhance operational performance of a firm.
UNIT 4 DESIGN OF MANUAL WORK, WORKPLACE, EQUIPMENT
& TOOLS
Objectives
After going through this unit, you will be able to:
 identify the relationship between equipment design, tool design, and worker performance;
 design a suitable workplace for an organization;
 analyze various situations regarding work holders, fixtures, hand tools, and portable power
tools;
 discuss the impact of repetitive motion injuries on an individual; and
 arrange bins and drop delivery to reduce reach and move times.
Structure
4.1 Introduction
4.2 Anthropometry & Design
4.3 Principles of work Design
4.4 Principles of work place design
4.5 Principles of machine and equipment design
4.6 Principles of Tool Design
4.7 Cumulative Trauma Disorders (CTD)
4.8 Motion Economy & Motion Study
4.9 Keywords
4.10 Summary

4.1 INTRODUCTION

The musculoskeletal system is composed of osseous (bone) tissue and muscle tissue. Both are
essential parts of the complex structure that is the body. The skeletal system has a major role in the
total structure of the body, but bones and joints alone cannot produce movement. Together,
skeletal tissue and muscle tissue are important parts of the functioning of the body as a whole.
Because of their structural and functional interrelationships, the muscular and skeletal systems are
often considered as a composite system, the musculo-skeletal system. Organs of the skeletal system
include the numerous skeletal elements (cartilages and bones) consisting of cartilaginous and
osseous connective tissue (CT) and associated CT coverings.

 About 200 skeletal elements occur in humans.


 At the gross anatomical level, skeletal elements are classified (based on shape) as follows:
- Long elements have a length >> width (thickness) and usually consist of a shaft
(diaphysis) and 2 rounded ends (epiphyses)
- Short elements have L = W = T and are usually cube-shaped
- Flat elements have T << L or W
- Irregular elements have a complex shape

Long and short elements usually form initially as cartilage in the embryo and are then replaced
by bone tissue during the process of endochondral bone formation. Flat and irregular elements
may form as cartilage and remain cartilage in the adult, or may form as bone during the process
of intramembranous bone formation.
Organs of the skeletal muscular system are the numerous skeletal muscles, composed of striated
skeletal muscle cells (parenchyma) and associated CT coverings.

 About 700 skeletal muscles occur in humans.


 At a gross anatomical level, skeletal muscles may be classified based on overall shape and
arrangement of muscle fibers with respect to tendons.
- Flat muscles
- Triangular muscles
- Tapered muscles
- Circular muscles

4.2 ANTHROPOMETRY & DESIGN

The primary guideline is to design the workplace to accommodate most individuals with regard to
structural size of the human body. Anthropometry is the science of measuring the human body
(weight, length).
Selected Body Dimensions
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)

Body Dimensions and Weights of U.S Adult Civilians


(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)

The kth percentile is the value such that k% of the data values fall below it and (100 - k)% fall
above it. For a normal distribution, it is approximated using the standard normal variable
z = (x - μ)/σ, with μ the mean value and σ the standard deviation.
• 50th percentile = μ
Half of the males are shorter than 68.3 in (1.73 m) while the other half are taller.

Fig 4.3 Normal Distribution of U.S adult male statures


For a normal distribution:
kth percentile = μ ± zσ
with z values from Table 4.1.

Z Value Table
kth percentile   10 or 90   5 or 95
z value          ±1.28      ±1.645
Mean value of male height, μ = 68.3 in, Stand. Deviation, σ = 2.71 in
95th percentile male height = 68.3 +1.645 x (2.71) = 72.76 in (≈185 cm)
5th percentile male height = 68.3 - 1.645 x (2.71) = 63.84 in (≈162 cm)
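The percentile calculation above can be checked in a few lines:

```python
# kth percentile = mu + z * sigma, with z values from Table 4.1
Z = {5: -1.645, 50: 0.0, 95: 1.645}

def percentile(mu, sigma, k):
    return mu + Z[k] * sigma

mu, sigma = 68.3, 2.71  # U.S. adult male stature, inches
print(round(percentile(mu, sigma, 95), 2))  # 72.76
print(round(percentile(mu, sigma, 5), 2))   # 63.84
```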

4.3 PRINCIPLES OF WORK DESIGN


Design for extremes
 A specific design feature is a limiting factor in determining either the maximum or minimum
value of a population variable that will be accommodated.

 Doorway or an entry opening into a storage tank should be designed for the maximum
individual: 95th percentile male stature or shoulder width.

However, this is not true in military aircraft or submarines since space is expensive!
 Reaches to a brake pedal or control knob are designed for the minimum individual: 5th
percentile female leg or arm length – 95% of all females and practically all males will have
longer reach.

Design for adjustability


 Typically used for equipment or facilities that can be adjusted to a wider range of individuals
 Chairs, tables, desks, vehicle seats, steering columns are typically adjustable to
accommodate the population ranging from 5th percentile females to 95th percentile males
 It is the preferred method of design but there is a trade‐off with the cost of implementation

Design for the average


 Cheapest but least preferred approach
 Adjustability can be impractical and too costly in certain situations
 Most industrial machines and tools are too large and too heavy to include height
adjustability for the operator
 If those machines are designed for the average (50th percentile), tall males or very short
females may experience discomfort

Practical considerations
 Industrial designer should consider the legal rules or advices
 Special accessibility guidelines (Indian Department of Justice, 1991): entryways into
buildings, assembly area, ramps, elevators, doors, water fountains, lavatories, restaurants,
alarms, telephones, etc.
 Very useful to build a full-scale mockup and ask the end users to evaluate it before mass
production

Steps of a typical design problem


 Determine the body dimensions critical for the design
 Define the population being served: adult, child, male, female, etc.
 Select a design principle and the percentage of the population to be accommodated
 Find the appropriate anthropometric values from the given Table 4.1
 Add allowances (shoes, hats, relaxed postures, etc.) and test
Ex. Designing seating in a large training room
Seating arrangement in training room
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)

 Determine the body dimensions critical for the design:- Sitting height, eye height
 Define the population being served:- Adult males and females
 Select a design principle and the percentage of the population to be accommodated: design
for extremes and accommodate 95% of the population, allowing a 5th percentile female to sit
behind a 95th percentile male.

 Find the appropriate anthropometric values from the given Table 4.1: 5th percentile female
eye height sitting is 26.6 in (67.5 cm); 95th percentile male sitting height is 38.1 in (96.7 cm)
 A rise height of 38.1 - 26.6 = 11.5 in (29.2 cm) per row would be necessary; because this is
too much and too steep, in practice it can be decreased
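The rise-height arithmetic in the worked example above can be sketched as a small helper (the function name is an illustrative choice; the values are the ones quoted from Table 4.1 in the text):

```python
def required_rise(male_sitting_height_95th, female_eye_height_5th):
    """Rise per row of seating so that a 5th percentile female seated
    behind a 95th percentile male can still see over him
    (design-for-extremes principle)."""
    return male_sitting_height_95th - female_eye_height_5th

# Values from the worked example, in inches
rise_in = required_rise(38.1, 26.6)
print(f"{rise_in:.1f} in rise per row")
```

As the text notes, the resulting 11.5 in per row is too steep for most training rooms, so the rise is reduced in practice.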
Example of designing work place: Gender Basis

(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)
4.4PRINCIPLES OF WORK PLACE DESIGN
Determine work surface height by elbow height
 Upper arms hanging down naturally
 Elbows are flexed at 90 degree
 Forearms are parallel to the ground
 If work surface is too high → shoulder fatigue
 If surface is too low → back fatigue

work surface height from elbow height


(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)

Adjust the work surface height based on the task being performed
 Fine assembly → raise the work surface up to 8 in (20 cm) to bring the details closer to
the optimal line of sight
 Light, normal assembly → keep the work surface at elbow height
 Rough assembly involving the lifting of heavy parts → lower the work surface up to 8 in
(20 cm) to take advantage of the stronger trunk muscles
Surface Height Based on the Task Being Performed
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)
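The elbow-height rule and its task adjustments can be expressed as a short sketch. The function name is a hypothetical choice, and using the full 8 in (20 cm) offset is an assumption, since the text says the surface is raised or lowered "up to" that amount:

```python
def work_surface_height(elbow_height_cm, task):
    """Recommended work surface height: elbow height as the baseline,
    raised for fine assembly (closer to the line of sight) and lowered
    for rough assembly (to use the stronger trunk muscles)."""
    offsets = {"fine": +20.0, "normal": 0.0, "rough": -20.0}  # cm, up to 8 in
    return elbow_height_cm + offsets[task]

print(work_surface_height(105.0, "fine"))    # raised toward the line of sight
print(work_surface_height(105.0, "rough"))   # lowered for heavy lifting
```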

Provide a comfortable chair for the seated operator


 A seated posture reduces both the stress on the feet and the overall energy expenditure
 Comfort is an individual response
 Strict principles for good seating are difficult to define

Posture of the Spine When Sitting and Standing
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)

Provide Adjustability in the seat


 Seat height is the most critical
 Provide lumbar support
 Increase body‐leg angle
 Armrests for shoulders and arm support
 Footrest for shorter individuals
 Wheels assist in movement
 Chair should be slightly contoured and slightly cushioned
 Use breathable fabric to prevent moisture.

Six Basic Seating Postures


(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)
Arms: When the operator's hands are on the keyboard, the upper arm and forearm should form a
right angle; the hands should be lined up with the forearms; if the hands are angled up from the
wrist, try lowering the keyboard or tilting it downward; optional armrests should be adjustable.

Backrest: Adjustable for occasional variations; shape should match contour of lower back,
providing even pressure and support.

Posture: Sit all the way back into the chair for proper back support; the back and neck should be
comfortably erect; knees should be slightly lower than hips; do not cross your legs or shift your
weight to one side; give joints and muscles a chance to relax; periodically, get up and walk around.

Desk: Thin work surface to allow leg room and posture adjustments; adjustable surface height
preferable; table should be large enough for books, files, telephone while permitting
different positions of screen, keyboard, and mouse pad.

Telephone: Cradling telephone receiver between head and shoulder can cause muscle strain;
headset allows head, neck to remain straight while keeping hands free.

Document Holder: Same height and distance from user as the screen, so eyes can remain
focused as they look from one to the other.

Avoiding Eye Strain:
 Get glasses that improve focus on the screen; measure the viewing distance before
visiting the eye doctor.
 Try to position the screen or lamps so that lighting is indirect; do not have light shining
directly at the screen or into the eyes.
 Use a glare-reducing screen.
 Periodically rest the eyes by looking into the distance.
Properly Adjusted Work Station
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)

Encourage postural flexibility

 The work station height should be adjustable so that the work can be performed
efficiently either standing or sitting
• The human body is not designed for long periods of sitting
• Provide a sit/stand stool so that the operator can change postures easily
– Needs height adjustability
– Large base of support
Posture Flexibility
(Source: A guide to ergonomics by C. Berry, 2008)

Provide anti-fatigue mats for a standing operator


 Standing for extended periods on a cement floor is fatiguing
 The mats allow small muscle contractions in the legs, forcing the blood to move and
keeping it from tending to pool in the lower extremities
Anti-Fatigue Mats for A Standing Operator
(Source: A guide to ergonomics by C. Berry, 2008)

Locate all tools and materials within the normal working area

Length of arm: 28"
Length of forearm: 10"
Length of upper arm: 12"
Length of hand: 6.7"
Length of end joint (2nd finger): 0.9"
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)

Normal and Maximum working areas in the horizontal plane for women (for men multiply
by 1.09)
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)

Normal and maximum working areas in vertical plane for women (for men, multiply by
1.09)
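The "for men, multiply by 1.09" rule quoted in the figure captions above can be sketched as a one-line conversion (the constant comes from the text; the function name is an illustrative choice):

```python
FEMALE_TO_MALE_SCALE = 1.09  # factor quoted in the working-area figures

def male_reach(female_reach):
    """Scale a female normal/maximum working-area dimension to the
    corresponding male value using the 1.09 factor from the text."""
    return female_reach * FEMALE_TO_MALE_SCALE

# e.g. scaling the 28 in arm length from the table above
print(round(male_reach(28.0), 2))
```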

Fix locations for all tools and materials to permit the best sequence

 Eliminates or minimizes the short hesitations required to search for and select the
objects needed to do the work (Sh, St: ineffective therbligs)

Tool Balancers provide fixed locations for tools


(Source: A guide to ergonomics by C. Berry, 2008)
Use gravity bins and drop delivery to reduce reach and move times
 Reach (RE) and move (M) are directly proportional to the distance that the hands
must move
 Gravity bins eliminate long reaches to get the supplies
 Gravity chutes allow the disposal of completed parts within the normal area,
eliminating long moves

Gravity Bins and Drop Delivery to Reduce Reach and Move Times
(Source: A guide to ergonomics by C. Berry, 2008)

Arrange tools, controls, and other components optimally to minimize motions


 Optimum arrangement depends on many characteristics: human (strength, reach,
sensory) and task (loads, repetition, orientation)
 Locate the components relative to their importance and frequency‐of‐use
 Once a general location is determined for a group of components, the functionality
and sequence of use must be considered
 Place the components or subassemblies in the order that they are assembled (to
reduce wasteful motions)
 Use Systematic Layout Planning (SLP) or other techniques to develop alternative
layouts

4.5 PRINCIPLES OF MACHINE & EQUIPMENT DESIGN


Combine two or more tools in one
• Advanced production planning for the most efficient manufacture includes combination of
tools

Use a fixture instead of the hand as a holding device

• Remember: Hold (H) is an ineffective therblig


• A fixture can be designed to hold the work, thus allowing both hands to do useful work
• Fixtures not only save time in processing parts but also permit better quality
• Foot‐operated mechanisms can be used to allow both hands to perform productive work

Locate all control devices for best operator accessibility and strength capability
• Frequently used controls should be positioned between elbow and shoulder height
• Seated operators can apply maximum force to levers located at elbow level, standing
operators at shoulder height
• Hand wheel and crank diameters depend on the torque to be expended and the position
• The diameters of knobs should be increased as greater torques are needed

Use shape, texture, and size coding for controls


• Shape coding, using 2- or 3-dimensional geometric configurations permits both tactual and
visual identification
• It is especially useful under low light conditions
• As the number of shapes and textures increases, discrimination can be difficult and slow
• If the operator wears gloves, it is possible to discriminate only 2 to 4 shapes

Use proper control size, displacement, and resistance


The three parameters that have a major impact on performance:
• Control size: A control that is either too small or too large cannot be activated efficiently.
• Control response ratio (C/R): the amount of movement in a control divided by the amount of
movement in the response. A low ratio indicates high sensitivity; a high ratio means low
sensitivity
• Control resistance: Important for providing feedback to the operator. It can be
displacement with no resistance, force with no displacement or incorporating features of
both
• Size coding is used principally where the controls cannot be seen by the operators
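The C/R ratio defined above can be captured in a few lines. This is a sketch; the threshold used to label "high" versus "low" sensitivity is illustrative, not a standard value:

```python
def control_response_ratio(control_movement, response_movement):
    """C/R ratio: movement of the control divided by the resulting
    movement of the system's response (same units for both, e.g. cm)."""
    return control_movement / response_movement

# A small control movement producing a large response = low C/R = high sensitivity
cr = control_response_ratio(2.0, 20.0)
print(cr, "-> high sensitivity" if cr < 1.0 else "-> low sensitivity")
```

As the next figure indicates, movement time varies with the C/R ratio, so the ratio is usually tuned for the task rather than simply minimized.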

Generalized illustrations of low and high control-response ratios (C/R ratios) for lever and rotary
controls
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)

Relationship between C/R ratio and movement time


(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)
Ensure proper compatibility between controls and displays
• Compatibility is defined as the relationship between controls and displays that is consistent
with human expectations
• Affordance/Intuitive: the perceived property results in the desired action
– A door with a handle to pull to open / a door with a plate to push to open
• Mapping: the clear relationship between controls and responses
– Controls on the stoves, clockwise movement to increase
• Feedback: so that the operator knows that the function is accomplished

4.6 PRINCIPLES OF TOOL DESIGN

Use a power grip for tasks requiring force and pinch grips for tasks requiring precision
• In a power grip the handle of the tool, whose axis is perpendicular to the forearm, is held by
the partly flexed fingers and the palm. Opposing pressure is applied by the thumb, which
slightly overlaps the middle finger.
– The force parallel to forearm: sawing
– The force at an angle to the forearm: hammering
– The force acting on a moment arm: using a screwdriver
• The pinch grip is used for control and precision. The item is held between the distal ends of
one or more fingers and the opposing thumb.

Type of Grip
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)
Avoid prolonged static muscle loading
• Tools held for extended periods result in fatigue, reduced work capacity, and soreness

Perform twisting motions with the elbows bent


• When the elbow is extended (> 90°), tendons and muscles in the arm stretch out and
provide low force capability

Maintain a straight wrist


• As the wrist is moved from its neutral position a loss of grip strength occurs
• Awkward hand positions may result in soreness of the wrist
• Carpal tunnel syndrome

Grip strength as a function of wrist and forearm position


(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)

Avoid tissue compression


• Considerable compressive force on the palm or the fingers obstructs blood flow to the
tissues and may result in numbness and tingling of the fingers
• Handles should be designed with large contact surfaces, to distribute the force over a larger
area or to direct it to fewer sensitive areas

Design tools so that they can be used by either hand or by most individuals
• Alternating hands allows reduction of the local muscle fatigue
• 90% of the population is right‐handed, 10% are left‐handed
• Female grip strength typically ranges from 50 to 67% of male strength with a smaller grip
span
• The best solution is to provide a variety of tool sizes

Avoid repetitive finger action


• Trigger forces should be kept low to reduce load on the index finger
• Two- or three-finger-operated controls are better
• NIOSH found high rates of muscle-tendon disorders in workers exceeding 10,000 motions
per day
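The NIOSH figure quoted above (elevated muscle-tendon disorder rates beyond 10,000 motions per day) can be turned into a quick screening sketch. The constant comes from the text; the function names and the cycles-times-motions breakdown are illustrative assumptions:

```python
NIOSH_DAILY_MOTION_GUIDE = 10_000  # motions/day figure cited in the text

def daily_motions(cycles_per_shift, motions_per_cycle):
    """Estimated finger/hand motions performed over one shift."""
    return cycles_per_shift * motions_per_cycle

def at_risk(cycles_per_shift, motions_per_cycle):
    """Flag jobs whose repetition exceeds the cited guideline."""
    return daily_motions(cycles_per_shift, motions_per_cycle) > NIOSH_DAILY_MOTION_GUIDE

print(at_risk(2_000, 6))   # 2,000 cycles x 6 motions each
print(at_risk(1_000, 6))
```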

Use the strongest working fingers: the middle finger and the thumb
• Index finger is usually the fastest but not the strongest
• When a relatively heavy load is involved, use the middle finger, or a combination of middle
finger and the index finger

Design 1.5-in (3.8 cm) handle diameters for power grips


• Power grips around an object should surround the circumference
• The handle diameter for precision grips should be 0.5 in (1.3 cm)

Design handles lengths to be a minimum of 4-in (10.2 cm)


• For both handles and cutouts, there should be enough space to allow for all four fingers
• 5 in (12.7 cm) may be more comfortable
Design a 3-in (7.6cm) grip span for two handled tools
3-in (7.6cm) grip span for two handled tools
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)

Design appropriately shaped handles


• Design for maximum surface contact to minimize unit pressure
• For screwdriver type tools the handle end should be rounded to prevent pressure at the
palm
• T-handles yield a much higher torque

Design grip surface to be compressible and nonconductive


• Wood is usually the material of choice for tool handles
• Since wooden handles can break and stain with grease and oil, there has recently been a
shift to plastic and even metal
• Increase friction

Keep the weight of the tool below 5 lb (2.3 kg)


• The tool should be well balanced, with the center of gravity as close as possible to the center
of gravity of the hand

Use gloves for safety and comfort


• A tradeoff between increased safety and decreased performance with gloves must be
considered

Use power tools instead of manual tools


• Power tools perform work faster than manual tools and do the work with considerably less
operator fatigue
• Power hand tools produce vibration which can cause health problems

Use the proper configuration and orientation of power tools

Configuration and orientation of tools


(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)

4.7 CUMULATIVE TRAUMA DISORDERS

Cumulative trauma disorders (CTD)


Repetitive motion injuries or work‐related musculoskeletal disorders are injuries to the
musculoskeletal system that develop gradually because of repeated micro-trauma due to poor
design and the excessive use of hand tools and other equipment
 61% of all occupational illnesses are associated with repetitive motions
 15 to 20% of workers in key industries are at potential risk for CTD
 Because of the slow onset and relatively mild nature of the trauma, the condition is often
ignored until the symptoms become chronic and more severe injury occurs
 A collection of a variety of problems including repetitive motion disorders, carpal tunnel
syndrome, tendinitis, ganglionitis, tenosynovitis, bursitis.
Four major work-related factors:
 Excessive force,
 Awkward or extreme joint motions
 High repetition,
 Duration of work
Most common symptoms associated with CTD:
– Pain
– Joint movement restriction
– Soft tissue swelling
 If the nerves are affected, sensory responses and motor control may be impaired
 If left untreated, CTD can result in permanent disability

4.8 MOTION ECONOMY & MOTION STUDY

Motion study involves the analysis of the basic hand, arm, and body movements of workers as they
perform work.
Work design involves the methods and motions used to perform a task.
This design includes
 the workplace layout and environment
 The tooling and equipment (e.g., work holders, fixtures, hand tools, portable power
tools, and machine tools). Work design is the design of the work system.

Any manual task is composed of work elements, and the work elements can be further
subdivided into basic motion elements. We will define basic motion elements and how they can
be used to analyze work. Frank Gilbreth was the first to catalog (list) the basic motion elements.

Therbligs are the basic building blocks of virtually all manual work performed at a single
workplace, consisting primarily of hand motions. A list of Gilbreth’s 17 therbligs is
presented along with the letter symbol used for each as well as a brief description. With some
modification, these basic motion elements are used today in several work measurement
systems, such as Methods - Time Measurement (MTM) and the Maynard Operation Sequence
Technique (MOST). Methods analysis at the therblig level seeks to eliminate or reduce
ineffective therbligs. Some of the motion element names and definitions have been revised

Therbligs
 Transport empty (TE) – reach for an object
 Grasp (G) – grasp an object
 Transport loaded (TL) – move an object with hand and arm
 Hold (H) – hold an object
 Release load (RL) – release control of an object
 Use (U) – manipulate a tool
 Pre-position (PP) – position object for next operation
 Position (P) – position object in defined location
 Assemble (A) – join two parts
 Disassemble (DA) – separate multiple parts that were previously joined
 Search (Sh) – attempt to find an object using eyes or hand
 Select (St) – choose among several objects in a group
 Plan (Pn) – decide on an action
 Inspect (I) – determine quality of object
 Unavoidable delay (UD) – waiting due to factors beyond worker control
 Avoidable delay (AD) – worker waiting
 Rest (R) – resting to overcome fatigue
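The therblig list lends itself to a small lookup table for methods analysis. The effective/ineffective split below follows a common classification (e.g. Niebel's); the text itself flags Hold, Search, and Select as ineffective, while the assignment of the remaining symbols is an assumption that varies by source:

```python
# Therblig symbols from the list above
EFFECTIVE = {"TE", "TL", "G", "RL", "PP", "U", "A", "DA"}
INEFFECTIVE = {"Sh", "St", "P", "I", "Pn", "UD", "AD", "R", "H"}

def ineffective_share(cycle):
    """Fraction of a work cycle's therbligs that are ineffective
    and therefore candidates for elimination or reduction."""
    bad = sum(1 for t in cycle if t in INEFFECTIVE)
    return bad / len(cycle)

# A hypothetical pick-and-place cycle
cycle = ["Sh", "St", "TE", "G", "TL", "P", "RL"]
print(f"{ineffective_share(cycle):.0%} of the cycle is ineffective")
```

Methods analysis at the therblig level, as described above, then focuses on redesigning the workplace so the ineffective fraction shrinks.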

Each therblig represents time and energy spent by a worker to perform a task. If the task is
repetitive, of relatively short duration, and will be performed many times, it may be appropriate
to analyze the therbligs that make up the work cycle as part of the work design process. The
term micro motion analysis is sometimes used for this type of analysis.

4.9 KEYWORDS

Anthropometry- Anthropometry refers to the measurement of the human individual


Therblig- Therbligs are the elemental motions (17 in Gilbreth’s original catalog, 18 in some later
versions) used in the study of motion economy in the workplace.
Work Place- The workplace is the physical location where someone works. Such a place can range
from a home office to a large office building or factory.

Equipment and Tool Design-The applied science of equipment design, as for the workplace,
intended to maximize productivity by reducing operator fatigue and discomfort.

Cumulative Trauma Disorders (CTD)- A cumulative trauma disorder is a condition where a part of the
body is injured by repeatedly overusing or causing trauma to that body part

4.10 SUMMARY
Disorders of the musculoskeletal system are a major cause of absence from work and lead,
therefore, to considerable cost for the public health system. Health problems arise if the mechanical
workload is higher than the load-bearing capacity of the components of the musculoskeletal system
(bones, tendons, ligaments, muscles, etc.). Apart from the mechanically-induced strain affecting the
locomotors organs directly, psychological factors such as time-pressure, low job decision latitude or
insufficient social support can augment the influence of mechanical strain or may induce
musculoskeletal disorders by increasing muscle tension and affecting motor coordination.
UNIT 5 DESIGN OF WORK ENVIRONMENT
Objectives
After going through this unit, you will be able to:
 control heat in a real workplace;
 realize the impact of noise in the manufacturing industry;
 analyze the importance of ventilation in a manufacturing environment;
 implement OHSAS 18001:2004 in a manufacturing environment;
 differentiate between temporary noise-induced hearing loss and permanent noise-induced
hearing loss.
Structure
5.1 Introduction
5.2 Impact of Temperature
5.3 Role of Ventilation
5.4 Noise and Its Impact
5.5 Lighting
5.6 OHSAS 18001:2004
5.7 Keywords
5.8 Summary

5.1 INTRODUCTION

The physical aspects of a workplace environment can have a direct impact on the productivity,
health and safety, comfort, concentration, job satisfaction and morale of the people within it.
Important factors in the work environment that should be considered include building design and
age, workplace layout, workstation set-up, furniture and equipment design and quality, space,
temperature, ventilation, lighting, noise, vibration, radiation, air quality.

Applying ergonomic principles to the design, modification, and maintenance of workplace


environments has a benefit on people’s work performance and short- and long-term health and
safety. When people are working in situations that suit their physical and mental abilities, the
correct fit between the person and the work task is accomplished. People are then in the optimum
situation for learning, working and achieving, without adverse health consequences, e.g. injury,
illness.

When assessing the workplace environment, consideration should be given to individual human
characteristics such as age, sex, experience, physical stature etc., and how well these human
characteristics match the physical environment. Appropriate design of workplace environments will
ensure that they accommodate a broad variety of human characteristics.

The work environment should satisfy the physical and mental requirements of the people who work
within it. The necessary adjustments to the work area, in terms of the heights and angles of furniture
and equipment, should be made for the comfort and safety of each person. Temperature,
ventilation, noise and lighting all have an impact on workers’ health and safety in factories and
require a variety of control mechanisms.

5.2 IMPACT OF TEMPERATURE


Some workers face extremes of temperature as part of their daily work. In India, many workers face
hot, humid conditions – few, however, experience cold working environments (cold store workers,
some computer operators etc.)

Temperature in the workplace:

Cold temperatures are rarely a problem for workers in factories. Occasionally, workers in the
computer design rooms experience cold temperatures. Such environments are optimal for the
computer and not for the workers.

Many workers experience hot, humid conditions, especially those in the hot section. There are a
number of control measures that can be introduced to reduce the temperature.

(Source: Better factories environment, 2013 (betterfactories.org)


A worker’s ability to do his/her job is affected by working in hot environments. One of the most
important conditions for productive work is maintaining a comfortable temperature inside the
workplace. Of course, the temperature inside the factory varies according to the season and several
methods can be used to address the problem. There are two main ways in which heat (or cold) gets
into the factory:
Directly: through windows, doors, air bricks etc.;
Indirectly: by conduction through the actual fabric of the building namely the roof, walls, and
floor. These warm up through the day as the sun shines and the heat is transferred to the
internal environment often making it hot and sticky for the workers.

There are several measures that management can take to try to reduce the sun’s heat from entering
the factory. These include:
 ensuring that the external walls are smooth in texture and painted in a light colour to help to
reflect the heat.

 improving the heat reflection of the roof.

 Improving heat insulation of walls and ceilings (investigate the possibility of dry lining walls
or adding an insulated ceiling below the roof. Although this is an expensive option it should
be considered in the plans for all new buildings and local, cheap materials should be used as
far as possible);

 Ensuring that the factory is shaded as far as possible by natural means (trees, bushes,
hedges etc) or with shades on windows, doors etc., (note that any shades should not inhibit
access/egress for safety reasons). In very expensive offices, you can see that the windows
are darkened or have sun-reflecting glass. This is not an option for garment factories
because of expense – a simple, cheap option is to whitewash the top part of windows.

How does heat affect workers in the manufacturing industry?


For workers in the factories, too much heat can result in the following health and safety
problems.
Impact on Safety and Health in manufacturing Industry
Safety Health
Fatigue and Dizziness Heat stress/strain (distress)
Sweating palms (become slippery) Heat cramps
Fogging of safety glasses Heat exhaustion/heat stroke
Possible burns Heat rash (prickly heat)
Lower performance/alertness Fainting (syncope)/ increasing irritability

The safety problems tend to be more obvious than the health issues. For example, there is
always the risk of burns for workers in the ironing section through accidental contact with hot
objects. There also tends to be an increased frequency in accidents as workers lose
concentration, get more fatigued, and become more irritable. Tools/equipment can also slip
through sweaty palms and fingers thereby adding to the safety problem. The health problems
associated with hot working environments tend to be more insidious and affect workers more
slowly.

How the body handles heat:


In hot, humid conditions, workers can lose heat and cool down naturally in several ways:
• By evaporation – by sweating.

• By radiation – by increasing blood flow and the temperature of the skin surface. It needs
cooler objects nearby for this method to be effective.

• By convection – exchange of heat between the body surface and the surrounding air. It
needs air movement to be effective.

• By conduction – direct exchange of heat between the body and cooler, solid objects.

How do you control heat in the workplace?


There are several basic approaches to tackling heat hazards in garment factories. All involve
reducing exposure by keeping heat away from workers through:
- engineering controls;
- changing work practices;
- use of personal protective equipment (as a last resort).
Engineering controls include:
• the use of increased general ventilation throughout the factory by opening windows, by
ensuring that air bricks, doors etc. are not blocked.

• The use of “spot cooling” by the use of fans to reduce the temperature in certain
sections of the factory (the correct placement of fans is essential – see Pictures 22 and
23).

• the use of local exhaust ventilation systems in hot spots such as the ironing section to
directly remove the heat as close to the source of the heat as possible – see Picture 24.

• The use of air conditioners/coolers

Changing work practices include:


 increasing the number and duration of rest periods.

 introducing job rotation so that workers are not always doing so-called “hot work”.

 doing “hot work” in the coolest part of the day.

 providing more workers to reduce the work load so that workers spend shorter
times in hot environments.

It is important to know the humidity inside the factory. If the factory is very hot and humid,
the process of sweating is not effective, and the workers are in danger of overheating.

5.3 ROLE OF VENTILATION


It is not only essential to provide a comfortable temperature inside the factory, you must ensure:
 an adequate supply of fresh air
 the removal of stale air
 The prevention of any build-up of contaminants (dust, spot cleaning chemicals, etc.)
It is important not to confuse ventilation and air circulation inside the factory. What we tend to see
inside many garment factories is air circulation, namely moving the air around inside the factory
without renewing it with fresh air from outside. In the case of air circulation, fans are placed near
workers (see picture 24) to improve thermal comfort and, in some cases, remove dust. In essence
this means that you are simply circulating stale air plus any contaminants around the factory.
Ventilation refers to replacing stale air (plus any contaminants) with fresh air (or purified air in the
case of air conditioners) at regular intervals. In an average workplace, the air needs to be changed
between 8 and 12 times per hour, and there should be at least 10 cubic meters of air per
worker. Many Indian garment factories rely on the principle of general ventilation by allowing the
free flow of air through the factory from one side to the other – referred to as horizontal airflow.
This can be achieved by opening doors and windows and putting more air bricks in the walls to take
advantage of any prevailing wind. However, it is all too common to find doors and windows etc.,
locked for security reasons or blocked with excess stock or boxes of finished goods awaiting export.
As a result, ventilation is limited.
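The two figures quoted above (8 to 12 air changes per hour, and at least 10 cubic meters of air per worker) can be checked with a short sketch; the function names and the example factory dimensions are illustrative assumptions:

```python
def required_airflow_m3_per_hour(room_volume_m3, air_changes_per_hour=8):
    """Fresh-air supply needed to achieve the target air-change rate
    (the text recommends 8 to 12 changes per hour)."""
    return room_volume_m3 * air_changes_per_hour

def meets_space_guideline(room_volume_m3, workers, min_m3_per_worker=10):
    """At least 10 cubic meters of air per worker, per the text."""
    return room_volume_m3 / workers >= min_m3_per_worker

# A hypothetical 2000 m3 sewing hall with 150 workers
print(required_airflow_m3_per_hour(2000))   # m3/h at the minimum 8 changes/hour
print(meets_space_guideline(2000, 150))
```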

If you are trying to improve the general ventilation in your factory, here are a few simple suggestions
that can help:
• if you have ventilation systems or free-standing fans in the factory, make sure that they
increase the natural flow of air through the factory and do not blow air against any
prevailing wind.

• ensure that hot, stale air that rises to the factory roof can easily be removed and replaced
with fresh air (see fig 5.2).

• make sure that all fans are well maintained and regularly cleaned so that they work
efficiently.

• ensure that the airflow to and from fans is not blocked;

• try to ensure that any “hot” processes, such as the ironing section, are sited next to the
“down-wind” wall so that the heat is extracted directly outside rather than being spread
around the factory.
Ventilation and space for stale air
(Source: Better factories environment, 2013 (betterfactories.org)

In cases where there is a buildup of contaminants or heat in specific areas of the factory, local
exhaust ventilation must be used to remove the hazard. This type of ventilation uses suction and
hoods, ducts, tubes etc. to remove the hazard as close to the source as possible and extract it to the
outside environment. It works on a principle like that of a vacuum cleaner but on a much larger
scale.

Vacuum cleaner usage in manufacturing


(Source: Better factories environment, 2013 (betterfactories.org)
Local exhaust ventilation tubes suck the dust away from the sewing machine into a waste
reservoir that is emptied daily. Note that the suction tubes are placed as close as possible to the
source of the hazard.
Checklist for Temperature and Ventilation (answer Yes / No and note any Action Required):
 Are temperatures in the factory maintained at comfortable working levels?
 Are there any hot or cold areas in the factory? Have any workers complained about these areas?
 Is there good natural ventilation (through open doors, windows, air bricks etc.) in the factory?
 Are draughts avoided for those workers seated near windows, doors etc.?
 Is this natural ventilation blocked when there are excess boxes of incoming/outgoing stock?
 Are fans provided where the natural ventilation is inadequate?
 Do the fans circulate any fumes, dusts or other harmful chemicals around the factory?
 In processes where fumes, dusts etc. may be released, have any local exhaust ventilation
systems been installed?
 Do these systems exhaust contaminated air safely outside the factory?
 Are the filters in these systems checked/changed regularly?
 Are the air flows in these systems checked regularly?

5.4 NOISE & ITS IMPACT


Noise is probably one of the most widespread and underestimated of industrial hazards. High noise
levels are experienced in many parts of the garment industry, especially in those factories that have
weaving machines. Not all the sound we hear is classed as noise – after all, we all enjoy different
types of music. We experience sound in different ways.
What some people find enjoyable and stimulating, others may find noisy and unpleasant. Thus, the
perception of sound or noise is personal; however, it is clear that workers can have their
hearing damaged, in some cases permanently, if the sound/noise levels are too high. Most people
define noise as unwanted or unpleasant sound.

Noise can cause a variety of effects including:


 Noise can cause stress and interfere with concentration thus affecting your ability to work.
This can be a contributory factor in workplace accidents as workers lose concentration and
co-ordination. Over the long-term, this increase in stress can lead to a number of health
problems including heart, stomach and nervous disorders.

 Noise can mask or interfere with conversation in the workplace and may contribute to
accidents as warning shouts may not be heard.

 Workers exposed to high noise levels often have difficulty in sleeping when they get home
and are constantly fatigued with that feeling of being tired all the time. Some workers take
painkillers on a regular basis to get rid of headaches induced by the noise. Not surprisingly,
when these workers return to work, their job performance will be reduced. High noise levels
in the workplace are thought to be a contributory factor to increased absenteeism.

 Workers exposed to high noise levels suffer from what is known as noise induced hearing
loss (NIHL) which can lead to several social problems. These workers often cannot hear or
understand instructions at work; they are left out of conversations as fellow workers, family
members or friends get fed up with having to repeat everything; they have to have the
volume of the TV or radio up much higher than others can tolerate leading to arguments at
home. As a result, workers suffering from NIHL tend to be isolated and alone.

How does noise affect our hearing?


The health effects of noise on our hearing depend primarily on the level of the noise and the
length of the exposure. If, after spending a short time in a noisy part of the factory, you go
outside or move to a quieter section, you may notice that you cannot hear too well for a few
minutes – your hearing has been reduced and the condition is known as temporary noise-
induced hearing loss. This kind of “deafness” is reversible and will soon wear off after a short
period of rest. However, the longer you are exposed to the noise, the longer it takes for your
hearing to return to normal. There comes a point, however, when your hearing does not return
to normal and the condition becomes permanent. This is known as permanent noise-induced
hearing loss. In such cases, you have been exposed to excessive noise for too long and the
sensitive components of the hearing organ have been permanently damaged – it cannot be
repaired. When workers first begin to lose their hearing, there are several warning signals that
are significant:

Workers may notice that normal conversation is difficult to hear or have difficulty listening to
someone talking in a crowd or on the telephone. This is often masked to friends or work
colleagues as people suffering from NIHL begin to lip read as people talk to them. In other
words, they adapt themselves to the situation.
The ear can tolerate low tones more easily than high tones. As a result, it is the high tones which
disappear first so that workers suffering from NIHL will hear people with deeper voices more
easily than colleagues with high voices.

When visitors or new workers come to a noisy part of the factory, it is always interesting to note
their reaction if they are not wearing any form of hearing protection. Do they cover their
ears? Do they shout to hold a conversation? Do they leave in a hurry? All these indicators
are significant.

How do you know if the noise level in the factory is too high?
One of the problems is trying to find out if the noise in certain parts of the factory is too high.
One method is to take measurements and compare them with so-called safe levels as
recommended by national regulations. Unfortunately, few factories or the Labour Inspectorate
have sound level meters to take such noise measurements. Another method is to undertake a
survey and ask workers if they find the workplace too noisy – BE CAREFUL. Many of the workers
will reply that “it was noisy at the beginning, but I’ve got used to it”. Remember the phrase “I’ve
got used to it” – no, they haven’t – the noise level is still the same. All that has happened is that
they have started to lose their hearing. Sound usually consists of many tones of different
volumes (loudness) and pitches (high or low frequency). We find that it is a combination of
volume and pitch that affects our hearing – not solely the volume. High tones irritate much more
than low tones. The volume of sound is measured in decibels (dBA) and the pitch is measured in
hertz (Hz).
Inside a typical garment factory, noise may come from a number of different sources such as the
sewing machines, weaving looms, compressors, radios, background noise, etc. The noise, in the
form of sound waves, is transmitted directly through the air and reflects off walls and ceilings as
well as passing through the factory floor. Obviously, the further away you are from the source of
the noise, the quieter and less harmful it is as the sound waves lose their intensity and die out.
So one method of control is to be as far away as possible from the source of the noise –
unfortunately, many workers cannot do this as they have to operate the noisy machine. If you
want to identify the noise problem in a factory you should measure the noise from each source
and then calculate the overall level using the decibel scale. This is unusual as the scale is a
logarithmic one in which a change of 3 dBA means that the sound has either doubled or halved.
For example, if two machines each create noise levels of 80dBA by themselves, the total noise
level they make together is 83 dBA (not 160 dBA). Similarly, if the noise level has been cut from
90 dBA to 80dBA it means the reduction is the same as if we removed 9 out of 10 noisy
machines from the factory.
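The decibel arithmetic described above can be checked with a short calculation. The sketch below is illustrative and not part of the original text: because decibels are logarithmic, sources are combined by summing their relative intensities (10^(L/10)) and converting the sum back to decibels.

```python
import math

def combined_level(levels_dba):
    """Combine several sound sources into one overall level (dBA).

    Decibels are logarithmic, so we convert each level to a relative
    intensity (10^(L/10)), sum the intensities, and convert back.
    """
    total_intensity = sum(10 ** (level / 10) for level in levels_dba)
    return 10 * math.log10(total_intensity)

# Two 80 dBA machines together give ~83 dBA, not 160 dBA.
print(round(combined_level([80, 80]), 1))   # → 83.0

# Ten identical 80 dBA machines: ~90 dBA, so removing 9 of the
# 10 machines reduces the overall level by about 10 dBA.
print(round(combined_level([80] * 10), 1))  # → 90.0
```

This reproduces both worked examples in the text: doubling the number of identical sources adds about 3 dBA, and cutting the level from 90 dBA to 80 dBA is equivalent to removing 9 out of 10 noisy machines.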

Is there a safe level of noise?


A so-called safe level of noise depends on the volume and how long you are exposed to it. In
India, the standard states 85 dBA but doesn’t give an exact indication of the duration apart from
referring to a “daily level”. Most international standards refer to 85 dBA over an 8-hour working
day. If workers are exposed to higher noise levels without any form of hearing protection, the
exposure time must be reduced, either by rotating workers or providing longer rest periods. The
following chart gives some recommended limits of noise level for the number of hours exposed:
Level of Noise
Number of hours exposed    Sound level (dBA)
8                          85-90
6                          92
4                          95
3                          97
2                          100
1.5                        102
1                          105
0.5                        110
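The chart above is consistent with the OSHA permissible-exposure formula T = 8 / 2^((L − 90) / 5), in which every 5 dBA above a 90 dBA criterion halves the allowed daily exposure time. The function below is an illustrative sketch of that relationship, not part of the original text:

```python
def allowed_hours(level_dba, criterion=90.0, exchange_rate=5.0):
    """Permissible daily exposure time (hours) for a given noise level.

    OSHA-style formula T = 8 / 2^((L - criterion) / exchange_rate):
    every `exchange_rate` dBA above the criterion halves the allowed time.
    """
    return 8.0 / 2 ** ((level_dba - criterion) / exchange_rate)

for level in (90, 95, 100, 105, 110):
    print(level, "dBA ->", round(allowed_hours(level), 1), "hours")
# 90 dBA -> 8.0 hours, 95 -> 4.0, 100 -> 2.0, 105 -> 1.0, 110 -> 0.5,
# matching the rows of the chart above.
```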

Methods of Noise Control


Workplace noise can be controlled in three ways:
• At the source of the noise.
• Along the path between the source and the worker; and lastly
• At the worker (see below).

Source of Sound
Source: Better factories environment, 2013 (betterfactories.org)

In common with all control strategies for health and safety problems, the most effective method
is to control the hazard at source. However, this often requires considerable expense, and with
profitability being cut to a minimum in the global market, owners and managers are often loath
to spend money in this area. The least effective, but most common and cheapest method of
control is to put the emphasis on workers wearing some form of personal protective equipment
(PPE). Let us look at some of these methods of control in more detail:

Controlling the noise at source:


Ideally, any machines in the factory should conform to national and international standards and not
produce noise levels above 85 dBA in the first place. Unfortunately, many of the machines are old,
require regular servicing, and should be replaced when possible. In Europe and North America,
machines that no longer meet national standards must be replaced with new machines that certify
that the noise levels emitted are well below 85 dBA and that all possible safety devices etc. are
included. Tragically for many workers in the region, this obsolete equipment is often sold on to
developing countries together with all the faults. Against this background, there are several
mechanisms that can be used to control/reduce noise levels at source including:
• Purchase “quieter” machines.

• Enclose entire machines or particularly noisy parts of machines with soundproof casing.
Remember that no part of the enclosure should be in contact with the machine otherwise
the sound waves will be transferred through to the outside. The number of holes in the
enclosure (access points, holes for wires, piping etc.) should be minimized and fitted
with rubber gaskets where possible.

• Regularly service and maintain machines.

• Replace worn or defective machine parts.

• Reduce the vibration in component parts and casings. Ensure that the machines are
mounted correctly on rubber mats or other damping material and that mounting bolts
are secured tightly.

• Replace metal parts with others made of sound absorbing materials e.g. plastic or heavy-
duty rubber.

• Fit mufflers on exhaust outlets and direct them away from the working area.

The noise generated in the handling of materials can also be reduced in many ways, such as:
• Reduce the dropping height of goods/waste being collected in bins and containers.
Make sure these boxes and containers are rigid and made of sound absorbing material
such as heavy plastic or rubber.

• Ensure that chutes, conveyor belts etc., are made of similar sound absorbing materials.

• Reduce the speed of any conveyor systems.

• Use belt conveyors rather than the roller type.

Controlling noise along the path between the source and the workers:
If it is not possible to control the noise at source, then methods can be used to minimize the
spread of the sound waves around the factory. Sound waves travel through the air rather
like the ripples on water if you throw a pebble into a pond – the waves spread out from the
source. Accordingly, any method that can be used to stop the spread or absorb the sound
waves can effectively reduce the noise problem. Such methods include:
• Use sound absorbing materials where possible on the walls, floors, and ceilings.
• Place sound absorbing screens between the source of the noise and workers.
• Hang sound absorbing panels from the ceilings to “capture” some of the sound waves
and reduce the overall noise level.
• Build sound-proof control areas and rest rooms.
• If possible, increase the distance between a worker and the source of the noise.

Controlling the noise at the worker:


The most common form of noise “control” is the use of personal protective equipment in
the form of hearing protectors. They work on the principle of preventing damaging sound
waves from reaching the sensitive parts of the inner ear. There are basically two types of
protectors – ear plugs and ear muffs.

Ear Plugs Noise Control Ear Muffs


Source: Better factories environment, 2013 (betterfactories.org)

Ear plugs are worn in the internal part of the ear and they are made of a variety of materials
including rubber, moldable foam, coated plastic or any other material that will fit tightly in
the ear (see figure 5.5). Ear plugs are the least desirable type of hearing protection from an
efficiency and hygiene perspective. On no account should workers be encouraged to stuff
cotton wool in their ears to act as some form of ear plug – all that happens is that some of
the cotton wool gets left behind when the plug is removed and causes an ear infection. From
a health and safety perspective, ear muffs are more efficient than ear plugs providing they
are worn correctly. They must fit over the whole ear (not press the ear flap against the side
of the head) and seal the ear from the sound waves. Workers who have beards or wear
glasses have difficulty in getting a tight seal around the ear.
Source: Better factories environment, 2013 (betterfactories.org)
Look closely at this picture. The worker on the right is wearing a set of earplugs but not the
one on the left. Conversely, the one on the left is wearing a dust mask but not the worker on
the right. Is there a noise problem or a dust problem or both?
Whatever type of ear protection is used, there are several points to remember:
• The noise problem is still present – it has not been reduced.
• In the hot, humid conditions that exist in many Indian factories, most workers find
the wearing of any type of PPE uncomfortable.
• Workers cannot communicate easily if they are wearing hearing protection which
can be a problem in the case of emergency.
• Ear plugs and muffs must be thoroughly tested before use and regularly cleaned,
repaired or replaced.
• Workers must be given training in the correct use of the PPE.

Vibration combined with noise:


Many machines in garment factories are mounted incorrectly or are in need of servicing and,
as a result, vibrate and cause a noise problem. As the machines vibrate, they transmit their
vibrations to the workers. The part of the body affected depends upon which part of the
body is in contact with the machine. These vibrations can injure muscles, joints and, in
particular, the blood vessels. For example, workers whose hands and fingers are in contact
with machines which vibrate can suffer from a condition known as Vibration White Finger.
The solution rests primarily with reducing the vibration from the machine.

Check List for Noise


(For each item, mark Yes or No and note the Action Required)
• Does the factory conform to national regulations on noise?
• Are noisy parts of machines enclosed?
• Are machines serviced and maintained regularly?
• Is there a policy to replace older, noisy machines with quieter ones?
• Are machines correctly mounted to avoid vibration and reduce noise levels?
• Are sound absorbing materials used on ceilings, walls and floors?
• Are adequate barriers used to prevent noise spreading around the workplace?
• Are people working in quieter sections of the factory protected from noise sources?
• Are workers in noisy areas rotated so that their noise exposure is reduced in duration?
• Are workers provided with the best form of hearing protection?
• Are the earmuffs/plugs etc. regularly cleaned, maintained or replaced as necessary?
• Have workers been given training in the correct use of ear muffs or ear plugs?

5.5 LIGHTING

From the workers’ perspective, poor lighting at work can lead to eye strain, fatigue, headaches,
stress, and accidents. On the other hand, too much light can also cause health and safety problems
such as “glare” headaches and stress. Both can lead to mistakes at work, poor quality and low
productivity. Various studies suggest that good lighting at the workplace pays dividends in terms of
improved productivity and a reduction in errors. Improvements in lighting do not necessarily mean
that you need more lights and therefore use more electricity – it is often a case of making better use
of existing lights; making sure that all lights are clean and in good condition; and that lights are
positioned correctly for each task. It is also a case of making the best use of natural light.
Most garment factories have a combination of natural and artificial lighting. However, little attention
appears to be paid on the nature of the work – it is as though all work in the factory requires the
same degree of lighting. As we will see, this is not the case. Let us look at some common lighting
problems in the factory:

Lack of Natural and Artificial Light in the Factory:


Although we have already spoken about the need for shading windows to reduce heat inside the
factory, there is also a need to make sure that all windows, skylights, etc., are clean and in the
best position to allow the maximum amount of natural light into the workplace. Companies can
always use appropriate shading methods for reducing the temperature – they should not rely on
the windows being dirty. Skylights and windows high up the factory walls let in much more light
(and air) than low windows, which often get blocked with stock, raw materials etc.
Similarly, all lights (and reflectors) in the factory should be well maintained and cleaned on a
regular basis (especially when you consider how much dust is released into the atmosphere
during each shift).

It has been known for companies, when the order books are low, to introduce “energy saving”
programs to save costs. In the case of lighting, “non-essential” light bulbs may be removed or
reduced in number, and flickering fluorescent tubes which need changing are left in place – this
proves to be a false economy as quality and productivity fall. One simple way to improve the
lighting levels in the factory is to paint the walls and ceilings with light, pale, matt colors – the
use of matt paint avoids reflection of light which can lead to problems of glare. The color of
equipment such as sewing machines, workbenches, etc., should normally be matched with that
of the walls and again avoid black, shiny paints. By brightening up the workplace, this helps to
produce a more pleasant place to work which impacts on workers’ well-being and, ultimately,
productivity.

Find the best place for the light source:


It may sound like common sense, but it is essential for the light to focus on the work in hand and
not directly or indirectly into the workers’ eyes. The more detailed the task, the more light that is
needed for the workers to carry out the job efficiently.
It is also essential that lights are positioned in the correct place so that workers do not have to
adopt poor working postures to see the task in hand. It is also important to have adequate
lighting near any potential hazards such as steps, ramps, etc. and outside the factory for security
at night.

Avoid glare:
Although lighting levels may be adequate in the factory as a whole, glare from a direct light
source or reflected off equipment or shiny surfaces can cause discomfort, eye strain and fatigue
– all of which contribute to an increase in errors, and a reduction in quality and productivity.
Glare has been described as “light in the wrong place” and comes in three different kinds:

Disability glare – can dazzle and impede vision, and so may be a cause of accidents. It is the result of
too much light entering the eye directly.

Discomfort glare – is more common in work situations – it can cause discomfort, strain and fatigue,
especially over long periods. It is caused by direct vision of a bright light source and background.

Reflected glare – is bright light reflected by shiny surfaces into the field of vision. There are several
methods that you can use to avoid or reduce glare in the workplace:

To reduce glare from windows:


• Use blinds, curtains, louvers, or shades.
• Replace clear glass with opaque/translucent materials – paint glass with whitewash.
• Change the layout of workstations.

To reduce glare from lamps:


• Ensure that no naked lights are in direct view of workers.
• Raise the light fittings (if suspended) providing this does not reduce the overall level of
lighting.
• Use shades or shields but ensure that the work area is well lighted.

To reduce reflected glare:


• Change position of the light source and reduce its brightness.
• Cover reflecting surfaces with opaque, non-glossy materials.
• Change the layout of the workstations.

How is light measured?


The level of light is measured in LUX using a light meter. Unfortunately, few factories or the
Labor Inspectorate have any of these meters. The table below gives an indication of some typical
light levels:

Light Intensity
Very bright sunny day      up to 100,000 lux
Overcast day               30,000-40,000 lux
Dusk                       1,000 lux
Shady room in daylight     100 lux

Are there any lighting standards?


Although no detailed standards for lighting exist in India, there are a number of general guidelines
which can be used for reference. These give recommendations for the amount of light that should
be available for the type of work – for example:

Machine shops:
- Rough work and assembly 300 lux
- Medium bench and machine work 500 lux
- Fine bench and machine work 1000 lux

Office work or in a garment factory:


- General tasks 500 lux
- More detailed work 750 lux
- Very fine work 1000 lux
Checklist for Illumination Requirements
(For each item, mark Yes or No and note the Action Required)
• Is there good general illumination (without glare) throughout the factory?
• Is there regular cleaning and maintenance of lights and windows?
• Where necessary, are windows or skylights whitewashed or shaded to avoid glare?
• Is there local lighting for close work to reduce eye strain and fatigue?
• Are "flickering" fluorescent tubes replaced as soon as possible?
• Are the walls and ceilings painted in light colors and kept clean?
• Is there adequate emergency lighting in all areas?
• Are outside areas satisfactorily lit for work and access during hours of darkness for security as well as safety?

5.6 OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION (OSHA) ERGONOMICS PROGRAM OR OHSAS 18001:2004
Since 1950, the International Labor Organization (ILO) and the World Health Organization (WHO)
have shared a common definition of occupational health. It was adopted by the Joint ILO/WHO
Committee on Occupational Health at its first session in 1950 and revised at its twelfth session in
1995.

The definition reads:


"Occupational health should aim at: the promotion and maintenance of the highest degree of
physical, mental and social well-being of workers in all occupations; the prevention amongst workers
of departures from health caused by their working conditions; the protection of workers in their
employment from risks resulting from factors adverse to health; the placing and maintenance of the
worker in an occupational environment adapted to his physiological and psychological capabilities;
and, to summarize, the adaptation of work to man and of each man to his job" (Helander, M., 1995,
A Guide to Ergonomics in Manufacturing).

The main focus in occupational health is on three different objectives: (i) the maintenance and
promotion of workers’ health and working capacity; (ii) the improvement of working environment
and work to become conducive to safety and health and (iii) development of work organizations and
working cultures in a direction which supports health and safety at work and in doing so also
promotes a positive social climate and smooth operation and may enhance productivity of the
undertakings. The concept of working culture is intended in this context to mean a reflection of the
essential value systems adopted by the undertaking concerned. Such a culture is reflected in practice
in the managerial systems, personnel policy, principles for participation, training policies and quality
management of the undertaking.

Physical hazards are a common source of injuries in many industries. They are perhaps unavoidable
in many industries such as construction and mining, but over time people have developed safety
methods and procedures to manage the risks of physical danger in the workplace. Employment of
children may pose special problems.

Falls are a common cause of occupational injuries and fatalities, especially in construction,
extraction, transportation, healthcare, and building cleaning and maintenance.

An engineering workshop specializing in the fabrication and welding of components must follow the
Personal Protective Equipment (PPE) at work regulations 1992. It is an employer’s duty to provide
‘all equipment (including clothing affording protection against the weather) which is intended to be
worn or held by a person at work and which protects him against one or more risks to his health and safety’. In a
fabrication and welding workshop an employer would be required to provide face and eye
protection, safety footwear, overalls, and other necessary PPE.

5.7 KEYWORDS
Noise- Noise is any annoying, disturbing, or unwanted sound

Manufacturing Industry- Manufacturing is the production of goods for use or sale using labor
and machines, tools, chemical and biological processing, or formulation

Lighting- lighting is one of the most essential elements for good office ergonomics. Having the
proper lighting level for the type of task being performed increases your comfort and accuracy and
reduces eye strain

OHSAS 18001:2004- OHSAS 18001 is a standard for occupational health and safety management
systems. It exists to help all kinds of organizations put in place demonstrably sound occupational
health and safety performance. It is widely seen as the world’s most recognized occupational health
and safety management systems standard.
Temperature-The temperature of a body is a quantity which indicates how hot or cold the body is. It
is measured by detection of heat radiation, or by a material thermometer, which may
be calibrated in any of various temperature scales, Celsius, Fahrenheit, Kelvin, etc.

5.8 SUMMARY
As an employer, it is your responsibility to provide a safe work environment for all employees, free
from any hazards and complying with all state and central government laws. When we think about
lighting in the workplace, the first thing that comes to mind is the obvious physical effect it has on
us. Inappropriate lighting can lead to a host of problems, ranging from eyestrain to serious
musculoskeletal injuries. Workers should not be adversely affected by the light, noise, and
temperature inside the factory. In manufacturing facilities, employers and managers must take
into consideration both the minimum requirements and the maximum exposure a worker can
tolerate in a particular environment.
UNIT 6 DESIGN OF COGNITIVE WORK
Objectives
After going through this unit, you will be able to:
 minimize informational workload.
 recognize the importance of visual displays for long, complex messages in noise areas.
 use auditory displays for warnings and short, simple messages.
 value the color and flashing lights to get attention.
 limit absolute judgments to 7 ±2 items.
Structure
6.1 Introduction
6.2 Information Theory
6.3 Human Information Processing Model
6.4 Perception and Signal Detection Theory
6.5 Coding of Information: General Design Principles
6.6 Display of Visual Information
6.7 Display of Auditory Information
6.8 Environmental Factors
6.9 Dissociating the Signal from Noise
6.10 Human Computer Interaction: Hardware Considerations
6.11 Pointing Devices
6.12 Human Computer Interaction: Software Considerations
6.13 Keywords
6.14 Summary

6.1 INTRODUCTION

The design of cognitive work has not been traditionally included as part of methods engineering.
However, with ongoing changes in jobs and the working environment, it is becoming increasingly
important to study not only the manual components of work but also the cognitive aspects of work.

Machines and equipment are becoming increasingly complex and semi-automated, if not fully automated. The
operator must be able to perceive and interpret large amounts of information, make critical
decisions, and be able to control these machines quickly and accurately. Furthermore, there has
been a gradual shift of jobs from manufacturing to the service sector. In either case, there typically
will be less emphasis on gross physical activity and a greater emphasis on information processing
and decision making, especially via computers and associated modern technology.

6.2 INFORMATION THEORY

Information, in the everyday sense of the word, is knowledge received regarding a particular fact. In
the technical sense, information is the reduction of uncertainty about that fact. For example, the
fact that the engine (oil) light comes on when a car is started provides very little information (other
than that the light bulb is functioning) because it is expected. On the other hand, when that same
light comes on when you are driving down a road, it conveys considerable information about the
status of the engine because it is unexpected and a very unlikely event. Thus, there is a relationship
between the likelihood of an event and the amount of information it conveys, which can be
quantified through the mathematical definition of information. Note that this concept is irrespective
of the importance of the information; that is the status of the engine is quite a bit more important
than whether the windshield-washer container is empty.

Information theory measures information in bits, where a bit is the amount of information required
to decide between two equally likely alternatives. The term “bit” came from the first and last part of
the words binary digit used in computer and communication theory to express the on/off state of a
chip or the polarized/reverse polarized position of small pieces of ferromagnetic core used in archaic
computer memory. Mathematically this can be expressed as:
H = log2 n
Where: H = the amount of information.
n = the number of equally likely alternatives.
With only two alternatives, such as the on/off state of a chip or the toss of an unweighted coin, there
is one bit of information presented. With ten equally likely alternatives, such as the numbers from 0
to 9, 3.322 bits of information can be conveyed (log210 = 3.322). An easy way of calculating log2 is
to use the following formula:
log2 n = 1.4427 × ln n
When the alternatives are not equally likely, the information conveyed is determined by:
H = ∑pi × log2 (1/pi)
Where: pi = the probability of the ith event.
i = Alternatives from 1 to n.
As an example, consider a coin weighted so that heads come up 90 percent of the time and tails only
10 percent of time. The amount of information conveyed in a coin toss becomes:
H = 0.9 × log2 (1/0.9) + 0.1 × log2 (1/0.1) = 0.9 × 0.152 + 0.1 × 3.32
= 0.469 bits
Note, that the amount of information (0.469) conveyed by a weighted coin is less than the amount
of information conveyed by an unweighted coin (1.0). The maximum amount of information is
always obtained when the probabilities are equally likely. This is because the more likely an
alternative becomes, the less information is being conveyed (i.e., consider the engine light upon
starting a car). This leads to the concept of redundancy and the reduction of information from the
maximum possible due to unequal probabilities of occurrence. Redundancy can be expressed as:
% redundancy = (1 - H/Hmax) ×100
For the case of the weighted coin, the redundancy is:
% redundancy = (1 - .469/1) × 100 = 53.1%
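The entropy and redundancy calculations above can be reproduced with a few lines of Python. This is an illustrative sketch, not part of the original text:

```python
import math

def information_bits(probabilities):
    """Average information H = sum(p * log2(1/p)), in bits per event."""
    return sum(p * math.log2(1 / p) for p in probabilities if p > 0)

def redundancy_percent(probabilities):
    """% redundancy = (1 - H/Hmax) * 100, with Hmax = log2(n alternatives)."""
    h = information_bits(probabilities)
    h_max = math.log2(len(probabilities))
    return (1 - h / h_max) * 100

# Weighted coin (90% heads, 10% tails):
print(round(information_bits([0.9, 0.1]), 3))    # → 0.469 bits
print(round(redundancy_percent([0.9, 0.1]), 1))  # → 53.1 (%)

# A fair coin carries the maximum of 1 bit:
print(information_bits([0.5, 0.5]))              # → 1.0

# 26 equally likely letters (A-Z): log2(26) ≈ 4.7 bits
print(round(math.log2(26), 1))                   # → 4.7
```

The weighted-coin values match the worked example in the text: 0.469 bits of information and 53.1 percent redundancy.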
An interesting example relates to the use of the English language. There are 26 letters in the
alphabet (A through Z) with a theoretical informational content for a randomly chosen letter of 4.7
bits (log2 26 = 4.7). Obviously, with the combinations of letters into words, considerably more
information can be presented. However, there is a considerable reduction in the amount of
information that can be presented due to the unequal probabilities of occurrence. For example,
letters s, t, and e are much more common than q, x, and z. It has been estimated that the
redundancy in the English language amounts to 68 percent (Sanders and McCormick, 1993). On the
other hand, redundancy has some important advantages that will be discussed later with respect to
designing displays and presenting information to users.

One final related concept is the bandwidth or channel capacity, the maximum information
processing speed of a given communication channel. In terms of the human operator, the bandwidth
for motor-processing tasks could be as low as 6–7 bits/sec or as high as 50 bits/sec for speech
communication. For purely sensory storage of the ear (i.e., information not reaching the decision-
making stage), the bandwidth approaches 10,000 bits/sec (Sanders and McCormick, 1993). The
latter value is much higher than the actual amount of information that is processed by the brain in
that time because most of the information received by our senses is filtered out before it reaches
the brain.

6.3 HUMAN INFORMATION PROCESSING MODEL


Numerous models have been put forward to explain how people process information. Most of these
models consist of black boxes (because of relatively incomplete information) representing various
processing stages. Figure 6–1 presents one such generic model consisting of four major stages or
components: perception, decision and response selection, response execution, and memory, with
attentional resources distributed over the various stages. The decision-making component, when
combined with working memory and long-term memory, can be considered as the central
processing unit, while the sensory store is a very transient memory located at the input stage
(Wickens, Gordon, and Liu, 1997).

A model of human information processing


(From: Sanders and McCormick, 1993. Reproduced with permission of the McGraw-Hill Companies.)

6.4 PERCEPTION AND SIGNAL DETECTION THEORY


Perception is the comparison of incoming stimulus information with stored knowledge to categorize
the information. The most basic form of perception is simple detection, that is, determining whether
the stimulus is actually present. Perception becomes more complicated if the person is asked to
indicate the type of stimulus or the stimulus class to which it belongs, entering the realm of
identification and recognition through the use of prior experiences and learned associations. The
consequent linkage between long-term memory and perceptual encoding is shown in Figure 6–1.
This more complex perception can be explained in terms of feature analysis, breaking down
objects into component geometric shapes or text into words and character strings, and,
simultaneously, in terms of top-down and bottom-up processing to reduce the amount of information
entering central processing. Top-down processing is conceptually driven, using high-level concepts
to process low-level perceptual features, while bottom-up processing is data driven, guided by
sensory features.
The detection part of perceptual encoding can be modeled or, in simple tasks, even quantified
through signal detection theory (SDT). The basic concept of SDT is that in any situation, an observer
needs to identify a signal (i.e., whether it is present or absent) from confounding noise. For example,
a quality inspector in an electronics operation must identify and remove defective chip capacitors
from the good capacitors being used in the assembly of printed circuit boards. The defective chip
capacitor is the signal, which could be identified by excessive solder on the capacitor that shorts out
the capacitor. The good capacitors, in this case, would be considered noise. Note that one could just
as easily reverse the decision process and consider good capacitors the signal and defective
capacitors noise. This would probably depend on the relative proportions of each. Given that the
observer must identify whether the signal is present or not and that only two possible states exist
(i.e., the signal is either there or not there), there is a total of four possible outcomes:
Hit—saying there is a signal when the signal is present
Correct rejection—saying there is no signal when no signal is present
False alarm—saying there is a signal when no signal is present
Miss—saying there is no signal when the signal is present

Both the signal and noise can vary over time, as is the case with most industrial processes. For
example, the soldering machine may warm up and, initially, expel a larger drop of solder on the
capacitors, or there may be simply “random” variation in the capacitors with no cause yet
determined. Therefore, both the signal and noise form distributions of varying solder quantity from
low to high, which typically are modeled as overlapping normal distributions (Figure 6–2). Note that the
distributions overlap because excessive solder on the body of the capacitor would cause it to short
out causing a defective product (in this case a signal). However, if there is excessive solder, but
primarily on the leads, it may not short out and thus is still a good capacitor (in this case noise). With
ever-shrinking electronic products, chip capacitors are smaller than pinheads, and the visual
inspection of these is not a trivial task. When a capacitor appears, the inspector needs to decide if
the quantity of solder is excessive or not and whether to reject the capacitor or not. Either through
instructions and/or sufficient practice, the inspector has formed a mental standard of judgment, which
is depicted as the vertical line in the figure below and termed the response criterion. If the detected quantity of
the solder, which enters the visual system as a high level of sensory stimulation, exceeds the
criterion, the inspector will say there is a signal. On the other hand, if the detected quantity is small,
a smaller level of sensory stimulation is received, landing below the criterion, and the inspector will
say there is no signal. Related to the response criterion is the quantity beta. Numerically, beta is the
ratio of the heights of the two curves (signal to noise) at the given criterion point.

Conceptual illustration of signal detection theory
(From: Sanders and McCormick, 1993. Reproduced with permission of the McGraw-Hill Companies)

If the criterion shifts to the left, beta decreases with an increase of hits but at the
cost of a corresponding increase of false alarms. This behavior on the part of the observer is termed
risky. If the criterion were at the point where the two curves intersect, beta would be 1.0. On the
other hand, if the criterion shifts to the right, beta increases with a decrease of both hits and false
alarms. This behavior on the part of the observer would be termed conservative. The response
criterion (and beta) can easily change depending on the mood or fatigue of the visual inspector. It
would not be unexpected for the criterion to shift to the right and the miss rate to increase
dramatically late on Friday afternoons, shortly before quitting time. Note that there will be a
corresponding decrease in the hit rate because the two probabilities sum to one. Similarly, the
probabilities of a correct rejection and false alarms also sum to one. The change in the response
criterion is termed response bias and could also change with prior knowledge or changes in
expectancy. If it was known that the soldering machine was malfunctioning, the inspector would
most likely shift the criterion to the left, increasing the number of hits. The criterion could also
change due to the costs or benefits associated with the four outcomes. If a particular batch of
capacitors were being sent to NASA for use in the space shuttle, the costs of having a defect would
be very high, and the inspector would set a very low criterion producing many hits but also many
false alarms with corresponding increased costs (e.g., losing good products). On the other hand, if the
capacitors were being used in cheap give-away cell phones, the inspector may set a very high
criterion, allowing many defective capacitors to pass through the checkpoint as misses.
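Under the equal-variance normal model described above, beta and the four outcome probabilities can be computed directly with the standard library (a sketch; the noise and signal means used here are illustrative assumptions, not values from the text):

```python
import math

def normal_pdf(x, mu, sigma):
    """Height of the normal curve at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def normal_sf(x, mu, sigma):
    """P(X > x): the area of the distribution to the right of x."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

def sdt_outcomes(criterion, mu_noise=0.0, mu_signal=1.0, sigma=1.0):
    """The four SDT outcomes plus beta for a given response criterion."""
    hit = normal_sf(criterion, mu_signal, sigma)          # signal present, "signal" reported
    false_alarm = normal_sf(criterion, mu_noise, sigma)   # noise only, "signal" reported
    beta = normal_pdf(criterion, mu_signal, sigma) / normal_pdf(criterion, mu_noise, sigma)
    return {"hit": hit, "miss": 1 - hit,
            "false_alarm": false_alarm, "correct_rejection": 1 - false_alarm,
            "beta": beta}
```

With these parameters the curves intersect at 0.5, where beta equals 1.0; moving `criterion` below 0.5 reproduces the "risky" observer (beta below 1, more hits but more false alarms), while moving it above 0.5 gives the "conservative" observer.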

6.5 CODING OF INFORMATION: GENERAL DESIGN PRINCIPLES


Many, if not most, industrial functions or operations are performed by machines because of their
greater force, accuracy, and repeatability. However, to ensure that these machines
are performing satisfactorily at the desired specifications, there will always be the need for a human
monitor. This operator will then receive a variety of information input (e.g., pressure, speed,
temperature, etc.) that must be presented in a manner or form that will be both readily
interpretable and unlikely to result in an error. Therefore, there are several design principles that
will assist the industrial engineer in providing the appropriate information to the operator.
Cognitive work evaluation checklist
Perception Considerations
Yes No

1. Are key signals enhanced? ❑ ❑

2. Are overlays, special patterns, or grazing light used to enhance defects? ❑ ❑

3. Are both top-down and bottom-up processing used simultaneously? ❑ ❑

a. Are high-level concepts used to process low-level features? ❑ ❑

b. Is data-driven information used to identify sensory features? ❑ ❑

4. Is better training used to increase sensitivity of signal detection? ❑ ❑

5. Are incentives provided to change the response bias and increase hits? ❑ ❑

Memory Considerations
Yes No

1. Is short-term memory load limited to 7±2 items? ❑ ❑

2. Is chunking utilized to decrease memory load? ❑ ❑

3. Is rehearsal utilized to enhance recall? ❑ ❑


4. Are numbers separated from letters in lists or chunks? ❑ ❑

5. Are similar-sounding items separated? ❑ ❑

6. Are mnemonics and associations used to enhance long-term memory? ❑ ❑

Decision and Response Selection

Yes No

1. Are enough hypotheses examined? ❑ ❑

2. Are enough cues utilized? ❑ ❑

3. Are later cues given equal weight to early cues? ❑ ❑

4. Are undesirable cues filtered out? ❑ ❑

5. Are decision aids utilized to assist in the process? ❑ ❑

6. Are a sufficient number of responses evaluated? ❑ ❑

7. Are potential losses and gains weighted appropriately? ❑ ❑

8. Are speed-accuracy trade-offs considered? ❑ ❑

9. Are the stimuli and responses compatible? ❑ ❑

Attentional Resource Considerations


Yes No

1. Is there task variety? ❑ ❑

2. Is performance feedback provided to the operator? ❑ ❑

3. Does the operator have internal stimulation (e.g., caffeine)? ❑ ❑

4. Does the operator have external stimulation (e.g., music, incentives)? ❑ ❑

5. Are rest breaks provided? ❑ ❑

Type of information to be presented


Information to be presented can be either static or dynamic, depending on whether it changes
over time. The former includes any printed text (even scrolling text on a computer screen), plots,
charts, labels, or diagrams that are unchanging. The latter is any information that is continually
updated such as pressure, speed, temperature, or status lights. Either of the two categories can
also be classified as:
 Quantitative—presenting specific numerical values (e.g., 50˚F, 60 rpm)
 Qualitative—indicating general values or trends (e.g., up, down, hot, cold)
 Status—reflecting one of a limited number of conditions (e.g., on/off,
stop/caution/go)
 Warnings—indicating emergencies or unsafe conditions (e.g., fire alarm)
 Alphanumeric—using letters and numbers (e.g., signs, labels)
 Representational—using pictures, symbols, and color to code information (e.g.,
“wastebasket” for deleted files)
 Time-phased—using pulsed signals varying in duration and intersignal interval (e.g.,
Morse code or blinking lights)
Note that one informational display may incorporate several of these types of information
simultaneously. For example, a stop sign is a static warning using both alphanumeric letters and
an octagonal shape and the red color as representations of information.

Display modality
Since there are five different senses (vision, hearing, touch, taste, smell), there could be five
different display modalities for information to be perceived by the human operator. However,
given that vision and hearing are by far the most developed senses and most used for receiving
information, the choice is generally limited to those two. The choice of which of the two to use
depends on a variety of factors, with each sense having certain advantages as well as certain
disadvantages. The detailed comparisons given in Table 6–1 may aid the industrial engineer in
selecting the appropriate modality for the given circumstances.
Taste has been used in a very limited range of circumstances, primarily added to give medicine a
“bad” taste and prevent children from accidentally swallowing it. Similarly, odors have been
used in the ventilation system of mines to warn miners of emergencies or in natural gas to warn
the homeowner of a leaking stove.
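The factors favoring each modality (summarized in Table 6–1) can be caricatured as a rule-of-thumb helper. This is our own sketch: the five criteria are drawn from the auditory/visual checklists in this unit, while the majority-vote threshold is an assumption for illustration only.

```python
def choose_modality(message_short_and_simple, deals_with_events_in_time,
                    immediate_action_needed, vision_overburdened,
                    operator_moving_about):
    """Count the classic criteria favoring auditory presentation;
    when few of them hold, a visual display is the safer default."""
    auditory_score = sum([message_short_and_simple, deals_with_events_in_time,
                          immediate_action_needed, vision_overburdened,
                          operator_moving_about])
    return "auditory" if auditory_score >= 3 else "visual"
```

For example, a short warning demanding immediate action from a moving worker scores high on the auditory criteria, while a long reference message for a stationary operator defaults to a visual display.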

6.6 DISPLAY OF VISUAL INFORMATION: SPECIFIC DESIGN PRINCIPLES


Fixed Scale, Moving Pointer Design
There are two major alternative analog display designs: a fixed scale with a moving pointer and a
moving scale with a fixed pointer (see Figure 7–13). The first is the preferred design because all
major compatibility principles (as discussed in Chapter 5) are maintained: increasing values on the
scale go from left to right and a clockwise (or left to right) movement of the pointer indicates
increasing values. With a moving scale and fixed pointer, one of these two compatibility principles
will always be violated. Note that the display can be circular, semicircular, a vertical bar, a
horizontal bar, or an open window. The only situation in which the moving scale and fixed pointer
design has an advantage is for very large scales, which cannot be fully shown on the fixed scale
display.

Comparison of Pointers, Scales, and Counters (service rendered)

Moving Pointer
 Quantitative reading: Fair
 Qualitative reading: Good (changes are easily detected)
 Setting: Good (easily discernible relation between setting knob and pointer)
 Tracking: Good (pointer position is easily controlled and monitored)

Moving Scale
 Quantitative reading: Fair
 Qualitative reading: Poor (may be difficult to identify direction and magnitude)
 Setting: Fair (may be difficult to identify relation between setting and motion)
 Tracking: Fair (may have ambiguous relationship to manual-control motion)

Counter
 Quantitative reading: Good (minimum time to read and results in minimum error)
 Qualitative reading: Poor (position change may not include qualitative change)
 Setting: Good (accurate method to monitor numerical setting)
 Tracking: Poor (not readily monitored)

In that case an open window display can accommodate a very large scale behind the display with
only the relevant portion showing. Note that the fixed scale and moving pointer design can display
very nicely quantitative information as well as general trends in the readings. Also, the same displays
can be generated with computer graphics or electronics without a need for traditional mechanical
scales.
Types of displays for presenting quantitative information
(From: Sanders and McCormick, 1993. Reproduced with permission of the McGraw-Hill Companies)

6.7 DISPLAY OF AUDITORY INFORMATION: SPECIFIC DESIGN PRINCIPLES


Auditory Signals for Warnings
As discussed previously, there are special characteristics of the auditory system that warrant
using an auditory signal for warnings. Simple reaction time is considerably quicker to auditory
than visual signals (e.g., consider the starter pistol to start races). An auditory signal places much
higher attention demands on the worker than a visual signal. Since hearing is omnidirectional and
sound waves penetrate barriers (to some degree, depending on thickness and material
properties), auditory signals are especially useful if workers are at an unknown location and
moving about.

Two-Stage Auditory Signals


Since the auditory system is limited to short and simple messages, a two-stage signal should be
considered when complex information is to be presented. The first stage should be an attention-
demanding signal to attract attention, while the second stage would be used to present more
precise information.

Human Abilities and Limitations


Since human auditory sensitivity is best at approximately 1,000 Hz, use auditory signals with
frequencies in the range of 500 to 3,000 Hz. Increasing signal intensity will serve two purposes.
First it will increase the attention-demanding quality of the signal and decrease response time.
Second, it will tend to better differentiate the signal from background noise. On the other hand,
one should avoid excessive levels (e.g., well above 100 dB) as these will tend to cause a startle
response and perhaps disrupt performance. Where feasible, avoid steady-state signals so as to
avoid adaption to the signal. Thus, modulation of the signal (i.e., turning the signal on and off in
a regular cycle), in the frequency range of 1 to 3 Hz will tend to increase the attention-
demanding quality of the signal.

6.8 ENVIRONMENTAL FACTORS

Since sound waves can be dispersed or attenuated by the working environment, it is important to
take environmental factors into account. Use signal frequencies below 1,000 Hz when the signals
need to travel long distances (i.e., more than 1,000 ft), because higher frequencies tend to be
absorbed or dispersed more easily. Use frequencies below 500 Hz when signals need to bend around
obstacles or pass through partitions. The lower the signal frequency the more similar sound waves
become to vibrations in solid objects, again with lower absorption characteristics.
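These frequency guidelines can be collected into one helper (a hypothetical sketch; the function and parameter names are ours, while the numeric limits come from the text above):

```python
def warning_signal_band(distance_ft=0, passes_obstacles_or_partitions=False):
    """Usable frequency band (Hz) for an auditory warning, per the guidelines:
    500-3,000 Hz for best sensitivity, capped at 1,000 Hz beyond 1,000 ft,
    and at 500 Hz when the signal must bend around obstacles or partitions."""
    low, high = 500, 3000              # peak auditory sensitivity band
    if distance_ft > 1000:
        high = min(high, 1000)         # higher frequencies absorbed over distance
    if passes_obstacles_or_partitions:
        high = min(high, 500)          # low frequencies bend around obstacles
    return min(low, high), high

# In all cases, modulate the signal on and off at 1-3 Hz to keep it
# attention-demanding and to avoid adaptation.
```

A warning that must travel 2,000 ft, for instance, would be restricted to the 500 to 1,000 Hz band.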

6.9 DISSOCIATING THE SIGNAL FROM NOISE

Auditory signals should be as separate as possible from other sounds, whether useful auditory
signals or unneeded noise. This means the desired signal should be as different as possible from
other signals in terms of frequency, intensity, and modulation. If possible, warnings should be placed
on a separate communication channel to increase their dissociability from other sounds and to
increase the attention-demanding qualities of the warning. The above principles for designing displays, both
auditory and visual, are summarized as an evaluative checklist in table 7.3. If purchased equipment
has dials or other displays that don’t correspond to these design guidelines, then there is the
possibility for operator error and potential loss. If at all possible, those problems should be
corrected, or the displays replaced.

6.10 HUMAN COMPUTER INTERACTION: HARDWARE CONSIDERATIONS


Keyboards

The standard computer keyboard used today is based on the typewriter key layout patented by C. L.
Sholes in 1878. Termed a QWERTY keyboard, because of the sequence of the first six leftmost keys in
the third row, it has the distinction of allocating some of the most common English letters to the
weakest fingers.
Checklist for Hardware Considerations
General Principles
Yes No

1. Is the number of absolute judgments limited to 7±2 items? ❑ ❑

2. Is the difference between coding levels well above the JND? ❑ ❑

3. Is the coding scheme compatible with human expectations? ❑ ❑

4. Is the coding scheme consistent with existing schemes? ❑ ❑

5. Is redundancy utilized for critical situations? ❑ ❑

Visual Displays
Yes No

1. Is the message long and complex? ❑ ❑

2. Does the message deal with spatial information? ❑ ❑

3. Does one need to refer to the message later? ❑ ❑

4. Is hearing overburdened or is noise present? ❑ ❑

5. Is the operator in a stationary location? ❑ ❑

6. For general purposes and trends, is a fixed-scale, moving-pointer display being used? ❑ ❑

a. Do scale values increase from left to right? ❑ ❑

b. Does a clockwise movement indicate increasing values? ❑ ❑

c. Is there an orderly numerical progression with major markers at 0, 10, 20, etc.? ❑ ❑

d. Are there intermediate markers at 5, 15, 25, etc. and minor markers at each unit? ❑ ❑

e. Does the pointer have a pointed tip just meeting the smallest scale markers? ❑ ❑

f. Is the pointer close to the surface of the scale to avoid parallax? ❑ ❑

7. For a very large scale, is an open-window display being used? ❑ ❑

8. For precise readings, is a digital counter being used? ❑ ❑

9. For check reading of a panel of dials, are pointers aligned and a pattern utilized? ❑ ❑

10. For attentional purposes, are indicator lights being used? ❑ ❑

a. Do the lights flash (1 to 10/sec) to attract attention? ❑ ❑


b. Are the lights large (1 degree of visual arc) and bright? ❑ ❑

c. Does the light remain on until the improper condition has been remedied? ❑ ❑

11. Are alphanumeric characters of proper size? ❑ ❑

a. Are they at least a 10-point font at a distance of 20 inches (22 min of visual arc)? ❑ ❑

b. In a well-illuminated area, are the letters dark on a light background? ❑ ❑
Do they have a stroke width-to-height ratio of approximately 1:6? ❑ ❑

c. In a dark area, are the letters white on a dark background? ❑ ❑
Do they have a stroke width-to-height ratio of approximately 1:8 for nighttime use? ❑ ❑

d. Are both uppercase and lowercase letters utilized? ❑ ❑

e. For special emphasis, are capitals or boldface utilized? ❑ ❑

Auditory Displays
Yes No

1. Is the message short and simple? ❑ ❑

2. Does the message deal with events in time? ❑ ❑

3. Is the message a warning or is immediate action required? ❑ ❑

4. Is vision difficult or overburdened? ❑ ❑

5. Is the operator moving about? ❑ ❑

6. Is a two-stage signal being utilized? ❑ ❑

7. Is the frequency of the signal in the range of 500 to 3,000 Hz for best auditory sensitivity? ❑ ❑

8. Is the sound level of the signal well above background noise? ❑ ❑

9. Is the signal being modulated (1 to 3 Hz) to attract attention? ❑ ❑

10. If the signal is traveling over 1,000 ft or around obstacles, is the frequency below 500 Hz? ❑ ❑

11. If a warning, is a separate communication channel being used? ❑ ❑

6.11 POINTING DEVICES


The primary device for data entry is the keyboard. However, with the growing ubiquity of
graphical user interfaces and depending on the task performed, the operator may spend less than
half the time using the keyboard. Especially for window and menu-based systems, some type of a
cursor-positioning or a pointing device better than the cursor keys on a keyboard is needed. A wide
variety of devices has been developed and tested. The touch screen uses either a touch-sensitive
overlay on the screen or senses the interruption of an infrared beam across the screen as the finger
approaches the screen. This approach is quite natural with the user simply touching the target
directly on the screen. However, the finger can obscure the target and, despite the fairly large
targets required, accuracy can be poor. The light pen is a special stylus linked to the computer by an
electrical cable that senses the electron scanning beam at the location on the screen. The user has a
similar natural pointing response as with a touch screen, but usually with more accuracy.

A digitizing tablet is a flat pad placed on the desktop, again linked to the computer. Movement of a
stylus is sensed at the appropriate position on the tablet, which can either be absolute (i.e., the
tablet is a representation of the screen) or relative (i.e., only direct movement across the tablet is
shown). Further complications are tablet size versus accuracy trade-offs and optimum control
response ratios. Also, the user needs to look back at the screen to receive feedback.

Both displacement and force joysticks (currently termed track sticks or track points) can be used to
control the cursor and have a considerable background of research on types of control systems,
types of displays, control response ratios, and tracking performance. The mouse is a hand-held
device with a roller ball in the base to control position and one or more buttons for other inputs. It is
a relative positioning device and requires a clear space next to the keyboard for operation. The
trackball, an upside-down mouse without the mouse pad, is a good alternative for work surfaces
with limited space. More recently, touchpads, a form of digitizing tablet integrated into the
keyboard, have become popular, especially for notebook PCs.

Notebooks and Hand-Held PCs


Portable PCs or laptop or notebook computers are becoming very popular, accounting for 34
percent of the U.S. PC market in 2000. Their main advantage over a desktop is reduced size (and
weight) and portability. However, with the smaller size there are distinct disadvantages: smaller
keys and keyboard, a keyboard attached to the screen, and the lack of peripheral cursor-positioning
devices. The lack of adjustability in placing the screen has been found to give rise to excessive
neck flexion (much beyond the recommended 15 degrees), increased shoulder flexion, and
elbow angles greater than 90 degrees, which has accelerated feelings of discomfort as compared
to using a desktop PC. Adding an external keyboard and raising the notebook computer or
adding an external monitor helps alleviate the situation.

Even smaller handheld computers, termed personal digital assistants, have been developed but
are too new to have had detailed scientific evaluations performed. Being pocket-sized, they offer
much greater portability and flexibility, but at an even greater disadvantage for data entry.
Decrements in accuracy and speed, when entering text via the touch screen, have been found.
Alternate input methods such as handwriting, or voice input may be better.

6.12 HUMAN COMPUTER INTERACTION: SOFTWARE CONSIDERATIONS


The typical industrial or methods engineer will not be developing programs but will, most likely, be
using a variety of existing software. Therefore, that person should be aware of current software
features or standards that allow best human interaction with the computer and minimize the
number of errors that could occur through poor design.
Most current interactive computing software utilizes the graphical user interface (GUI), identified by
four main elements: windows, icons, menus, and pointers (sometimes collectively termed WIMP).
Windows are the areas of the screen that behave as if they were independent screens. They typically
contain text or graphics and can be moved around or resized. More than one window can be on a
screen at once, allowing users to switch back and forth between various tasks or information
sources. This leads to a potential problem of windows overlapping each other and obscuring vital
information. Consequently, there needs to be a layout policy with windows being tiled, cascaded, or
picture-in-a-picture (PIP). Usually windows have features that increase their usefulness such as
scrollbars, allowing the user to move the contents of the window up and down or from left to right.
This makes the window behave as if it were a real window onto a much larger world, where new
information is brought into view by manipulating the scrollbars. There is usually a title bar attached
to the top of the window, identifying it to the user, and there may be special boxes in the corners of
the window to aid in resizing, and closing. Icons are small or reduced representations of windows or
other entities within the interface. By allowing icons, many windows can be available on the screen
at the same time, ready to be expanded to a useful size by clicking on the icon with a pointer
(typically a mouse). The icon saves space on the screen and serves as a reminder of the dialog it
contains. Other useful entities represented by icons include a wastebasket for deleting unwanted files,
programs, applications, or files accessible to the user. Icons can take many forms: they can be
realistic representations of the objects they stand for or they can be highly stylized, but with
appropriate reference to the entity (known as compatibility) so that users can easily interpret them.

The pointer is an important component of the WIMP interface since the selection of an appropriate
icon requires a quick and efficient means of directly manipulating it. Currently the mouse is the most
common pointing device, although joysticks and trackballs can serve as useful alternatives. A touch
screen, with the finger serving as a pointer, can serve as very quick alternative and even redundant
backup/safety measure in emergency situations. Different shapes of the cursor are often used to
distinguish different modes of the pointer, such as, an arrow for simple pointing, crosshairs for
drawing lines, and a paintbrush for filling in outlines. Pointing cursors are essentially icons or images
and thus should have a hot spot that indicates the active pointing location. For an arrow, the tip is
the obvious hot spot. However, cutesy images (e.g., dogs and cats) should be avoided because they
have no obvious hot spot.

Menus present an ordered list of operations, services, or information that is available to the user.
This implies that the names used for the commands in the menu should be meaningful and
informative. The pointing device is used to indicate the desired option, with possible or reasonable
options highlighted and impossible or unreasonable actions dimmed. Selection usually requires an
additional action by the user, usually clicking a button on a mouse or touching the screen with the
finger or a pointer. When the number of possible menu items increases beyond a reasonable limit
(typically 7 to 10), the items need to be grouped in separate windows with only the title or a label
appearing on a menu bar. When the title is clicked, the underlying items pop up in a separate
window known as a pull-down menu. To facilitate finding the desired item, it is important to group
menu items by functionality or similarity. Within a given window or menu, the items should be
ordered by importance and frequency of use. Opposite functions, such as SAVE and DELETE, should
be clearly separated to prevent accidental mis-selection. Other menu-like features include buttons,
isolated picture-in-picture windows within a display that can be selected by the user to invoke
specific actions, toolbars, a collection of buttons or icons, and dialog boxes that pop up to bring
important information to the user’s attention such as possible errors, problems, or emergencies.
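The menu-size and grouping rules above can be expressed as a small lint-style check. This is our own sketch: the limit of roughly 10 visible items per level comes from the text, while the function and message wording are illustrative.

```python
def lint_menu(menu, max_items=10):
    """menu: dict mapping each menu-bar title to its list of item labels.
    Returns warnings for menus that violate the sizing guidelines above."""
    warnings = []
    if len(menu) > max_items:
        warnings.append("menu bar has too many titles; group items by functionality")
    for title, items in menu.items():
        if len(items) > max_items:
            warnings.append(f"'{title}' has {len(items)} items; "
                            "move some into a pull-down sub-menu")
    return warnings
```

A check like this would flag, say, a 15-item Edit menu for splitting into functionally grouped sub-menus, while leaving a short File menu untouched.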

Other principles in screen design include simple usability considerations: orderly, clean, clutter-free
appearance, expected information located where it should be consistently from screen to screen for
similar functions or information. Eye-tracking studies indicate that the user’s eyes typically move
first to the upper left center of the display and then move quickly in a clockwise direction. Therefore,
an obvious starting point should be in the upper left corner of the screen, permitting the standard
left-to-right and top-to-bottom reading pattern found in Western cultures. The composition of the
display should be visually pleasing with balance, symmetry, regularity, predictability, proportion, and
sequence.

Graphical User Interface Features Checklist


Windows Features
Yes No

1. Does the software use movable areas of the screen termed windows? ❑ ❑

2. Is there a layout policy for the windows (i.e., are they tiled, cascaded, or picture-in- ❑ ❑
picture)?

3. Are there scrollbars to allow the contents of windows to be moved up or down? ❑ ❑

4. Are there meaningful titles identifying the windows? ❑ ❑

5. Are there special boxes in the corners of the windows to resize or close them? ❑ ❑

Icon Features
Yes No
1. Are reduced versions of frequently used windows, termed icons, utilized? ❑ ❑

2. Are the icons easily interpretable or realistic representations of the given feature? ❑ ❑

Pointer Features
Yes No

1. Is a pointing device (mouse, joystick, touch screen) utilized to move icons? ❑ ❑

2. Is the pointer or cursor easily identifiable with an obvious active area or hot spot? ❑ ❑

Menu Features
Yes No

1. Are meaningful menus (list of operations) with descriptive titles provided? ❑ ❑

2. Are menu items functionally grouped? ❑ ❑

3. Are menu items limited to a reasonable number (7 to 10)? ❑ ❑

4. Are buttons available for specific common actions? ❑ ❑

5. Are toolbars with a collection of buttons or icons used? ❑ ❑

6. Are dialog boxes used to notify the user of potential problems? ❑ ❑

Other Usability Considerations


Yes No

1. Is the screen design simple, orderly, and clutter free? ❑ ❑

2. Are similar functions located consistently from screen to screen? ❑ ❑

3. Is the starting point for the screen action the upper left-hand corner? ❑ ❑

4. Does the screen action proceed left to right and top to bottom? ❑ ❑

5. Is any text brief and concise and does it use both uppercase and lowercase ❑ ❑
fonts?

6. Is color used sparingly for attention (i.e., limited to eight colors)? ❑ ❑

7. Does the user have control over exiting screens and undoing actions? Is ❑ ❑
feedback provided for any action?

6.13 SUMMARY
This unit presented a conceptual model of the human as an information processor along with the
capacities and limitations of such a system. Specific details were given for properly designing
cognitive work so as not to overload the human with regard to information presented through
auditory and visual displays, to information being stored in various memories, and to information
being processed as part of the final decision-making and response-selection step. Also, since the
computer is the common tool associated with information processing, issues, and design features
with regard to the computer workstation were also addressed. Along with manual work activities, the
physical aspects of the workplace and tools, and the working environment, the cognitive element is
the final aspect of the human operator at work, and the analyst is now ready to implement the new
method.

6.14 KEYWORDS

Human Factors Engineering (HFE) - an interdisciplinary approach to evaluating and improving the safety, efficiency, and robustness of work systems, such as healthcare delivery.

Display of visual information - information presented in visual form, such as graphs and bar charts.

Display of auditory information - the use of sound to communicate information from a computer to the user.

Human information processing model - human information processing theory deals with how people receive, store, integrate, retrieve, and use information.

Information theory - a branch of applied mathematics, electrical engineering, bioinformatics, and computer science involving the quantification of information.

Human-computer interaction (HCI) - the study, planning, and design of the interaction between people (users) and computers. It is often regarded as the intersection of computer science, the behavioral sciences, design, and several other fields of study.
UNIT 7 ANTHROPOMETRY & WORK DESIGN
Objectives
After going through this unit, you will be able to:
 design the workplace with anthropometric considerations;
 recognize the importance of the design limits approach and its considerations in design;
 use clearance dimensions at the 95th percentile;
 use limiting dimensions at the 5th percentile;
 use the concept of combined data as a design criterion.
Structure
7.1 Introduction
7.2 Using Design Limits
7.3 Avoiding Pitfalls in Applying Anthropometric Data
7.4 Solving A Complex Sequence of Design Problems
7.5 Need for Indian Anthropometry
7.6 Guidelines for Design Use
7.7 Percentile Selection for Design Use
7.8 Use of Average
7.9 Concept of Male-Female Combined Data for Design Use
7.10 Practical Applications
7.11 Keywords
7.12 Summary

7.1 INTRODUCTION
Designers and human factors specialists incorporate scientific data on human physical capabilities
into the design of systems and equipment. Human physical characteristics, unlike those of machines,
cannot be designed. However, failure to consider human physical characteristics when designing
systems or equipment can place unnecessary demands and restrictions upon user personnel.

The term anthropometry literally means the measure of humans. From a practical standpoint, the
field of anthropometry is the science of measurement and the art of application that establishes the
physical geometry, mass properties, and strength capabilities of the human body (Roebuck 1995).
Anthropometric data are fundamental in the fields of work physiology (Åstrand and Rodahl 1986), occupational biomechanics (Chaffin, Anderson, and Martin 1999), and ergonomics/work design (Konz and Johnson 2004). Anthropometric data are used in the evaluation and design of workstations, equipment, tools, clothing, personal protective equipment, and products, as well as in biomechanical models and bioengineering applications.
It is a fundamental concept of nature that humans come in a variety of sizes and proportions.
Because there is a reasonable amount of useful anthropometric data available, it is usually not
necessary to collect measurements on a specific workforce. The most common application involves
design for a general occupational population.

Definitions
Anthropometry is the scientific measurement and collection of data about human physical
characteristics and the application (engineering anthropometry) of these data in the design and
evaluation of systems, equipment, manufactured products, human- made environments, and
facilities.

Biomechanics describes the mechanical characteristics of biological systems, in this case the human body, in terms of physical measures and mechanical models. This field is interdisciplinary (drawing mainly on anthropometry, mechanics, physiology, and engineering). Its applications address the mechanical structure, strength, and mobility of humans for engineering purposes.

Use of data- Anthropometric and biomechanics data shall be used in the design of systems,
equipment (including personal protection equipment), clothing, workplaces, passageways, controls,
access openings, and tools. [Source: National Aeronautics and Space Administration (NASA-STD-
3000A), 1989]

The human's interface with other system components needs to be treated as objectively and
systematically as are other interface and hardware component designs. It is not acceptable to guess
about human physical characteristics or to use the designer's own measurements or the
measurements of associates. Application of appropriate anthropometric and biomechanics data is
expected.

Using population extremes: Designers and human factors specialists shall draw upon the extremes of
the larger male population distribution and the extremes of the smaller female population
distributions to represent the upper and lower range values, respectively, to apply to
anthropometric and biomechanics design problems. [Source: NASA-STD-3000A, 1989].
7.2 USING DESIGN LIMITS

Initial rules in this section address the design limits approach. To understand this approach, it is
helpful to consider the overall steps and choices that one makes in applying anthropometric and
biomechanics data. The design limits approach entails selecting the most appropriate percentile
values in population distributions and applying the appropriate associated data in a design solution.
These steps are listed in this introductory material and are explained in detail in the initial three
rules of this subsection. If the reader has applied the design limit approach and understands it, the
reader can skip the rest of this introductory material as well as the explanations associated with the
first three rules. However, the reader should not skip the rules.
The design limits approach is a method of applying population or sample statistics and data about
human physical characteristics to a design so that a desired portion of the user population is
accommodated by the design. The range of users accommodated is a function of limits used in
setting the population portion. To understand the design limits approach, it is helpful to consider
step by step the choices that design personnel make in applying these human physical data.

 Select the correct human physical characteristic and its applicable measurement characteristic
(description) for the design problem at hand.

 Select the appropriate population, representative sample, or rule information on the selected
human physical characteristic and measurement description to apply to the design problem.

 Determine the appropriate statistical point(s), usually percentile points from rule information or
from the sample distribution(s) to accommodate a desired range of the human characteristic
within the distribution of the user population.

 Read directly or determine statistically the measurement value(s) that corresponds to the
selected statistical point(s) relevant to the population distribution.

 Incorporate the measurement value as a criterion for the design dimension, or in the case of
biomechanics data, for the movement or force solution in the design problem.
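The steps above can be sketched in code. The following snippet assumes a Gaussian (normal) population distribution with a hypothetical mean and standard deviation for stature; the numbers are illustrative, not survey data:

```python
# A minimal sketch of the design-limits steps, assuming a Gaussian (normal)
# stature distribution. The mean and standard deviation below are
# hypothetical illustration values, not survey data.
from statistics import NormalDist

def percentile_value(mean_mm: float, sd_mm: float, percentile: float) -> float:
    """Measurement value at a given percentile of a normal distribution."""
    return NormalDist(mu=mean_mm, sigma=sd_mm).inv_cdf(percentile / 100.0)

# Hypothetical stature distribution: mean 1700 mm, SD 70 mm.
MEAN, SD = 1700.0, 70.0

# Clearance dimension: accommodate up to the 95th percentile.
p95 = percentile_value(MEAN, SD, 95)

# Limiting dimension (e.g., a reach distance): accommodate the 5th percentile.
p5 = percentile_value(MEAN, SD, 5)

print(round(p5, 1), round(p95, 1))
```

The values read off in this way would then be incorporated as design criteria, with further allowances for clothing and posture as described later in this unit.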

Clearance dimension at the 95th percentile


Design clearance dimensions that must accommodate or allow passage of the body or parts of the
body shall be based upon the 95th percentile of the male distribution data. [Source: Department
of Defense (MIL-STD-1472D), 1989]

Limiting dimension at the 5th percentile


Limiting design dimensions, such as reach distances, control movements, display and control
locations, test point locations, and handrail positions that restrict or are limited by body or body
part size, shall be based upon the 5th percentile of female data for applicable body dimensions.
[Source: MIL-STD-1472D, 1989]
For example, the maximum height from floor level to an accessible part of any piece of
equipment needs to be within reach of the 5th percentile female user, which will ensure that at
least 95 percent of the user population can access this part of the equipment.

Adjustable Dimensions
Any equipment dimensions that need to be adjusted for the comfort or performance of the
individual user shall be adjustable over the range of the 5th to 95th percentiles. [Source: MIL-
STD-1472D, 1989]

Sizing Determinations
Clothing and certain personal equipment dimensions that need to conform closely to the contour of
the body or body parts shall be designed and sized to accommodate at least the 5th through the
95th percentile range.
If necessary, this range shall be accommodated by creating a number of unique sizes, where
each size accommodates a segment of the population distribution. Each segment can be
bounded by a small range of percentile values. [Source: MIL-STD-1472D, 1989]

Critical Life Support Equipment


Dimensions or sizes of critical life support equipment shall accommodate, at least, the range defined
by the 1st through the 99th percentiles of the distribution. [Source: MIL-STD-1472D, 1989]

7.3 AVOIDING PITFALLS IN APPLYING ANTHROPOMETRIC DATA


There are several common errors to be avoided by designers when they apply anthropometric data
to design. These are: (1) designing to the midpoint (50th percentile) or average, (2) the
misperception of the typical sized person, (3) generalizing across human characteristics, and (4)
summing of measurement values for like percentile points across adjacent body parts.

The 50th percentile or mean shall not be used as design criteria as it accommodates only half of the
users. [Source: NASA-STD-3000A, 1989].

When the population distribution is Gaussian (normal), the use of either the 50th percentile or the
average for a clearance would, at best, accommodate half the population.

Misperception of the typically sized person


Designers or human factors specialists shall not use the concept of a typically sized person, where the same percentile values are expected across many dimensions. A person at the 95th percentile in height is unlikely to measure at the 95th percentile in reach or other dimensions. A percentile value and its measurement value that pertain to a particular body part shall be used exclusively for functions that relate to that body part. [Source: Department of the Air Force (AFSC DH 1-3), 1980; NASA-STD-3000A, 1989].

When the middle 30 percent of a population of 4,000 men was measured on 10 dimensions, only one fourth of them were "average" in a single dimension (height), and less than 1 percent were average in five dimensions (height, chest circumference, arm length, crotch height, and torso circumference). Keeping in mind that there is no "average person," one must also realize that there is neither a "5th percentile person" nor a "95th percentile person." Different body part dimensions are not necessarily highly correlated. An implication is that one cannot choose a person who is 95th percentile in stature as a test subject for meeting 95th percentile requirements in reach or other dimensions.

Summation of segment dimensions

Summation of like percentile values for body components shall not be used to represent any human physical characteristic that appears to be a composite of component characteristics. [Source: NASA-STD-3000A, 1989]

The 95th percentile arm length, for instance, is not the addition of the 95th percentile shoulder-to-elbow length plus the 95th percentile elbow-to-hand length. The actual 95th percentile arm length will be somewhat less than the erroneous summation. To determine the 95th percentile arm length, one must use a distribution of arm length rather than component part distributions.
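This pitfall can be checked numerically. The sketch below assumes two hypothetical, independent, normally distributed arm segments; for independent normals the variances (not the standard deviations) add, so the true 95th percentile of the whole arm falls below the sum of the segment 95th percentiles:

```python
# Demonstration, under assumed values, of why like percentiles must not be
# summed across adjacent body segments. Segment means/SDs (mm) are
# hypothetical, and the segments are treated as independent normals.
from statistics import NormalDist

upper = NormalDist(mu=330, sigma=18)   # shoulder-to-elbow length
lower = NormalDist(mu=440, sigma=22)   # elbow-to-hand length

# Erroneous approach: add the two segment 95th percentile values.
wrong_p95 = upper.inv_cdf(0.95) + lower.inv_cdf(0.95)

# Correct approach: take the 95th percentile of the whole-arm distribution.
# For independent normals, the sum is normal and the variances add.
whole_arm = NormalDist(mu=330 + 440, sigma=(18**2 + 22**2) ** 0.5)
true_p95 = whole_arm.inv_cdf(0.95)

print(round(wrong_p95, 1), round(true_p95, 1))
```

Because sqrt(18² + 22²) is less than 18 + 22, the summed value overstates the true 95th percentile; positive correlation between segments narrows, but does not eliminate, the gap.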

7.4 SOLVING A COMPLEX SEQUENCE OF DESIGN PROBLEMS


In this section, rules are presented for approaching complex design problems that require the
consideration of a sequence of relevant design reference locations (such as seat reference points
and eye reference zones), human physical characteristics, statistical points, and measures. The
recommended approach involves identifying the necessary human activities and positions and
establishing reference points and envelopes for the necessary activities. These envelopes impact the
location and design of controls and displays, as well as the placement of work surfaces, equipment,
and seating accommodations. The effects of clothing or carried equipment are then used to expand
the dimensions.

Design to Body Positions and Motions of the Tasks


Design personnel shall base the necessary operator and user body positions and motions on
personnel tasks to be performed during normal, degraded, and emergency modes of operations
and maintenance. [Source: NASA-STD-3000A, 1989]

Construction or Collection of Unique Position Data


If the common and mobile working positions data do not represent the unique working positions
associated with a design, then design personnel shall construct the applicable human physical
characteristics and measures from the static and dynamic data. If no applicable data can be
found or calculated for important design measures, then, with the prior approval of the
acquisition program office, sample measures shall be taken on appropriate personnel for the
unique working positions. [Source: DOD-HDBK-743A, 1991; Roebuck, Kroemer, & Thomson,
1975].

Anthropometric measurement needs to be done by professionals because there are many complexities and potential interactions among positions of body segments, as well as many technical points and pitfalls to avoid in measurement practice.

Building and Using Reach Envelopes


If reach data do not apply to a specific design problem, then reach design dimensions or envelopes
for design use should be constructed considering:

 one-handed or two-handed operation

 grasp requirements which may affect the functional reach envelope


 positional relationship of a shoulder reference point or arm rotation point to the seat back,
seat reference point, or other posture reference or design reference points

 the appropriate samples and anthropometric measurements from the data provided in this
document or in DOD-HDBK-743A, 1991 [Source: DOD-HDBK-743A, 1991]

Effects of Clothing
Because most anthropometric data sources represent nude body measurements (unless otherwise
indicated), suitable allowances shall be made for light, medium, or heavy clothing and for any
special protective equipment that is worn.

These allowances account for the additive effects of clothing on static body dimensions, such as 95th percentile gloved-hand measures. If special items of protective clothing or equipment are involved, the effects shall be measured in the positions required by the users' tasks, and the effects on the extremes of the population distribution shall be determined. [Source: Department of Defense (MIL-HDBK-759B), 1992; Johnson, 1984]. Nude dimensions and light clothing can be regarded as synonymous for practical purposes.

Additional information on the changes in anthropometric measurement values imposed by different clothing ensembles is found in Johnson, 1984.

Use of Distribution and Correlation Data


Complex uses of statistical data concerning human physical dimensions or capabilities are introduced in this section. Data and distribution information on a single physical characteristic and its measures provide no information about that characteristic's composite relationship with any other characteristic and its measures. For design, the relationship between two or more characteristics, and how their measures vary together, is important; consider, for example, sizing clothing or designing seats. Bivariate distributions and correlation statistics can be used by knowledgeable professionals to determine design criteria.

Gaussian distribution of measurement values on a single human physical characteristic

The measurement value equivalent to the desired percentile statistic should best be determined from a smoothed frequency distribution if the following conditions are met:
 the percentile value is not given in applicable human-machine interface data, and
 the population distribution for the applicable human physical characteristic is known to be Gaussian (normal) and the mean and variance are known. [Source: Israelski, 1977]

Using bivariate distribution data - Bivariate data should be professionally applied and interpreted, since knowledge of the population distribution characteristics is necessary to project and extract design limits and to apply them to design problems. [Source: MIL-HDBK-759B, 1992].

The variability of two body measurements and their interrelationship with each other may be
presented in a graph or a table. Bivariate information includes the ranges of two measurements
and the percentages or frequencies of individuals who are characterized by the various possible
combinations of values of the two measurements. Knowledgeable professionals can tell about
the relationships from the appearance and shape of the joint distribution of measures.
Correlation statistics, when the relationship warrants, provide additional insight, and when
appropriate samples are large enough, may provide predictions of population values.

Use of Correlation and Multiple Correlation Data


When two or more human physical characteristics are applicable to a design problem, professionals should apply and interpret correlation statistics. Knowledge about the distributions and the inter-correlations among them needs to be factored into the use of these data. [Source: MIL-HDBK-759B, 1992; Kroemer, Kroemer, & Kroemer-Elbert 1990]
The relationships or correlations between specific body measurements are highly variable
among the various human characteristics and may differ across samples and populations. For
example, breadth measurements tend to be more highly correlated with weight than with
stature. The degree of the relationship may be expressed by a correlation coefficient or "r"
value.

Although common percentile values may not be used to sum data across adjacent body parts,
regression equations derived from the applicable samples can be used in constructing composite
body measures.

Definition: The correlation coefficient, or "r" value, describes the degree to which two variables vary together (positive correlation) or vary inversely (negative correlation). The correlation coefficient "r" ranges from +1.0 (perfect positive correlation) through -1.0 (perfect negative correlation). Multiple correlation involves the predictable relationship of two or more variables with another criterion variable (such as a composite measurement value); "R" is the multiple correlation coefficient. It is recommended that only correlations with strong predictive value be used (that is, where |r| or |R| is at least 0.7). (Note: R² is the square of the multiple correlation coefficient and equates to the proportion of the variation accounted for in the prediction. An R of 0.7 would account for about 50 percent of the variation.)
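As a sketch, the "r" value can be computed directly from paired measurements; the weight and chest-breadth pairs below are invented purely for illustration, not a real anthropometric sample:

```python
# Computing the correlation coefficient "r" and the variance explained
# (r squared) for two body measurements. The paired values are invented
# illustration data, not a real anthropometric sample.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

weight_kg = [55, 60, 62, 68, 70, 74, 80, 85]
chest_breadth_cm = [26, 27, 29, 28, 30, 31, 33, 34]

r = pearson_r(weight_kg, chest_breadth_cm)
variance_explained = r ** 2

print(round(r, 2), round(variance_explained, 2))
```

Only an r at or above the recommended |0.7| threshold (so r² of roughly 0.49 or more) would justify using one measurement to predict the other.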

Anthropometric variability factors


There are many factors that relate to the large variability observed in measures of the human
body. These factors include:

(1) body position, (2) age, health, and body condition, (3) sex, (4) race and national origin, (5)
occupation, and (6) evolutionary trends. These factors affect future population sampling and
encourage the use of the most recent data on the populations of interest. If designers and
human factors specialists need to draw upon other data or accomplish some special purpose
sampling, the following rules related to data variability may assist.

Foreign populations: If a specific use of the system or equipment involves operation or maintenance by foreign personnel in locations outside the United States, sample data should be obtained that represent the foreign work force. [Source: Israelski, 1977]

Body slump: In determining body position and eye position zones for seated or standing positions, the slump that accompanies relaxation should be considered. Seated eye-height measurements can be reduced by as much as 65 mm (2.56 in) when a person sits in a relaxed position. Body slump when standing reduces stature by as much as 19 mm (0.75 in) from a perfectly erect position. These slump factors should be considered in designing adjustable seats, visual envelopes, and display locations. [Source: Israelski, 1977]

Anthropometric and Biomechanics Data


Dimensions of the human body which influence the design of personal and operational
equipment are of two types: (1) static dimensions, which are measurements of the head, torso,
and limbs in normal positions, and (2) dynamic dimensions, which are measurements taken in
working positions or during movement. [Source: AFSC DH 1-3, 1980; NASA-STD-3000A, 1989]
Use of Anthropometric and Biomechanics Data

If designers and human factors specialists need additional data to solve anthropometric design problems associated with human physical characteristics, the task conditions below must also be taken into account.

Task considerations: Designers and human factors specialists shall take the following task conditions into consideration when using the human physical characteristic data presented:

 the nature, frequency, and difficulty of the related tasks to be performed by the operator or
user of the equipment.

 the position of the body during performance of operations and maintenance tasks.

 mobility and flexibility demands imposed by maintenance tasks.

 the touch, grasp, torque, lift, and carry requirements of the tasks.
 increments in the design-critical dimensions imposed by clothing or equipment, packages,
and tools.

 increments in the design-critical dimensions imposed by the need to compensate for obstacles and projections. [Source: MIL-HDBK-759B, 1992; MIL-STD-1472D, 1989]

Dynamic (mobile) Data


This section presents: (1) information concerning the range of whole-body motion characteristics, and (2) design rules and data on joint and body motion. Such data also find application in other areas, such as design for use and workplace design.

Range Of Whole Body Motion


Efficiency and accuracy of task performance can be maintained only if required body movements
are within safe and comfortable limits. Human variability in range of body and joint movement is
attributable to many factors, including the following:

 Age becomes a factor after age 60, by which time mobility has decreased about 10 percent from its value in youth.

 Sex differences favor a greater range in females at all joints except the knee.

 Body build is a significant factor. Joint mobility decreases significantly as body build ranges from the very slender, through the muscular, to the obese.

 Exercise generally increases movement range; however, weight training, jogging, and the like may tend to shorten certain muscle groups or increase their bulk, so that movement is restricted.

 Fatigue, disease, body position, clothing, and environment are other factors affecting mobility. [Source: NASA-STD-3000A, 1989; AFSC DH 1-3, 1980; Israelski, 1977].

This part provides introductory definitions related to the angular motion of skeletal joints.
Knowledge of the range of joint motion helps the designer determine the placement and
allowable movement of controls, tools, and equipment.

Trunk Movement
Workplace designs based upon design-driven body positions shall allow enough space to move the
trunk of the body. The design shall be based upon:
 the required tasks and human functions
 the need for optimal positions for applying forces
 the need for comfortable body adjustments and movements. [Source: MIL-HDBK-759B,
1992]

7.5 NEED FOR INDIAN ANTHROPOMETRY


Age, sex, race, geographical region, and even occupation all influence human body dimensions. Accurate dimensions of clothing and personal equipment used by persons (e.g. headgear, footwear, spectacles, lifesaving and support equipment) would be of great value, because human functional dimensions and the range of movements possible demand that appropriate allowances be made when specific designs are developed.

It is advocated by experts that the anthropometric data used for the specific design considerations of a specific user group should be based on the same population group. Anthropometric data obtained from one group may differ in acceptable values from similar data obtained from others. For solving the specific design problems of a specific user group, anthropometric data should therefore come from the same population group, using appropriate percentile selections. The use of non-Indian anthropometric data in Indian designs, and of other imported readymade designs, often results in mismatches with the requirements of Indian users. Accidents and serious mistakes may occur if design dimensions do not match the body dimensions of specific groups.
Indian behavior is also not like that of foreigners. Some Indians prefer sitting on the floor and
performing a range of activities there. Non-Indian data sources do not provide the references for
these requirements.

India being a multicultural nation with an ethnically diverse population, it would be of direct relevance to strengthen design practice in India with data on human dimensions collected from Indian population groups for the specific needs of Indian users.

7.6 GUIDELINES FOR DESIGN USE (USAGE OF ANTHROPOMETRIC PERCENTILE VALUES)
The relevant anthropometric supports, along with the intended users' behavioral patterns, should be considered together while designing. To make an article of the correct size for a single individual's need, the individual's own dimensional requirements may be of direct importance; but for mass production and use, and for creating a system of multiple units or a work space, proper percentile selections of the anthropometric data should be made and adequate allowances considered.

Data provided here are taken from subjects wearing minimum clothes and without footwear or headwear. Hence, while using height dimensions for any design application, appropriate height adjustments for soles and headwear may be an added consideration. Dimensional allowances and movement restrictions due to the wearing of heavy clothes need to be considered carefully. The support of anthropometric data (collected from the specific population groups) in designing specific articles, e.g. products, equipment, furniture, machine tools, etc., should be investigated.

It would not be an exaggeration to say that there is no person with all of his or her body dimensions at the 95th, 50th, or 5th percentile. All body parts do not follow the same proportions and may even show different somatotype features (somatotype: the structure or build of a person, especially the extent to which it exhibits the characteristics of an ectomorph, an endomorph, or a mesomorph). A person with a 50th percentile body height may have 75th percentile hand length, 25th percentile foot length, and 95th percentile abdominal circumference; a person of 75th percentile height may have 25th percentile chest circumference and 50th percentile head circumference; a generally mesomorphic body structure may have a hip portion of endomorphic nature. These mixed body types and proportions among body parts constitute the common population.
Dimensions of equipment or work accessories and workspaces should be considered while designing
in order to achieve effective accommodation layout and for enabling easy handling of equipment by
moving within and around the space provided.

As an example, the height of a cooking platform for a housewife of very short stature must be fixed according to her height to make her feel comfortable while cooking. It should take into account the dimensions of the cooking accessories, so that these come within the range of her free arm reach and movement while cooking.

Here, the standard general work-surface height used by designer-architects may not be appropriate. If the surface is to be used by others within the same user population, then the proper percentile values of the relevant human dimensions, as well as of the work surface height itself, should be considered. Proper allowances should be made for the different tasks to be performed, so that most of the population can perform their cooking tasks without problems or uneasiness.

For design purposes to fit an intended user from amongst the known population group, different
percentile values of different human body dimensions should be considered for different design
dimensions. Designing an article or a system with a single percentile value for all the relevant human
dimensions would fail to satisfy all the other dimensional features of the design.

7.7 PERCENTILE SELECTION FOR DESIGN USE


Depending on the nature of the design and the context of use, the design should usually be conceived to accommodate the population between the 5th and 95th percentiles, keeping the 50th as the mid-value, so that most of the population is covered. Values below the 5th percentile and above the 95th percentile are generally considered extreme data and are normally neglected unless there are special requirements. As a general rule, to place something out of reach, higher percentile values are relevant; to ensure easy reach, lower percentile values are relevant in design.

Designing an article for large-sized users means that the higher percentile values of the relevant dimensions should be considered. Since the majority of the population has lower values than large-sized users, users with lower percentile values will not be able to reach such an article easily and will keep away from unwanted things beyond their reach, thereby ensuring safety. For designing doors, stature heights above the maximum value must be considered, with appropriately defined allowances for articles that intended users may carry on the head; a feeling of psychological clearance may be an added dimension. The higher percentile value of maximum body breadth may be considered for passageways, etc., to provide free movement. Small-sized users should be considered when designing things for easy reach, keeping in mind the contextual use and the application of strength or any other consideration involving human effort. This means that the lower percentile values of a dimension should be considered, so that the maximum number of people, who have higher values than that, can perform physical tasks more easily and with good control of the body.

Normally, for the moving parts of a machine that are dangerous and not to be touched, i.e. those which must be kept out of arm's reach, the higher value of the "leaning forward" arm reach, along with appropriate allowances for safety distances, should be considered in order to ensure safety. But if an "on-off" type handle or switch needs to be used only a few times throughout the whole working period, then it can be placed at a distance of about the 75th percentile of the normal "standing forward" grasp reach.

While working, it should not create any obstacle to normal work; when required, it can be grasped by leaning, as it comes within the 5th percentile of the "leaning forward" grasp reach limit. If anything has to be operated smoothly, it should be placed nearer, say within the lower percentile range (perhaps the 5th) of the arm grasp reach. This means that people having a higher value than that can reach it easily. Below this value, people can handle it with a little difficulty, but the number of such people would be very limited; hence, they may be ignored for general purposes if there is no specific requirement.

7.8 USE OF "AVERAGE"

Selection of the average or mean value of a dimension depends on its contextual use and on whether
the design critically demands fitting the whole range of the user population. The terms mean (or
average) and median (the 50th percentile) are not identical. But if the sample on which the data
were collected is large enough, the median (50th percentile) is normally close to the mean. Hence,
in practice, the average is used as a synonym for the 50th percentile.
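As an illustrative sketch, the near-equality of mean and median for a reasonably large, roughly symmetric sample can be checked directly. The sample values below are invented for illustration, not taken from any anthropometric survey:

```python
import statistics

# Hypothetical sample of a body dimension in mm (illustrative values only,
# not from any anthropometric survey).
sample = [980, 1005, 1010, 1022, 1030, 1038, 1045, 1051, 1060, 1078]

mean_val = statistics.mean(sample)      # arithmetic average
median_val = statistics.median(sample)  # the 50th percentile

# For large, roughly symmetric samples the two values nearly coincide,
# which is why "average" is used in practice as a synonym for the
# 50th percentile.
print(mean_val, median_val)
```

For skewed samples the two values drift apart, which is why the distinction matters for small or irregular data sets.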

The average value should not always be adopted blindly while designing. As a classical example,
while conceptualizing a common counter height for general-purpose use, the 50th percentile elbow
height may be considered, because the height differences of people lying in both halves of the data
distribution can then be accommodated. Where adjustment is possible, the range may extend towards
the 95th percentile value so that, for specific needs, the height may be adjusted accordingly. A
door height based on the 50th percentile stature may work, but tall people could then pass through
only by bending. This creates inconvenience for them and should not be used. If we consider people
of average weight, say 55 kg, and design a lift to carry 10 people, the carrying capacity works out
to 550 kg. If 10 lighter people board it, there is no problem; but when one or more heavier people,
say of 75th or 95th percentile weight, 60 kg, 75 kg or more than 100 kg, do so, it may not be safe.
Hence, values tending towards the higher levels, even the highest limits, should be considered,
along with allowances for the articles carried by the people on board as well as other safety
allowances, so that overloading beyond the lift's capacity is ruled out.
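The lift example can be turned into a small back-of-the-envelope calculation. This is only a sketch of the reasoning above; the design body weight, article allowance and safety margin are assumed figures for illustration, not values taken from any lift standard:

```python
# Back-of-the-envelope rating of a 10-person lift. All figures are
# assumptions for illustration, not values from a lift standard.
persons = 10
average_weight_kg = 55        # the naive design value criticized in the text
design_weight_kg = 100        # a value tending towards the highest limit
articles_kg = 10              # allowance for articles carried on board
safety_margin = 1.10          # additional structural safety allowance

naive_capacity_kg = persons * average_weight_kg
rated_capacity_kg = persons * (design_weight_kg + articles_kg) * safety_margin
print(naive_capacity_kg, rated_capacity_kg)
```

The point of the comparison is that rating for the average (550 kg) leaves no margin once even a few heavier-than-average passengers board.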

Another example of the same case with reference to weight may be considered while determining
the structural strength of a seat, or in the case of an individual or a family, a design for a suspended
garden swing. Then, not only should the highest value be used but the proper allowances for the
different ways of using the seat should be considered. The user may carry something on his lap; he
may even stand and do some work. This could be an added consideration.

Consideration of a Range of Percentiles


For shoe lengths, a single percentile value of the foot length dimension cannot serve the purpose
of accommodating a large percentage of the population. To fit most people, no single percentile
value, such as the 95th percentile of foot length, can ever be appropriate. Different percentile
values are required so that shoes fit users of different foot sizes.

7.9 CONCEPT OF MALE-FEMALE COMBINED DATA FOR DESIGN USE

Anthropometric data are obtained separately for males and females, and are sometimes also presented
separately. We may require these separate data when designing separately for males and females.
Designers quite often cite the 5th percentile of the female data as the lowest value and the 95th
percentile of the male data as the highest value of the dimensional range of the human body in
general. However, not all dimensions follow this rule. Nowadays, as there is very limited scope for
dividing jobs exclusively between males and females, percentile values of any body dimension
computed from both males and females together might serve well for general-purpose design of
articles or work spaces. According to requirements, percentile values may be derived irrespective of
whether the population is predominantly male or female.

For example, to keep a dangerously moving part of any equipment out of easy reach, it should be
located so that most of the population cannot get within easy reach of it. The higher percentile
value of the forward arm reach in the "standing in front, leaning" posture, say the 95th percentile,
is found to be 1336 mm for males and 1199 mm for females. To ensure safety, irrespective of whether
the intended users will be only females, only males or both, the higher of the two values must be
considered. Here the male and female combined 95th percentile figure, 1309 mm, may also be
considered in general, and the moving part can be fixed at that distance. If the equipment is
unguarded, to rule out all risk, the combined maximum value of 1500 mm (the male maximum; the
female maximum is 1250 mm) should be used.

Another example, where easy accommodation is the prime concern, is designing a seat with arm rests.
The seat breadth should accommodate the relaxed mid-thigh-to-thigh distance. Where the higher
values, say the 95th percentile, are found to be 449 mm for males and 529 mm for females, the
combined 95th percentile value of 479 mm may be used for general purposes, or the higher of the
two may be considered.

For general use we may select: a) the lowest and highest values of any dimension taken from
separate male and female data sources; or b) the whole data set collected from both males and
females, computed as a single population with combined values, irrespective of whether individual
measurements came from male or female subjects.
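Both options can be sketched with Python's statistics module. The reach samples below are invented for illustration; only the method matters, namely taking range limits from the separate data sets versus pooling both samples and reading percentiles off the combined distribution:

```python
import statistics

# Hypothetical forward-arm-reach samples in mm (illustrative values only).
male_reach = [1210, 1250, 1280, 1300, 1320, 1336]
female_reach = [1080, 1110, 1140, 1160, 1180, 1199]

# Option (a): take the range limits from the separate male and female data.
lowest, highest = min(female_reach), max(male_reach)

# Option (b): pool both samples into a single population and compute
# percentiles on the combined data, irrespective of sex.
combined = sorted(male_reach + female_reach)
q = statistics.quantiles(combined, n=100, method="inclusive")  # 99 cut points
p5, p50, p95 = q[4], q[49], q[94]
print(lowest, highest, p50)
```

With real survey data the pooled percentiles would be weighted by the actual male/female proportions of the intended user population; here the two samples are simply concatenated.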

7.10 PRACTICAL APPLICATION OF ANTHROPOMETRIC DATA

As stated earlier, anthropometric data are useful in work physiology, occupational biomechanics,
and ergonomic/work design applications. Of these, the most common practical use is in the
ergonomic design of workspaces and tools. Here are some examples:

• The optimal power zone for lifting is approximately between standing knuckle height and
elbow height, as close to the body as possible. Always use this zone for lifting and
releasing loads, as well as for carrying them; better still, minimize the need to carry loads
by using carts, conveyors, and workspace redesign.

• Strive to design work that is lower than shoulder height (preferably elbow height), whether
standing or sitting. (Special requirements for vision, dexterity, frequency, and weight must
also be considered.)
• The upper border of the viewable portion of computer monitors should be placed at or
below eye height, whether standing or sitting (Konz and Johnson 2004).

• Computer input devices (keyboard and mouse) should be slightly below elbow height,
whether standing or sitting (Konz and Johnson 2004). Use split keyboards to promote
neutral wrist posture, and learn keyboard shortcuts to minimize excessive mouse use
(Konz and Johnson 2004). Voice commands are another option; speech recognition software
is increasingly effective for many users and applications.

• For seated computer workspace, the lower edge of the desk or table should leave some
space for thigh clearance (Konz and Johnson 2004).

• For seating, the height of the chair seat pan should be adjusted so the shoe soles can rest
flat on the floor (or on a foot rest), while the thighs are comfortably supported by the length
of the seat pan (Konz and Johnson 2004). Use knowledge of the popliteal (rear surface of the
knee) height, including the shoe sole allowance.

• The chair seat pan should support most of the thigh length (while the lower back is well
supported by the seat back), while leaving some popliteal (area behind the knee) clearance
(Konz and Johnson 2004). In other words, the forward portion of the seat pan should not
press against the calf muscles or the back of the knees.

• For horizontal reach distances, keep controls, tools, and materials within the forward reach
(thumb tip) distance. Use the anthropometric principle of designing for the extreme by
designing the reach distances for the 5th percentile female, thus accommodating 95 percent
of females and virtually 100 percent of males.

7.11 SUMMARY
In the science of anthropometrics, measurements of a population's dimensions are obtained to
capture its size and strength capabilities and differences. From these measurements, a data set is
collected that reflects the studied population in terms of size and form. The population can then
be described by a frequency distribution, in terms of the mean, the median, the standard deviation,
and percentiles. The frequency distribution for each measured dimension is expressed in
percentiles: the xth percentile indicates that x percent of the population has that value or less
for the given measurement, while 100 − x percent has a higher value. The median value for a
particular dimension is the 50th percentile and, for large samples, is close to the mean.

In ergonomic design, we do not design for the average person, i.e. the 50th percentile.
Conventionally, the 95th percentile is chosen to determine clearance heights or lengths: 95% of the
population will then be able, for example, to pass through a door safely and efficiently, and only
the remaining 5% may need special accommodation. Similarly, the 5th percentile female value is
chosen to determine functional reach distances: designing to this value means that about 95% of
females, and virtually all males, will be able to perform the reach, with only a small minority
needing accommodation.
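When only the mean and standard deviation of a dimension are available, percentile design values are commonly estimated by treating the dimension as normally distributed. A minimal sketch, with assumed stature figures (mean 1650 mm, SD 70 mm; illustrative, not from any specific survey):

```python
from statistics import NormalDist

# Assumed stature distribution in mm (illustrative mean and SD only).
stature = NormalDist(mu=1650, sigma=70)

# Design values X_p = mean + z_p * SD, computed via the inverse CDF.
p95 = stature.inv_cdf(0.95)   # clearance-type value (e.g. basis for door height)
p5 = stature.inv_cdf(0.05)    # reach-type value (design for the small user)

# Share of this population accommodated by a clearance equal to p95:
accommodated = stature.cdf(p95)
print(round(p95, 1), round(p5, 1), round(accommodated, 2))
```

The same two calls generalize to any percentile, so a designer can trade off the accommodated share against the physical size of the clearance or reach envelope.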

Standard anthropometric tables can be used, or an anthropometric table can be created for a
specific worker population. These data tables help determine safe and efficient work-area and
workflow designs. Anthropometric tables provide useful information on population size, shape and
strength capabilities and differences, so that work areas can be built or modified to improve work
design and increase efficiency.

7.12 KEYWORDS

Body Slump - To suddenly fall or sit because one is very tired or unconscious.

Anthropometric - Anthropometry refers to the measurement of the human individual.

Biomechanics - Biomechanics is the study of the structure and function of biological systems such
as humans, animals, plants, organs, and cells by means of the methods of mechanics.

Correlation - In statistics, dependence refers to any statistical relationship between two random
variables or two sets of data. Correlation refers to any of a broad class of statistical
relationships involving dependence.

Design limits - The design limits approach entails selecting the most appropriate percentile values
in population distributions and applying the associated data in a design solution.
UNIT 8 MUSCULAR SYSTEMS & WORK
Objectives
After going through this unit, you will be able to:
 identify the occupational musculoskeletal disorders.
 recognize the risk factors in development of musculoskeletal disorders.
 combine different risk categories as risk qualities.
 guide on various risk factors to employee and employer.
 identify the factors for prevention of musculoskeletal disorders.
Structure
8.1 Introduction
8.2 Characteristics of health problems
8.3 Basic risk factors for the development of musculoskeletal disorders
8.4 Factors contributing to the development of musculoskeletal disorders
8.5 Factors to be considered in prevention
8.6 Guidance on main risk factors
8.7 Basic rules for preventive actions in practice
8.8 Summary
8.9 Keywords

8.1 INTRODUCTION

The term musculoskeletal disorders denotes health problems of the locomotor apparatus, i.e. of
muscles, tendons, the skeleton, cartilage, ligaments, and nerves. Musculoskeletal disorders include
all forms of ill health, ranging from light, transitory disorders to irreversible, disabling
injuries. This unit focuses on musculoskeletal disorders that are induced or aggravated by work and
the circumstances of its performance. Such work-related musculoskeletal disorders are understood to
be caused or intensified by work, though activities such as housework or sports may often also be
involved.

The severity of these disorders may vary from occasional aches or pains to precisely diagnosed
specific diseases. The occurrence of pain may be interpreted as the result of reversible acute
overloading, or may be an early symptom of a serious disease.

Disorders of the musculoskeletal system represent a main cause for absence from occupational
work. Musculoskeletal disorders lead to considerable costs for the public health system. Specific
disorders of the musculoskeletal system may relate to different body regions and occupational work.
For example, disorders in the lower back are often correlated to lifting and carrying of loads or to the
application of vibration. Upper-limb disorders (at fingers, hands, wrists, arms, elbows, shoulders,
neck) may result from repetitive or long-lasting static force exertion or may be intensified by such
activities.

8.2 CHARACTERISTICS OF HEALTH PROBLEMS

Health problems occur if the mechanical workload is higher than the load-bearing capacity of the
components of the musculoskeletal system. Injuries of muscles and tendons (e.g. strains, ruptures),
ligaments (e.g. strains, ruptures), and bones (e.g. fractures, unnoticed micro-fractures, degenerative
changes) are typical consequences.

In addition, irritations at the insertion points of muscles and tendons and of tendon sheaths, as well
as functional restrictions and early degeneration of bones and cartilages (e.g. menisci, vertebrae,
inter-vertebral discs, and articulations) may occur.

There are two fundamental types of injury: one acute and painful, the other chronic and lingering.
The first type is caused by a strong, short-term heavy load leading to a sudden failure of
structure and function (e.g. tearing of a muscle during a heavy lift, a bone fracture due to a
plunge, or blocking of a vertebral joint due to a vehement movement). The second results from
permanent overload, leading to continuously increasing pain and dysfunction, e.g. wear and tear of
ligaments, tendovaginitis (an acute or chronic inflammation of a tendon sheath, occurring in the
region of the hand, the wrist joint, the forearm, the foot, the ankle joint, and the Achilles
tendon), and muscle spasm and hardening. Chronic injuries resulting from long-term loading may be
disregarded and ignored by the worker, because the injury may seemingly heal quickly and may not
result in a significant impairment at first.

The number of such injuries is substantial. In industrialized countries, about one-third of all
health-related absences from work are due to musculoskeletal disorders. Back injuries have the
highest proportion (approximately 60%); these include lower back pain, ischiatic pain (along the
nerve that arises from the sacral plexus and passes about halfway down the thigh before dividing
into the common peroneal and tibial nerves), disc degeneration and herniation (protrusion through
an abnormal bodily opening). The second position is taken by injuries of the neck and the upper
extremities, e.g. pain syndromes of the neck, shoulders and arms, "tennis elbow", tendinitis and
tenovaginitis (common joint ailments among professional athletes and workers performing repetitive
motions), carpal tunnel syndrome, and syndromes related to cumulative traumata, the so-called
cumulative trauma disorders (CTDs) or repetitive strain injuries (RSIs). These are followed by
injuries of the knees (for example, degeneration of the menisci, arthrosis) and hips (e.g.
arthrosis); an arthrosis here denotes a joint, an area where two bones are connected for the
purpose of motion, usually formed of fibrous connective tissue and cartilage. It is generally
accepted that working conditions and workload are important factors for the development and
continuance of these disorders.

8.3 BASIC RISK FACTORS FOR THE DEVELOPMENT OF MUSCULOSKELETAL DISORDERS
Work-related musculoskeletal disorders are supposed to be causally linked to physical load resulting
from occupational activities. Disorders or injuries affecting muscles, tendons, joints, ligaments, and
bones are mainly caused by mechanical overload of the respective biological structures. Potential
overload of tissues results from high intensity forces or torques acting on and inside the body.
Examples of occupational activities coinciding with high mechanical requirements are handling of
objects, as in transportation jobs, or the application of pushing or pulling forces to tools or machines.
The detrimental effect of mechanical overload depends mainly on the magnitude of the force.

Furthermore, the duration of exposure is an important factor in the development of musculoskeletal


disorders. It is mainly determined by the number of repetitions per unit time (e.g. per day) as well as
by the total exposure time (e.g. hours per day or days per month). Regarding the character of
exposure, occasional vocational loading events can be distinguished from long-lasting activities
occurring over many years during the entire occupational life. Short-term loadings may primarily
lead to acute health disturbances whereas long-lasting exposure may, in a final stage, cause chronic
disorders.

The risk for the musculoskeletal system depends to a great extent on the posture of the operator.
Especially, twisting or bending the trunk can result in an increased risk for the development of
diseases at the lower back. Postural demands play an important role, particularly, when working in
confined spaces.

Besides such types of occupational loading resulting from usual work-site conditions,
musculoskeletal disorders can also be caused by unique, unforeseen, and unplanned situations, e.g.
by accidents. The origin of disorders due to accidents is characterized by a sudden overstrain of
the organs of locomotion.
Total Mechanical Loading

The total load affecting the musculoskeletal system depends on the level of the different load
factors mentioned before, such as:
 the level and direction of forces
 the duration of exposure
 the number of times an exertion is performed per unit of time
 postural demands

Risk Qualities

According to the factors mentioned before, different risk categories can be derived using different
combinations or qualities thereof, such as:
 high-intensity forces
 long exposure duration
 highly repetitive exertions
 strong postural demands
 strong or long-lasting muscular strain
 disadvantageous environmental or psychosocial conditions

8.4 FACTORS CONTRIBUTING TO THE DEVELOPMENT OF MUSCULOSKELETAL DISORDERS

In the following, musculoskeletal load is characterized with respect to the main influences, such
as the level of force, repetition and duration of execution, postural and muscular effort as well
as environmental and psychosocial factors.

Exertion of high-intensity forces may result in acute overloading of the loaded tissues.
High-intensity forces act within the body tissues particularly during lifting or carrying of heavy
objects. Furthermore, pushing, pulling, holding, or supporting an object or a living being is a
matter of high-intensity forces.

Handling loads over long periods of time may lead to musculoskeletal failures if the work is
continued for a considerable part of the working day and is performed for several months or years.
An example is performing manual materials-handling activities over many years, which may result
in degenerative diseases, especially of the lumbar spine. A cumulative dose can be regarded as an
adequate measure for the quantification of such types of loadings. Relevant factors for the
description of the dose are duration, frequency, and load level of the performed activities.
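As a minimal sketch of such a dose measure: the simple linear load × frequency × duration model below, and every figure in it, are assumptions for illustration; published dose models for cumulative lumbar load are more refined:

```python
# Cumulative-dose sketch: dose = load level x repetitions x exposure days.
# The linear model and all figures are illustrative assumptions, not values
# from any published dose model.
load_per_handling_n = 2500      # assumed lumbar compression per lift (N)
handlings_per_day = 120         # repetition frequency
exposure_days = 220 * 10        # ten working years of 220 days each

cumulative_dose = load_per_handling_n * handlings_per_day * exposure_days
print(cumulative_dose)          # total accumulated load over the period
```

Expressing long-term exposure as a single accumulated quantity is what allows jobs with different load levels and frequencies to be compared on one scale.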

Musculoskeletal disorders may also result from frequently repeated manipulation of objects, even if
the weights handled or the forces produced are low. Such jobs (e.g. assembling small work pieces
for a long time, long-time typing, and supermarket checkout work) may place the musculature at a
disadvantage even though the forces applied to the handled objects are low. Under such conditions,
the same parts and fibers of a muscle are activated for long periods of time or at a high
frequency, and may be subject to overload. Early fatigue, pain, and possible injuries are the
consequences.

In a well-designed workstation, work can be performed most of the time in an upright posture with
the shoulders un-raised and the arms close to the trunk. Working with a heavily bent, extended or
twisted trunk can result in an overload of spinal structures and increased activity of entire muscles.
If the trunk is simultaneously bent and twisted the risk of spinal injury is considerably increased. If
movements or postures with the hands above shoulder height, below knee level or outstretched
are performed over prolonged periods or recurrently, working conditions should be changed.
Working in a kneeling, crouching, or squatting position augments the risk of overloading
musculoskeletal elements. Also, long-time sitting in a fixed posture is accompanied by long-lasting
muscular activity which may lead to an overload within muscular structures. Such working positions
should be avoided and the time for working in such positions should be kept to a minimum if such
work is not completely avoidable.

Static muscular load is found under conditions where muscles are tensed over long periods of time
in order to keep a certain body posture (e.g. during work with the hands overhead in drilling holes
into the ceiling, holding the arms abducted in hair dressing, holding the arms in typing position
above the keyboard, working in a confined space). A characteristic of static muscular load is that a
muscle or a group of muscles is contracted without the movement of the corresponding joints. If
the muscle has no opportunity to relax during such a task, muscular fatigue may occur even at low
force levels, and the function of the muscles may be impaired and become painful. In addition, static load
leads to a deficiency in blood circulation in muscles. Under normal conditions, the permanent
change between contraction and relaxation acts as a circulation-supporting pump. Continuous
contraction restricts the flow of blood from and to the contracted muscle. Swelling of legs, for
example, is an indicator of such a lack in blood circulation.
Muscular inactivity represents an additional factor for the development of musculoskeletal
disorders. Muscles need activation to maintain their functional capacity, and the same is true of
tendons and bones. If activation is lacking, a de-conditioning will develop, which leads to functional
and structural deficits. As a result, a muscle is no longer able to stabilize joints and ligamental
structures adequately. Joint instabilities and failures in coordination, connected with pain,
movement abnormalities and overloading of joints, may be the consequences.

Monotonous repetitive manipulations with or without an object over long periods of time may lead
to musculoskeletal failures. Repetitive work occurs when the same body parts are repeatedly
activated and there is no possibility of at least a short period of relaxation, or a variation in
movement is not possible. Relevant determining factors are the duration of the working cycles,
their frequency, and the load-level of the performed activity. Examples of repetitive work are
keyboard use while typing, data entry, clicking or dragging a computer mouse, meat cutting, etc.
Unspecific complaints due to repetitive movements of the upper extremities are often summarized
in the term "repetitive strain injury - RSI".

Strain on the locomotor system may also occur due to the application of vibration. Vibration may
result from hand-held tools (e.g. rock drill) and, therefore, exert vibration strain on the hand-arm
system. Hand-arm vibration may result in the dysfunction of nerves, reduced blood circulation,
especially in the fingers (white finger syndrome) and degenerative disorders of bones and joints of
the arms. Another risk concerns whole body vibration generated by vibrating vehicles and platforms
such as earth-moving machines, low-lift platform trucks or tractors and trucks driving off-road. The
vibration is transferred to the driver via the seat. Whole-body vibration can cause degenerative
disorders, especially in the lumbar spine. The effect of vibration may be intensified if the vehicle is
driven in a twisted body posture. A vibration-attenuating driving seat may help to reduce the effect
of vibration.

Physical environmental factors such as unsuitable climatic conditions can interact with mechanical
load and aggravate the risk of musculoskeletal disorders. In particular, the risk of vibration-induced
disorders of the hands is considerably enhanced if low temperature coincides with the use of hand-
held vibrating tools. Another example of environmental factors influencing the musculoskeletal
strain is the lighting conditions: If lighting and visual conditions are deficient, muscles are strained
more intensively, particularly in the shoulder and neck region.
Besides the mechanically induced strain affecting the locomotor organs directly, additional factors
can contribute to the beginning or aggravation of musculoskeletal disorders. Psychosocial factors
can intensify the influence of mechanical strain or may induce musculoskeletal disorders by
themselves due to increasing muscle tension and by affecting motor co-ordination. Furthermore,
psychosocial influences such as time pressure, low job decision latitude or insufficient social support
can augment the influence of physical strain.

A summary of the main factors contributing to the risk of developing work-related musculoskeletal
disorders is provided in the table below.

Main factors contributing to musculoskeletal disorders

Factor: Exertion of high-intensity forces
Possible result or consequence: Acute overloading of the tissues
Example: Lifting, carrying, pushing, pulling heavy objects
Good practice example or solution: Avoid manual handling of heavy objects

Factor: Handling heavy loads over long periods of time
Possible result or consequence: Degenerative diseases, especially of the lumbar spine
Example: Manual materials-handling
Good practice example or solution: Reduce mass of objects or number of handlings per day

Factor: Frequently repeated manipulation of objects
Possible result or consequence: Fatigue and overload of muscular structures
Example: Assembly work, long-time typing, check-out work
Good practice example or solution: Reduce repetition frequency

Factor: Working in unfavorable posture
Possible result or consequence: Overload of skeletal and muscular elements
Example: Working with heavily bent or twisted trunk, or hands and arms above shoulders
Good practice example or solution: Working with an upright trunk and the arms close to the body

Factor: Static muscular load
Possible result or consequence: Long-lasting muscular activity and possible overload
Example: Working overhead, working in a confined space
Good practice example or solution: Repeated change between activation and relaxation of muscles

Factor: Muscular inactivity
Possible result or consequence: Loss of functional capacity of muscles, tendons and bones
Example: Long-term sitting with low muscular demands
Good practice example or solution: Repeated standing up, stretching of muscles, remedial gymnastics, sports activities

Factor: Monotonous repetitive manipulations
Possible result or consequence: Unspecific complaints in the upper extremities (RSI)
Example: Repeated activation of the same muscles without relaxation
Good practice example or solution: Repeated interruption of activity, pauses, alternating tasks

Factor: Application of vibration
Possible result or consequence: Dysfunction of nerves, reduced blood flow, degenerative disorders
Example: Use of vibrating hand-tools, sitting on vibrating vehicles
Good practice example or solution: Use of vibration-attenuating tools and seats

Factor: Physical environmental factors
Possible result or consequence: Interaction with mechanical load and aggravation of risks
Example: Use of hand-held tools at low temperatures
Good practice example or solution: Use gloves and heated tools at low temperatures

Factor: Psychosocial factors
Possible result or consequence: Augmentation of physical strain, increase in absence from work
Example: High time pressure, low job decision latitude, low social support
Good practice example or solution: Job rotation, job enrichment, reduction of negative social factors

(Source: WHO guidelines by PDDIM Jäger)

8.5 FACTORS TO BE CONSIDERED IN PREVENTION

With regard to the maintenance and promotion of health, a proper balance between activity and rest
is necessary. Rest pauses are a prerequisite for recovery from load-induced strain and for
preventing the accumulation of fatigue. Movement should be preferred to static holding; the aim
should be a combination of active periods with loading and inactive periods of relaxation. The
individual "favorable load" can vary from person to person, depending on functional abilities and
individual resources. Overload as well as inactivity should be avoided. Appropriate loading trains
the muscles, leading to adaptation and thus an increase in the capacity of muscles, tendons, and
bones. This is essential for health and well-being.

CAVEAT: This general view, however, needs refinement in special cases, since parts of the
musculoskeletal system may not adapt to loads in the same way. For example, repetitive lifting of
heavy loads probably does increase muscle capacity, but probably does not increase the capacity of
the spinal discs to withstand mechanical loading. Consequently, strength training could mislead
individuals to believe they could safely lift greater loads and thus risk back problems. Jobs should,
therefore, be so designed that most people are able to carry them out, rather than only a few
strong individuals.

Work Performance Strategies


One risk factor for overloading the musculoskeletal system is the method by which the worker
performs the task; there are more and less risky strategies for executing it. An example is lifting
heavy objects with the centre of gravity kept near the body: whenever possible, heavy objects
should be lifted by bending the knees instead of bending the back. Further measures to reduce the
risk of overloading are avoiding twisted and laterally bent postures, and working continuously at a
moderate pace rather than in short periods under high time pressure. The worker must be informed
about these possibilities and should be motivated to use them.

Avoid Accidents and Injuries


The avoidance of accidents is another important field for the prevention of musculoskeletal
disorders. Hazardous situations, e.g. for plunges (falls), can occur during work at greater height,
for example on a ladder, a scaffold or a construction site. The risk of plunges can be reduced by
securing the standing position and by stabilizing the equipment on which the worker climbs. In
particular, the use of steady ladders, fixed to the floor or to stable objects, is indispensable.
Only sufficiently stable and steady scaffolds should be used, and these should be fixed to the
building. Furthermore, securing the position of the worker by roping to the climbing aid (ladder,
scaffold) or building is an important measure for preventing plunges.

Injuries to the head, hand and foot can be avoided by using protective helmets, gloves, or shoes,
respectively. Another important measure is to prevent objects from falling, by fastening or
covering them in an appropriate manner. In particular, during the transportation of goods with
cranes or other hoisting devices, goods should be secured by covering them or by spreading a plane
underneath.

8.6 GUIDANCE ON MAIN RISK FACTORS


In this part, some of the main risk factors for the development of musculoskeletal disorders are
listed, and examples of tasks and working conditions are provided. Additionally, potential causes for
health disturbances and injuries as well as suggestions as to how to avoid these are made.

Risk frequently results from exposure to mechanical loading. The main influencing factors are high
forces resulting from lifting and from pushing or pulling heavy objects, high repetition frequency,
long duration of force exertion, unfavorable posture, uninterrupted muscle force exertion, and
working on or with vibrating machines. In some cases, the degree of handling precision, rather than
the actual force exerted, constitutes an additional hazardous factor.

Risk Factor: Manipulation of Heavy Loads

Where are heavy loads manipulated?


Examples are:
 lifting and carrying of heavy objects in transportation jobs, construction sites, etc.,
 Transferring persons in health professions, in old-age care and hospital care.

Holding and moving of heavy loads requires high muscular force; this may lead to acute overload
and/or fatigue of muscles. Examples: repeated manipulation of heavy bricks in construction
work, loading of coffee sacks, cement bags or other loads onto ships, containers, or lorries.

During holding and moving of heavy loads, high forces occur in the skeletal system, too. Risk of
acute overloading and damage may result. Loadings being incurred over a long period of time
may cause or promote degenerative disorders, especially in the low-back area (e.g. when
handling loads with a bent back). For the individual risk of manual materials-handling activities,
the functional capacity of the working person plays an important role.

Heavy Load Handling


The most important factors concerning the risk are the weight of the object to be manipulated,
the horizontal distance between the load and the body and the duration and repetition
frequency of task execution. This leads to some important measures for handling objects.
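The factors just named (load weight, horizontal distance between the load and the body, duration and repetition frequency) are exactly the inputs of common lifting-risk screens. As an illustration only, the sketch below is loosely based on the revised NIOSH lifting equation, which is not part of this unit: the 23 kg load constant and the horizontal and vertical multipliers follow the published NIOSH form, but the frequency multiplier is a simplified stand-in for the NIOSH lookup table, and all function names are ours.

```python
# Simplified lifting-risk screen, loosely based on the revised NIOSH
# lifting equation. The 23 kg load constant and the horizontal/vertical
# multipliers follow NIOSH; the frequency multiplier below is a crude
# illustrative stand-in for the published lookup table.

def recommended_weight_limit(h_cm, v_cm, freq_per_min):
    """Rough recommended weight limit (kg) for a two-handed lift.

    h_cm: horizontal distance of the load from the body (cm)
    v_cm: vertical height of the hands at lift origin (cm)
    """
    LC = 23.0                                  # load constant, kg
    HM = min(1.0, 25.0 / max(h_cm, 25.0))      # horizontal multiplier
    VM = 1.0 - 0.003 * abs(v_cm - 75.0)        # vertical multiplier
    FM = max(0.2, 1.0 - 0.06 * freq_per_min)   # hypothetical frequency multiplier
    return LC * HM * VM * FM

def lifting_index(load_kg, h_cm, v_cm, freq_per_min):
    """Lifting index: values above 1 flag the task for redesign."""
    return load_kg / recommended_weight_limit(h_cm, v_cm, freq_per_min)

# A 20 kg sack held 50 cm from the body scores far worse than one held close:
print(lifting_index(20, h_cm=50, v_cm=75, freq_per_min=1))
print(lifting_index(20, h_cm=25, v_cm=75, freq_per_min=1))
```

A lifting index above 1 indicates that the task should be redesigned, for example by reducing the horizontal distance or by mechanizing the lift.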

Advice to the Employee


 Lift loads close to the body.
 Lift with both hands, symmetrically to the mid-sagittal plane, bringing the load as close as
possible to the body.
 Lift heavy loads with an upright trunk by extending the initially flexed legs and avoid
manipulation of loads in un-favorable postures (e.g. lateral bending or twisting).
 Use cranes, lifters, dollies, hoists, pallet jacks, mobile elevators, or similar devices, if
available, for lifting and transporting heavy loads.
 Carry heavy and/or unwieldy loads with two persons.

Advice to the Employer


 Avoid manual handling tasks, especially of heavy loads. If manual handling is still necessary,
introduce ergonomic measures to minimize the resulting risk.
 Avoid moving loads over obstacles.
 Avoid carrying over uneven or slippery routes, over steps or stairs.
 Avoid high or frequent handling procedures.
 Avoid large masses (e.g. instead of one heavy sack use two sacks of smaller weight).
 Provide aids (hoists, or similar devices).
 Mark heavy loads.
 Mark non-symmetrical load distribution within the object. Mark containers or barrels with
movable content (fluids, granules, etc.).
 Suggest and carry out training on "handling".

Risk Factor: Work with High Force Exertion


Where is high force exertion found?
Examples are:
 pushing or pulling of heavy objects
 pulling of trolleys and other means of transportation
 positioning of packages in a transportation vehicle
 manipulation of scaffold parts
 transfer of patients

Why is high force exertion harmful?


The exertion of force requires high muscle forces. This may lead to an acute overloading
and/or fatigue of muscles.

During such work high forces also occur in the skeletal system. This may lead to an acute
overloading and injury of the skeletal structures. Force exertion, where the force acts
distantly from the body, bears a high risk of damage to the lumbar spine tissues. For tasks
with long-lasting or frequently repeated high force exertion, there is risk of degenerative
diseases especially of the lumbar spine. This is true if force exertion is carried out in
unfavorable body postures.

How can high force exertion be avoided?


Advice to the employee:
 Carry out pushing and pulling in such a way that the force acts close to the body.
 Avoid pushing or pulling with only one hand.
 Avoid pushing or pulling with strong lateral bending and/or twisted trunk.
Advice to the employer:
 Provide conditions for secure standing.
 Provide wheeled vehicles, trolleys, dollies, or similar devices.
 Avoid pushing or pulling in confined rooms because of constrained postures.
 Avoid obstacles and uneven ground.

Risk factor: Working in unfavorable body postures


Where do unfavorable postures occur?
Examples are:
 work overhead
 work in constrained positions
 work in confined rooms
 work in extremely bent, twisted, or extended postures
 work in a continuously inclined posture (e.g. construction work, concrete reinforcement work)
 work out of reach
 work in a kneeling, lying, crouching, or squatting position.

Why are unfavorable postures harmful?


The keeping up of a certain body posture demands high muscle force; acute overload and/or
fatigue of muscles can, therefore, be the result.
Examples are: construction work far from the body, which demands high activation of the
musculature for holding the arms; and twisted or extended body postures, which demand high
muscular strain and tension of the trunk muscles.

During unfavorable body postures, high forces occur also in the skeletal system. This may lead to
acute overloading and damage of skeletal structures. For long-lasting activities with an inclined
trunk, degenerative disorders, especially in the lumbar region, can arise if such work is executed
over a period of many years.

Maintaining unfavorable body postures for long periods of time relates to long-term activation
of certain muscles which may lead to muscular fatigue and considerable reduction in blood
circulation. Such partial decrease in the functional ability of the musculature leads to a reduced
ability to react on sudden impacts and may therefore result in increased accident risk.

How can unfavorable postures be avoided?


Advice to the employee
 Bring body close to the position where the object must be handled, or where force
application is performed.

 Avoid strong lateral bending or twisting of the trunk.

 Approach the working area and body close enough to enable carrying out the task within
reach; use aids such as scaffolds and ladders, if suitable.

 Change posture often to activate different muscles alternately while carrying out tasks;
consider alternating between standing and sitting postures.

Advice to the employer
 Offer adjustable equipment: chairs, tables, scaffolds, etc.

 Supply rooms of sufficient size to avoid constrained postures.

 Arrange tools within reach.

 Set time-limits when constrained postures are unavoidable and/or alternate tasks of
different nature.

 Avoid giving tasks that require a kneeling, lying, crouching, or squatting position.

Risk factor: Monotonous repetitive tasks


Where do monotonous repetitive tasks occur?
 Execution of similar or identical movements during a large part of the working time with a
high rate of repetition (i.e. several times per minute).

 During work the working person often has little influence on the working pace, speed, task
sequence, and work and break schedule.
 Commonly, the working person cannot leave the workplace without being replaced by
another person.

Examples are:
 assembly line
 cash registration
 loading of packing machines

Why are monotonous repetitive tasks harmful?


Long-lasting repetitive muscle load leads to muscle fatigue, which - if sufficient recovery is
not guaranteed - may lead to irreversible changes in the muscular structure. Not only high-level
forces but also low-level forces may cause such an effect. Repetitive movements are
often superimposed with static loading, in particular postural load.

How can monotonous repetitive tasks be avoided?


Advice to the employee:

 Avoid continuous loading of the same muscles for longer periods of time.

 Strive for changes in motion to avoid identical muscular activation patterns. For
strongly monotonous work, changes in the execution of movements may be limited.

 Change body posture frequently to reduce static loading.

 Use rest pauses.

Advice to the employer:


 Provide for organizational changes, such as job rotation, job diversification or job
enrichment, to reduce the extent of task repetition for individuals.
 Enable autonomous decisions about the timing of breaks.
 Mechanize unavoidable monotonous tasks with high load.

Risk factor: Long-lasting Loadings


Where do long-lasting loadings occur?
Examples are:
 maintaining a static posture, e.g. during bricklaying at floor level; concrete reinforcement
work; picking of fruits and vegetables at floor level; writing; typing; work with a computer mouse
 holding of objects or tools (e.g. drilling in a ceiling; overhead painting; holding of
operation instruments in surgery; carrying a tray uninterruptedly).

Why are long-lasting loadings harmful?


Long-lasting muscle load leads to muscle fatigue. Fatigue without sufficient recovery can
lead to irreversible changes in the muscular structure. Even the exertion of low-level forces
(for example, a long-lasting fixed posture) can lead to over-exertion and fatigue of small
muscles or muscle groups. Long-lasting contraction of muscles may result in insufficient
blood circulation.

In the skeletal system long-lasting loading (e.g. due to long-lasting work in an inclined
posture) can lead to deficient nutrition of the spinal discs.

How can long-lasting loading be avoided?

Advice to the employee:


 Move instead of holding a static position.
 Use tools for holding objects.
 Strive for frequent change in body position.
 Strive for frequent upright positions from inclined positions.
 Stand up from time to time when working in a sitting position, for example, while
making a telephone call.

Advice to the employer:


 Provide tools for holding (e.g. screw-clamps; handles, which enable holding with low
muscle force).
 Provide scaffolds, ladders, or similar devices.
 Supply arm supports at computer workplaces.
 Supply handles or grips which can be used with right as well as left hand.
 Place handles/grips to enable use in a neutral position of wrist and arm.

Risk factor: Physical environmental conditions


Where is the risk on account of physical environmental conditions?

Vibration:
 Hand-arm vibration encountered through hand-held tools may lead to degenerative
disorders or to blood circulation problems in the hand (especially the fingers - white
finger syndrome).
 Whole-body vibration in vehicles may lead to degenerative disorders, in particular, of
the lumbar and thoracic spine.

Climate:
High temperatures during the manipulation of heavy loads may lead to blood pressure
problems and to an increase in body temperature. At low temperatures, a decrease in
dexterity may occur.

Lighting
Insufficient lighting or dazzling may induce constraint postures. Furthermore, it may increase
the danger of stumbling or falling.

Slips and falls:


Unsuitable, uneven, unsteady, or slippery working surfaces and floors can cause tense,
strenuous working postures and movements, in particular, in tasks entailing the handling of
loads.

How can the risk of physical environmental conditions be reduced?

Vibration:
The effect of hand-arm vibration can be reduced by using tools with low vibration, reducing
the time of usage of vibrating equipment, wearing gloves, and avoiding coinciding influence
of low temperatures.
The effect of whole-body vibration can be reduced by using vibration-absorbing seats and
reducing the time during which vibration is applied to the body.

Climate:
Wearing appropriate garments, regularly alternating between rooms with high and low
temperatures, and limiting the time spent in rooms with high or low temperature.

Lighting:
Supplying sufficient and un-dazzling lighting equipment.

Slips and falls:


Avoid unsuitable, uneven, unsteady, or slippery working surfaces, floors and transportation
routes whenever it is possible.

8.7 BASIC RULES FOR PREVENTIVE ACTIONS IN PRACTICE

A risk for disorders of the musculoskeletal system appears if the load and the functional capacity of
the worker are not in balance. With regard to maintenance and promotion of health, the following
points must be considered:
 There is a need for a proper balance between physical activity and recovery.

 Movement should be preferred to static holding. The aim should be a combination of active
periods with higher load and periods of relaxation.

 Overload should be avoided. Effective measures for preventing overload are reducing the
required forces and repetitions.

 Manual handling should be avoided. If avoidance is not completely possible, it should be
restricted by applying ergonomic and organizational measures; the employees should be
educated and trained so that they can help to minimize the overall risks.

 Too low load should be avoided. An appropriate load for the organs of locomotion is
essential in order to keep up their functional ability.
 The individual "favorable load" can vary from person to person depending on functional
abilities and individual resources.

The primary aim of ergonomics is the adaptation of working conditions to the capacity of the
worker. High human capabilities of the employees should not be misused as a pretense for
maintaining poorly designed conditions of work or work environment. Thus, it is important to take
into account influencing factors such as age, gender, the level of training, and the state of knowledge
in an occupation. The working conditions should be arranged in such a way that there is no risk from
physical load for anyone at the workplace.

Fundamental points influencing the physical load functions of an employee at worksite are:
 requirements of work with respect to body positions and postures of the employee
 design of working area
 configuration of body supports
 light and visual requirements
 arrangement of controls and displays
 movement sequences of operations
 design of work-rest regimen
 type of energetic loads with respect to force, repetition, and duration of work
 Magnitude of mental loads by increasing the latitude and control of work or job enrichment.

A secondary way is to develop the capacity of the humans to the work by training and vocational
adjustment. The possibility of development of human abilities while executing work should not be
the pretense for keeping a poorly designed condition of work or the work environment. The
selection of workers according to individual capacity should be limited to exceptional situations.
Successful prevention of work-related health risks requires a scheduled and stepwise procedure:
 analysis of the working conditions
 assessment of the professional risk factors
 consideration/provision of measures for diminishing the risk factors by ergonomic design of
the workplace (prevention in the field of working conditions)
 introduction of measures for the diminution of the risk factors by influencing the behavior of
employees (prevention in the field of behavior)
 coordination of the prevention measures with all subjects involved
 discussion of alternative prevention approaches
 specific and scheduled application of the prevention approaches
 Control and assessment of the results.

8.8 SUMMARY

Disorders of the musculoskeletal system are a major cause of absence from work and lead,
therefore, to considerable cost for the public health system. Health problems arise if the mechanical
workload is higher than the load-bearing capacity of the components of the musculoskeletal system
(bones, tendons, ligaments, muscles, etc.). Apart from the mechanically-induced strain affecting
the locomotor organs directly, psychosocial factors such as time-pressure, low-job decision latitude
or insufficient social support can augment the influence of mechanical strain or may induce
musculoskeletal disorders by increasing muscle tension and affecting motor coordination.

A reduction of the mechanical loading on the musculoskeletal system during the performance of
occupational work is an important measure for the prevention of musculoskeletal disorders. The
main risk factors are high forces resulting from lifting and pushing or pulling heavy objects, high
repetition frequency or long duration of force execution, unfavorable posture, static muscle forces
or working on or with vibrating machines. Effective measures for the reduction of forces acting
within or on the skeletal and muscular structures include adopting favorable postures, reducing load
weight, limiting exposure time and reducing the number of repetitions.

Prevention of musculoskeletal disorders can be achieved by engineering controls and appropriate


organizational arrangements. The first-mentioned aspect involves the whole working environment
and deals with the ergonomic design of tools, workplaces, and equipment. The latter concentrates
upon factors such as training, instruction, and work schedule. The primary aim of ergonomic work
design is the adaptation of the working conditions to the capacity of the worker. It is supplemented
by a secondary way, which is based on the development of the persons' capacity to the working
requirements by training and vocational adjustment.

8.9 KEYWORDS

Musculoskeletal Disorders- Musculoskeletal disorders (MSDs) can affect the body's muscles, joints,
tendons, ligaments, and nerves
Mechanical loading- Mechanical forces (e.g. pressure, compression, friction, and shear) acting on
the structures of the musculoskeletal system.

Monotonous repetitive tasks- Work tasks consisting of similar or identical movements repeated at
a high rate, which can lead to pain in muscles and joints.

Postures- A neutral spine or good posture refers to the three natural curves that are present in a
healthy spine. From the anterior/posterior view the 33 vertebrae in the spinal column should appear
completely vertical.
UNIT 9 THERMAL ENVIRONMENT

Objectives
After going through this unit, you will be able to:
 balance the design of workplace using thermal balance approach.
 recognize the importance of core temperature and skin temperature at workplace.
 consider conduction and insulation properties of work environment.
 differentiate the comfort standards in theory and practice.
 design heating systems, considering the essential components.
Structure
9.1 Introduction
9.2 Physiological Measurements
9.3 Thermal Balance
9.4 Thermal Indices
9.5 Heating Systems
9.6 Summary
9.7 Keywords

9.1 INTRODUCTION
The main objective of controlling the thermal environment in relation to humans is to match
activities and human responses so as to optimize health, comfort, safety and performance.

Several factors interact within the thermal environment. Ultimately, it is the individual's
physiological and psychological responses which indicate whether a particular combination of
these factors produce too much heat gain or loss and results in unacceptable physiological,
psychological, or subjective states.

The values providing criteria of unacceptable strain are based upon research that has shown that the
physical and psychological performance of an increasing proportion of the population will be less
effective as these criteria are exceeded in extreme conditions.

Human beings are homeotherms (organisms that maintain their body temperature at a constant
level, usually above that of the environment, by metabolic activity) and are able to maintain a
constant internal temperature within arbitrary limits of ±2°C despite much larger variations in
ambient temperature. In a neutral environment at rest, deep body temperature may be kept
within a much narrower band of control (±0.3°C). The main physiological adjustments involve
changes in heat production, especially by shivering or voluntary muscular activity during physical
exertion, and alterations in heat loss by vasomotor changes which regulate heat flow to the skin
and increased evaporative heat loss by sweating. Evaporation of sweat from the body surface
provides man with one of the most effective methods of heat loss in the animal kingdom. Even so,
temperature extremes can only be tolerated for a limited period depending on the degree of
protection provided by shelter, the availability of clothing insulation in the cold and fluid
replacement in the heat. By behavioral responses, human beings can avoid the effects of such
extremes without recourse to excessive sweating or shivering. Repeated exposure to heat (and to
a lesser extent to cold) can result in acclimatization, which involves reversible physiological
changes that improve the ability to withstand the effects of temperature stress. A permanent
adaptation to climate can occur by the natural selection of beneficial changes in body form and
function ascribable to inherited characteristics.

9.2 PHYSIOLOGICAL MEASUREMENTS


Core Temperature
The deep body or core temperature is normally maintained within a narrow range around
37°C. Core temperature represents a composite temperature of the deep tissues, but even in
the core, temperature is not uniform because organs such as the liver and active muscles
have a higher rate of heat production than other deep tissues.

Conventionally, core temperature is measured by a thermometer placed in the mouth (BS 691:
1987). Errors arise if there is mouth breathing or talking during measurement, or if hot or cold
drinks have been taken just previously, or if the tissues of the mouth are affected by cold or
hot external environments. The rectal temperature is a slowly equilibrating but more reliable
measurement of deep body temperature and on average about 0.5°C higher than mouth
temperature. Cold blood from chilled legs and warm blood from active leg muscles will affect
the rectal temperature. The temperature of the urine is a reliable measure of core temperature
providing it is possible to void a volume of 100 ml or more. For continuous measurement of
core temperature, thermoelectric devices may be placed in the ear or the temperature in the
intestine may be monitored by telemetry using a temperature-sensitive transmitter in the form
of a pill that can be swallowed.

The internal temperature of warm-blooded animals including man does not stay strictly
constant during a day even when keeping constant the generation of heat from food intake
and physical activity. In humans it may be 0.5-1.0°C higher in the evening than in the early
morning due to an inherent circadian temperature rhythm. Another natural internal
temperature variation occurs in women at the time of ovulation when core temperature rises
by 0.1-0.4°C until the end of the luteal (post-ovulation) phase of the menstrual cycle.

Skin Temperature
Across the shell of the body, from the skin surface to the superficial layers of muscle, there is
a temperature gradient which varies according to the external temperature, the region of the
body surface, and the rate of heat conductance from the core to the shell. When an
individual is thermally comfortable, the skin of the toes may be at 25°C, that of the upper
arms and legs at 31°C, the forehead temperature near 34°C while the core is maintained at
37°C. Average values for skin temperature can be obtained by applying thermistors or
thermocouples to the skin and using weighting factors for the different representative areas.
Regional variations can be visualized and recorded more comprehensively by infra-red
thermograph.
Other Measurements
Several other physiological parameters apart from core and shell temperatures are of value in
interpreting the components of thermal strain. These include peripheral blood flow to measure
vasomotor changes, sweat loss, shivering responses, cardiovascular responses such as heart rate,
blood pressure and cardiac output, and metabolic heat production. The choice of measurements
will depend largely on the nature of the thermal stress, the requirements and limits set by those
responsible for the health and safety of employees, and the degree of acceptance of the methods
by the subjects. Assessment of the physiological strain in a person subjected to thermal stress is
usually related to two measurements of physiological function - the core temperature and the
heart rate. Tolerance limits for work in adverse temperature conditions are commonly based on
acceptable 'safe' levels of these two functions - in the heat, a limit of 38.0°C core temperature
and 180 beats per minute heart rate in normal healthy adults, and in the cold, 36.0°C core
temperature. Close monitoring and greater restriction of these limits may need to be applied to
older workers and unfit personnel.
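The 'safe' levels quoted above translate into a trivial monitoring check. A minimal sketch, assuming readings in °C and beats per minute (the function name is ours):

```python
# Illustrative check against the tolerance limits quoted in the text:
# 38.0 degC core temperature / 180 bpm heart rate in the heat, and
# 36.0 degC core temperature in the cold.

def within_tolerance(core_temp_c, heart_rate_bpm):
    """Return True while both physiological limits are respected."""
    return 36.0 <= core_temp_c <= 38.0 and heart_rate_bpm <= 180

print(within_tolerance(37.2, 120))   # typical working values
print(within_tolerance(38.4, 150))   # core-temperature limit exceeded
```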

In field studies, techniques for measuring physiological parameters include telemetry


procedures for ambulatory monitoring. A comparison of some of the standard methods
available for measuring core temperature, skin temperature, heart rate and total body
sweat loss, with an appraisal of their technical limitations and the discomfort and risks they
involve, can be found in ISO 9886 (1992).

9.3 THERMAL BALANCE


The body's core temperature remains constant when there is an equilibrium between internal heat
production and heat loss from the surface. Thermal balance is expressed in the form of an equation:

S = M ± W ± K ± C ± R − E

where M is the rate of metabolic heat production, W the external work performed by or on the
body, K, C and R the loss or gain of heat by conduction, convection and radiation, E the
evaporative heat loss from the skin and respiratory tract and S the rate of change in the store of
body heat (= 0 at thermal equilibrium).
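Assuming every term is expressed in W.m-2 and that losses are entered as positive numbers, the balance can be evaluated directly. A minimal sketch (the function name and sign convention are ours):

```python
# Minimal sketch of the heat-balance equation: storage S is what remains
# of metabolic heat after external work and all exchange terms.
# Sign convention assumed here: losses positive, gains negative,
# all terms in W.m-2.

def heat_storage(M, W, K, C, R, E):
    """Rate of change of the body heat store, S (W.m-2)."""
    return M - W - K - C - R - E

# At thermal equilibrium the terms cancel and S is zero:
print(heat_storage(M=100, W=10, K=0, C=40, R=30, E=20))   # -> 0
```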

Respiratory heat loss (RES) occurs in cool environments because expired air is warmer and has a
higher absolute humidity than inspired air. For a person expending energy at 400 W.m-2 in an air
temperature of minus 10°C, the RES will be about 44 W (25 W.m-2). For normal indoor activities
(seated/standing) in 20°C ambient temperature, the heat loss by respiration is small (2 to 5 W.m-2)
and hence is sometimes neglected.

Metabolic Heat Production (M)
The human body may be regarded as a chemical engine, with foods of different energy content as
the fuel. At rest, some of the chemical energy of food is transformed into mechanical work, eg in
the heart beat and respiratory movements. This accounts for less than 10% of the energy
produced at rest, the remainder being used in maintaining ionic gradients in the tissues and in
chemical reactions in the cells, tissues, and body fluids. All this energy is ultimately lost from
the body in the form of heat and the balance of intake and loss maintained during daily
physical activity. In general, energy intake from food balances energy expenditure, except in
those cases where body weight is changing rapidly. In the absence of marked weight changes,
measurement of food consumption may be used in assessing habitual activity or energy
expenditure, though in practice, energy balance is only achieved over a period of more than
one week.

Energy released in the body by metabolism can be derived from measurements of oxygen
consumption using indirect calorimetry (see BS EN 28996: 1994; Determination of metabolic
heat production). The value of metabolic heat production in the basal state with complete
physical and mental rest is about 45 W.m-2 (ie per m2 of body surface area) for an adult male
of 30 years and 41 W.m-2 for a female of the same age. Maximum values are obtained during
severe muscular work and may be as high as 900 W.m-2 for brief periods. Such a high rate can
seldom be maintained; performance at 400-500 W.m-2 is very heavy exercise but an overall
rate that may be continued for about one hour. Metabolic heat is largely determined by
muscle activity during physical work but may be increased at rest in the cold by involuntary
muscle contractions during shivering.

In the heat balance equation given previously, M - W is the actual heat gain by the body during
work, or M + W when negative work is performed. In positive work, some of the metabolic
energy appears as external work so that the actual heat production in the body is less than the
metabolic energy produced. With negative work eg. 'braking' while walking downstairs, the
active muscle is stretched instead of shortening so that work is done by the external
environment on the muscles and appears as heat energy. Thus, the total heat liberated in the
body during negative work is greater than the metabolic energy production.
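This positive/negative work rule can be written out directly. A minimal sketch, with M and the magnitude of W in watts (the function name is ours):

```python
# Heat liberated in the body for positive vs negative work, per the
# M - W / M + W rule in the text. W is the magnitude of the external
# work in watts.

def body_heat_production(M, W, negative_work=False):
    """Actual heat gain by the body (W)."""
    return M + W if negative_work else M - W

# Positive work (e.g. climbing): some metabolic energy leaves as external work.
print(body_heat_production(M=500, W=100))                      # -> 400
# Negative work (e.g. 'braking' downstairs): the environment adds heat.
print(body_heat_production(M=500, W=100, negative_work=True))  # -> 600
```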

Conduction (K) and Insulation (I)
Heat is conducted between the body and static solids or fluids with which it is in contact. The
rate at which heat is transferred by conduction depends on the temperature difference
between the body and the surrounding medium, the conductance (k) and the area of contact.
This can be expressed as

K = k (t1 − t2)

where K is the heat loss per unit surface area of the body (W.m-2), t1 and t2 the temperatures of
the body and environment respectively (°C), and k is a constant, the conductance (W.m-2.°C-1).
In considering conductance at the body surface it is usually more convenient to refer to
insulation (I), the reciprocal of k, where I is a measure of resistance to heat flow (m2.°C.W-1).
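The conduction relation translates line for line into code. A minimal sketch (the function names and the example values are ours):

```python
# Conductive heat exchange K = k * (t1 - t2), with insulation I = 1/k.

def conduction_loss(k, t_body, t_env):
    """Heat loss per unit area, W.m-2 (a negative value means heat gain)."""
    return k * (t_body - t_env)

def insulation(k):
    """Insulation I (m2.degC/W), the reciprocal of conductance."""
    return 1.0 / k

# Skin at 33 degC against a surface at 25 degC with k = 10 W.m-2.degC-1:
print(conduction_loss(10.0, 33.0, 25.0))   # -> 80.0
print(insulation(10.0))                    # -> 0.1
```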

For the human body there are three different components of I: Itl, the insulation of the tissues,
affecting the flow of heat from the core to the skin at temperature tsk; and Icl and Ia, the
insulation of clothing and air, affecting the heat flow from the skin to the air at temperature ta.

An arbitrary unit of insulation, the clo, is used for assessing the insulation value of clothing. By
definition 1.0 clo is the insulation provided by clothing sufficient to allow a person to be
comfortable when sitting in still air at a temperature of 21°C. 1.0 clo is equivalent to an Icl of
0.155 m2.°C.W-1. Examples of typical clothing insulation values are given in Table 9.1.
Table 9.1: Basic insulation values, Icl, of a range of clothing ensembles (adapted from Fanger,
1970). For full listings of ensembles and individual garments see BS ISO 9920:1995.

Clothing ensemble                      Icl (clo)
Nude                                   0
Shorts only                            0.1
Light summer clothing                  0.5
Typical indoor clothing                1.0
Heavy business-type suit               1.5
Business clothes, overcoat plus hat    2.0
Polar weather suit                     3 to 4
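Since 1.0 clo = 0.155 m2.°C.W-1, clo ratings convert directly into SI insulation, and a rough dry heat flow through clothing follows from the conduction relation. A minimal sketch, ignoring the air-layer insulation Ia (the skin and air temperatures in the example are assumed values):

```python
# Convert clo units to SI insulation and estimate dry heat flow through
# clothing from the skin (t_sk) to the air (t_a), ignoring the air layer.

CLO_TO_SI = 0.155          # 1.0 clo = 0.155 m2.degC/W

def clothing_heat_flow(clo, t_sk, t_a):
    """Dry heat flow (W.m-2) through clothing of the given clo value."""
    insulation_si = clo * CLO_TO_SI
    return (t_sk - t_a) / insulation_si

# Typical indoor clothing (1.0 clo), skin at 33 degC, air at 21 degC:
print(round(clothing_heat_flow(1.0, 33.0, 21.0), 1))   # -> 77.4
```

Doubling the clothing insulation halves the dry heat flow for the same temperature difference, which is why heavy ensembles matter so much in the cold.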

When a person is fully vasoconstricted, Itl is about 0.6 clo. When fully vasodilated at rest in the
heat, Itl falls to 0.15 clo, and when exercising hard in the heat it may fall to about 0.075 clo.
These figures show that increasing tissue insulation by vasoconstriction can play only a small
part relative to clothing in protecting an individual against cold, but decreased tissue insulation
significantly helps the loss of heat in a hot environment. The amount of subcutaneous fat is an
important variable determining cooling rate by tissue insulation and it is especially effective in
cold water immersion. The thermal conductance of 10 mm thickness of freshly excised
human fat tissue is reported to be 16.7 W.m-2.°C-1 (Beckman, Reeves and Goldman, 1966).
The thickness and distribution of the subcutaneous layer of fat differs from person to person,
but for a mean thickness of 10 mm the insulation value would be equivalent to 0.96 clo.

Convection (C)
Normally, the surface temperature of a person is higher than that of the surrounding air so
that heated air close to the body will move upwards by natural convection as colder air takes
its place. The expression for heat exchange by convection is similar to that for conduction and
is given by:

C = hc (t1 − t2)

where C is the convection loss per unit area, hc is the convective heat transfer coefficient and
t1 and t2 the temperatures of the body surface and the air respectively. The value of hc
depends on the nature of the surrounding fluid and how it is flowing. Natural (free) convection
applies in most cases when the relative air velocity is < 0.1 m.s-1.

The transfer coefficient then depends on the temperature difference between clothing (tcl)
and air (ta) (in °C) as given (in units of W.m-2.°C-1) by

hc = 2.38 (tcl − ta)^0.25
Relative air velocity is increased when the arms or legs are moved through the environment, as
for example in walking. The convective heat transfer coefficient is increased further when air
movement induced by a fan or draught causes forced convection. For forced convection over a
range of air speeds up to 4 m.s-1, the best practical value for the mean convective heat transfer
coefficient is given by

hc = 8.3 v^0.5

where v = air velocity in m.s-1.
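The two transfer-coefficient formulas can be combined into a single estimate of convective loss. A minimal sketch, selecting the free-convection form below 0.1 m.s-1 and the forced form above it (the function names and example values are ours):

```python
# Convective heat transfer coefficient and loss, per the two formulas in
# the text: natural convection hc = 2.38*(tcl - ta)^0.25 for still air,
# forced convection hc = 8.3*sqrt(v) for air speeds up to 4 m/s.

import math

def convective_coefficient(t_cl, t_a, v):
    """hc in W.m-2.degC-1 for relative air velocity v (m/s)."""
    if v < 0.1:                                   # natural (free) convection
        return 2.38 * abs(t_cl - t_a) ** 0.25
    return 8.3 * math.sqrt(v)                     # forced convection

def convection_loss(t_cl, t_a, v):
    """Convective heat loss C = hc * (tcl - ta), in W.m-2."""
    return convective_coefficient(t_cl, t_a, v) * (t_cl - t_a)

# Still air: clothing surface at 30 degC, air at 20 degC
print(round(convection_loss(30.0, 20.0, 0.05), 1))   # -> 42.3
# A 1 m/s draught roughly doubles the loss:
print(round(convection_loss(30.0, 20.0, 1.0), 1))    # -> 83.0
```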

Radiation (R)

Radiant heat emission from a surface depends on the absolute temperature T (in Kelvin, K, ie °C
+ 273) of the surface to the fourth power, ie is proportional to T^4. The radiation transfer, R (in
W.m-2), between similar objects 1 and 2 is then given by the expression

R = σε (T1^4 − T2^4)

where σ is the Stefan-Boltzmann constant (5.67 x 10-8 W.m-2.K-4) and ε is the emissivity of
the objects.
An approximation is often permissible where the rate of heat transfer between surfaces is
related to their temperature difference, and the first power of this difference may then be used
in the same form as that for heat transfer by conduction and convection:

R = hr (t1 − t2)

Where R is radiant heat transfer per unit area, h, the radiant heat transfer coefficient and t 1
and h the temperatures of the two surfaces. h, depends on the nature of the two surfaces,
their temperature difference and the geometrical relationship between them.
For many indoor situations the surrounding surfaces are at a fairly uniform temperature and
the radiant environment may be described by the mean radiant temperature. The radiant heat
exchange between the body surface area (clothed) and surrounding surfaces in W.m-2 is given
by
R = eσ feff fcl[(tcl + 273)^4 - (tr + 273)^4]

where e is the emissivity of the outer surface of the clothed body; feff is the effective radiation
area factor, i.e. the ratio of the effective radiation area of the clothed body to the surface area of
the clothed body; fcl is the clothing area factor, i.e. the ratio of the surface area of the clothed
body to the surface area of the nude body; tcl is the clothing surface temperature (°C) and tr
the mean radiant temperature (°C).
The value of feff is found by experiment to be 0.696 for seated persons and 0.725 for standing
persons. The emissivity for human skin is close to 1.0 and most types of clothing have an
emissivity of about 0.95. These values are influenced by color for short wave radiation such as
solar radiation.

The mean radiant temperature tr is defined as the temperature of uniform surrounding surfaces
which will result in the same heat exchange by radiation from a person as in the actual
environment. The mean radiant temperature is estimated from the temperature of the
surrounding surfaces weighted according to their relative influence on a person by the angle
factor between the person and the radiating surface; tr is therefore dependent on both a
person's posture and their location in a room.
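The radiant-exchange equation can be evaluated directly. In this sketch the emissivity (0.95 for clothing) and feff (0.696, seated) come from the text; the clothing area factor fcl = 1.1 is an assumed illustrative value, since the text does not give one.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W.m-2.K-4

def radiant_exchange(t_cl, t_r, emissivity=0.95, f_eff=0.696, f_cl=1.1):
    """Radiant heat exchange between the clothed body and surroundings, W.m-2.

    R = e*sigma*feff*fcl*[(tcl + 273)^4 - (tr + 273)^4]
    f_eff = 0.696 (seated); f_cl = 1.1 is an assumed clothing area factor.
    """
    return (emissivity * SIGMA * f_eff * f_cl
            * ((t_cl + 273.0) ** 4 - (t_r + 273.0) ** 4))

# Seated clothed person: clothing surface 30 C, mean radiant temperature 20 C
print(round(radiant_exchange(30.0, 20.0), 1))  # roughly 44 W.m-2
```

Note that although the underlying law is fourth-power, over this modest 10 °C difference the exchange is nearly linear in (tcl - tr), which is why the first-power approximation R = hr(t1 - t2) is often acceptable indoors.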
Evaporation(E)
At rest in a comfortable ambient temperature an individual loses weight by evaporation of
water diffusing through the skin (insensible cutaneous water loss) and from the respiratory.
Passages (RES)Total insensible water loss in these conditions is approximately 30 g.h-1

Water diffusion through the skin will normally result in a heat loss equal to approximately 10
w.m-2.

The latent heat of vaporization of water is 2453 kJ.kg-1 at 20°C, so a sweat rate of 1
litre per hour will dissipate about 680 W. This value of heat loss is only obtained if all the
sweat is evaporated from the body surface; sweat that drips from the body is not providing
effective cooling.
Evaporation is expressed in terms of the latent heat taken up by the environment as the result
of evaporative loss and the vapor pressure difference which constitutes the driving force for
diffusion

E = he(psk - pa)

Where E is the rate of heat loss by evaporation per unit area of body surface (W.m-2), he the
mean evaporation coefficient and psk and pa the partial pressure of water vapor at the skin
surface and in the ambient air (kPa).
The direct determination of the mean evaporation coefficient (he) is based on measurement of
the rate of evaporation from a subject whose skin is completely wet with sweat. Since the
production of sweat is not even over the body surface this requires that total sweat rate must
exceed evaporative loss by a considerable margin - a state that is difficult to maintain for any
length of time.

Air movement (v) and body posture are also important in making the measurement, and in
surveying the results of various researchers, Kerslake (1972) recommended that for practical
purposes he (in units of W.m-2.kPa-1) be represented by

he = 124(v)^0.5

for v in the range 0.1 to 5.0 m.s-1.

If the actual rate of evaporation E1 is less than the maximum rate possible, Emax, at the
prevailing evaporation coefficient, psk and pa, the ratio E1/Emax can be used as a measure of skin
wettedness. The skin surface may be considered as a mosaic of wet and dry areas. With a
wettedness value of 0.5, the rate of evaporation achieved would be equivalent to half the skin
surface being covered with a film of water, the other half being dry. For insensible cutaneous
water loss the value is about 0.06, and at the maximum value 1.0 the skin is fully wet.
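Putting the evaporation coefficient and the wettedness ratio together gives a short sketch (the numerical inputs in the example are hypothetical, chosen only to illustrate the calculation):

```python
def evaporative_coefficient(v):
    """Kerslake's mean evaporation coefficient he = 124*v**0.5 (W.m-2.kPa-1),
    stated for air speeds in the range 0.1 to 5.0 m/s."""
    return 124.0 * v ** 0.5

def wettedness(E_actual, v, p_sk, p_a):
    """Skin wettedness w = E1/Emax, where Emax = he*(psk - pa)."""
    E_max = evaporative_coefficient(v) * (p_sk - p_a)
    return E_actual / E_max

# Hypothetical values: 0.5 m/s air, skin vapour pressure 5.6 kPa,
# ambient 1.6 kPa, actual evaporative loss 120 W.m-2
print(round(wettedness(120.0, 0.5, 5.6, 1.6), 2))  # 0.34
```

A result of 0.34 would mean the cooling achieved is equivalent to about a third of the skin surface carrying a film of water.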

Heat storage (S)
The specific heat of the human body is 3.5 kJ.kg-1.°C-1. If a 65 kg individual has a change in mean
body temperature of 1°C over a period of 1 h, the rate of heat storage is 230 kJ.h-1, or 64 W. In
the equation for thermal balance, S can be either positive or negative, but in determining
storage the difficulty is to assess the change in mean body temperature. The change in mean
deep body temperature alone is not acceptable because of the different weightings
contributed by the core and shell.

Various formulae have been suggested to combine measurements of skin and core temperature
to give a mean body temperature, e.g.

0.90 tcore + 0.10 tskin in hot conditions and

0.67 tcore + 0.33 tskin in cold conditions.

The volume of the warm core during vasoconstriction in cold surroundings is effectively
reduced thereby altering the weighting coefficients.
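The storage arithmetic and the weighting formulae can be checked with a small sketch (names are illustrative). It reproduces the worked example in the text: 65 kg, a 1 °C rise in mean body temperature over 1 h.

```python
BODY_SPECIFIC_HEAT = 3.5  # kJ.kg-1.C-1

def mean_body_temp(t_core, t_skin, hot=True):
    """Weighted mean body temperature (C), using the weightings in the text."""
    if hot:
        return 0.90 * t_core + 0.10 * t_skin
    return 0.67 * t_core + 0.33 * t_skin

def storage_rate_watts(mass_kg, delta_t_body, hours=1.0):
    """Rate of heat storage S (W) for a change in mean body temperature."""
    kilojoules = BODY_SPECIFIC_HEAT * mass_kg * delta_t_body
    return kilojoules * 1000.0 / (hours * 3600.0)

# 65 kg person, +1 C over 1 h: 227.5 kJ/h, i.e. about 63 W
# (the text rounds this to 230 kJ/h, or 64 W)
print(round(storage_rate_watts(65.0, 1.0)))  # 63
```

The sign of delta_t_body carries the sign of S: a falling mean body temperature gives negative storage, i.e. net heat loss.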
Interpretation of Core Temperature
Limiting values of core temperature are regarded as 'safe' in adverse temperature
conditions; e.g. in the heat, 38.0°C is usually stated as the body temperature at which work
should cease. In practice, there are usually two principal situations which lead to raised core
temperature.

Extreme Heat Stress
Extreme heat stress due to hot environmental conditions when little physical work is performed
(environmental heat stress). Core temperature (tc) might eventually reach 38°C when mean skin
temperature (tsk) has risen to perhaps 35°C or higher. Under these conditions, maximum sweating
(Emax) is achieved as the result of the combined central nervous drive from high tc and high tsk.

Exertional Heat Strain


In work situations in cooler climates, physical activity often provides the main stimulus to
increasing tc (exertional heat strain). Under these circumstances, tc may also reach 38°C
because of a high internal heat production, but the mean skin temperature may not rise higher
than, say, 28°C because of skin cooling brought about by sweating in a cool environment. In this
case the drive to produce Emax may not have been reached, and this may safely allow a limiting tc
of 38.5 to 39.0°C.

9.4 THERMAL INDICES

A useful tool for describing, designing and assessing thermal environments is the thermal index. The
principle is that factors that influence human response to thermal environments are integrated to
provide a single index value. The aim is that the single index value varies as human response varies
and can be used to predict the effects of the environment.
The principal factors that influence human response to thermal environments are:
 air temperature,
 radiant temperature,
 air velocity,
 humidity and
 the clothing and activity of the individual.
A comprehensive thermal index will integrate these factors to provide a single index value.
Numerous thermal indices have been proposed for assessing heat stress, cold stress, and thermal
comfort. The indices can be divided into three types, rational, empirical, and direct.

Rational Thermal Indices


Rational thermal indices use the principle of heat balance which has been employed widely in
methods for assessing human response to hot, neutral, and cold environments. If a body is to remain
at a constant temperature, then the heat inputs to the body need to be balanced by the heat
outputs.
Heat transfer can take place by:
 Conduction (K)
 Convection (C)
 Radiation (R)
 Evaporation (E)
In the case of the human body an additional heat input to the system is the metabolic heat
production (M) generated within the body.

Using the above, the following body heat equation can be proposed
M ± K ± C ± R - E = S
If the net heat storage (S) is zero, then the body can be said to be in heat balance and hence
internal body temperature can be maintained. The analysis requires the values represented in
this equation to be calculated from knowledge of the physical environment, clothing, activity,
etc. Rational thermal indices use heat transfer equations (and sometimes mathematical
representations of the human thermoregulatory system) to ‘predict’ human response to thermal
environments.
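The body heat equation above is simple bookkeeping and can be sketched directly (names illustrative; conduction, convection and radiation are passed as signed values, positive for heat gain and negative for heat loss):

```python
def heat_storage(M, K, C, R, E):
    """Net heat storage S from the body heat equation M ± K ± C ± R - E = S.

    All terms in W.m-2. M is metabolic heat production; K, C, R are signed
    (gain positive, loss negative); E is the evaporative heat loss.
    """
    return M + K + C + R - E

# Example: metabolic rate 100 W.m-2, negligible conduction, convective loss
# 40, radiant loss 45, evaporative loss 15 -> S = 0, i.e. heat balance
print(heat_storage(100.0, 0.0, -40.0, -45.0, 15.0))  # 0.0
```

When S comes out positive the body is storing heat and core temperature will rise; when negative it is losing heat; rational indices work by solving this balance for the conditions under assessment.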

A comprehensive mathematical and physical appraisal of the heat balance equation represents
the approach taken by Fanger (1970), which is the basis of ISO Standard BS EN ISO 7730: 1995
'Moderate thermal environments- Determination of the PMV and PPD indices and specification
of the conditions for thermal comfort'. The purpose of this Standard is to present a method for
predicting the thermal sensation, Predicted Mean Vote (PMV), and the degree of discomfort
(thermal dissatisfaction), Predicted Percentage Dissatisfied (PPD), of people exposed to
moderate thermal environments and to specify acceptable thermal environmental conditions for
comfort.

BS EN ISO 7730: 1995 provides a method of assessing moderate thermal environments using the
PMV thermal comfort index, Fanger (1970). The PMV is the Predicted Mean Vote of a large
group of persons, if they had been exposed to the thermal conditions under assessment, on the
+3 (hot) to -3 (cold) through 0 (neutral) scale.
The PMV is calculated from:
 the air temperature
 mean radiant temperature
 humidity and air velocity of the environment and
 Estimates of metabolic rate and clothing insulation

The PMV equation involves the heat balance equation for the human body and additional
conditions for thermal comfort. The PPD index is calculated from the PMV and provides the
Predicted Percentage of thermally dissatisfied persons. The Annex of the Standard gives
recommendations that the PMV should lie between -0.5 and +0.5, giving a PPD of less than 10%.
Tables and a computer program are provided in BS EN ISO 7730 to allow ease of calculation and
efficient use of the standard. This rational method for assessing moderate environments allows
identification of the relative contribution different components of the thermal environment
make to thermal comfort (or discomfort) and hence can be used in environmental design. It is
important to remember that this Standard standardizes the method and not the limits. The
recommendations made for thermal comfort conditions are produced in an annex which is for
information and not part of the standard.
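The link between PMV and PPD in the Standard is a fixed relation, so the recommendation "PMV between -0.5 and +0.5, giving a PPD of less than 10%" can be checked numerically. The sketch below uses the PPD formula from ISO 7730:

```python
import math

def ppd(pmv):
    """Predicted Percentage Dissatisfied as a function of PMV (ISO 7730)."""
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv ** 4 - 0.2179 * pmv ** 2)

print(round(ppd(0.0), 1))   # 5.0  - even a 'neutral' environment leaves ~5% dissatisfied
print(round(ppd(0.5), 1))   # ~10, the recommended limit
print(round(ppd(-0.5), 1))  # symmetric: same value on the cool side
```

The floor of 5% at PMV = 0 reflects individual differences: no single set of conditions satisfies everyone, which is why the Standard's target is "less than 10% dissatisfied" rather than zero.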

Heat balance is not a sufficient condition for thermal comfort. In warm environments sweating
(or skin wettedness), and in cold environments mean skin temperature, must be within limits for
thermal comfort. Rational predictions of the body's physiological state can be used with
empirical equations which relate mean skin temperature, sweat rate and skin wettedness to
comfort. Recommendations for limits to air movement, temperature gradients, etc. are given in
BS EN ISO 7730.

Empirical Indices
Empirical thermal indices are based upon data collected from human subjects who have been
exposed to a range of environmental conditions. Examples are the Effective Temperature (ET)
and Corrected Effective Temperature (CET) Scales. These scales were derived from subjective
studies on US marines; environments providing the same sensation were allocated equal ET/CET
values. These scales consider dry bulb/globe temperature, wet bulb and air movement; two
levels of clothing are considered (e.g. Chrenko, 1974).

For this type of index, the index must be 'fitted' to values which experience predicts will provide
‘comfort’.

Direct Indices
Direct indices are measurements taken on a simple instrument which responds to similar
environmental components to those to which humans respond. For example, a wet, black globe
with a thermometer placed at its centre will respond to air temperature, radiant temperature,
air velocity and humidity. The temperature of the globe will therefore provide a simple thermal
index which with experience of use can provide a method of assessment of hot environments.
Other instruments of this type include the temperature of a heated ellipse and the integrated
value of wet bulb temperature, air temperature and black globe temperature (WBGT).

An engineering approach employing what is known as the dry resultant temperature (CIBSE,
1987) is to use simply the equilibrium temperature of a 100 mm globe thermometer placed in
the environment. The temperature of the globe approximates to an average of the air
temperature and mean radiant temperature. The index needs to be corrected for air movement
greater than 0.1 m.s-1, and assumes that relative humidity lies in the range 40-60%.
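The WBGT mentioned above is the most widely used direct heat stress index. The text does not give its weightings; the sketch below uses the commonly adopted formulas from ISO 7243 (indoor / no solar load, and outdoor with solar load):

```python
def wbgt(t_nwb, t_g, t_a=None, solar=False):
    """Wet Bulb Globe Temperature (C), per the usual ISO 7243 weightings.

    Indoors / no solar load:  WBGT = 0.7*t_nwb + 0.3*t_g
    Outdoors with solar load: WBGT = 0.7*t_nwb + 0.2*t_g + 0.1*t_a
    t_nwb: natural wet bulb; t_g: black globe; t_a: air (dry bulb) temperature.
    """
    if solar:
        return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_a
    return 0.7 * t_nwb + 0.3 * t_g

print(round(wbgt(25.0, 35.0), 1))                    # indoor weighting: 28.0
print(round(wbgt(25.0, 35.0, 30.0, solar=True), 1))  # outdoor weighting
```

The heavy 0.7 weighting on the natural wet bulb reflects the dominance of evaporative cooling in hot conditions, which is why WBGT penalizes humid environments so strongly.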

Selection of Appropriate Thermal Indices
A first step in the selection of a thermal index is to determine whether a heat stress index,
comfort index or cold stress index is required. There are numerous thermal indices, and
most will provide a value which will be related to human response (if used in the appropriate
environment). An important point is that experience with the use of an index should be
gained in a particular industry. A practical approach is to gain experience with a simple direct
index; this can then be used for day to day monitoring. If more detailed analysis is required a
rational index can be used (again experience should be gained in a particular industry) and if
necessary subjective and objective measurements can be taken.
Thermal Comfort/discomfort
Thermal discomfort can be divided into whole-body discomfort and local thermal discomfort
(ie of part of the body). Thermal comfort is subjective and has been defined by ASHRAE as
'that condition of mind which expresses satisfaction with the thermal environment'. Comfort
indices relate measures of the physical environment to the subjective feelings of sensation
comfort or discomfort.

It is generally accepted that there are three conditions for whole-body thermal comfort.
These are that the body should be in heat balance, and that the sweat rate (or skin
wettedness) and the mean skin temperature are within limits for comfort. Rational indices
such as the predicted mean vote (PMV- BS EN ISO 7730) use the criteria to allow predictions
of thermal sensation. In many instances a simple direct index, such as the temperature of a
black globe, will be sufficient. The dry resultant temperature is related to this.

BS EN ISO 7730 considers issues relating to local thermal discomfort, such as vertical
temperature gradients and radiation asymmetry. Recommended values are given in the
Annex of the Standard to limit these effects. Discomfort caused by draught is also examined.
Draught is defined as an unwanted local cooling of the body caused by air movement. A
method for predicting the percentage of people affected by draught is provided in terms of
air temperature, air velocity and turbulence intensity (i.e. a measure of the variation of air
movement with time). The model applies over a specified range of thermal conditions and
for people performing light, mainly sedentary activity, with a thermal sensation for the
whole body close to neutral. Guidance is provided on how to determine acceptable
thermal conditions for comfort based on the methods provided in the Standard.

Subjective Judgment Scales
BS ISO 10551: 1995 'Assessment of the influence of the thermal environment using
subjective judgment scales' provides a set of specifications on direct expert assessment of
subjective thermal comfort/discomfort expressed by persons subjected to thermal stress.
The methods supplement physical and physiological methods of assessing thermal loads.

Subjective scales are useful in the measurement of subjective responses of persons exposed
to thermal environments. They are particularly useful in moderate environments and can be
used independently or to complement the use of objective methods (eg. thermal indices).
This Standard presents the principles and methodology behind the construction and use of
subjective scales and provides examples of scales which can be used to assess thermal
environments. Examples of scales are presented in several languages.

Scales are divided into the five types illustrated by example questions:
Perceptual - How do you feel now?
Affective- How do you find it?
Thermal preference - How would you prefer to be?
Personal acceptance - Is the environment acceptable/unacceptable?
Personal tolerance - Is the environment tolerable?

The principle of the Standard is to provide background information to allow ergonomists and
others to construct and use subjective scales as part of the assessment of thermal
environments. Examples of the construction, application and analysis of subjective scales are
provided in the Annex to the standard.

Comfort Standards; Theory and Practice


Thermal comfort has been defined as 'that condition of mind which expresses satisfaction
with the thermal environment'. It is therefore a psychological phenomenon and not a
physiological state. It will be influenced by individual differences in mood, personality,
culture, and other individual, organizational, and social factors. It is not surprising, therefore,
that methods for predicting thermal comfort conditions will never be perfect. Whether
standard methods are, or will ever be, adequate, or universally acceptable and what their
appropriate form should be, are continually debated topics.
An important issue to be addressed is whether present standardized methods for
determining thermal comfort conditions are sufficient for practical application or, if methods
are to be established for worldwide use, a new approach or philosophy is required. Some
researchers have called into question both the philosophy and accuracy of current
international standards, methods, and recommended limits. It is through such questioning
that standards will be improved.

Thermal comfort has been the subject of much international research over many years. A
great deal is known about principles and practice and some of the knowledge has been
incorporated into international standards. These include BS EN ISO 7730. 'Moderate thermal
environments - calculation of the PMV and PPD thermal comfort indices', and the American
Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE) Standard 55-1992,
'Thermal environmental conditions for human occupancy'. Such has been the acceptance
and use of the standards that the observer could conclude that all questions concerning
thermal comfort standards have been answered, and that neither laboratory nor field
research is required.

However, many studies have demonstrated that knowledge is not complete and that
problems have not been solved. People in buildings suffer thermal discomfort, and this is not
a minor problem for the occupants of the buildings or for those interested in productivity
and the economic consequences of having an unsatisfied workforce. Is the problem because
standards have not been correctly updated, or are they not correctly used and there is a
presentation and training issue? Maybe all of these apply. Field studies provide practical
data, but when designing buildings to provide thermal comfort, can we improve upon
current standards? Are the standards applicable universally, or do individual factors, regions
and culture, for example, greatly influence comfort requirements? Are people more
adaptable and/or less predictable than the standards suggest (depending on circumstances)?
Are we consuming more capital and running cost resources in buildings to provide
recommended levels of thermal comfort than is really needed? If current standards are
being brought into question, what can be used instead; guidance of some kind is required
for building designers, owners and operators.

Control
Much attention has been concentrated in recent years on the ‘quality’ of internal
environments, particularly offices (eg WHO, 1984; HSE, 1995). Buildings themselves and
active control systems can influence conditions detrimentally (e.g. Youle, 1986). Those
factors which are likely to influence the thermal conditions occurring within a space or
building are described here. This is by no means a comprehensive treatment but represents
typical problem areas. Any one, or combination, of these factors may require attention to
improve thermally unsatisfactory conditions.

It is likely that a range of personnel will be required for such investigations, in particular
those responsible for the design, operation and maintenance of mechanical services
plant (i.e. building services engineers).

Building Fabric
The fabric of a building can influence thermal conditions in several ways:
 Poor thermal insulation will result in low surface temperatures in winter, and
high values in summer, with direct effect on radiant conditions. Variations
and asymmetry of temperature will also be influenced.

 Single glazing will take up a low internal surface temperature when it is cold
externally, causing potential discomfort and radiation asymmetry. The effects
are reduced (but not eliminated) with double glazing.

 Cold down draughts from glazing (due to cold window surfaces) can be
counteracted by the siting of heat emitters under glazing, or by installing
double or triple glazing.

 Direct solar gain through glazing is a major source of discomfort. This can be
reduced/controlled by for example:
- modification of glazing type (e.g. 'solar control glass' or applied
tinted film)

- use of internal blinds (preferably of a reflecting color); this provides


localized protection for individuals, but most of the solar heat gain
still enters the space

- use of external solar shading devices (by far the most effective method).
It should be noted that the glazing itself may absorb heat and therefore rise in
temperature causing an increase in mean radiant temperature. This is particularly
relevant with ‘solar control’ glasses. Also, use of such glass often means that internal
natural lighting levels are reduced, leading to more use of artificial lighting and
associated heat gain into the space, which could contribute to thermal discomfort.
 Unwanted air movement (and local low air temperatures) in winter time can
arise from poor window and fabric seals, external doors, etc. Draught-
proofing techniques can be employed to reduce this effect.

 The siting and nature of internal partitions can affect local conditions. If the
interior of a space has high thermal mass (thermal capacity) then temperature
fluctuations are reduced.

9.5 HEATING SYSTEMS


A range of factors needs to be checked to ensure that a heating system is designed and
functioning appropriately. Advice from suitably qualified engineers may be required. Examples
are:
 Overall output from central boiler plant or local output from heat emitters in individual
spaces needs to match the building and its requirements.

 The position of heat emitters can assist in counteracting discomfort, e.g. siting of radiators
under windows counteracts cold window surfaces and cold down draughts.

 Poor siting can lead to radiation asymmetry and increased draughts.


 Noise (e.g. from fan units and grilles) can be a contributing nuisance factor.

 The heat output from emitters should be appropriately controlled. Control systems may
vary from a simple thermostat in one room to multiple sensing systems and computer
control throughout the building.

Ventilation systems (heating only)


When assessing a system pertinent points to note are:
 Identify air input grilles/diffusers to the room and check volume flow, velocity, circulation
and distribution of supply air.

 Check the value of the supply air temperature. Note that increased air movement may
mean higher air temperature requirements for given comfort conditions.

 Look for air temperature gradients, particularly arising from air distribution patterns. If the
supply temperature is too high, buoyancy effects may lead to large temperature gradients,
and poor supply air distribution.

 Air volumes: if ventilation is providing heating, then air volume flow (and air temperature)
must be sufficient to counteract heat losses in the space.

 Ensure local adjustments in one area do not adversely affect adjacent areas or overall
plant operation.

 Low values of relative humidity may result during winter heating. Humidification may be
provided in central plant. If so, check that it is operating and being controlled correctly.

Air conditioning systems (heating, cooling and humidity control)


'Full air conditioning systems' can be very complex and sophisticated and it is likely that
specialist expertise is required in their assessment. The types of point to be questioned or
require checking are:
What is the principle of operation? There are many types e.g.:
 Fixed air volume, variable temperature.

 Variable air volume (often referred to as VAV), fixed temperature; Fan assisted terminals.

 Fan coil unit - local control; Induction units.

 Twin duct; Chilled ceilings, chilled beams; Displacement ventilation; floor air supply.

These issues fall into the domain of the building services engineer - but the basic
principle of operation and control needs to be established.

If conditions are not satisfactory, check for under- or over- capacity in both main
plant and local emitters. This is particularly relevant with cooling systems, which
usually have little capacity in hand.
Also check, or have checked, the operation of local valves to heater and chiller
batteries; these may jam, let-by or operate incorrectly, e.g. in reverse to that
expected.

Assess the temperature and velocity of air leaving grilles. Are the values likely to
lead to local discomfort, and is there sufficient air distribution?
Adjustments made for summer conditions (eg. enhancing air movement) may
lead to discomfort in winter (and vice versa). Also, the air distribution pattern is
likely to differ for cooling and heating modes.

Establish whether relative humidity is being controlled (humidification and/or


dehumidification). Measure values. Check control logic and sensors and whether
plant is in operation/operational. Humidity sensors are prone to drift and
malfunction.

Control systems (heating, cooling, humidity, and airflow)


Again, many questions will need to be asked to establish the principle of operation of control
systems, and whether they are performing as intended. Much of this will necessitate
consultation with other experts.

 What is the mode of control? Space sensors, return air sensors? How many sensors are
involved? Do they operate independently, or do they average detected readings?

 Are sensors suitably positioned to control the occupied space? Are they responding
primarily to air or surface temperatures?

 Are sensors set at appropriate control values, eg. return air sensor control set point
should be higher than the room temperature required. What are the set-point values?

 Control may be fully automatic, localized or operated by individuals.

 The type of control provided may influence 'perceived' comfort eg. adjustment of a
thermostat may induce a perceived improvement of conditions even if the thermostat is
disconnected and has no effect on control.

 Check functioning and calibration etc. of sensors, particularly relative humidity, and duct air

 Pressure sensors (in variable air volume, VAV, systems).

 Plant may be controlled by an 'Energy/Building Management System’. Functional logic


needs to be established.

Overall plant control: start-up times, temperature values, etc. are often controlled by such
systems or by a localized optimum start and stop controller. Check set-point values and
operational logic.

Plant Maintenance
Plant must be checked and maintained on a regular basis to ensure correct operation. Daily
inspection is recommended for large plant. Air handling systems (including ductwork) require
periodic inspection and cleaning.

 Plant should be fully documented so that the nature of operation, and system and control
logic can be ascertained.

 Maintenance and condition monitoring records should be kept and be available for
inspection. Building management systems are of value here, although sometimes too
much information becomes available.

 There may be difficulty in obtaining all the explanations/answers required concerning the
building and its services. In this case the services and expertise of outside consultants may
be required to assess the systems.

 It may be necessary to recommend complete reassessment and re-commissioning of the


mechanical systems (although this will be an expensive process).

9.6 SUMMARY
Manual handling injuries are a major occupational health problem. The risk factors associated
with manual handling in hot and cold environments were identified as a gap in knowledge under
the Health and Safety Executive’s priority programme for musculoskeletal disorders (MSDs). At
present the guidance does not offer specific guidance regarding manual handling in non-neutral
thermal environments other than to say that extremes of temperature and humidity should be
avoided. The thermal environments are defined as manual handling events that occur within
sub-ranges of a 0°C to 40°C range. Here a cold environment is defined as between 0°C and 10°C
(44%–60% relative humidity) and a hot environment as between 29°C and 39°C (25%–72%
relative humidity).

9.7 KEYWORDS

Thermal Balance- the condition in which the heat inputs to the body are balanced by the heat
outputs (net heat storage S = 0), so that internal body temperature can be maintained

Thermal Indices- comfort indicators, some more important than others, many attempts have
been made to devise indices which combine some or all of these variables into one value which
can be used to evaluate how comfortable people feel

Predicted Mean Vote- The PMV index predicts the mean response of a large group of people
according to the ASHRAE thermal sensation scale, where

+3 hot

+2 warm
+1 slightly warm

0 neutral

-1 slightly cool

-2 cool

-3 cold

The PMV index is expressed by P.O. Fanger as

PMV = (0.303 e^(-0.036M) + 0.028) L (1)

where

PMV = Predicted Mean Vote Index

M = metabolic rate

L = thermal load - defined as the difference between the internal heat production and the heat
loss to the actual environment - for a person at comfort skin temperature and evaporative heat
loss by sweating at the actual activity level.

Conduction- In heat transfer, conduction (or heat conduction) is the transfer
of heat energy by microscopic diffusion and collisions of particles or quasi-particles within a
body due to a temperature gradient.
UNIT 10 STANDARD DATA AND FORMULAS
Objectives
After going through this unit, you will be able to:
 Construct the formula from empirical data
 Analyze the data from tabular form
 Calculate the cutting time, completion time for different operations
 Use the standard data in day-to-day work and in the workplace environment
 Use the data in nomograms (a nomogram, also called a nomograph or alignment chart, is a
graphical calculating device: a two-dimensional diagram designed to allow the approximate
graphical computation of a function) and plots for simplicity
Structure
10.1 Introduction
10.2 Standard Time Data Development
10.3 Tabular Data
10.4 Using Nomograms and Plots
10.5 Formula Construction from empirical data
10.6 Plot data and compute variable expressions
10.7 Analytical formulas
10.8 Standard Data usage
10.9 Summary
10.10 Keywords

10.1 INTRODUCTION

Standard time data (or elemental standard data) are developed for groups of motions that are
commonly performed together, such as drilling a hole or painting a square foot of surface area.
Standard time data can be developed using time studies or predetermined leveled times. After
development, the analyst can use the standard time data instead of developing an estimate for the
group of motions each time they occur.

Typically, the use of standard time data improves accuracy because the standard deviations for
groups of motions tend to be smaller than those for individual basic motions. In addition, their use
speeds standard development by reducing the number of calculations required. Estimate
development using standard time data is much like using predetermined leveled times except that
groups of motions are estimated as a single element instead of individual body motions.

Standard time data are elemental times obtained from time studies that have been stored for later
use. The principle of applying standard data was established many years ago by Frederick W. Taylor,
who proposed that each elemental time be properly indexed so that it could be used to establish
future time standards. When we speak of standard data today, we refer to all the tabulated element
standards, plots, nomograms, and tables that allow the measurement of a specific job without the
use of a timing device.

Standard data can have several levels of refinement: (i) Motion (ii) Element (iii) Task

The more refined the standard data element, the broader its range of usage. Thus, motion standard
data have the broadest application, but developing such a standard takes longer than developing
either element or task standard data. Element standard data are widely applicable and allow faster
development of a standard than motion data.

A time study formula is an alternate and, typically, simpler presentation of standard data, especially
for variable elements. Formula construction involves the design of an algebraic expression that
establishes a time standard in advance of production by substituting known values peculiar to the
job for the variable elements.

10.2 STANDARD TIME DATA DEVELOPMENT

To develop standard time data, analysts must distinguish constant elements from variable elements.
 A constant element is one whose time remains approximately the same, cycle after cycle.
 A variable element is one whose time varies within a specified range of work.

Thus, the element “start machine” would be a constant, while the element “drill 3/8-inch diameter
hole” would vary with the depth of the hole, the feed, and the speed of the drill.
Standard data are indexed and filed as they are developed. Also, setup elements are kept separate
from elements incorporated into each piece time, and constant elements are separated from
variable elements. Typical standard data for machine operation would be tabulated as follows:
Setup
 Constants
 Variables

Each piece
 Constants
 Variables

Standard data are compiled from different elements in time studies of a given process over a period.
In tabulating standard data, the analyst must be careful to define the endpoints clearly. Otherwise,
there may be a time overlap in the recorded data.

For example, in the element “out stock to stop” on the bar feed No. 3 Warner & Swasey turret lathe,
the element could include reaching for the feed lever, grasping the lever, feeding the bar stock
through the collet to a stock stop located in the hex turret, closing the collet, and reaching for the
turret handle.

Then again, this element may involve only the feeding of bar stock through the collet to a stock stop.
Since standard data elements are compiled from a great number of studies taken by different time
study observers, the limits or end points of each element should be carefully defined.

Figure 10-1 illustrates a form for summarizing data taken from an individual time study to develop
standard data on die-casting machines.

Element a is “pick up small casting,” element b is “place in leaf jig,” c is “close cover of jig,”
d is “position jig,” e is “advance spindle,” and so on. These elements are timed in groups as follows:

a+b+c = element1= 0.070 min = A (1)

b+c+d = element 3 = 0.067 min = B (2)

c+d+e = element 5 = 0.073 min = C (3)

d+e+a = element 2 = 0.061 min = D (4)

e+a+b = element 4 = 0.068 min = E (5)

First, we add these five equations:

3a + 3b + 3c + 3d + 3e = A + B + C + D + E

Then, let A + B + C + D + E = T:

3a + 3b + 3c + 3d + 3e = T = 0.339 min

and a + b + c + d + e = 0.339/3 = 0.113 min

Since a + b + c = 0.070 min (equation 1),

d + e = 0.113 min − 0.070 min = 0.043 min

Since c + d + e = 0.073 min,

c = 0.073 min − 0.043 min = 0.030 min

Likewise, since d + e + a = 0.061 min,

a = 0.061 min − 0.043 min = 0.018 min

Substituting in equation 1, we get: b = 0.070 − (0.030 + 0.018) = 0.022 min

Substituting in equation 2, we see that:

d = 0.067 − (0.022 + 0.030) = 0.015 min

Substituting in equation 3, we arrive at:

e = 0.073 − (0.015 + 0.030) = 0.028 min
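The same elimination can be verified mechanically. Below is a minimal Python sketch (illustrative only, not part of the original text) that reproduces the steps above:

```python
# Grouped element times (minutes) from the die-casting study:
# A = a+b+c, B = b+c+d, C = c+d+e, D = d+e+a, E = e+a+b
A, B, C, D, E = 0.070, 0.067, 0.073, 0.061, 0.068

T = A + B + C + D + E        # equals 3(a+b+c+d+e) = 0.339 min
total = T / 3                # a+b+c+d+e = 0.113 min
d_plus_e = total - A         # 0.043 min, since A = a+b+c
c = C - d_plus_e             # 0.030 min
a = D - d_plus_e             # 0.018 min
b = A - a - c                # 0.022 min
d = B - b - c                # 0.015 min
e = C - c - d                # 0.028 min

print(f"a={a:.3f} b={b:.3f} c={c:.3f} d={d:.3f} e={e:.3f}")
```

A quick back-substitution check is that e + a + b should reproduce the timed value E = 0.068 min.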
DIE-CASTING MACHINE

Part no. ………… Machine no. & type ………… Operator ………… Date ………

No. of parts in tote pan ………… Method of placing ………… Total wt. of flash, parts, and gates in tote pan …………
No. of parts per shot ………… Liquid metal/plastic metal ………… Chill ….. Skim ….. Drain …..
Capacity in lbs ………… Describe greasing …………
Describe loosening of part …………
Describe location …………

Elements                               Time        End points
Get metal in holding pot               ……….       All waiting time while metal is being poured into pot
Chill metal                            ……….       From time operator starts adding cold metal to liquid
                                                   metal in pot until operator stops adding cold metal
Skim metal                             ……….       From time operator starts skimming until all scum has
                                                   been removed
Get ladleful of metal                  ……….       From time ladle starts to dip down into metal until
                                                   ladleful of metal reaches edge of machine, or until
                                                   ladle starts to tip for draining
Drain metal                            ……….       From time ladle starts to tip for draining until
                                                   ladleful reaches edge of machine
Pour ladleful of metal into machine    ……….       From time ladleful of metal reaches edge of machine
                                                   until foot starts to trip press
Trip press                             ……….       From time foot starts moving toward pedal until knee
                                                   starts downward

Data taken from time study

10.3 TABULAR DATA

Some standard data are most conveniently kept in tabular form. For example, when developing
standard data times for machine elements, the analyst may need to tabulate horsepower requirements
for various materials in relation to depth of cut, cutting speeds, and feeds.

To avoid overloading existing equipment, the analyst should have information on the workload being
assigned to each machine for the conditions under which the material is being removed.

For example, in the machining of high-alloy steel forgings on a lathe capable of developing 10
horsepower, it would not be feasible to take a 3/8-inch depth of cut at a feed of 0.011 inch per
revolution and a speed of 200 surface feet per minute. Tabular data, obtained either from the
machine tool manufacturer or from empirical studies, indicate that these conditions would require
10.6 horsepower, more than the machine can deliver. Consequently, the work would need to be planned
for a feed of 0.009 inch at a speed of 200 surface feet; this would require a horsepower rating of
only 8.7. Such tabular data are best stored, retrieved, and accumulated into a final standard time
using commercially available spreadsheet programs (e.g., Microsoft Excel).

Horsepower Requirements of Turning High-Alloy Steel Forgings for Cuts 3/8-inch and ½-inch
Deep at Varying Speeds and Feeds

Surface   3/8-in depth of cut (feeds, in/rev)      ½-in depth of cut (feeds, in/rev)
feet      0.009 0.011 0.015 0.018 0.020 0.022     0.009 0.011 0.015 0.018 0.020 0.022

150….      6.5   8.0  10.9  13.0  14.5  16.0       8.7  10.6  14.5  17.3  19.3  21.3
175….      8.0   9.3  12.7  15.2  16.9  18.6      10.1  12.4  16.9  20.2  22.5  24.8
200….      8.7  10.6  14.5  17.4  19.3  21.3      11.6  14.1  19.3  23.1  25.7  28.4
225….      9.8  11.9  16.3  19.6  21.7  23.9      13.0  15.9  21.7  26.1  28.9  21.8
250….     10.9  13.2  18.1  21.8  24.1  26.6      14.5  17.7  24.1  29.0  32.1  35.4
275….     12.0  14.6  19.9  23.9  26.5  29.3      15.9  19.4  26.5  31.8  35.3  39.0
300….     13.0  16.0  21.8  26.1  29.0  31.9      17.4  21.2  29.0  34.7  38.6  42.5
400….     17.4  21.4  29.1  34.8  38.7  42.5      23.2  28.2  38.7  46.3  51.5  56.7
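The planning check described above lends itself to a spreadsheet-style lookup. The sketch below is illustrative only: `HP_REQUIRED` and `max_feasible_feed` are hypothetical names, and only the 200-sfpm cells of the 3/8-inch column are shown; the dictionary would be extended from the full table.

```python
# Horsepower lookup keyed by (surface feet per minute, feed in in/rev);
# values here are the 3/8-in depth-of-cut column at 200 sfpm.
HP_REQUIRED = {
    (200, 0.009): 8.7,
    (200, 0.011): 10.6,
    (200, 0.015): 14.5,
}

def max_feasible_feed(sfpm, machine_hp):
    """Largest tabulated feed whose horsepower demand fits the machine."""
    feeds = [f for (s, f), hp in HP_REQUIRED.items()
             if s == sfpm and hp <= machine_hp]
    return max(feeds) if feeds else None
```

For the 10-horsepower lathe in the text, the 0.011-in/rev feed at 200 sfpm demands 10.6 hp and is rejected, so the lookup falls back to 0.009 in/rev (8.7 hp).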

10.4 USING NOMOGRAMS AND PLOTS

Because of space limitations, tabularizing values for variable elements is not always convenient. By
plotting a curve or a system of curves in the form of an alignment chart, the analyst can express
considerable standard data graphically on one page.

For example, if the problem is to determine the production in pieces per hour to turn 5 linear inches
of a 4-inch diameter shaft of medium carbon steel on a machine utilizing 0.015-inch feed per
revolution and having a cutting time of 55 percent of the cycle time, the answer could be readily
determined graphically.
The plot below illustrates forming time in hours per hundred pieces for a certain gage stock over a
range of sizes expressed in square inches.
(Source: Aeronautical Engineer's Data, 2011)

Each of the 12 points in this plot represents a separate time study. The plotted points indicate a
straight-line relationship, which can be expressed as a formula:

Standard time = 0.088 + 0.00038 × size


Forming time for different stocks
(Source: Aeronautical Engineer's Data, 2011)

Disadvantages of Using Nomograms and Plots


First, it is easy to introduce an error in reading from the plot, because of the amount of
interpolation usually required.
Second, there is the chance of outright error through incorrect reading or misalignment of the
intersections on the various scales.

10.5 FORMULA CONSTRUCTION FROM EMPIRICAL DATA


Steps in Formula Construction are:
 Identify Variables
 Analyze Elements and Collect Data
 Plot Data and Compute Variable Expressions
 Check for Accuracy and Finalize

The first and most basic step in formula construction is identifying the critical variables.
This process includes separating those that are independent from those that are dependent, and
determining the range of each variable. For example, a formula might be developed for curing
bonded rubber parts between 2 and 8 ounces in weight. The independent variable is the weight of
the rubber, while the dependent variable is the time to cure. The range of the independent variable
would be 2 to 8 ounces, while the range of the dependent variable, time, would have to be quantified
from studies.

After the initial identification is finished, the next step is collecting data for the formula.

This step involves gathering previous studies with standardized work elements that are applicable to
the desired formula, as well as taking new studies, to obtain a sufficiently large sample to cover the
range of work for the formula.

Note: It is important that like elements in the different studies have consistent endpoints.

The number of studies needed to construct a formula is influenced by the range of work for which
the formula is to be used, the relative consistency of like constant elements in the various studies,
and the number of factors that influence the time required to perform the variable elements.

At least 10 studies should be available before a formula is constructed. If fewer than 10 are used, the
accuracy of the formula may be impaired through poor model fits.
The more studies used, the more data will be available, and the more normal will be the conditions
reflected.

In summary, the analyst should take note of the maxim “garbage in – garbage out.” The formula will
only be as accurate as the data used to construct it.

10.6 PLOT DATA AND COMPUTE VARIABLE EXPRESSIONS

Next, the data are posted to a spreadsheet for analysis of the constants and variables. The constants
are identified and combined, and the variables analysed so as to have the factors influencing time
expressed in algebraic form. By plotting a curve of time versus the independent variable, the
analyst may deduce potential algebraic relationships.
For example, plotted data may take several forms: a straight line, a nonlinear increasing trend, a
nonlinear decreasing trend, or no obviously regular geometric form.

If a straight line, then the relationship is quite straightforward:

y=a+bx

with the constants a and b determined from least-squares regression analysis. If the plot shows a
nonlinear increasing trend, then power relationships of the form x², x³, xⁿ, or eˣ should be attempted.

For nonlinear decreasing trends, negative powers or negative exponentials should be attempted. For
asymptotic trends, log relationships or negative exponentials of the form

y = 1 − e⁻ˣ should be attempted.

Note that adding additional terms to the model will always produce a better model with a higher
percentage of the variance in the data explained. However, the model may not be statistically
significantly better, that is, statistically there is no difference in the quality of the predicted value
between the two models.

Furthermore, the simpler the formula, the better it can be understood and applied. The range of
each variable should be specifically identified. The limitations of the formula must be noted by
describing its applicable range in detail. There is a formalized procedure for computing the best
model, termed the general linear test.

It computes the decrease in unexplained variance between the simpler model, termed the reduced
model, and the more complex model, termed the full model.

The decrease in variance is tested statistically and, only if the decrease is significant, the more
complex model is used.

In the element “strike arc and weld,” analysts obtained the following data from 10 detailed studies:
Data for variance testing
Study number    Size of weld (in)    Minutes per inch of weld
1               1/8                  0.12
2               3/16                 0.13
3               1/4                  0.15
4               3/8                  0.24
5               1/2                  0.37
6               5/8                  0.59
7               11/16                0.80
8               3/4                  0.93
9               7/8                  1.14
10              1                    1.52

Plotting the data results in the smooth curve shown in Figure 10.4. A simple linear regression of
the dependent variable “minutes” against the independent variable “weld size” yields:

Y = −0.245 + 1.57x (1)

with r² = 0.928 and sum of squares error (SSE) = 0.1354.
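The quoted fit can be reproduced directly from the ten data points. Below is a hedged Python sketch using the closed-form least-squares slope and intercept (no statistics package assumed); the computed SSE agrees with the 0.1354 quoted in the text.

```python
# The ten "strike arc and weld" studies: weld size (in) vs. minutes per inch.
x = [1/8, 3/16, 1/4, 3/8, 1/2, 5/8, 11/16, 3/4, 7/8, 1.0]
y = [0.12, 0.13, 0.15, 0.24, 0.37, 0.59, 0.80, 0.93, 1.14, 1.52]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

b = sxy / sxx                                                 # slope, about 1.57
a = ybar - b * xbar                                           # intercept, about -0.245
sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))   # about 0.1354
```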

Since Figure 10.4 indicates a definite nonlinear trend in the data, adding a quadratic component to
the model seems reasonable. Regression now yields:

Y = 0.1 − 0.178x + 1.61x² (2)

with r² = 0.993 and sum of squares error (SSE) = 0.012. The increase in r² would seem to indicate a
definite improvement in the fit of the model. This improvement can be tested statistically with the
general linear test:

F = [(SSE(R) − SSE(F)) / (dfR − dfF)] / [SSE(F) / dfF]

where:

SSE(R) = Sum of squares error for the reduced (i.e., simpler) model

SSE(F) = Sum of squares error for the full (i.e., more complex) model

dfR = Degrees of freedom for the reduced model

dfF = Degrees of freedom for the full model

Comparing the two models yields:

F = [(0.1354 − 0.012) / (8 − 7)] / (0.012 / 7) = 71.98

Since 71.98 is considerably larger than the critical F(1, 7) = 5.59, the full model is a
significantly better model. The process can be repeated by adding another term with a higher power
(e.g., x³), which yields the following model:

Y = 0.218 − 1.14x + 3.59x² − 1.16x³ (3)

with r² = 0.994 and SSE = 0.00873. However, in this case, the general linear test does not yield a
statistically significant improvement:

F = [(0.012 − 0.00873) / (7 − 6)] / (0.00873 / 6) = 2.25

The F-value of 2.25 is smaller than the critical F(1, 6) = 5.99.

Interestingly, using a simple quadratic model of the form:

Y = 0.0624 + 1.45x² (4)

with r² = 0.993 and SSE = 0.0133 yields the best and simplest model. Comparing this model
(equation 4) with the second model (equation 2) yields:

F = [(0.0133 − 0.012) / (8 − 7)] / (0.012 / 7) = 0.758

The F-value is not significant, so the extra linear term in x does not yield a better model.
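The general linear test itself reduces to a one-line calculation. The small helper below is illustrative (the function name is an assumption, not from the source); it reproduces the two comparisons worked above.

```python
def general_linear_test(sse_r, df_r, sse_f, df_f):
    """F statistic comparing a reduced (simpler) model with a full model."""
    return ((sse_r - sse_f) / (df_r - df_f)) / (sse_f / df_f)

# Linear (df = 8) vs. linear-plus-quadratic (df = 7) models of the weld data:
f1 = general_linear_test(0.1354, 8, 0.012, 7)    # about 71.98: quadratic term kept
# Quadratic (df = 7) vs. quadratic-plus-cubic (df = 6):
f2 = general_linear_test(0.012, 7, 0.00873, 6)   # about 2.25: cubic term rejected
```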

The best-fitting quadratic model can be checked by substituting a 1-inch weld to yield:

Y = 0.0624 + 1.45(1)² = 1.51

This checks quite closely with the time study value of 1.52 minutes.

Curve plotted on regular coordinate paper takes quadratic form
(Source: Aeronautical Engineer's Data, 2011)

Check for Accuracy and Finalize

The easiest and fastest way to check the formula is to use it to check existing time studies. Any
marked difference (roughly 5 percent or more) between the formula value and the time study value
should be investigated; to establish the expected validity, the analyst should accumulate additional
data by taking more stopwatch and/or standard data studies. The final step in the formula
development process is to write the formula report. The analyst should consolidate all data,
calculations, derivations, and applications of the formula, and present this information in a
complete report prior to putting the formula into use.

10.7 ANALYTICAL FORMULAS

Standard times can be calculated using analytical formulas found in technical handbooks or from
information provided by machine tool manufacturers. By finding the appropriate feeds and speeds
for different types and thicknesses of materials, analysts can calculate cutting times for different
machining operations.

(1) Drill Press Work (2) Lathe Work (3) Milling Machine Work

Drill Press Work. A drill is a fluted end-cutting tool used to originate or enlarge a hole in solid material.

Drill Press Work


(Source: Aeronautical Engineer's Data, 2011)

In drilling operations on a flat surface, the axis of the drill is at 90 degrees to the surface being
drilled.

Since the commercial standard for the included angle of drill points is 118 degrees, the lead of the
drill may be readily found through the following expression:

l = r / tan A

where:

l = Lead of drill

r = Radius of drill

tan A = Tangent of ½ the included angle of the drill point

To illustrate, calculate the lead of a general-purpose drill 1 inch in diameter:

l = 0.5 / tan 59°

  = 0.5 / 1.6643

  = 0.3 inch lead

After determining the total length that the drill must move, we divide this distance by the feed of
the drill in inches per minute to find the drill cutting time in minutes.

Drill Travel Distance


(Source: Aeronautical Engineer's Data, 2011)

Distance L indicates how far the drill must travel when drilling through (illustration at left) and
when drilling blind holes (illustration at right); the lead of the drill is shown by distance l.

Drill speed is expressed in feet per minute (fpm), and feed in thousandths of an inch per revolution.
To change the feed into inches per minute when the feed per revolution and the speed in feet per
minute is known, the following equation can be used:

Fm = 3.82 f Sf / d

where:

Fm = Feed in inches per minute

f = Feed in inches per revolution

Sf = Surface feet per minute

d = Diameter of drill in inches

For example, to determine the feed in inches per minute of a 1-inch drill running at a surface speed
of 100 feet per minute and a feed of 0.013 inch per revolution, we have

Fm = 3.82 (0.013)(100) / 1 = 4.97 inches per minute

To determine how long it would take for this 1-inch drill running at the same speed and feed to drill
through 2 inches of a malleable iron casting, we use the equation:

T = L / Fm

where T = Cutting time in minutes, L = Total length drill must move, and

Fm = Feed in inches per minute, which yields:

T = [2 (depth) + 0.3 (lead)] / 4.97

  = 0.463 minute cutting time

 The cutting time thus calculated does not include an allowance, which must be added to
determine the standard time.

 The allowance should include time for variations in material thickness and for tolerance in
setting the stops, both of which affect the cycle cutting time.

 Personal and unavoidable delay allowances should also be added to arrive at an equitable
standard time.

 Not all speeds may be available on the machine being used.

 For example, the recommended spindle speed for a given job might be 1,550 rpm, but the
machine may be capable of running only 1,200 rpm. In that case, 1,200 rpm should be used
as the basis for computing standard times.
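The drill-press calculations above can be collected into a few helper functions. The Python sketch below is illustrative (function names are assumptions, not from the source) and reproduces the 1-inch drill example.

```python
import math

def drill_lead(diameter, included_angle_deg=118.0):
    """Lead of the drill point: l = r / tan(half the included angle)."""
    return (diameter / 2.0) / math.tan(math.radians(included_angle_deg / 2.0))

def feed_per_minute(feed_per_rev, surface_fpm, diameter):
    """Fm = 3.82 f Sf / d, converting the feed to inches per minute."""
    return 3.82 * feed_per_rev * surface_fpm / diameter

def drill_cutting_time(depth, diameter, feed_per_rev, surface_fpm):
    """Cutting time T = L / Fm, where L is the hole depth plus the drill lead."""
    L = depth + drill_lead(diameter)
    return L / feed_per_minute(feed_per_rev, surface_fpm, diameter)

# The worked example: 2-inch hole, 1-inch drill, 0.013 in/rev feed, 100 sfpm.
t = drill_cutting_time(2.0, 1.0, 0.013, 100.0)   # about 0.463 minute
```

Allowances for material variation, personal time, and unavoidable delays would still be added to this cutting time to reach a standard time.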

Lathe Work

Many variations of machine tools are classified as lathes. These include

(i) Engine lathe (ii) Turret lathe (iii) Automatic lathe


Lathe Work
(Source: Aeronautical Engineer's Data, 2011)
All of these lathes are used primarily with stationary tools or with tools that translate over the
surface to remove material from the revolving work, which includes
• Forgings
• Castings, or
• Bar Stock
Factors that alter speeds and feeds:
• the condition and design of the machine tool
• the material being cut
• the condition and design of the cutting tool
• the coolant used for cutting
• the method of holding work, and
• the method of mounting the cutting tool.
As in drill press work, feeds are expressed in thousandths of an inch per revolution, and speeds in
surface feet per minute.
To determine the cutting time for L inches of cut, the length of cut in inches is divided by the
feed in inches per minute:

T = L / Fm

where:

T = Cutting time in minutes

L = Total length of cut in inches

Fm = Feed in inches per minute

and

Fm = 3.82 f Sf / d

where:

f = Feed in inches per revolution

Sf = Speed in surface feet per minute

d = Diameter of work in inches
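The same pattern applies to turning. A minimal sketch (illustrative; the example speed and feed are arbitrary values chosen for demonstration, not handbook recommendations):

```python
def lathe_cutting_time(length, diameter, feed_per_rev, surface_fpm):
    """T = L / Fm, with Fm = 3.82 f Sf / d for turning work of diameter d."""
    fm = 3.82 * feed_per_rev * surface_fpm / diameter
    return length / fm

# e.g. turning 5 inches of a 4-inch diameter shaft at 0.015 in/rev and 200 sfpm:
t = lathe_cutting_time(5.0, 4.0, 0.015, 200.0)   # about 1.75 minutes of cutting
```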

Milling Machine Work

Milling refers to the removal of material with a rotating multiple-toothed cutter. While the
cutter rotates, the work is fed past the cutter. This differs from a drill press for which the work is
usually stationary.
Milling Machine work
(Source: Aeronautical Engineer's Data, 2011)

In milling work, as in drill press and lathe work, the speed of the cutter is expressed in surface
feet per minute. Feeds, or table travel, are usually expressed in thousandths of an inch per tooth.

To determine the cutter speed in revolutions per minute from the surface feet per minute and the
diameter of the cutter, use the following expression:

Nr = 3.82 Sf / d

where:

Nr = Cutter speed in revolutions per minute

Sf = Cutter speed in surface feet per minute

d = Outside diameter of cutter in inches

To determine the feed of the work in inches per minute into the cutter, use the expression:

Fm = f nt Nr

where:

Fm = Feed of the work into the cutter in inches per minute

f = Feed of cutter in inches per tooth

nt = Number of cutter teeth

Nr = Cutter speed in revolutions per minute

The number of cutter teeth suitable for a particular application follows by rearrangement, and may
be expressed as:

nt = Fm / (Ft Nr)

where:

Ft = Chip thickness (feed per tooth)

nt = Number of cutter teeth

Fm = Feed of the work into the cutter in inches per minute

Nr = Cutter speed in revolutions per minute

To compute the cutting time on milling operations, the analyst must take into consideration the lead
of the milling cutter when figuring the total length of cut under power feed.
This can be determined by triangulation, as illustrated in Figure 10.9, which shows the slab-milling of
a pad.

Slab- milling a casting 8 inches in length


(Source: Aeronautical Engineer's Data, 2011)

In this case, to arrive at the total length of cut, the cutter lead BC must be added to the length
of the work (8 inches). Knowing the diameter of the cutter, you can determine AC as the cutter
radius, and you can then calculate AB, the height of the right triangle ABC, by subtracting the
depth of cut BE from the cutter radius AE, as follows:

BC = √(AC² − AB²)

In the preceding example, suppose we assume that the cutter diameter is 4 inches and that it has 22
teeth. The feed per tooth is 0.008 inch, and the cutting speed is 60 feet per minute. We can
compute the cutting time by using the equation:

T = L / Fm

where:

T = Cutting time in minutes.

L = Total length the cutter must travel under power feed (work length plus lead).

Fm = Feed in inches per minute

Then L would be equal to (8 inches + BC), and

BC = √(4 − 3.06) = 0.97

Therefore,

L = 8.97 inches

Fm = f nt Nr = (0.008)(22) Nr

where

Nr = 3.82 Sf / d = 3.82(60) / 4 = 57.3 revolutions per minute

Then

Fm = (0.008)(22)(57.3) = 10.1 inches per minute

and

T = L / Fm = 8.97 / 10.1 = 0.888 minute cutting time
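The slab-milling computation can be sketched in Python (illustrative only; the 1/4-inch depth of cut is inferred from the text's AB² value, and the exact triangulated lead of about 0.97 inch reproduces the 0.888-minute result to within rounding):

```python
import math

def slab_mill_time(work_length, cutter_dia, depth_of_cut,
                   feed_per_tooth, n_teeth, surface_fpm):
    """Cutting time with the cutter lead BC added to the work length."""
    r = cutter_dia / 2.0
    bc = math.sqrt(r ** 2 - (r - depth_of_cut) ** 2)   # lead by triangulation
    nr = 3.82 * surface_fpm / cutter_dia               # cutter speed, rpm
    fm = feed_per_tooth * n_teeth * nr                 # table feed, in/min
    return (work_length + bc) / fm

# The example above: 8-in casting, 4-in cutter, 1/4-in depth of cut,
# 0.008 in/tooth, 22 teeth, 60 sfpm.
t = slab_mill_time(8.0, 4.0, 0.25, 0.008, 22, 60.0)   # about 0.89 minute
```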

10.8 STANDARD DATA USAGE

For easy reference:

 Constant standard data elements should be tabulated and filed under the machine or
the process.

 Variable data can either be tabulated or expressed as a curve or equation, and then
filed under the facility or operation class.

In some instances, for which standard data are broken down to cover a given machine and class of
operation, it may be desirable to combine constants with variables and tabularize the summary. This
quick-reference data expresses the time allowed to perform a given operation completely. Table 10.3
illustrates standard data for a given facility and operation class for which elements have been
combined.

Standard Data for Blanking and Piercing Strip Stock Hand Feed with Piece Automatically
Removed on Toledo 76 Punch Press
L (Distance in inches) T (Time in hours per hundred hits)
1 0.075
2 0.082
3 0.088
4 0.095
5 0.103
6 0.110
7 0.117
8 0.123
9 0.130
10 0.137
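Because the tabulated times are nearly linear in L, intermediate distances can be interpolated. An illustrative lookup (the names `PRESS_DATA` and `hours_per_hundred` are hypothetical):

```python
# Standard data for the Toledo 76 punch press, from the table above:
# distance L (inches) -> time (hours per hundred hits).
PRESS_DATA = {1: 0.075, 2: 0.082, 3: 0.088, 4: 0.095, 5: 0.103,
              6: 0.110, 7: 0.117, 8: 0.123, 9: 0.130, 10: 0.137}

def hours_per_hundred(distance):
    """Look up the allowed time, linearly interpolating between table rows."""
    if distance in PRESS_DATA:
        return PRESS_DATA[distance]
    lo = max(d for d in PRESS_DATA if d < distance)
    hi = min(d for d in PRESS_DATA if d > distance)
    frac = (distance - lo) / (hi - lo)
    return PRESS_DATA[lo] + frac * (PRESS_DATA[hi] - PRESS_DATA[lo])
```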

10.9 SUMMARY

If organizations are to remain competitive, it is imperative that they develop a consistent plan to
limit operating cost. A part of that plan should include an effective measurement tool. Standard
data provides such a tool, particularly in manufacturing environments where controlling labor cost is
vital to profitability.

10.10 KEYWORDS

Standard data- Standard time data (or elemental standard data) are developed for groups of
motions that are commonly performed together, such as drilling a hole or painting a square foot of
surface area

Nomograms- nomograph, alignment chart or abaque, is a graphical calculating device, a two-


dimensional diagram designed to allow the approximate graphical computation of a function

Plot data- A plot is a graphical technique for representing a data set, usually as a graph showing the
relationship between two or more variables

Standard time- The time required by a qualified, trained worker, working at a normal pace and
following the prescribed method, to complete a specified task, including appropriate allowances

Tabular Data- Aggregate information on entities presented in tables

Accuracy and variable expressions- The checking of a formula against existing studies, and the algebraic expressions that relate element time to its governing variables


UNIT 11 OCCUPATIONAL NOISE ENVIRONMENT
Objectives
After completion of this unit, you should be able to:
 measure occupational noise;
 design noise prevention and control strategies;
 assess the impact of noise on health outcomes;
 calculate the attributable fraction to control the noise;
 estimate the uncertainty in disease burden for occupational noise.
Structure
11.1 Introduction
11.2 The Risk Factor and its health outcome
11.3 Health outcomes to include in the burden of disease assessment
11.4 Exposure Indicator
11.5 Estimating relative risks for health outcomes, by exposure level
11.6 Estimating the attributable fraction and the disease burden
11.7 Uncertainty in exposure estimates
11.8 Policy implications
11.9 Summary
11.10 Keywords

11.1 INTRODUCTION
Physically, there is no difference between sound and noise. Sound is a sensory perception, and
noise corresponds to undesired sound. By extension, noise is any unwarranted disturbance within a
useful frequency band (NIOSH, 1991). Noise is present in every human activity, and when assessing
its impact on human well-being it is usually classified either as occupational noise (i.e. noise in
the workplace), or as environmental noise, which includes noise in all other settings, whether at
the community, residential, or domestic level (e.g. traffic, playgrounds, sports, music). This
guide concerns only occupational noise; the health effects of environmental noise are covered in a
separate publication (de Hollander et al., 2004).

High levels of occupational noise remain a problem in all regions of the world. In the United
States of America (USA), for example, more than 30 million workers are exposed to hazardous noise
(NIOSH, 1998). In Germany, 4−5 million people (12−15% of the workforce) are exposed to noise
levels defined as hazardous by WHO (WHO, 2001). Although noise is associated with almost every
work activity, some activities are associated with particularly high levels of noise, the most
important of which are working with impact processes, handling certain types of materials, and
flying commercial jets. Occupations at highest risk for noise-induced hearing loss (NIHL) include
those in manufacturing, transportation, mining, construction, agriculture, and the military.

The situation is improving in developed countries, as more widespread appreciation of the hazard
has led to the introduction of protective measures. Data for developing countries are scarce, but
available evidence suggests that average noise levels are well above the occupational level
recommended in many developed nations (Suter, 2000; WHO/FIOH, 2001). The average noise levels in
developing countries may be increasing, because industrialization is not always accompanied by
protection.

There are therefore several reasons to assess the burden of disease from occupational noise at
country or subnational levels. Occupational noise is a widespread risk factor, with a strong
evidence base linking it to an important health outcome (hearing loss). It is also distinct from
environmental noise, in that it is by definition associated with the workplace and is therefore
the responsibility of employers as well as individuals. An assessment of the burden of disease
associated with occupational noise can help guide policy and focus research on this problem. This
is particularly important in light of the fact that policy and practical measures can be used to
reduce exposure to occupational noise (WHO/FIOH, 2001).

11.2 THE RISK FACTOR AND ITS HEALTH OUTCOMES

Measuring Noise Levels


There are a variety of metrics for quantifying noise levels, the most useful of which for
measuring sound as a health hazard is described in de Hollander et al. (2004). In general, these
metrics are based on physical quantities, which are “corrected” to account for the sensitivity of
people to noise. These corrections depend on the noise frequency and characteristics (impulse,
intermittent or continuous noise levels), and the source of noise. The following measures are most
relevant for assessing occupational noise levels.

Sound Pressure Level
The sound pressure level (L) is a measure of the air vibrations that make up sound. Because the
human ear can detect a wide range of sound pressure levels (from 20 µPa to 200 Pa), they are
measured on a logarithmic scale with units of decibels (dB) to indicate the loudness of a sound.
Sound Level
The human ear is not equally sensitive to sounds at different frequencies. To account for the
perceived loudness of a sound, a spectral sensitivity factor is used to weight the sound pressure
level at different frequencies (A-filter). These A-weighted sound pressure levels are expressed in
units of dB(A).
Equivalent Sound Level
When sound levels fluctuate in time, which is often the case for occupational noise, the
equivalent sound level is determined over a specific time. In this guide, the A-weighted sound
level is averaged over a period of time (T) and is designated by LAeq,T. A common exposure period,
T, in occupational studies and regulations is 8 h, and the parameter is designated by the symbol
LAeq,8h.
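The energy averaging behind LAeq,T can be sketched as follows (an illustrative helper, not from the source; it averages sound intensity, not decibel values, before converting back to dB):

```python
import math

def la_eq(levels_db, durations_h):
    """Equivalent continuous A-weighted level over the combined period."""
    total = sum(durations_h)
    mean_intensity = sum(t * 10 ** (L / 10.0)
                         for L, t in zip(levels_db, durations_h)) / total
    return 10.0 * math.log10(mean_intensity)

# A shift split between 4 h at 95 dB(A) and 4 h at 85 dB(A):
shift = la_eq([95.0, 85.0], [4.0, 4.0])
```

Because of the logarithmic scale, the louder half-shift dominates: the equivalent level here is about 92.4 dB(A), well above the arithmetic mean of 90.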

Disease Outcomes Related to the Risk Factor


In general, the health consequences of a given level of occupational noise are likely to be
similar, regardless of the country or region in which the exposure occurs. A single review has
therefore been carried out of all well-designed epidemiological studies that link occupational
noise exposure to health outcomes, regardless of where the study was conducted.

The review of the literature indicates that noise has a series of health effects, in addition to
hearing impairment (Table 11.1). Some of these, such as sleep deprivation, are important in the
context of environmental noise, but are less likely to be associated with noise in the workplace.
Other consequences of workplace noise, such as annoyance, hypertension, disturbance of
psychosocial well-being, and psychiatric disorders, have also been described (de Hollander et al.,
2004).

For occupational noise, the best characterized health outcome is hearing impairment. The first
effects of exposure to excess noise are typically an increase in the threshold of hearing
(threshold shift), as assessed by audiometry. This is defined as a change in hearing thresholds of
an average 10 dB or more at 2000, 3000 and 4000 Hz in either ear (poorer hearing) (NIOSH, 1998).
NIHL is measured by comparing the threshold of hearing at a specified frequency with a specified
standard of normal hearing, and is reported in units of decibel hearing loss (dB HL).

Threshold shift is the precursor of NIHL, the main outcome of occupational noise. It corresponds
to a permanent increase in the threshold of hearing that may be accompanied by tinnitus. Because
hearing impairment is usually gradual, the affected worker will not notice changes in hearing
ability until a large threshold shift has occurred. Noise-induced hearing impairment occurs
predominantly at higher frequencies (3000−6000 Hz), with the largest effect at 4000 Hz. It is
irreversible and increases in severity with continued exposure.
The consequences of NIHL include:
 social isolation;
 impaired communication with coworkers and family;
 decreased ability to monitor the work environment (warning signals, equipment sounds);
 increased injuries from impaired communication and isolation;
 anxiety, irritability, decreased self-esteem;
 lost productivity;
 expenses for workers’ compensation and hearing aids.

The strength of the evidence for disease outcomes


The mechanisms linking occupational noise to the health outcomes described in the following
section are relatively direct, and are unlikely to be specific to particular countries or regions.
Therefore, although it is useful to obtain local data on the strength of relationships, other
studies are usually relevant for assessing the strength of evidence for causality.

Evidence is usually assessed on the grounds of biological plausibility, strength and consistency
of association, independence of confounding variables, and reversibility (Hill, 1965). From a
review of the literature, de Hollander et al. (2004) concluded that psychosocial well-being,
psychiatric disorders, and effects on performance are plausible outcomes, but are only weakly
supported by epidemiological evidence. Other plausible outcomes include biochemical effects,
immune system effects, and birth-weight effects, but again there is limited evidence to support
these outcomes.

Assessment of Reported Responses to Occupational Noise Exposure

Outcome                        Evidence(b)    Observation threshold (dB(A))
Performance                    limited
Biochemical effects            limited
Immune effects                 limited
Birthweight                    limited
Annoyance                      sufficient     <55
Hypertension                   sufficient     55−116
Hearing loss (adults)          sufficient     75
Hearing loss (unborn children) sufficient     <85

(Source: adapted from HCN (1999) and de Hollander et al. (2004))
(b) Evidence describes the strength of evidence for a causal relationship between noise exposure and the specified health endpoint.

There is stronger evidence of noise-based annoyance, defined as "a feeling of resentment, displeasure, discomfort, dissatisfaction or offence which occurs when noise interferes with someone's thoughts, feelings or daily activities" (Passchier-Vermeer, 1993). Noise annoyance is always assessed at the level of populations, using questionnaires. There is consistent evidence for annoyance in populations exposed for more than one year to sound levels of 37 dB(A), and severe annoyance at about 42 dB(A). Studies have been carried out in Western Europe, Australia, and the USA, but there are no comparable studies in developing countries. There is little doubt that annoyance from noise adversely affects human well-being.

A recent meta-analysis reviewed the effects of occupational and environmental noise on a variety of cardiovascular risks, including hypertension, use of anti-hypertension drugs, consultation with a general practitioner or specialist, use of cardiovascular medicines, angina pectoris, myocardial infarction and prevalence of ischemic heart disease (van Kempen et al., 2002). The analysis showed an association with hypertension, but only limited evidence for an association with the other health outcomes. Reasons for the limited evidence included methodological weaknesses, such as poor (retrospective) exposure assessment, poorly controlled confounding variables, and selection bias (such as the "healthy worker" effect, where the studied populations exclude the least healthy individuals, who may already be absent from work through disability).

The meta-analysis showed inconsistencies among individual studies, and summary relative risks were statistically significant in only a limited number of cases. Overall, the causal link is plausible, and the meta-analysis provides support for further investigation of cardiovascular effects in the future. However, the evidence base was not considered to be strong enough. Consequently, cardiovascular effects were not included in the Global Burden of Disease study, and methods for estimating the cardiovascular effects of noise were not defined (Concha-Barrientos et al., 2004). This guide does not therefore provide information for assessing the cardiovascular effects of noise at national or local levels.
In contrast, it is generally accepted that the link between occupational noise and hearing loss is biologically obvious (i.e. there is a clear mechanistic pathway between the physical properties of noise and damage to the hearing system). The link is also supported by epidemiological studies that compared the prevalence of hearing loss in different categories of occupations, or in particularly noisy occupations (e.g. Arndt et al., 1996; Waitzman & Smith, 1998; Hessel, 2000; Palmer, Pannett & Griffin, 2001). The studies showed a strong association between occupational noise and NIHL, an effect that increased with the duration and magnitude of the noise exposure. For example, the risk for "blue-collar" construction workers was 2 to >3.5-fold greater than that for "white-collar" workers in other industries (Waitzman & Smith, 1998) (Table 11.2). Although other factors may also contribute to hearing loss, such as exposure to vibrations, ototoxic drugs and some chemicals, the association with occupational noise remains robust after accounting for these influences. There is also epidemiological evidence for an effect of high levels of occupational exposure on hearing loss in unborn children (e.g. Lalande, Hetu & Lambert, 1986), but there was not considered to be enough information to calculate associated impacts for the Global Burden of Disease study.

Table 11.2 Prevalence ratios for occupational noise-induced hearing impairment(a)

Germany (Arndt et al., 1996). Hearing impairment defined as greater than 105 dBHL at 2, 3 or 4 kHz (corresponds to >35 dBHL):
  Carpenters: prevalence ratio 1.77 (95% CI(b) 1.48−2.12)
  Unskilled workers: 1.75 (1.47−2.09)
  Plumbers: 1. (1.19−1.85)
  Painters: 1.20 (0.96−1.49)
  Plasterers: 1. (1.05−1.59)
  Blue-collar workers: 1.00
  Overall: 1. (1.29−1.82)

Canada (Hessel, 2000). Hearing impairment defined as greater than 105 dBHL at 2, 3 or 4 kHz (corresponds to >35 dBHL):
  Plumbers: 2.91 (NA(c))
  Boilermakers: 3.88 (NA)
  Electricians: 1.46 (NA)
  White-collar workers: 1.00 (NA)

Great Britain(d) (Palmer, Pannett & Griffin, 2001):
  Severe impairment (wearing a hearing aid, or having great difficulty in both ears in hearing conversation in a quiet room; equivalent to >45 dBHL):
    Male: 2.90 (NA); Female: 1.80 (NA)
  Moderate and worse impairment (reported moderate difficulty in hearing conversation in a quiet room; equivalent to 45 dBHL):
    Male: 3.60 (NA); Female: 2.90 (NA)

(a) The data are taken from all available studies. (b) CI = confidence interval. (c) NA = not available in the original study.
(d) Prevalence ratios are based on self-reported hearing impairment. Prevalence of "ever employed in a noisy job" was compared against "never exposed in a noisy job". A noisy job was defined as one "where there was a need to shout to be heard".

11.3 HEALTH OUTCOMES TO INCLUDE IN THE BURDEN OF DISEASE ASSESSMENT
The selection of a health outcome should be made principally on the strength of the evidence of causality, and on the availability of information for quantifying health impacts. It is also important that the health outcome has been assessed within the study population, or can reasonably be extrapolated from other populations. There are several possible sources for health statistics, including national health statistics, a national burden of disease study, or "prior estimates" provided by WHO.
Depending on the aim of the study, it may be preferable to assess disease burden in terms of
attributable disease incidence, or overall disease burden, using summary measures of
population health such as DALYs (Murray, Salomon & Mathers, 2000). This will allow the health
burden to be compared for different geographical areas, and with the health burden from other
risk factors. A goal of burden of disease assessments is to maximize the compatibility of
frameworks for assessing the burden of disease for risk factors. Using the same framework
promotes this goal by ensuring that the same method is used to measure the incidence and
severity of disability associated with each disease.

Applying these criteria, it is clear that NIHL should be included in any national assessment, as it
is strongly supported by epidemiological evidence, and is one of the health outcomes often
assessed in national health statistics and as part of WHO burden of disease assessments. It is
generally most straightforward to exclude outcomes such as annoyance, as they are not a
formally defined health outcome per se. Should annoyance cause other health outcomes, such
as hypertension and associated cardiovascular disease, then other outcomes could be
considered. If there is a strong local reason for including such outcomes, then it is possible
either to assess comparative disability weights independently, to take them from other studies
(e.g. de Hollander et al., 2004), or to extrapolate them from similar health outcomes. You should
be aware that an independent assessment of the severity of such outcomes introduces
additional uncertainty when the results are compared with other risk factors or geographical
areas.

This guide follows the previous global assessment of occupational noise, in that only the effects of occupational noise on NIHL are assessed. Several definitions of hearing impairment are available in the literature. In the occupational setting, hearing impairment is generally defined as "a binaural pure-tone average for the frequencies of 1000, 2000, 3000 and 4000 Hz of greater than 25 dBHL" (NIOSH, 1998; Sriwattanatamma & Breysse, 2000). While this definition is widely used, it does not correspond to the WHO definition of disabling hearing loss (i.e. with an associated disability weight and corresponding to a quantifiable burden of disease). This level of hearing impairment is defined as "permanent unaided hearing threshold level for the better ear of 41 dBHL or greater for the four frequencies 500, 1000, 2000 and 4000 Hz" (Table 11.3). In this guide, we describe the steps necessary to calculate a prevalence of hearing loss that corresponds to the WHO definition, as it is preferable for burden of disease assessments. A straightforward procedure exists for converting between the different levels of impairment. This conversion procedure is supported by large epidemiological studies, and should therefore introduce only a small additional uncertainty into the estimation.

Table 11.3 Definition of hearing impairment

Grade of hearing impairment   Audiometric ISO value        Performance
0 no impairment               <25 dB (better ear)          No, or very slight, hearing problems; able to hear whispers.
1 slight impairment           26−40 dB (better ear)        Able to hear and repeat words spoken in normal voice at 1 m.
2 moderate impairment         41−60 dB (better ear)        Able to hear and repeat words using raised voice at 1 m.
3 severe impairment           61−80 dB (better ear)        Able to hear some words when shouted into better ear.

11.4 EXPOSURE INDICATOR

The most appropriate exposure measurement for occupational noise is the A-weighted decibel, dB(A), usually averaged over an 8-hour working day. There is a strong correlation between this parameter and the ability of the noise hazard to damage human hearing. It is frequently measured in the workplace, and is also the most commonly used epidemiological measurement of exposure. Exposure is initially measured as a continuous variable, and theoretically could be treated as such in assessing the burden of disease. This is impractical, however, as many surveys report exposure above and below cut-off values, rather than as a distribution. For example, the following categories are widely applied because they correspond to regulatory limits in developed (usually 85 dB(A)) and many developing (usually 90 dB(A)) countries for an 8-hour day (Hessel & Sluis-Cremer, 1987; Alidrisi et al., 1990; Shaikh, 1996; Hernandez-Gaytan et al., 2000; Osibogun, Igweze & Adeniran, 2000; Sriwattanatamma & Breysse, 2000; Ahmed et al., 2001):

– minimum noise exposure: <85 dB(A)

– moderately high noise exposure: 85−90 dB(A)

– high noise exposure: >90 dB(A)

Determining the distribution of exposure in the population


The most accurate assessments of health impacts at the national level are obtained from local exposure data, since population exposure distributions can vary between countries. The most used methods to assess exposure are:

– area surveys: noise levels are measured at different sites across an area, such as sites throughout a factory;
– dosimetry: a person's cumulative exposure to noise over a period of time is measured with a dosimeter;
– engineering surveys: noise is measured using a range of instruments.

Ideally, representative data will be available on the average levels of occupational noise for all major occupations within the country, either from the published scientific literature or from other sources of data. If such data are not available, epidemiological surveys can be carried out to determine the distribution of noise exposure by occupation. In practice, such data often will not be available, and the distribution will have to be estimated from existing sources of information. To do so, assumptions will need to be made, which will increase the uncertainty of the estimation, and this should be made explicit in the results.

A reasonable estimate of the exposure distribution can be obtained by extrapolating from existing data for studies undertaken elsewhere, provided that the data are from similar occupational environments. Studies have shown that the most important determinant of exposure level is worker occupation. Industry-specific studies in the USA showed that 44% of carpenters and 48% of plumbers reported they had a perceived hearing loss, and 90% of coal miners have hearing impairment by age 52 years. Also, it is estimated that 70% of male metal/nonmetal miners will have hearing impairment by age 60 years (NIOSH, 1991). Within an occupation, several workplace-specific factors will also influence the level of exposure. These factors include the type of facility and process; the raw materials, machinery and tools used; whether there are engineering and work practice controls; and whether personal protective devices are used and properly maintained. These factors are likely to vary between countries (e.g. personal protective devices may be more commonly used in developed countries than in developing countries). Such factors should be taken into consideration when estimating the distribution of exposure for a workforce, and extrapolations should be made from data for comparable occupations in comparable countries.

The Global Burden of Disease study estimated exposure distributions using an occupational category approach, modified to reflect the different noise exposures for occupations in different economic subsectors. This approach can be applied at the national level, using country data where available, or by extrapolating from data for other studies if local data are not available.
The first step is to assess the proportion of workers in each occupational category that is exposed to at least moderately high occupational noise levels (>85 dB(A)).

For each occupational category, the proportion of production workers exposed to high noise levels (i.e. >90 dB(A)) is estimated from a survey of over 9 million production workers in the USA carried out by the USA Occupational Safety and Health Administration in 1981 (cited in NIOSH, 1991; DHHS, 1986). These figures are shown in bold font in Table 11.4. Of the 6 063 000 production workers with exposures at or above 85 dB(A), 3 407 000 (or 56%) were exposed to noise levels above 90 dB(A). We therefore estimate that among production workers exposed at or above 85 dB(A), half were exposed at 85−90 dB(A), and half were exposed at >90 dB(A). Exposures in the remaining occupational categories and economic subsectors are estimated either by extrapolation from the most relevant subsector of the survey of production workers (figures shown in italics in Table 11.4), or by expert judgment (shown in normal typeface). It is also assumed, based on expert judgment, that of the agricultural workers and sales and service workers exposed at or above 85 dB(A), approximately 70% are exposed at 85−90 dB(A), and 30% at >90 dB(A). All professional, administrative, and clerical workers with noise exposure at or above 85 dB(A) are assumed to be in the 85−90 dB(A) exposure level.

It may be necessary to adjust these proportions, depending on the characteristics of the country in which the assessment is undertaken. In developing countries, because hearing conservation programs are rare, the global assessment assumed that only 5% of the production workers would be exposed at the 85−90 dB(A) level, and 95% would be exposed at the >90 dB(A) level. Also, 95% of the agricultural workers exposed at or above 85 dB(A) are assigned to the 85−90 dB(A) level, because mechanization is not widespread in countries in WHO developing subregions.

The second step consists of defining the proportions of workers in each economic subsector, by occupational category. These data may be available from national labour offices, or from statistics reported to the ILO. The third step simply consists of multiplying the previous tables together (i.e. for each economic subsector, the proportion of workers in each occupation is multiplied by the proportion of workers in the occupation exposed to moderately high, or high, noise levels). Next, the proportion of the working population in each economic subsector is determined by gender. In the fifth step, these values are multiplied by the proportion of workers in the occupational category exposed to the specific noise level. The series of calculations is performed for all economic subsectors, and the results summed to give the proportion of the total working population that is exposed at each noise level.

The next step accounts for the fact that not all the population is involved in formal work, by defining the proportion of the working-age population that is currently employed. This should be done separately for males and females. Accuracy can be further improved by specifying levels of employment for different age groups within the working-age population. Finally, the overall population exposure is given by multiplying the proportion of the working population exposed at each exposure level, by the proportion of the total population in work.

Table 11.4 summarizes these steps and the sources of data necessary to complete them, and gives example calculations for the proportion of the male working-age population in the USA that is exposed to moderately high noise levels. To give a complete assessment of the exposure distribution, the calculations would be repeated for exposure to high noise levels, and for females as well as males.
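The chain of multiplications in these steps can be sketched as follows. This is a minimal illustration using only the example figures for USA males (manufacturing subsector, production workers) quoted in Table 11.4; the variable names are our own, and a full assessment would loop over all nine subsectors and occupational categories:

```python
# Step 1: share of production workers in manufacturing exposed above 85 dB(A),
# of whom half are assumed to fall in the 85-90 dB(A) band.
exposed_above_85 = 0.22
exposed_85_90 = exposed_above_85 / 2        # 0.11

# Step 2: share of manufacturing workers who are in the production category.
production_share = 0.59

# Step 3: production workers in manufacturing exposed at 85-90 dB(A).
step3 = exposed_85_90 * production_share    # ~0.065 (6.5%)

# Summing over all occupations and subsectors (steps not reproduced here)
# gives 6.6% of the male labour force exposed at 85-90 dB(A), per the example.
labour_force_exposed = 0.066

# Steps 6-7: adjust for labour-force participation of males 15-64 years old.
participation = 0.87
population_exposed = labour_force_exposed * participation   # ~0.057 (5.7%)

print(round(step3, 3), round(population_exposed, 3))
```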

Table 11.4 Assessing occupational exposure to noise

Step 1. For each occupational group within each economic subsector, estimate the proportion of workers exposed to moderately high (85−90 dB(A)) and high (>90 dB(A)) levels of noise.
   Data source: national noise exposure surveys, or the NIOSH study (Table 11.4).
   Example (exposure to moderately high noise (85−90 dB(A)), USA males): 22% of workers in the production occupational category of the manufacturing economic subsector are exposed to noise above 85 dB(A). Half of these (11%) are exposed to noise levels of 85−90 dB(A) (the remaining half are exposed to >90 dB(A)).

Step 2. Determine the distribution of occupations between the nine economic subsectors.
   Data source: national labour offices or the ILO.
   Example: 12% of workers in the manufacturing subsector are in the professional category, 13% in administration, 10% in clerical, 4% in sales, 1% in service, none in agriculture, and 59% in production.

Step 3. Using the tables developed for Step 1 and Step 2, estimate the proportion of the working population exposed to moderately high noise levels for each of the nine economic subsectors, and sum the results.
   Data source: derived from the outputs of Steps 1 and 2.
   Example: 11% × 59%, or 6.5%, of production workers in manufacturing in the USA are exposed to noise levels of 85−90 dB(A). Repeating the calculation for other occupations in manufacturing, and summing, gives a value of 8.8% for all workers in manufacturing exposed to noise levels of 85−90 dB(A).

Step 6. Determine the overall proportion of the working-age population, 15−64 years old, in the labour force, as well as the corresponding proportions for males and females.
   Example: 87% of males 15−64 years old participate in the labour force.

Step 7. Determine the overall population exposure by adjusting the proportion of the labour force exposed to elevated noise levels for the participation of the population in the labour force.
   Data source: derived from the outputs of Steps 5 and 6.
   Example: the value of 6.6% for the proportion of the male labour force exposed to noise levels of 85−90 dB(A) is adjusted by multiplying this figure by the participation of males in the labour force (87%). The result is that 5.7% of the male population 15−64 years old is exposed to moderately high noise levels.

Adapted from Concha-Barrientos et al. (2004)

Estimates for the prevalence of noise exposure, determined using the described method, are shown in Table 11.5. The figures assume there are equal employment rates in all age groups of the working-age population.

Table 11.5 Proportion of the working-age population in the USA occupationally exposed to noise levels of 85−90 dB(A) and >90 dB(A), by gender(a)

Gender    Labour force exposed          Labour force    Working-age population exposed
          85−90 dB(A)    >90 dB(A)                      85−90 dB(A)    >90 dB(A)
Males     0.066          0.038         0.870            0.057          0.033
Females   0.054          0.026         0.740            0.040          0.019

(a) Adapted from Concha-Barrientos et al. (2004).


11.5 ESTIMATING RELATIVE RISKS FOR HEALTH OUTCOMES, BY EXPOSURE LEVEL
It may be possible to obtain relative risks by exposure level from the literature, or from epidemiological surveys in your own population, or populations with similar socioeconomic and working conditions. However, as for evidence of causality, there is little reason to believe that the relative risks of hearing loss should differ between countries, so that in most cases it will be more straightforward, and probably more accurate, to use relative risks based on all previous studies.

As with other disease burdens, the major challenge in estimating relative risks for NIHL is in converting different measures of hearing loss into a single standardized definition for assessing exposure, as is done in the method presented here. As outlined in Section 11.3, the criteria used by WHO to define disabling hearing impairment are different from the criteria used by most of the studies in the occupational field, so an adjustment of the published relative risk values is usually necessary to calculate burden of disease in DALYs. Again, the conversion procedure described in Step 2 below should be equally applicable in all countries.

A procedure to estimate the increase in risk associated with different exposure levels has been defined in the Global Burden of Disease study (Concha-Barrientos et al., 2004). In brief, the main steps are:

Step 1. Estimate the excess risk for different levels of exposure, and for different ages. The data can be obtained from a large study carried out in the USA (Prince et al., 1997). The study uses an average of 1000, 2000, 3000 and 4000 Hz, and a hearing loss >25 dBHL, to define hearing impairment. Excess risk is defined as the percentage of workers with a hearing impairment in the population exposed to occupational noise, minus the percentage of people in an unexposed population who have a hearing impairment that is the natural result of aging. Most studies follow the NIOSH practice of measuring the outcome as "material hearing impairment" (i.e. at the level of 25 dB).

Step 2. Adjust the hearing levels. A correction factor can be used to adjust the excess risks measured using the NIOSH definition of the threshold, to the level at which WHO defines an associated disability weighting for burden of disease calculations (>41 dB; Concha-Barrientos et al., 2004). In this guide, we use a correction factor of 0.446, which is the ratio of the number of excess cases at >40 dB divided by the number of excess cases at >25 dB (NIOSH, 1991).

Step 3. Estimate relative risk by age. The relative risk values by age can be estimated using the formula: relative risk = 1 + (excess risk / expected risk). The expected risk in the Global Burden of Disease study is based on a study of the prevalence of hearing loss as a function of age in the adult population of Great Britain (Davis, 1989).
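The threshold adjustment and the relative risk formula can be combined into a short calculation. Only the correction factor 0.446 and the formula are taken from the text; the excess and expected risk figures below are hypothetical placeholders (the real values come from Prince et al. (1997) and Davis (1989)), and the function name is ours:

```python
NIOSH_TO_WHO = 0.446  # ratio of excess cases at >40 dB to excess cases at >25 dB

def relative_risk(excess_risk_25db, expected_risk):
    """RR = 1 + (excess risk / expected risk), after converting the
    NIOSH-threshold (>25 dBHL) excess risk to the WHO disabling threshold."""
    excess_risk_who = excess_risk_25db * NIOSH_TO_WHO  # step 2: adjust threshold
    return 1 + excess_risk_who / expected_risk         # step 3: relative risk

# Hypothetical example: 15% excess risk at >25 dBHL, 5% expected risk from aging
print(round(relative_risk(0.15, 0.05), 2))
```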

The final relative risks of hearing loss at various exposure levels defined by this procedure are given in Table 11.6. Unless there is strong evidence that the relative risks are different in your country of interest, it is advisable to use these values.

Table 11.6 Relative risks for hearing loss by sex, age group and level of occupational exposure

Sex      Exposure level   15−29   30−44   45−59   60−69   70−79   80+
Male     <85 dB(A)        1.00    1.00    1.00    1.00    1.00    1.00
Male     85−90 dB(A)      1.96    2.24    1.91    1.66    1.66    1.66
Male     >90 dB(A)        7.96    5.62    3.83    2.82    2.82    2.82
Female   <85 dB(A)        1.00    1.00    1.00    1.00    1.00    1.00
Female   85−90 dB(A)      1.96    2.24    1.91    1.66    1.66    1.66
Female   >90 dB(A)        7.96    5.62    3.83    2.82    2.82    2.82

Adapted from Concha-Barrientos et al. (2004)

11.6 ESTIMATING THE ATTRIBUTABLE FRACTION AND THE DISEASE BURDEN
The burden of disease caused by exposure to occupational noise is given by combining the following information:

– the proportion of people exposed to the defined noise levels;
– the relative risk of developing NIHL for each exposure level;
– the total disease burden (incidence or number of DALYs) from NIHL within the country, obtained from other sources (e.g. Prüss-Üstün et al., 2003).

Calculating the Attributable Fraction


The first step in the calculation is to determine the fraction of the total burden of NIHL in the study population that is attributable to the risk factor, by combining the exposure distribution and relative risk information. The attributable fraction (AF), also called the impact fraction in this context, is given by the following formula (Prüss-Üstün et al., 2003):

    AF = (Σ Pi RRi − 1) / (Σ Pi RRi)

where:

    Pi = proportion of the population in each exposure category i (i.e. P(unexposed), P(low exposure), P(high exposure));

    RRi = relative risk at exposure category i, compared to the reference level (= 1.0 for an unexposed population).

For example, the fraction of NIHL in the USA male population 15−29 years old that is attributable to occupational noise is given by:

    AF = [(91% × 1) + (5.7% × 1.96) + (3.3% × 7.96) − 1] / [(91% × 1) + (5.7% × 1.96) + (3.3% × 7.96)]
       = 22%
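The formula and the worked example can be checked with a few lines of Python (the function name is ours; the population shares are those for USA males 15−29 from Table 11.5, and the relative risks are from Table 11.6):

```python
def attributable_fraction(proportions, relative_risks):
    """Attributable (impact) fraction: AF = (sum(Pi*RRi) - 1) / sum(Pi*RRi)."""
    weighted = sum(p * rr for p, rr in zip(proportions, relative_risks))
    return (weighted - 1) / weighted

# USA males 15-29: 91% unexposed, 5.7% at 85-90 dB(A), 3.3% at >90 dB(A)
af = attributable_fraction([0.91, 0.057, 0.033], [1.0, 1.96, 7.96])
print(f"{af:.0%}")  # ~22%
```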
11.7 UNCERTAINTY

There are two principal sources of uncertainty in the disease burden estimates for occupational noise.

Uncertainty in exposure estimates


In the method used in this guide, the main uncertainties are in estimates of the proportion of people in each occupational group or economic subsector that is exposed to the specified level of noise. This has been thoroughly assessed for one occupation in the USA, and extrapolated to other occupations and subregions based on expert judgment. However, the assumptions used to make these extrapolations have not been tested, and local surveys would help to reduce the uncertainty around the estimates. In some countries, there may also be uncertainty around the distribution of the working population between occupational groups or economic subsectors. This uncertainty can be reduced by using the most recent data from authoritative sources (e.g. statistics from the Ministry of Labour).

Uncertainty in relative risk estimates


Uncertainty in relative risk estimates may arise from the original epidemiological studies, and
includes errors in exposure estimates, confounding factors, and in measurement of hearing loss.

Errors in exposure estimates may arise because most studies of the association between noise
and hearing impairment are retrospective measurements of the hearing sensitivities of
individuals, correlated with their noise exposure over an extended period (typically, many years).
Noise exposure often varies over time, so that it may be difficult to measure the precise level
that the subject has experienced, particularly if they have been subject to intermittent
exposures. Uncertainty is also introduced by variation in the subject (e.g. previous audiometric
experience, attention, motivation, upper respiratory problems, and drugs). However, well-
designed epidemiological studies (of the type used to define the relative risks in this guide)
should account for the most important confounding factors (e.g. age and sex), and ensure that
the relative risks are reasonably accurate. The large populations of the studies used to calculate
the relative risks in this guide, and the consistency of results between studies, suggest that
the data closely approximate the risks of noise exposure.

Some additional uncertainty in the method used in this guide comes from adjusting hearing loss measurements made at different thresholds (e.g. 25 dBHL and 41 dBHL). The uncertainty should be relatively small, as the adjustment is based on a large sample size, but it could be reduced further if more studies measured hearing loss at both 25 dBHL and 41 dBHL.
11.8 POLICY IMPLICATIONS

Estimates of the incidence of occupationally-related NIHL in your country or study population, or of the number of attributable DALYs, will provide quantitative information on the importance of the problem in the study area, and can help to motivate interventions to reduce these risks and associated health impacts. NIHL is, at present, incurable and irreversible. It is preventable, however, and it is essential that preventive programs be implemented. The following recommendations on effective hazard prevention and control mechanisms are based on Goelzer (2001).
Hearing conservation programs should not be isolated efforts, but should be integrated into the overall hazard prevention and control program for the workplace.
Hazard prevention and control programs require:

 political will and decision-making.


 commitment from top management, with a clear and well-circulated policy.
 commitment from workers.
 well-defined goals and objectives.
 adequate human and financial resources.
 technical knowledge and experience.
 adequate implementation of the program and competent management.
 multidisciplinary teams.
 communication mechanisms.
 monitoring mechanisms (indicators).
 continuous program improvement.
Within an overall hazard prevention and control program, specific noise- prevention and control
strategies usually involve the following elements:

 the work process (including tools and machinery): for example, install quieter
equipment, promote good maintenance.

 the workplace: for example, use noise enclosures or acoustic equipment.

 the workers: for example, set up work practices and other administrative controls on
noise exposures, and provide audiometry tests, hearing protection, and workers'
education programs.

Control measures should be realistically designed to meet the needs of each situation, and the
different options should be considered in view of factors such as effectiveness, cost, technical
feasibility, and socio-cultural aspects. Control interventions should follow the following hierarchy:
control the noise source → control the noise propagation → control noise at the worker level.

The priority is to reduce noise through technical measures. When engineering controls are not
applicable or are insufficient, exposure to noise can be reduced through measures such as
introducing hearing protection for workers. The protective equipment must be properly selected,
worn and maintained. Administrative controls can also be used. These are changes in the work
schedule, or in the order of operations and tasks. For example, the time spent in a noisy
environment can be limited (in addition to wearing hearing protection), and noisy operations can be
performed outside the normal shift, or during a shift with very few workers (wearing hearing
protection), or at a distant location. Some better-known measures for reducing noise, such as noise
enclosures and personal protective equipment, may be too expensive, impractical, inefficient or
unacceptable to workers, particularly in hot jobs or climates. Approaches to prevention should be
broadened, with proper consideration of other control options, particularly options for source
control, such as substituting materials and modifying processes, as well as for good work practices.
Finally, it should be recognized that in developing countries a large proportion of the population
works in the informal sector. A major challenge is to extend occupational hazard prevention and
control programs to this section of the population.

11.9 SUMMARY

Estimates of the proportions of populations exposed in each sub region were based on the
distribution of the Economically Active Population into nine economic subsectors. The estimates
took into account the proportion of workers in each economic subsector with exposure to the risk
factor, and the workers were partitioned into high and low exposure levels. Turnover accounted for
previous exposures. The primary data sources for estimating exposures included the World Bank
(World Bank, 2001), the International Labor Organization (ILO, 1995, 2001, 2002), and literature on
the prevalence and level of exposure.

The exposure variable used in this analysis is a direct measure of the risk factor (occupational
exposure to noise is the causative agent of NIHL). As global data on the frequency of occurrence,
duration and intensity of noise exposure do not exist, it was necessary to model this exposure for
workers employed in various occupational categories. The theoretical minimum is based on
expected background levels of noise and is consistent with national and international standards.
Most experts agree that levels below 80 dB(A) result in minimal risk of developing hearing loss.

11.10 KEYWORDS
Sound level- Sound pressure, or acoustic pressure, is the local pressure deviation from the ambient
(average, or equilibrium) atmospheric pressure, caused by a sound wave. A sound level
meter, or sound meter, is an instrument that measures sound pressure level, commonly used
in noise pollution studies for the quantification of different kinds of noise.

Noise prevention- Avoiding hazardous noise in the workplace whenever possible, and using
hearing protectors in those situations where dangerous noise exposures cannot be avoided.

Occupational Noise- Occupational noise has historically been a hazard linked to heavy industries
such as ship-building, and associated primarily with noise-induced hearing loss (NIHL).

Attributable fraction- The fraction of the total burden of NIHL attributable to occupational noise exposure.

Hearing impairment- Hearing impairment, or hearing loss, is a partial or total inability to hear.
UNIT 13 CREATING HIGH PERFORMANCE WORK SYSTEM
Objectives
After going through this unit, you will be able to:
 discuss the underlying principles of high-performance work systems.
 identify the components that make up a high-performance work system.
 describe how the components fit together and support strategy.
 recommend processes for implementing high-performance.
 explain how the principles of high-performance work systems apply to small and medium
sized organizations.
Structure
13.1 Introduction
13.2 Principle of shared information and knowledge development
13.3 The principle of performance–reward linkage and egalitarianism
13.4 Anatomy of high- performance work systems
13.5 Fitting it all together
13.6 Implementing the System
13.7 Navigating the transition to high-performance work systems
13.8 Outcomes of high-performance work systems
13.9 Summary
13.10 Keywords

13.1 INTRODUCTION

A high-performance work system (HPWS) can be defined as a specific combination of HR practices,
work structures, and processes that maximizes employee knowledge, skill, commitment, and
flexibility. Although some noteworthy HR practices and policies tend to be incorporated within most
HPWSs, it would be a mistake for us to focus too much, or too soon, on the pieces themselves. The
key concept is the system. High-performance work systems are composed of many interrelated parts
that complement one another to reach the goals of an organization, large or small.
Organizations face several important competitive challenges such as adapting to global business,
embracing technology, managing change, responding to customers, developing intellectual capital,
and containing costs. There are also some very important employee concerns that must be addressed, such as
managing a diverse workforce, recognizing employee rights, adjusting to new work attitudes, and
balancing work and family demands. We now know that the best organizations go beyond simply
balancing these sometimes competing demands; they create work environments that blend these
concerns to simultaneously get the most from employees, contribute to their needs, and meet the
short-term and long-term goals of the organization.

Developing High-Performance Work Systems: the figure links strategy to system design (work flow,
HRM practices, support technology, principles of high involvement), the implementation process,
and outcomes (organizational and employee).
(Source: George and Scott, Managing Human Resources, 14th Edition, 2006)

The notion of high-performance work systems was originally developed by David Nadler to capture
an organization’s “architecture” that integrates technical and social aspects of work. Edward Lawler
and his associates at the Center for Effective Organizations at the University of Southern California
have worked with Fortune 1000 corporations to identify the primary principles that support high-
performance work systems. There are four simple but powerful principles:
 Shared information
 Knowledge development
 Performance–reward linkage
 Egalitarianism
In many ways, these principles have become the building blocks for managers who want to create
high-performance work systems.

13.2 PRINCIPLE OF SHARED INFORMATION AND KNOWLEDGE DEVELOPMENT

The principle of shared information is critical for the success of empowerment and involvement
initiatives in organizations. In the past, employees traditionally were not given—and did not ask
for—information about the organization. People were hired to perform narrowly defined jobs with
clearly specified duties, and not much else was asked of them. One of the underlying ideas of high-
performance work systems is that workers are intimately acquainted with the nature of their own
work and are therefore in the best position to recognize problems and devise solutions to them.
Today organizations are relying on the expertise and initiative of employees to react quickly to
incipient problems and opportunities. Without timely and accurate information about the business,
employees can do little more than simply carry out orders and perform their roles in a relatively
perfunctory way. They are unlikely to understand the overall direction of the business or contribute
to organizational success.

On the other hand, when employees are given timely information about business performance,
plans, and strategies, they are more likely to make good suggestions for improving the business and
to cooperate in major organizational changes. They are also likely to feel more committed to new
courses of action if they have input in decision making. The principle of shared information typifies a
shift in organizations away from the mentality of command and control toward one more focused on
employee commitment. It represents a fundamental shift in the relationship between employer and
employee. If executives do a good job of communicating with employees and create a culture of
information sharing, employees are perhaps more likely to be willing (and able) to work toward the
goals for the organization. They will “know more, do more, and contribute more.” At FedEx Canada,
at every single station across Canada, company officers and managing directors meet with
employees at 5:30 a.m. and 10:00 p.m. to review the business data and answer questions.

Knowledge development is the twin sister of information sharing. As Richard Teerlink, former CEO of
Harley-Davidson, noted, “The only thing you get when you empower dummies is bad decisions
faster.” Throughout this text, we have noted that the number of jobs requiring little knowledge and
skill is declining while the number of jobs requiring greater knowledge and skill is growing rapidly.
As organizations attempt to compete through people, they must invest in employee development.

This includes both selecting the best and the brightest candidates available in the labor market and
providing all employees opportunities to continually hone their talents.

High-performance work systems depend on the shift from touch labor to knowledge work.
Employees today need a broad range of technical, problem solving, and interpersonal skills to work
either individually or in teams on cutting edge projects. Because of the speed of change, knowledge
and skill requirements must also change rapidly. In the contemporary work environment, employees
must learn continuously. Stopgap training programs may not be enough. Companies such as
DaimlerChrysler and Roche have found that employees in high-performance work systems need to
learn in “real time,” on the job, using innovative new approaches to solve novel problems. Likewise,
at Ocean Spray’s Henderson, Nevada, plant, making employees aware of the plant’s progress has
been a major focus. A real-time scoreboard on the Henderson plant floor provides workers with
streaming updates of the plant’s vital stats, including average cost per case, case volumes filled,
filling speeds, and injuries to date. When people are better informed, they do better work. “We
operate in real time and we need real-time information to be able to know what we have achieved
and what we are working towards,” says an Ocean Spray manager. (See Case Study 1 at the end of
the chapter for more on Ocean Spray’s HPWS initiative.)

13.3 THE PRINCIPLE OF PERFORMANCE–REWARD LINKAGE AND EGALITARIANISM
A time-tested adage of management is that the interests of employees and organizations naturally
diverge. People may intentionally or unintentionally pursue outcomes that are beneficial to them
but not necessarily to the organization as a whole. A corollary of this idea, however, is that things
tend to go more smoothly when there is some way to align employee and organizational goals.
When rewards are connected to performance, employees naturally pursue outcomes that are
mutually beneficial to themselves and the organization. When this happens, some amazing things
can result. For example, supervisors don’t have to constantly watch to make sure that employees do
the right thing. But in fact, employees may go out of their way—above and beyond the call of duty,
so to speak—to make certain that co-workers are getting the help they need, systems and processes
are functioning efficiently, and customers are happy. At Clearwater Seafoods, a global Canadian
company of 2200 employees based in Bedford, Nova Scotia, nearly all employees participate in a
bonus plan based on the volume of food that is packaged.

Connecting rewards to organizational performance also ensures fairness and tends to focus
employees on the organization. Equally important, performance-based rewards ensure that
employees share in the gains that result from any performance improvement. For instance, Lincoln
Electric has long been recognized for its efforts in linking employee pay and performance.

People want a sense that they are members, not just workers, in an organization. Status and power
differences tend to separate people and magnify whatever disparities exist between them. The “us
versus them” battles that have traditionally raged between managers, employees, and labor unions
are increasingly being replaced by more cooperative approaches to managing work. More egalitarian
work environments eliminate status and power differences and, in the process, increase
collaboration and teamwork. When this happens, productivity can improve if people who once
worked in isolation from (or in opposition to) one another begin to work together. Nucor Steel has
an enviable reputation not only for establishing an egalitarian work environment but also for the
employee loyalty and productivity that stem from that environment. Upper levels of management
do not enjoy better insurance programs, vacation schedules, or holidays. In fact, certain benefits
such as Nucor’s profit-sharing plan, scholarship program, employee stock purchase plan,
extraordinary bonus plan, and service awards program are not available to Nucor’s officers at all.
Senior executives do not enjoy traditional perquisites such as company cars, corporate jets,
executive dining rooms, or executive parking places. On the other hand, every Nucor employee is
eligible for incentive pay and is listed alphabetically on the company’s annual report.

Moving power downward in organizations—that is, empowering employees—frequently requires
structural changes. Managers often use employee surveys, suggestion systems, quality circles,
employee involvement groups, and/or union–management committees that work in parallel with
existing organizational structures. In addition, work flow can be redesigned to give employees more
control and influence over decision making. At Old Home Foods, one of the few independent
exclusively cultured dairy product manufacturers in North America, all employees are involved in the
decision-making process of the business. “It’s part of the Old Home Foods culture,” says owner Peter
Arthur “P.A.” Hanson. “To be a successful independent, you need to empower your employees and
let them know they are critical to success.” Job enlargement, enrichment, and self-managing work
teams are typical methods for increasing the power of employees to influence decisions, suggest
changes, or act on their own. With decreasing power distances, employees can become more
involved in their work; their quality of work life is simultaneously increased, and organizational
performance is improved.

These four principles—shared information, knowledge development, performance–reward linkage,
and egalitarianism—are the basis for designing high-performance work systems. They also cut across
many of the topics and HR practices we have talked about elsewhere in this textbook. These
principles help us integrate practices and policies to create an overall high-performance work
system.

13.4 ANATOMY OF HIGH- PERFORMANCE WORK SYSTEMS

High-performance work systems frequently begin with the way work is designed. Total quality
management (TQM) and reengineering have driven many organizations to redesign their work flows.
Instead of separating jobs into discrete units, most experts now advise managers to focus on the key
business processes that drive customer value—and then create teams that are responsible for those
processes. Federal Express, for example, redesigned its delivery process to give truck drivers
responsibility for scheduling their own routes and for making necessary changes quickly. Because the
drivers have detailed knowledge of customers and routes, Federal Express managers empowered
them to inform existing customers of new products and services. In so doing, drivers now fill a type
of sales representative role for the company. In addition, FedEx drivers also work together as a team
to identify bottlenecks and solve problems that slow delivery. To facilitate this, advanced
communications equipment was installed in the delivery trucks to help teams of drivers balance
routes among those with larger or lighter loads.

Similarly, when Colgate-Palmolive opened a plant in Cambridge, Ohio, managers specifically
designed teams around key work processes to produce products such as Ajax, Fab, Dynamo, and
Palmolive detergent. Instead of separating each stage of production into discrete steps, teams work
together in a seamless process to produce liquid detergent, make polyurethane bottles, fill those
bottles, label and package the products, and deliver them to the loading dock.

By redesigning the work flow around key business processes, companies such as Federal Express and
Colgate-Palmolive have been able to establish a work environment that facilitates teamwork, takes
advantage of employee skills and knowledge, empowers employees to make decisions, and provides
them with more meaningful work.
Anatomy of High-Performance Work Systems
(Source: George and Scott, Managing Human Resources, 14th Edition, 2006)

Management Processes and Leadership


Leadership issues arise at several levels with high-performance work systems. At the executive
level there needs to be clear support for a high-performance work environment, for the changes
in culture that may accompany this environment, and for the modification of business processes
necessary to support the change. These concerns will be addressed in more detail shortly in our
discussion of implementation issues. Organizations such as American Express and Reebok
International found that the success of any high-performance work system depends on first
changing the roles of managers and team leaders. With fewer layers of management and a focus
on team-based organization, the role of managers and supervisors is substantially different in an
environment of high-performance work systems. Managers and supervisors are seen more as
coaches, facilitators, and integrators of team efforts. Rather than autocratically imposing their
demands on employees and closely watching to make certain that the workers comply,
managers in high-performance work systems share responsibility for decision making with
employees. Typically, the term manager is replaced by the term team leader. And in a growing
number of cases, leadership is shared among team members. Kodak, for example, rotates team
leaders at various stages in team development. Alternatively, different individuals can assume
functional leadership roles when their particular expertise is needed most.

Supportive Information Technologies


Communication and information technologies are yet one more piece that must be added to the
framework of high-performance work systems. Technologies of various kinds create an
infrastructure for communicating and sharing information vital to business performance. Federal
Express, for example, is known for its use of information technology to route packages. Its
tracking system helps employees monitor each package, communicate with customers, and
identify and solve problems quickly. Sally Industries uses information technology to assign
employees to various project teams. The company specializes in animatronics, the combination
of wires and latex that is used to make humanoid creatures such as is found in Disney’s Hall of
Presidents. Artisans employed by Sally Corporation work on several project teams at once. A
computerized system developed by the company helps budget and track the employee time
spent on different projects.

But information technologies need not always be so high-tech. The richest communication
occurs face to face. The important point is that high-performance work systems cannot succeed
without timely and accurate communications. (Recall the principle of shared information.)
Typically the information needs to be about business plans and goals, unit and corporate
operating results, incipient problems and opportunities, and competitive threats.

13.5 FITTING IT ALL TOGETHER

Each of these practices highlights the individual pieces of a high-performance work system. This
philosophy is reflected in the mission statement of Saturn Motors, a model organization for HPWS.
Saturn’s mission is to “Market vehicles developed and manufactured in the United States that are
world leaders . . . through the integration of people, technology, and business systems.” The figure
below summarizes the internal and external linkages needed to fit high-performance work systems
together.
Achieving Strategic Fit
(Source: George and Scott, Managing Human Resources, 14th Edition, 2006)

Ensuring Internal Fit


Internal fit occurs when all the internal elements of the work system complement and reinforce
one another. For example, a first-rate selection system may be of no use if it is not working in
conjunction with training and development activities. If a new compensation program elicits and
reinforces behaviors that are directly opposed to the goals laid out in performance planning, the
two components would be working at cross purposes. This is the true nature of systems.
Changes in one component affect all the other components. Because the pieces are
interdependent, a new compensation system may have no effect on performance if it is
implemented on its own. Horizontal fit means testing to make certain that all the HR practices,
work designs, management processes, and technologies complement one another. The synergy
achieved through overlapping work and human resources practices is at the heart of what makes
a high-performance system effective.
Establishing External Fit
To achieve external fit, high-performance work systems must support the organization’s goals
and strategies. This begins with an analysis and discussion of competitive challenges,
organizational values, and the concerns of employees and results in a statement of the
strategies being pursued by the organization. Xerox, for example, uses a planning process
known as “Managing for Results,” which begins with a statement of corporate values and
priorities. These values and priorities are the foundation for establishing three-to-five-year goals
for the organization.

Each business unit establishes annual objectives based on these goals, and the process cascades
down through every level of management. Ultimately, each employee within Xerox has a clear
“line of sight” to the values and goals of the organization so he or she can see how individual
effort makes a difference. Efforts such as this to achieve vertical fit help focus the design of high-
performance work systems on strategic priorities. Objectives such as cost containment, quality
enhancement, customer service, and speed to market directly influence what is expected of
employees and the skills they need to be successful. Terms such as involvement, flexibility,
efficiency, problem solving, and teamwork are not just buzzwords. They are translated directly
from the strategic requirements of today’s organizations. High-performance work systems are
designed to link employee initiatives to those strategies.

13.6 IMPLEMENTING THE SYSTEM


So far, we have talked about the principles, practices, and goals of high-performance work systems.
Unfortunately, these design issues probably account for less than half of the challenges that must be
met in ensuring system success. Much of what looks good on paper gets messy during
implementation. The American Society for Training and Development (ASTD) asked managers and
consultants to identify the critical factors that can make or break a high-performance work system.
The respondents identified the following actions as necessary for success (see Figure 13.4):
 Make a compelling case for change linked to the company’s business strategy.
 Ensure that change is owned by senior and line managers.
 Allocate sufficient resources and support for the change effort.
 Ensure early and broad communication.
 Ensure that teams are implemented in a systemic context.
 Establish methods for measuring the results of change.
 Ensure continuity of leadership and champions of the initiative
Many of these recommendations are applicable to almost any change initiative, but they are
especially important for broad-based change efforts that characterize high-performance work
systems. Some of the most critical issues are discussed next.

Implementing High-Performance Work Systems: build a case for change → communicate →
involve the union → navigate the transition → evaluation
(Source: George and Scott, Managing Human Resources, 14th Edition, 2006)

Building a Business Case for Change


Change can be threatening because it asks people to abandon the old ways of doing things and
accept new approaches that, to them at least, are untested. To get initial commitment to high-
performance work systems, managers must build a case that the changes are needed for the
success of the organization. In a recent study on the implementation of high-performance work
systems, it was found that a member of top management typically played the role of
sponsor/champion and spent a substantial portion of his or her time in that role communicating
with employees about the reasons and approaches to change. Major transformation should not
be left to middle managers. Rather, the CEO and the senior management team need to establish
the context for change and communicate the vision more broadly to the entire organization. For
example, executives at Harley-Davidson tried to institute employee involvement groups without
first demonstrating their own personal commitment to the program. Not surprisingly,
employees were apathetic and, in some cases, referred to the proposed changes as just
“another fine program” put in place by the personnel department. Harley-Davidson executives
learned the hard way that commitment from the top is essential to establish mutual trust
between employees and managers. Similarly, the CEO of a business-consulting company was
adamant that his vice-presidents understand a new initiative and give a short speech at an
introductory session. On the day of the program’s launch, however, the CEO himself did not
show up. The message to the vice-presidents was clear: The CEO didn’t think the change was
important enough to become an active participant. Not surprisingly, the change was never
implemented.

One of the best ways to communicate business needs is to show employees where the business
is today—its current performance and capabilities. Then show them where the organization
needs to be in the future. The gap between today and the future represents a starting point for
discussion. When executives at TRW wanted to make a case for change to high-performance
work systems, they used employee attitude surveys and data on turnover costs. The data
provided enough ammunition to get conversation going about needed changes and sparked
some suggestions about how they could be implemented. The exhibit below shows what happened
when BMW bought British Land Rover and began making changes without first talking through the
business concerns.

Ironically, in this case, BMW unwittingly dismantled an effective high-performance work system.
Now that Ford owns the company, will things work differently?

Exhibit - Land Rover, BMW, and Ford Crash Head-On


Some years ago, the British Land Rover Company, a leading manufacturer of four-wheel-drive
vehicles, found itself saddled with a notorious reputation for poor quality and productivity. Then
it underwent a fundamental transformation. The company instituted extensive training
(including giving every employee a personal training fund to be used on any subject),
implemented more team-based production methods, reduced the number of separate job
classifications, developed more cooperative relations with organized labour, and began a total
quality program.

As a result of these changes, productivity soared by 25 percent, quality action teams netted
savings worth millions of dollars, and the quality of products climbed. Operating in a very
competitive environment, Land Rover produced and sold one-third more vehicles. On the basis
of these changes, the company was certified as an “Investors in People—U.K.” designee. This
national standard recognizes organizations that place the involvement and development of
people at the heart of their business strategy.
So far, so good; then BMW bought the company. In spite of massive evidence documenting the
effectiveness of the new management methods and changed culture, BMW began to dictate
changes within a matter of months. Unfortunately, the changes undid the cultural
transformation.

Land Rover never fully recovered under the new management. After losing more than $6 billion,
BMW sold off the company. Land Rover was later purchased by Ford Motor Company. Ford
bought Land Rover and put it under one roof with Volvo, Jaguar, and Aston Martin to create the
Premier Auto Group. Ford has continued to manufacture the Land Rover in England while
improving its quality, but this hasn’t been enough to turn Land Rover around.
In 2003, Land Rover finished near the bottom of the J.D. Power Initial Quality Study, 36th out of
37 brands, and the division has been showing losses. Land Rover’s 8000-strong workforce in
Solihull, England, has been put on notice by group chairman Mark Fields that it needs to alter its
culture and working practices to match those embraced by Ford. In the Ford production system,
teams of workers are supposed to take charge of and improve quality in their areas. Fields wants
the Solihull plant to operate like a nearby Jaguar plant in Halewood, England, which formerly
built Ford Escorts. The Halewood plant had been notorious for militancy, work stoppages,
absenteeism, and quality problems. Halewood later became Ford’s top factory after a decision
was made to transform it into a Jaguar factory and a sweeping series of cultural, productivity,
and working-practice changes was put into place.
“We’ve taken a very positive set of first steps but there’s a lot of pavement in front of us,” said
Fields about Land Rover.

Sources: Jeffrey Pfeffer, “When It Comes to ‘Best Practices’—Why Do Smart Organizations


Occasionally Do Dumb Things?” Reprinted from Organizational Dynamics, Summer 1996 with
permission from Elsevier; Cordelia Brabbs, “Rover’s White Knight,” Marketing (May 18, 2000):
28; Georg Auer, “Burela to Instill Quality Culture at Land Rover,” Automotive News 75, no. 5902
(November 6, 2000): 32x–32z; Ronald W. Pant, “Land Rover History Lesson,” Truck Trend 8, no. 3
(May–June 2005): 12; Bradford Wernle, “Solihull Must Do ‘a Halewood’ to Survive; Jaguar Plant
Is the Example Land Rover Factory Must Follow,” Automotive News Europe 9, no. 19 (September
20, 2004): 39.

Establishing a Communications Plan


Research on high-performance work systems has noted that providing an inadequate communication
system is the most frequent mistake companies make during implementation. While we have emphasized
the importance of executive commitment, top-down communication is not enough. Two-way
communication not only can result in better decisions, it may help to diminish the fears and
concerns of employees. For example, Solectron Corporation, winner of the Baldrige National
Quality Award, tried to implement high-performance work systems to capitalize on the
knowledge and experience of its employees. A pilot program showed immediate gains in
productivity of almost 20 percent after the switch to self-managed teams and team-based
compensation. Although Solectron’s rapid growth of more than 50 percent per year made it
unlikely that middle managers would be laid off, many of them resisted the change to a high-
performance work system. They resented the loss of status and control that accompanied the
use of empowered teams.

If Solectron managers had participated in discussions about operational and financial aspects of
the business, they might not have felt so threatened by the change. Open exchange and
communication at an early stage pay off later as the system unfolds. Ongoing dialogue at all
levels helps reaffirm commitment, answer questions that come up, and identify areas for
improvement throughout implementation. Recall that one of the principles of high-performance
work systems is sharing information. This principle is instrumental to success both during
implementation and once the system is in place.

Involving the Union
The autocratic styles of management and confrontational approaches to labor negotiations are
being challenged by more enlightened approaches that promote cooperation and collaboration.
Given the sometimes radical changes involved in implementing high-performance work systems,
it makes good sense to involve union members early and to keep them as close partners in the
design and implementation process.

To establish an alliance, managers and labour representatives should try to create “win–win”
situations, in which all parties gain from the implementation of high-performance work systems.
In such cases, organizations such as Shell and Weyerhaeuser have found that “interest-based”
(integrative) negotiation rather than positional bargaining leads to better relationships and
outcomes with union representatives.

Trust is a fragile component of an alliance and is reflected in the degree to which parties are
comfortable sharing information and decision making. Manitoba Telecom Services has involved
union members in decisions about work practices, and because of this, company managers have
been able to build mutual trust and respect with the union. This relationship has matured to a
point at which union and company managers now design, select, and implement new
technologies together. By working hard to develop trust up front, in either a union or a
nonunion setting, it is more likely that each party will understand how high-performance work
systems will benefit everyone; the organization will be more competitive, employees will have a
higher quality of work life, and unions will have a stronger role in representing employees.

Most labor–management alliances are made legitimate through some tangible symbol of
commitment. This might include a policy document that spells out union involvement, letters of
understanding, clauses in a collective bargaining agreement, or the establishment of joint
forums with explicit mandates. MacMillan Bloedel, a Canadian wood product company now
owned by Weyerhaeuser, formed a joint operations committee of senior management and labor
representatives to routinely discuss a wide range of operational issues related to high-
performance work systems. These types of formal commitments, with investments of tangible
resources, serve two purposes: (1) They are an outward sign of management commitment, and
(2) they institutionalize the relationship so that it keeps going even if key project champions
leave.

In addition to union leadership, it is critical to have the support of other key constituents.
Leaders must ensure that understanding and support are solid at all levels, not just among those
in the executive suite. To achieve this commitment, some organizations have decentralized the
labor relations function, giving responsibility to local line managers and human resources
generalists, to make certain that they are accountable and are committed to nurturing a high-
performance work environment. Nortel Networks, for example, formally transferred
accountability for labor relations to its plant managers through its collective bargaining
agreement with the union. Line managers became members of the Employee Relations Council,
which is responsible for local bargaining as well as for grievance hearings that would formerly
have been mediated by HR. Apart from the commitment that these changes create, perhaps the
most important reason for giving line managers responsibility for employee relations is that it
helps them establish a direct working relationship with the union.
Once processes, agreements, and ground rules are established, they are vital to the integrity of
the relationship. As Ruth Wright, manager of the Council for Senior Human Resource Executives,
puts it, “Procedure is the ‘rug’ on which alliances stand. Pull it out by making a unilateral
management determination or otherwise changing the rules of the game, and the initiative will
falter. Procedure keeps the parties focused, and it is an effective means of ensuring that
democracy and fairness prevail.” In most cases, a “home-grown” process works better than one
that is adopted from elsewhere. Each organization has unique circumstances, and parties are
more likely to commit to procedures they create and own.

13.7 NAVIGATING THE TRANSITION TO HIGH-PERFORMANCE WORK SYSTEMS
Building commitment to high-performance work systems is an ongoing activity. Perhaps, in fact, it is
never fully completed. And as in any change activity, performance frequently falters as
implementation gets under way. One reason is that pieces of the system are changed incrementally
rather than as a total program. Xerox Corporation found that when it implemented teams without
also changing the compensation system to support teamwork, it got caught in a bad transition. The
teams showed poorer performance than did employees working in settings that supported
individual contributions. Company executives concluded that they needed to change the entire
system at once, because piecemeal changes were detrimental. The other mistake organizations
often make is to focus on either top-down change driven by executives or bottom-up change
cultivated by the employees. Firms such as Champion International, now a part of International
Paper, and ASDA, a low-cost British retailer, are among the many companies that have found that
the best results occur when managers and employees work together. The top-down approach
communicates manager support and clarity, while the bottom-up approach ensures employee
acceptance and commitment.

Building a Transition Structure


Implementation of high-performance work systems proceeds in different ways for different
organizations. In organizational start-ups, managers have the advantage of being able to put
everything in place at once. However, when organizations must be retrofitted, the process may
occur a bit more clumsily. When Honeywell switched to high-performance work systems,
employees attended training programs and participated in the redesign of their jobs while the
plant was shut down to be re-equipped with new technology. When the new plant was
reopened, self-managing teams were put in place and a new pay system was implemented for
the high-performance workforce. Not every organization has the luxury of suspending
operations while changes are put in place. Nevertheless, establishing an implementation
structure keeps everyone on track and prevents the system from bogging down. The structure
provides a timetable and process for mapping key business processes, redesigning work, and
training employees.

Evaluating the Success of the System


Once high-performance work systems are in place, they need to be monitored and evaluated
over time. Several aspects of the review process should be addressed. First, there should be a
process audit to determine whether the system has been implemented as it was designed and
whether the principles of high-performance work systems are being reinforced. Questions such
as the following might be included in the audit:
 Are employees actually working together, or is the term “team” just a label?
 Are employees getting the information they need to make empowered decisions?
 Are training programs developing the knowledge and skills employees need?
 Are employees being rewarded for good performance and useful suggestions?
 Are employees treated fairly so that power differences are minimal?

Second, the evaluation process should focus on the goals of high-performance work systems. To
determine whether the program is succeeding, managers should look at such issues as the
following:
 Are desired behaviors being exhibited on the job?
 Are quality, productivity, flexibility, and customer service objectives being met?
 Are quality-of-life goals being achieved for employees?
 Is the organization more competitive than in the past?

Finally, high-performance work systems should be periodically evaluated in terms of new
organizational priorities and initiatives. Because high-performance work systems are built on key
business processes that deliver value to customers, as these processes and customer
relationships change so too should the work system. The advantage of high-performance work
systems is that they are flexible and, therefore, more easily adapted. When change occurs, it
should be guided by a clear understanding of the business needs and exhibit a close vertical fit to
strategy.

13.8 OUTCOMES OF HIGH-PERFORMANCE WORK SYSTEMS

Organizations achieve a wide variety of outcomes from high-performance work systems and
effective human resources management. We have categorized these outcomes in terms of either
employee concerns such as quality-of-work-life issues and job security or competitive challenges
such as performance, productivity, and profitability. Throughout the text we have emphasized that
the best organizations find ways to achieve a balance between these two sets of outcomes and
pursue activities that improve both.

There are a myriad of potential benefits to employees from high-performance work systems. In high-
performing workplaces, employees have the latitude to decide how to achieve their goals. In a
learning environment, people can take risks, generate new ideas, and make mistakes, which in turn
lead to new products, services, and markets. Because employees are more involved in their work,
they are likely to be more satisfied and find that their needs for growth are more fully met. Because
they are more informed and empowered, they are likely to feel that they have a fuller role to play in
the organization and that their opinions and expertise are valued more. This of course underlies
greater commitment. With higher skills and greater potential for contribution, they are likely to have
more job security as well as be more marketable to other organizations.

If employees with advanced education are to achieve their potential, they must be allowed to utilize
their skills and abilities in ways that contribute to organizational success while fulfilling personal job
growth and work satisfaction needs. High-performance work systems serve to mesh organizational
objectives with employee contributions. Conversely, when employees are underutilized,
organizations operate at less than full performance, while employees develop poor work attitudes
and habits.

Several organizational outcomes also result from using high-performance work systems. These
include higher productivity, lower costs, better responsiveness to customers, greater flexibility,
and higher profitability. The exhibit below provides a sample of the success stories that
American companies have shared about their use of high-performance work systems.

Exhibit - The Impact of High-Performance Work Systems


• Ames Rubber Corporation, a New Jersey-based manufacturer of rubber products and office
machine components, experienced a 48-percent increase in productivity and five straight
years of revenue growth.
• Sales at Connor Manufacturing Services, a San Francisco firm, grew by 21 percent, while new
orders rose 34 percent and the company’s profit on operations increased 21 percent to a
record level.
• Over a seven-year period, Granite Rock, a construction material and mining company in
Watsonville, California, experienced an 88-percent increase in market share, its standard for
on-time delivery grew from 68 to 95 percent, and revenue per employee was 30 percent
above the national average.
• At One Valley Bank of Clarksburg, West Virginia, employee turnover dropped by 48 percent,
productivity increased by 24 percent, return on equity grew 72 percent, and profits jumped
by 109 percent in three years.
• The Tennessee Eastman Division of the Eastman Chemical Company experienced an increase
in productivity of nearly 70 percent, and 75 percent of its customers ranked it as the top
chemical company in customer satisfaction.
• A study by John Paul MacDuffie of 62 automobile plants showed that those implementing
high-performance work systems had 47 percent better quality and 43 percent better
productivity.
• A study by Jeff Arthur of 30 steel minimills showed a 34-percent increase in
productivity, 63 percent less scrap, and 57 percent less turnover.
• A study by Mark Huselid of 962 firms in multiple industries showed that high-performance
work systems resulted in an annual increase in profits of more than $3,800 per employee.
(Source: Martha A. Gephart and Mark E. Van Buren, “The Power of High Performance Work Systems,”
Training & Development 50, no. 10 (October 1996): 21–36.)
Organizations can create a sustainable competitive advantage through people if they focus on four
criteria. They must develop competencies in their employees that have the following qualities:
Valuable: High-performance work systems increase value by establishing ways to increase
efficiency, decrease costs, improve processes, and provide something unique to customers.

Rare: High-performance work systems help organizations develop and harness skills, knowledge,
and abilities that are not equally available to all organizations.

Difficult to imitate: High-performance work systems are designed around team processes
and capabilities that cannot be transported, duplicated, or copied by rival firms.

Organized: High-performance work systems combine the talents of employees and rapidly
deploy them in new assignments with maximum flexibility

These criteria clearly show how high-performance work systems, and human resources management
in general, are instrumental in achieving competitive advantage through people. However, for all
their potential, implementing high-performance work systems is not an easy task. The systems are
complex and require a good deal of close partnering among executives, line managers, HR
professionals, union representatives, and employees. Ironically, this very complexity leads to
competitive advantage. Because high-performance work systems are difficult to implement,
successful organizations are difficult to copy. The ability to integrate business and employee
concerns is indeed rare, and doing it in a way that adds value to customers is especially noteworthy.
Organizations such as Wal-Mart, Microsoft, and Southwest Airlines have been able to do it, and as a
result they enjoy a competitive advantage

13.9 SUMMARY

High-performance work systems are specific combinations of HR practices, work structures, and
processes that maximize employee knowledge, skill, commitment, and flexibility. They are based on
contemporary principles of high-involvement organizations. These principles include shared
information, knowledge development, performance–reward linkages, and egalitarianism.

High-performance work systems are composed of several interrelated components. Typically, the
system begins with designing empowered work teams to carry out key business processes. Team
members are selected and trained in technical, problem-solving, and interpersonal skills. To align the
interests of employees with those of the organization, reward systems are connected to
performance and often have group and organizational incentives. Skill-based pay is regularly used to
increase flexibility and salaried pay plans are used to enhance an egalitarian environment.
Leadership tends to be shared among team members, and information technology is used to ensure
that employees have the information they need to make timely and productive decisions.

The pieces of the system are important only in terms of how they help the entire system function.
When all the pieces support and complement one another, high-performance work systems achieve
internal fit. When the system is aligned with the competitive priorities of the organization, it
achieves external fit as well. Implementing high-performance work systems represents a
multidimensional change initiative. High-performance work systems are much more likely to go
smoothly if a business case is first made. Top-management support is critical, and so too is the
support of union representatives and other important constituents. HR representatives are often
helpful in establishing a transition structure to help the implementation progress through its various
stages. Once the system is in place, it should be evaluated in terms of its processes, outcomes, and
ongoing fit with strategic objectives of the organization. When implemented effectively, high-
performance work systems benefit both the employees and the organization. Employees have more
involvement in the organization, experience growth and satisfaction, and become more valuable as
contributors. The organization also benefits from high productivity, quality, flexibility, and customer
satisfaction. These features together can provide an organization with a sustainable competitive
advantage.
Case Study-1
HPWS Transforms Nevada Plant into One in a Million
In the world of beverage plants, this milestone at Ocean Spray’s Henderson, Nevada, plant is
virtually unheard of—1 million operating hours without a lost-time accident. “This is an
accomplishment that very few in our industry ever achieve,” says Mike Stamatakos,
vice-president of operations for Ocean Spray. “A plant’s safety record is a reflection of how well it
is run. This milestone is an indication that Henderson does most—if not everything—well.”
The fact that Ocean Spray Henderson is one of the safest beverage plants around is no
accident. The plant’s impressive operations milestone is the result of a two-and-a-half-year
effort to improve safety awareness, uptime, and overall operations.

When the plant was built in 1994 to serve Ocean Spray customers west of the Rockies,
Ocean Spray had a vision to create a high-performance work system. The goal was to have
an educated and involved workforce that would raise the bar in terms of plant performance
and operations. As part of that effort, in 2001 Henderson managers began a dedicated
environmental health and safety (EHS) program. An early step in the process was bringing in
an occupational therapist to perform a job safety analysis on the plant. The EHS program
ranges from formal computer-based training—required of every employee—to fun
promotions designed to get employees engaged with the safety message. The Ocean Spray
Henderson plant staff is divided into four teams and each is measured on just how well it
performs. A bulletin board posts each team’s days without a recordable accident. A real-
time scoreboard on the plant floor provides workers a streaming update of the plant’s vital
performance statistics. The idea is that an informed worker is a stronger team member. The
plant operates on a just-in-time delivery and shipment schedule that helps keep things
running on time and within budget. Reaching the 1-million-hour milestone was a 25-year
journey. “It’s not just a case of the people in the front office talking the talk. It is the people
on the floor and everyone in the facility walking the walk,” says Jim Colmey, a safety
specialist at the plant.
QUESTIONS
1. What are the key aspects of Ocean Spray’s high-performance work system?
2. Do you think the system achieves both internal and external fit?
3. What other HR practices might the company consider implementing?
Source: Condensed from Andrea Foote, “One in a Million: Ocean Spray Henderson Has Parlayed Hard
Work and Dedication into a Remarkable Operations Milestone,” Beverage World 122, no. 8 (August
15, 2003): 22–29.

13.10 KEYWORDS
High-Performance Work System (HPWS) - a combination of HR practices, work structures, and
processes that maximizes employee knowledge, skill, commitment, and flexibility

Transition structure - the structure an organization adopts while it is moving from one state of
development to another

Labor structure - the way labor is distributed across the facility

Egalitarianism - a trend of thought that favors equality for particular categories of, or for all, living
entities
UNIT 14 WORK STATION STUDY AND ERGONOMICS
Objectives
After going through this unit, you will be able to:
 design the workplace according to nature of work.
 prepare a checklist for job design assessment.
 estimate the normal working requirements with respect to area and tools.
 recognize the symptoms of occupational overuse syndrome.
 use the tools and diagrams at appropriate place.
 apply ergonomics in each corner of your workplace.
Structure
14.1 Introduction
14.2 Method Study
14.3 Charts and Diagram used in Method study
14.4 Occupational Overuse Syndrome
14.5 Job Design
14.6 Work Measurement
14.7 Summary
14.8 Keywords

14.1 INTRODUCTION

Productivity has now become an everyday watchword. It is crucial to the welfare of an industrial
firm as well as to the economic progress of a country. High productivity means doing the work in
the shortest possible time with the least expenditure on inputs, without sacrificing quality and
with minimum wastage of resources.

Work-study forms the basis for work system design. The purpose of work design is to identify the
most effective means of achieving necessary functions. Work-study aims at improving existing and
proposed ways of doing work and at establishing standard times for work performance. It
encompasses two techniques: method study and work measurement.

“Method study is the systematic recording and critical examination of existing and proposed ways of
doing work, as a means of developing and applying easier and more effective methods and reducing
costs.”
“Work measurement is the application of techniques designed to establish the time for a qualified
worker to carry out a specified job at a defined level of performance.”

There is a close link between method study and work measurement. Method study is concerned
with the reduction of the work content and establishing the one best way of doing the job whereas
work measurement is concerned with investigation and reduction of any ineffective time associated
with the job and establishing time standards for an operation carried out as per the standard
method.
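The distinction above rests on a standard piece of work-measurement arithmetic: the observed time is adjusted by a performance rating to give the normal time, and allowances are added to give the standard time. The sketch below is a minimal illustration only; the figures and percentage conventions are assumptions for demonstration, not values taken from this unit.

```python
# Illustrative sketch of standard work-measurement arithmetic.
# All figures (observed time, rating, allowance) are hypothetical.

def normal_time(observed_time_min: float, rating_percent: float) -> float:
    """Normal time = observed time x (observed rating / standard rating of 100)."""
    return observed_time_min * rating_percent / 100.0

def standard_time(normal_time_min: float, allowance_percent: float) -> float:
    """Standard time = normal time plus allowances (rest, personal, contingency)."""
    return normal_time_min * (1 + allowance_percent / 100.0)

if __name__ == "__main__":
    nt = normal_time(observed_time_min=0.80, rating_percent=110)  # 0.88 min
    st = standard_time(nt, allowance_percent=15)                  # about 1.012 min
    print(f"Normal time: {nt:.3f} min, Standard time: {st:.3f} min")
```

A worker rated above standard pace (110) yields a normal time higher than the observed time; the allowance then inflates it further to a defensible standard.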

14.2 METHOD STUDY

Method study is the technique of systematic recording and critical examination of existing and
proposed ways of doing work and developing an easier and economical method.
Objectives of Method Study
 Improvement of manufacturing processes and procedures.
 Improvement of working conditions.
 Improvement of plant layout and work place layout.
 Reducing the human effort and fatigue.
 Reducing material handling
 Improvement of plant and equipment design.
 Improvement in the utility of material, machines and manpower.
 Standardization of method.
 Improvement in safety standard.

Basic Procedure for Method Study


The basic procedure for conducting method study is as follows:
 Select the work to be studied.
 Record all facts about the method by direct observation.
 Examine the above facts critically.
 Develop the most efficient and economic method.
 Define the new method.
 Install the new method
 Maintain the new method by regular checking.
Select
While selecting a job for doing method study, the following factors are considered:
 Economic factors
 Human factors
 Technical factors
Economic Factors
The money saved as a result of the method study should be large enough to justify its cost; only
then is the study worthwhile. Based on economic factors, the following jobs are generally selected:
 Operations having bottlenecks (which hold up other production activities).
 Operations done repetitively.
 Operations having a great amount of manual work.
 Operations where materials are moved for a long distance.

Human Factors
The method study will be successful only with the co-operation of all the people concerned, viz.,
workers, supervisors, trade unions, etc. Workers may resist method study due to:
 The fear of unemployment
 The fear of reduction in wages
 The fear of increased work load
If workers do not accept the method study, it should be postponed.

Technical Factors
To improve the method of work all the technical details about the job should be available.
Every machine tool will have its own capacity. Beyond this, it cannot be improved. For
example, a work study man feels that speed of the machine tool may be increased and HSS
tool may be used. But the capacity of the machine may not permit increased speed. In this
case, the suggestion of the work study man cannot be implemented. These types of
technical factors should be considered.
Record
All the details about the existing method are recorded. This is done by directly observing the
work. Symbols are used to represent the activities like operation, inspection, transport, storage
and delay. Different charts and diagrams are used in recording. They are:
 Operation process chart: All the operations and inspections are recorded.
 Flow process chart
- Man type: all the activities of the worker are recorded
- Material type: all the activities of the material are recorded
- Equipment type: all the activities of the equipment or machine are recorded
 Two-handed (Right hand-Left hand) process chart: motions of both hands of the worker
are recorded independently.
 Multiple activity charts: Activities of a group of workers doing a single job or the
activities of a single worker operating several machines are recorded.
 Flow diagram: This is drawn to suitable scale. Path of flow of material in the shop is
recorded.
 String diagram: The movements of workers are recorded using a string in a diagram
drawn to scale.
Examine
Critical examination is done by a questioning technique. This step comes after the method has been
recorded using suitable charts and diagrams. Each individual activity is examined by putting several
questions. The following factors are questioned:
 Purpose – To eliminate the activity, if possible
 Place – To combine or re-arrange the activities
 Sequence -do-
 Person -do-
 Means – To simplify the activity

The following sequence of questions is used:


 Purpose – What is done?
- Why is it done?
- What else could be done?
- What should be done?
 Place – Where is it being done?
- Why is it done there?
- Where else could it be done?
- Where should it be done?
 Sequence – When is it done?
- Why is it done then?
- When else could it be done?
- When should it be done?
 Person – Who is doing it?
- Why does that person do it?
- Who else could do it?
- Who should do it?
 Means – How is it done?
- Why is it done that way?
- How else could it be done?
- How should it be done?
 By doing this questioning
- Unwanted activities can be eliminated
- Number of activities can be combined or re-arranged
- Method can be simplified.

All these will reduce production time.
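The questioning sequence lends itself to a simple checklist structure. The sketch below is an illustrative encoding only (it is not part of the standard method-study literature); it generates the full primary and secondary question set for any recorded activity.

```python
# Illustrative encoding of the critical-examination questioning technique.
# Each factor carries its primary and secondary questions in order.
QUESTIONS = {
    "Purpose":  ["What is done?", "Why is it done?",
                 "What else could be done?", "What should be done?"],
    "Place":    ["Where is it done?", "Why is it done there?",
                 "Where else could it be done?", "Where should it be done?"],
    "Sequence": ["When is it done?", "Why is it done then?",
                 "When else could it be done?", "When should it be done?"],
    "Person":   ["Who does it?", "Why does that person do it?",
                 "Who else could do it?", "Who should do it?"],
    "Means":    ["How is it done?", "Why is it done that way?",
                 "How else could it be done?", "How should it be done?"],
}

def examine(activity: str) -> None:
    """Print the full questioning sequence for one recorded activity."""
    print(f"Critical examination of: {activity}")
    for factor, questions in QUESTIONS.items():
        for q in questions:
            print(f"  {factor}: {q}")

examine("Move part to inspection bench")  # hypothetical activity
```

Walking an activity through all five factors in order is what surfaces opportunities to eliminate, combine, re-arrange or simplify it.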


Develop
The answer to the questions given below will result in the development of a better method.
 Purpose – What should be done?
 Place – Where should it be done?
 Sequence – When should it be done?
 Person – Who should do it?
 Means – How should it be done?

Define
Once a complete study of a job has been made and a new method is developed, it is necessary
to obtain the approval of the management before installing it. The work study man should
prepare a report giving details of the existing and proposed methods. He should give his reasons
for the changes suggested. The report should show:
 Brief description of the old method
 Brief description of the new method
 Reasons for change
 Advantages and limitations of the new method
 Savings expected in material, labor, and overheads
 Tools and equipment required for the new method
 The cost of installing the new method including
- Cost of new tools and equipment
- Cost of re-layout of the shop
- Cost of training the workers in the new method
- Cost of improving the working conditions

Written standard practice: Before installing the new method, an operator’s instruction sheet
called a written standard practice is prepared. It serves the following purposes:
 It records the improved method for future reference in as much detail as may be
necessary.
 It is used to explain the new method to the management, foremen and
operators.
 It gives the details of changes required in the layout of machine and work places.
 It is used as an aid to training or retraining operators.
 It forms the basis for time studies.
The written standard practice will contain the following information:
 Tools and equipment to be used in the new method.
 General operating conditions.
 Description of the new method in detail.
 Diagram of the workplace layout and sketches of special tools, jigs or fixtures
required.

Install
This step is the most difficult stage in method study. Here the active support of both
management and trade union is required. Here the work study man requires skill in getting along
with other people and winning their trust. The install stage consists of:
 Gaining acceptance of the change by the supervisor.
 Getting approval of management.
 Gaining the acceptance of change by workers and trade unions.
 Giving training to operators in the new method.
 To be in close contact with the progress of the job until it is satisfactorily executed.
Maintain
The work study man must see that the new method introduced is followed. The workers after
some time may slip back to the old methods. This should not be allowed. The new method may
have defects. There may be difficulties also. This should be rectified in time by the work study
man. Periodical review is made. The reactions and suggestions from workers and supervisors are
noted. This may lead to further improvement. The differences between the new written
standard practice and the actual practice are found out. Reasons for variations are analyzed.
Changes due to valid reasons are accepted. The instructions are suitably modified.

14.3 CHARTS AND DIAGRAMS USED IN METHOD STUDY (TOOLS & TECHNIQUES)
As explained earlier, the following charts and diagrams are used in method study.
 Operation process chart (or outline process chart)
 Flow process chart
- Material type
- Operator type
- Equipment type
 Two-handed process chart (or Left hand-Right hand chart)
 Multiple activity chart
 Flow diagram
 String diagram

Process Chart Symbols


The recording of the facts about the job in a process chart is done using standard symbols. Using
symbols to record the activities is much easier than writing down the facts about the job. Symbols
are a very convenient and widely understood form of shorthand. They save a lot of writing and
indicate clearly what is happening.

Operation
A large circle indicates operation. An operation takes place when there is a change in
physical or chemical characteristics of an object. An assembly or disassembly is also an
operation.
When information is given or received, or when planning or calculating takes place, it is
also called an operation.
Example 1.1
Reducing the diameter of an object on a lathe; hardening the surface of an object by
heat treatment.
Inspection
A square indicates inspection. Inspection is checking an object for its quality, quantity or
identification.
Example 1.2
Checking the diameter of a rod, Counting the number of products produced.

Transport
An arrow indicates transport. This refers to the movement of an object or operator or
equipment from one place to another. When the movement takes place during an
operation, it is not called transport.
Example 1.3
Moving the material by a trolley
Operator going to the stores to get some tool.

Delay or temporary storage


A large capital letter D indicates delay. This is also called temporary storage. Delay
occurs when an object or operator is waiting for the next activity.
Example 1.4
An operator waiting to get a tool in the stores, Work pieces stocked near the
machine before the next operation.

Permanent storage
An equilateral triangle standing on its vertex represents storage. Storage takes place
when an object is stored and protected against unauthorized removal.
Example 1.5
Raw material in the store room

Combined activity
When two activities take place at the same time, or are done by the same operator at the
same place, the two symbols for the activities are combined.

Example 1.6
Reading and recording a pressure gauge. Here a circle inside a square represents the
combined activity of operation and inspection.

Figure: Process chart symbols
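A recorded flow process chart is usually condensed into a summary showing how many of each activity type occur and the total distance the material moves. The sketch below is a minimal illustration of that tally; the chart entries are hypothetical.

```python
# Minimal sketch of summarizing a recorded flow process chart (material type).
# The entries below are hypothetical examples, not from the text.
from collections import Counter

# Each entry: (activity type, description, distance moved in metres)
chart = [
    ("operation",  "Turn diameter on lathe",        0),
    ("transport",  "Move part to inspection bench", 12),
    ("inspection", "Check diameter with gauge",     0),
    ("delay",      "Await next operation",          0),
    ("transport",  "Move part to stores",           30),
    ("storage",    "Place in store room",           0),
]

counts = Counter(activity for activity, _, _ in chart)
total_distance = sum(dist for _, _, dist in chart)

for activity in ("operation", "inspection", "transport", "delay", "storage"):
    print(f"{activity:<10} {counts.get(activity, 0)}")
print(f"Total distance moved: {total_distance} m")
```

Comparing such summaries before and after a method change makes the improvement visible at a glance: fewer transports and delays, and a shorter total distance moved.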

14.4 OCCUPATIONAL OVERUSE SYNDROME

The condition Occupational Overuse Syndrome (OOS) is a collective term for a range of conditions,
including injury, characterized by discomfort or persistent pain in muscles, tendons and other soft
tissues.

The Symptoms
It is necessary to distinguish the symptoms of OOS from the normal pains of living, such as
muscle soreness after unaccustomed exercise or activity. OOS pains must also be distinguished
from the pain of arthritis or some other condition. The early symptoms of OOS include:
• Muscle discomfort
• Fatigue
• Aches and pains
• Soreness
• Hot and cold feelings
• Muscle tightness
• Numbness and tingling
• Stiffness
• Muscle weakness.
The Causes
OOS often develops over a period. It is usually caused or aggravated by some types of work. The
same conditions can be produced by activities away from the workplace. The work that may
produce OOS often involves repetitive movement, sustained or constrained postures and/or
forceful movements. The development of OOS may include other factors such as stress and
working conditions. Some conditions that fall within the scope of OOS are well defined and
understood medically, but many are not, and the reasons for their cause and development are
yet to be determined. There are several theories about the causes of OOS. One of these is as
follows and gives a useful picture which leads on to prevention strategies:

Muscles and tendons are nourished by blood which travels through blood vessels inside the
muscle. A tense muscle squeezes on these blood vessels, making them smaller and slowing
the flow of blood. The muscle can store a little oxygen to cope with momentary tension, but
when this is used up the muscle must switch to a very inefficient form of energy production.
This uses the stored energy very quickly, tires the muscle, and leads to a build-up of acid
waste products, which make the muscle hurt. As these wastes build up in the muscle, it
becomes mechanically stiff and this makes it still harder for the muscle to work. The muscle
and tendons can withstand fatigue and are able to recover if they are given a variety of
tasks, and regular rest breaks. It may be the absence of variety and rest breaks that strains
the muscles and tendons beyond their capacity for short-term recovery.
Absence of variety and rest breaks may strain muscles and tendons beyond their capacity for
short-term recovery.
(Source: Occupational Safety and Health Service of the Department of Labor, New Zealand)

Occupational Overuse Syndrome has been known for centuries and affects people in a wide range
of occupations.
(Source: Occupational Safety and Health Service of the Department of Labor, New Zealand)

The People Affected


Occupational Overuse Syndrome can affect people in a wide variety of occupations, including
the following:
• Process workers
• Cleaners
• Machinists
• Kitchen workers
• Keyboard operators
• Clerks
• Meat workers
• Knitters
• Potters
• Musicians
• Carpet layers
• Painters
• Shearers
• Hairdressers
• Typists
• Mail sorters
• Supermarket workers
• Carpenters
History shows that OOS has occurred in a variety of occupations. An Italian physician described
OOS in 18th-century scribes and clerks, while 19th-century terms such as
"Upholsterer's Hand" and "Fisherwoman's Finger" are further examples of OOS.

Prevention
There are five main areas in which we can prevent Occupational Overuse Syndrome. These are:
• The design of equipment and tasks
• The organization of work
• The work environment
• Training and education
• The development of policies

Prevention is always better than cure, but it is particularly important when dealing with OOS.
This is because of its widespread nature, the difficulty of treatment and its potentially
debilitating consequences. A sample checklist (see Appendix B) is one of a series of seven
covering policy development for OOS, work organization, workplace design, keyboard
workstation design and working technique. It illustrates how the above aspects of work may be
assessed using such checklists.

Design of Equipment and Tasks


Equipment must be designed to accommodate people of differing sizes. Professional help
may be needed, for example from an ergonomist. Tools and equipment should be designed so
that operators can avoid having to hold tense and undesirable postures. The design should allow
the operator’s joints to be comfortable and free from strain. User groups should be consulted at
all stages of equipment purchase and design. Work should allow a person to carry out a variety
of tasks within a single job. This should allow for variations in people’s postures and the way
they use their muscles. Where a task requires a sustained period of repetitive or static activity,
the task should incorporate rest breaks and micro pauses.
14.5 JOB DESIGN

Where possible, job rotation, automation and task modification should be considered to reduce the
effects of sustained postures and repetitive movements. Good job design will incorporate a range of
factors, including consultation with the worker, so that there is a match between the individual and
the job.
A good physical match between the person and the workplace is illustrated by:
• The head inclines only slightly forward.
• The arms fall naturally on to the work surface.
• The back is properly supported.
• There is good knee and leg room.

Good workstation design allows the operator’s joints to be comfortable and free from strain
(Source: Occupational Safety and Health Service of the Department of Labor, New Zealand)

14.6 WORK MEASUREMENT

Work measurement is a technique to establish the time required for a qualified worker to carry out a
specified job at a defined level of performance.
Objectives of work measurement
 To reduce or eliminate non-productive time.
 To fix the standard time for doing a job.
 To develop standard data for future reference.
 To improve methods.
Uses of work measurements
 To compare the efficiency of alternate methods. When two or more methods are
available for doing the same job, the time for each method is found out by work
measurement. The method which takes minimum time is selected.

 Standard time is used as a basis for wage incentive schemes.

 It helps for the estimation of cost. Knowing the time standards, it is possible to work out
the cost of the product. This helps to quote rates for tenders.

 It helps to plan the workload of man and machine.

 It helps to determine the requirement of men and machine. When we know the time to
produce one piece and the quantity to be produced, it is easy to calculate the total
requirement of men and machines.

 It helps in better production control. Time standards help accurate scheduling. So, the
production control can be done efficiently.

 It helps to control the cost of production. With the help of time standards, the cost of
production can be worked out. This cost is used as a basis for control.

 It helps to fix the delivery date to the customer. By knowing the standard time, we will
be able to calculate the time required for manufacturing the required quantity of
products.
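The men-and-machines calculation mentioned above can be illustrated with a short sketch. The figures below are hypothetical, chosen only for illustration:

```python
import math

# Hypothetical figures, for illustration only.
std_time_per_piece_min = 4.0        # standard time per piece (minutes)
quantity_required = 6000            # pieces required in the period
# One 8-hour shift per day over 20 working days, per machine:
available_min_per_machine = 8 * 60 * 20

total_work_min = std_time_per_piece_min * quantity_required
# Round up: a fractional machine means one more machine is needed.
machines_needed = math.ceil(total_work_min / available_min_per_machine)

print(machines_needed)  # 24000 min of work / 9600 min per machine -> 3 machines
```

The same division, with man-minutes available per worker in the denominator, gives the number of workers required.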

Techniques of work measurement


The different techniques used in work measurement are
 Stop watch time study.
 Production study.
 Work sampling or Ratio delay study.
 Synthesis from standard data.
 Analytical estimating.
 Predetermined motion time system.
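For stop-watch time study, the first technique listed, the standard time is commonly built up from the observed time by applying a performance rating and then adding allowances. A minimal sketch with hypothetical numbers:

```python
def standard_time(observed_time, rating_percent, allowance_fraction):
    """Level the observed time by the performance rating to get the
    normal time, then add allowances (rest, personal needs,
    contingencies) to get the standard time."""
    normal_time = observed_time * rating_percent / 100.0
    return normal_time * (1.0 + allowance_fraction)

# Hypothetical example: 0.80 min observed, operator rated at 110%,
# 15% total allowances.
st = standard_time(0.80, 110, 0.15)
print(round(st, 3))  # normal time 0.88 min -> standard time 1.012 min
```

This standard time is the figure used in the wage-incentive, costing, and scheduling applications listed earlier.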

Work Place Ergonomics


'Ergon' means 'work' and 'nomos' means 'natural laws'. Ergonomics, or its American equivalent
'Human Engineering', may be defined as the scientific study of the relationship between man and
his working environment. Ergonomics implies 'fitting the job to the worker'. Ergonomics
combines the knowledge of the psychologist, physiologist, anatomist, engineer, anthropologist,
and biometrician.

Objectives
The objectives of the study of ergonomics are to optimize the integration of man and machine to
increase work rate and accuracy. It involves
 The design of a work place befitting the needs and requirements of the worker.
 The design of equipment, machinery, and controls in such a manner as to minimize
mental and physical strain on the worker, thereby increasing efficiency.
 The design of a conducive environment for executing the task most effectively.

Work study and ergonomics are complementary; both try to fit the job to the worker.
Ergonomics, however, additionally takes care of factors governing physical and mental strain.

Applications
In practice, ergonomics has been applied to several areas as discussed below
 Working environments
 The work places
 Other areas

Working environments
 The environment aspect includes considerations regarding light, climatic conditions (i.e.,
temperature, humidity and fresh air circulation), noise, bad odor, smokes, fumes, etc.,
which affect the health and efficiency of a worker.

 Day light should be reinforced with artificial lights, depending upon the nature of work.

 The environment should be well-ventilated and comfortable.

 Dust and fume collectors should preferably be attached to the equipment giving rise
to them.

 Glares and reflections coming from glazed and polished surfaces should be avoided.

 For better perception, different parts or sub-systems of equipment should be colored
suitably. Colors also add to the sense of pleasure.

 Excessive contrast, owing to color schemes or badly located windows, etc., should be avoided.

 Noise no doubt distracts attention, but if it is low and continuous, workers
become habituated to it. When the noise is high-pitched, intermittent, or
sudden, it is more harmful and needs to be dampened by isolating the source of noise
and through the use of sound-absorbing materials.

Work place layout


Design considerations
 Materials and tools should be available at their predetermined places and close to the
worker.

 Tools and materials should preferably be in the order in which they will be used.

 The supply of materials or parts, if similar work is to be done by each hand, should be
duplicated. That is, materials or parts to be assembled by the right hand should be kept on
the right-hand side and those to be assembled by the left hand should be kept on the
left-hand side.
 Gravity should be employed, wherever possible, to make raw materials reach the
operator and to deliver material at its destination (e.g., dropping material through a
chute).

 Height of the chair and work bench should be arranged in a way that permits
comfortable work posture. To ensure this:
- Height of the chair should be such that top of the work table is about 50 mm
below the elbow level of the operator.
- Height of the table should be such that worker can work in both standing and
sitting positions.
- Flat foot rests should be provided for sitting workers.
- Figure 14.5 shows the situation with respect to bench heights and seat heights.
- The height and back of the chair should be adjustable.
- Display panel should be at right angles to the line of sight of the operator.

 An instrument with a pointer should be employed for check readings, whereas for
quantitative readings a digital type of instrument should be preferred.

 Hand tools should be possible to be picked up with the least disturbance to rhythm and
symmetry of movements.

 Foot pedals should be used, wherever possible, for clamping, de-clamping, and for
disposal of finished work.
- Handles, levers, and foot pedals should be possible to be operated without
changing body position.

 Work place must be properly illuminated and free from glare to avoid eye strain.

 Work place should be free from the presence of disagreeable elements like heat,
smoke, dust, noise, excess humidity, vibrations etc.
Bench and Seat Heights
(Source: Process, Planning and Cost Estimation, 2nd Edition, 2008)
Suggested Work Place Layout
Figure shows a work place layout with different areas and typical dimensions. It shows the
left hand covering the maximum working area and the right hand covering the normal
working area.

Normal working area


It is within the easy reach of the operator.

Normal Working Area


(Source: Process, Planning and Cost Estimation, 2nd Edition, 2008)
Maximum Working Area
It is accessible with full arm stretch. Figure shows work place layout for assembling small
component parts. A-1 is the actual working area and the place of assembly (POA) where four
component parts P-1, P-2, P-3, and P-4 are assembled. Bins containing P-1, P-2, P-3, and P-4
and commonly employed tools (CET) (like screw driver, pliers, etc.) lie in the normal working
area A-2.

Place Layout for an Assembly job


(Source: Process, Planning and Cost Estimation, 2nd Edition, 2008)
Occasionally Required Tools (ORT) (hammers, etc.) lie in the maximum working area, A-3.
After the assembly has been made at POA, it is dropped into the cut portion in the work
table – PDA (Place for dropping assemblies) from where the assembly is delivered at its
destination with the help of a conveyer. This work place arrangement satisfies most of the
principles of motion economy.

Other areas
Other areas include studies related to fatigue, losses caused due to fatigue, rest pauses, amount
of energy consumed, shift work and age considerations.

SAMPLE CHECKLIST
Table: Risk factors
A "No" answer can indicate an increased risk of OOS, but all factors should be considered.
(A "Yes" answer tends to decrease the risk of OOS; a "No" answer tends to increase it.)

A Task Specification:
1 Are there clear job descriptions?
2 Are there clear performance specifications?*
3 Do operators get feedback from supervisors about their performance?*
4 Do supervisors get feedback from operators about their performance?*

B Task Nature:
1 Does the operator understand what is required in the job?
2 Do operators have some control over their work flows?
3 Does the job have a variety of tasks to avoid monotony?
4 If the job lacks a variety of tasks, is there job rotation?
5 Is the job interesting to the person?*
6 Does the job structure prevent pressures on the individual from becoming too great?*
7 Is there only one supervisor for the operator?*

C Task Organization:
1 Can operators take regular breaks?
2 Can operators use the micro-pause technique?*
3 If any recent changes have been made to work/tasks, was the risk of OOS taken into
consideration?*

D Amount/Rate of Work:
1 Does the method of payment avoid systems which may increase the risk of OOS?*
2 If overtime is worked, is it organized to minimize the risk of OOS?*
3 Are deadlines organized so that workloads remain reasonable?
4 Does the job avoid boredom?

E Organizational Practices:
1 Is work monitoring for discipline purposes avoided?*
2 Is there a mechanism for dealing with seasonal volumes of work?

This checklist may be used to evaluate a particular job in an organization.

* Refers to a note below.
NOTES
A2 A performance specification removes uncertainty and gives everyone concrete goals to aim
for. For example: a 1-3 page document will be done by 4.30 pm if presented by noon, otherwise,
it will be ready by noon the next day.

A3 and A4 Positive feedback on good performance always improves morale, regardless of the
person's position. Sometimes upward feedback needs to be formalized.

B5 Where a job allows for and/or requires decision-making, creativity, and initiative, and leads
to further learning, workers are likely to be more involved.

B6 Stress is an important feature in the development of OOS.

B7 A person with two supervisors can end up having to meet conflicting deadlines.

C2 The micro-pause technique consists of using a 5-10 second complete relaxation for every
three minutes of work. In line with ergonomic theory, productivity increases may be expected
when the micro-pause technique is carried out properly. Micro-pauses are ineffective unless the
person relaxes fully during them.
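As a rough check on what the micro-pause regime costs in time, simple arithmetic on the figures in note C2 shows the pauses consume only a few percent of total time:

```python
# 5-10 s of complete relaxation for every 3 minutes (180 s) of work.
for pause_s in (5, 10):
    cycle_s = 180 + pause_s            # one work-plus-pause cycle
    share = 100 * pause_s / cycle_s    # pause share of total time
    print(f"{pause_s} s pause -> {share:.1f}% of total time")
```

So even the longer 10-second pause takes only about 5% of working time, which is consistent with the claim that productivity need not fall.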

C3 Changes which are often associated with the development of OOS are speeding up the work,
the introduction of heavier workloads, overtime or a bonus system of payment, the arrival of a
new supervisor or being assigned to new duties.

D1 Bonus systems and "job and finish" payment methods are both likely to increase the risk
of OOS because they may encourage people to work beyond their natural capacity.
D2 Overtime increases the amount of work and decreases the time for recovery.
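One simple way to apply the checklist is to count the "No" answers as a rough risk indicator, since each "No" tends to increase the risk of OOS. The sketch below uses a plain dictionary whose item names and answers are invented for illustration:

```python
# Invented example answers to a few checklist items.
answers = {
    "Clear job descriptions": "yes",
    "Clear performance specifications": "no",
    "Operators can take regular breaks": "yes",
    "Job has a variety of tasks": "no",
}

# Count the "No" answers -- each one tends to increase the risk of OOS.
no_count = sum(1 for a in answers.values() if a.lower() == "no")
print(f"{no_count} of {len(answers)} factors tend to increase the risk of OOS")
```

As the checklist itself cautions, such a count is only an indicator: all factors should be considered together, not reduced to a single score.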

14.7 SUMMARY
Ergonomics is the study of work in relation to the environment in which it is performed (the
workplace) and those who perform it (workers). Ergonomics is a broad science encompassing the
wide variety of working conditions that can affect worker comfort and health, including factors such
as lighting, noise, temperature, vibration, workstation design, tool design, machine design, chair
design and footwear, and job design, including factors such as shift work, breaks, and meal
schedules. The information in this Module will be limited to basic ergonomic principles for sitting
and standing work, tools, heavy physical work, and job design.

14.8 KEYWORDS
Method Study- Method study is the process of subjecting work to systematic, critical scrutiny to
make it more effective and/or more efficient. It is one of the keys to achieving productivity
improvement.
Flow Process Chart- The process flow chart provides a visual representation of the steps in
a process. Flow charts are also referred to as Process Mapping or Flow Diagrams.
Process Symbols- Process symbol that indicates the actions that take place or the locations where
processes occur.
Ergonomics- Ergonomics (or human factors) is the scientific discipline concerned with the
understanding of the interactions among humans and other elements of a system, and the
profession that applies theoretical principles, data and methods to design in order to optimize
human wellbeing and overall system performance.
