Operation Management Module
Chapter 1
1.1 Introduction
Today companies are competing in a very different environment than they were only a few years ago.
To survive they must focus on quality, time-based competition, efficiency, international perspectives,
and customer relationships. Global competition, e-business, the Internet, and advances in technology
require flexibility and responsiveness. This new focus has placed operations management in the
limelight of business, because it is the function through which companies can achieve this type of
competitiveness. Consider some of today’s most successful companies, such as Wal-Mart, Southwest
Airlines, General Electric, Starbucks, Toyota, FedEx, and Procter & Gamble. These companies have
achieved world-class status in large part due to a strong focus on operations management. In this
module you will learn specific tools and techniques of operations management that have helped these,
and other companies, achieve their success.
The purpose of this course is to help prepare you to be successful in this new business environment.
Operations management will give you an understanding of how to help your organization gain a
competitive advantage in the marketplace. Regardless of whether your area of expertise is marketing,
finance, MIS, or operations, the techniques and concepts in this module will help you in your
business career. The material in this module will teach you how your company or enterprise can offer
products and services cheaper, better, and faster.
A typical business organization has three major functions: finance, marketing, and operations
management. Figure 1.2.1 illustrates this by showing that the vice president of each of these
functions reports directly to the president or CEO of the company. These three functions, and other
supporting business functions—such as accounting, purchasing, human resources, and engineering—
perform different but related activities necessary for the operation of the organization. The functions
must interact to achieve goals and objectives of the organization, and each makes an important
contribution. Finance is the function responsible for managing cash flow, current assets, and capital
investments. Marketing is responsible for sales, generating customer demand, and understanding
customer wants and needs. Most of us have some idea of what finance and marketing are about, but
what does operations management do?
Figure 1.2.1. Organizational chart showing the three major business functions
Operations management (OM) is the business function that plans, organizes, coordinates, and
controls the resources needed to produce a company’s goods and services. Operations management is
a management function. It involves managing people, equipment, technology, information, and many
other resources. Operations management is the central core function of every company. This is true
whether the company is large or small, provides a physical good or a service, and is for profit or not
for profit. Every company has an operations management function. Actually, all the other
organizational functions are there primarily to support the operations function. Without operations,
there would be no goods or services to sell.
The role of operations management is to transform a company’s inputs into the finished goods or
services. Inputs include human resources (such as workers and managers), facilities and processes
(such as buildings and equipment), as well as materials, technology, and information. Outputs are the
goods and services a company produces. Figure 1.2.2 shows this transformation process.
At a factory the transformation is the physical change of raw materials into products, such as
transforming leather and rubber into sneakers, denim into jeans, or plastic into toys. At an airline it is
the efficient movement of passengers and their luggage from one location to another. At a hospital it
is organizing resources such as doctors, medical procedures, and medications to transform sick people
into healthy ones.
Operations management is responsible for orchestrating all the resources needed to produce the final
product. This includes designing the product; deciding what resources are needed; arranging
schedules, equipment, and facilities; managing inventory; controlling quality; designing the jobs to
make the product; and designing work methods. Basically, operations management is responsible for
all aspects of the process of transforming inputs into outputs. Customer feedback and performance
information are used to continually adjust the inputs, the transformation process, and characteristics
of the outputs. As shown in Figure 1.2.2, this transformation process is dynamic in order to adapt to
changes in the environment.
For operations management to be successful, it must add value during the transformation process. We
use the term value added to describe the net increase between the final value of a product and the
value of all the inputs. The greater the value added, the more productive a business is. An obvious
way to add value is to reduce the cost of activities in the transformation process. Activities that do not
add value are considered a waste; these include certain jobs, equipment, and processes. In addition to
value added, operations must be efficient. Efficiency means being able to perform activities well, and
at the lowest possible cost. An important role of operations is to analyze all activities, eliminate those
that do not add value, and restructure processes and jobs to achieve greater efficiency.
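To make the idea concrete, here is a minimal Python sketch of the value-added calculation, using input costs and a selling price that are purely hypothetical figures invented for this illustration:

    # Value added: the final value of the product minus the value of all inputs.
    # All figures below are hypothetical and chosen only to illustrate the calculation.
    input_values = {"materials": 40.0, "labor": 25.0, "overhead": 10.0}  # value of inputs, in dollars
    final_value = 100.0                                                  # market value of the finished product

    value_added = final_value - sum(input_values.values())
    print(value_added)  # 25.0 dollars of value created by the transformation process

Reducing the cost of the inputs or eliminating non-value-adding activities raises this figure, which is why waste elimination is so central to operations management.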
Organizations can be divided into two broad categories: manufacturing organizations and service
organizations, each posing unique challenges for the operations function. There are two primary
distinctions between these categories. First, manufacturing organizations produce physical, tangible
goods that can be stored in inventory before they are needed. By contrast, service organizations
produce intangible products that cannot be produced ahead of time. Second, in manufacturing
organizations most customers have no direct contact with the operation. Customer contact is made
through distributors and retailers. For example, a customer buying a car at a car dealership never
comes into contact with the automobile factory. However, in service organizations the customers are
typically present during the creation of the service. Hospitals, colleges, theaters, and barber shops are
examples of service organizations in which the customer is present during the creation of the service.
The differences between manufacturing and service organizations are not as clear-cut as they might
appear, and there is much overlap between them. Most manufacturers provide services as part of their
offering, and many service firms manufacture physical goods that they deliver to their customers or
consume during service delivery. For example, a manufacturer of furniture may also provide
shipment of goods and assembly of furniture. On the other hand, a barber shop may sell its own line
of hair care products. You might not know that General Motors’ greatest return on capital does not
come from selling cars but rather from post-sales parts and service.
The differences between manufacturing and services are shown in Figure 1-3, which focuses on the
dimensions of product tangibility and the degree of customer contact. Pure manufacturing and pure
service extremes are shown, as well as the overlap between them.
Manufacturing vs. Service
It is important to understand how to manage both service and manufacturing operations. However,
managing service operations is of especially high importance. The reason is that the service sector
constitutes a dominant segment of the world economy.
In this section we look at some of the specific decisions that operations managers have to make.
While we can think about specific day-to-day decisions, we also need to make decisions for the whole
company that are long term in nature. Long-term decisions that set the direction for the entire organization are
called strategic decisions. They are broad in scope and set the tone for other, more specific decisions.
They address questions such as: What are the unique features of our product? What market do we
plan to compete in? What do we believe will be the demand for our product? Short-term decisions
that focus on specific departments and tasks are called tactical decisions. Tactical decisions focus on
more specific day-to-day issues, such as the quantities and timing of specific resources. Strategic
decisions are made first and determine the direction of tactical decisions, which are made more
frequently and routinely. Therefore, we have to start with strategic decisions and then move on to
tactical decisions. This relationship is shown in Figure 1-5. Tactical decisions must be aligned with
strategic decisions, because the strategic decisions are the key to the company’s effectiveness in the long run. Tactical
decisions provide feedback to strategic decisions, which can be modified accordingly. You can see in
the example of Gourmet Wafers how important OM decisions are. OM decisions are critical to all
types of companies, large and small. In large companies these decisions are more complex because of
the size and scope of the organization.
Large companies typically produce a greater variety of products, have multiple location sites, and
often use domestic and international suppliers. Managing OM decisions and coordinating efforts can
be a complicated task, yet the OM function is critical to the company’s success.
To maintain a competitive position in the marketplace, a company must have a long-range plan. This
plan needs to include the company’s long-term goals, an understanding of the marketplace, and a way
to differentiate itself from its competitors. All other decisions made by the company must support this
long-range plan. Otherwise, each person in the company would pursue goals that he or she
considered important, and the company would quickly fall apart.
The functioning of a football team on the field is similar to the functioning of a business and provides
a good example of the importance of a plan or vision. Before the plays are made, the team prepares a
game strategy. Each player on the team must perform a particular role to support this strategy. The
strategy is a “game plan” designed so that the team can win. Imagine what would occur if individual
players decided to run whatever plays they thought were appropriate.
Certainly the team’s chance of winning would not be very high. A successful football team is a
unified group of players using their individual skills in support of a winning strategy. The same is
true of a business.
The long-range plan of a business, designed to provide and sustain shareholder value, is called the
business strategy. For a company to succeed, the business strategy must be supported by each of the
individual business functions, such as operations, finance, and marketing. Operations strategy is a
long-range plan for the operations function that specifies the design and use of resources to support
the business strategy. Just as the players on a football team support the team’s strategy, the role of
everyone in the company is to do his or her job in a way that supports the business strategy.
The role of operations strategy is to provide a plan for the operations function so that it can make the
best use of its resources. Operations strategy specifies the policies and plans for using the
organization’s resources to support its long-term competitive strategy. Figure 1-6 shows this
relationship.
Remember that the operations function is responsible for managing the resources needed to produce
the company’s goods and services. Operations strategy is the plan that specifies the design and use of
resources to support the business strategy. This includes the location, size, and type of facilities
available; worker skills and talents required; use of technology, special processes needed, special
equipment; and quality control methods. The operations strategy must be aligned with the company’s
business strategy and enable the company to achieve its long-term plan. For example, the business
strategy of FedEx, the world’s largest provider of expedited delivery services, is to compete on time
and dependability of deliveries. FedEx’s operations strategy, in turn, provided a plan for the resources
needed to support this business strategy. To provide speed of delivery, FedEx acquired its own fleet of airplanes.
To provide dependability of deliveries, FedEx invested in sophisticated bar code technology to track
all packages.
Figure 1.5.1.1: Relationship between the business strategy and the functional strategy
Harvard Business School professor Michael Porter says that companies often do not understand the
differences between operational efficiency and strategy. Operational efficiency is performing
operations tasks well, even better than competitors. Strategy, on the other hand, is a plan for
competing in the marketplace. An analogy might be running a race very efficiently, but in the wrong race.
Strategy is choosing the race you can win. Operational efficiency and strategy
must be aligned; otherwise you may be very efficiently performing the wrong task. The role of
operations strategy is to make sure that all the tasks performed by the operations function are the right
tasks. Consider a software company that recently invested millions of dollars in developing software
with features not provided by competitors, only to discover that these were features customers did not
particularly want.
A company’s business strategy is developed after its managers have considered many factors and
have made some strategic decisions. These include developing an understanding of what business the
company is in (the company’s mission), analyzing and developing an understanding of the market
(environmental scanning), and identifying the company’s strengths (core competencies). These three
factors are critical to the development of the company’s long-range plan, or business strategy. In this
section, we describe each of these elements in detail and show how they are combined to formulate
the business strategy.
a) Mission
Every organization, from IBM to the Boy Scouts, has a mission. The mission is a statement that
answers three overriding questions:
• What business will the company be in (“selling personal computers,” “operating an Italian
restaurant”)?
• Who will the customers be, and what attributes will they have (“homeowners,” “college
graduates”)?
• How will the company’s basic beliefs define the business (“gives the highest customer service,”
“stresses family values”)?
Following is a list of some well-known companies and parts of their mission statements:
Dell Computer Corporation: “to be the most successful computer company in the world”
Delta Air Lines: “to be the worldwide airline of choice”
IBM: “translate advanced technologies into values for our customers as the world’s largest
information service company”
Lowe’s: “helping customers build, improve and enjoy their homes”
Ryder: “offers a wide array of logistics services, such as distribution management, domestically and
globally”
The mission defines the company. In order to develop a long-term plan for a business, you must first
know exactly what business you are in, what customers you are serving, and what your company’s
values are. If a company does not have a well-defined mission it may pursue business opportunities
about which it has no real knowledge or that are in conflict with its current pursuits, or it may miss
opportunities altogether.
For example, Dell Computer Corporation has become a leader in the computer industry in part by
following its mission. If it did not follow its mission Dell might decide to pursue other opportunities,
such as producing mobile telephones similar to those manufactured by Motorola and Nokia. Although
there is a huge market for mobile telephones, it is not consistent with Dell’s mission of focusing on
computers.
b) Environmental Scanning
A second factor to consider is the external environment of the business. This includes trends in the
market, in the economic and political environment, and in society. These trends must be analyzed to
determine business opportunities and threats. Environmental scanning is the process of monitoring
the external environment. To remain competitive, companies have to continuously monitor their
environment and be prepared to change their business strategy, or long-range plan, in light of
environmental changes.
There are many types of trends in the marketplace. For example, we are seeing changes in the
use of technology, such as point-of-sale scanners, automation, computer-assisted processing,
electronic purchasing, and electronic order tracking. One rapidly growing trend is e-commerce. For
retailers like The Gap, Eddie Bauer, Fruit of the Loom, Inc., Barnes & Noble, and others, e-
commerce has become a significant part of their business. Victoria’s Secret has even used the Internet
to conduct a fashion show in order to boost sales. Some companies began using e-commerce early in
its development. Others, like Sears, Roebuck, waited and then found themselves working hard to
catch up to the competition.
In addition to market trends, environmental scanning looks at economic, political, and social trends
that can affect the business. Economic trendsinclude recession, inflation, interest rates, and general
economic conditions. Suppose that a company is considering obtaining a loan in order to purchase a
new facility. Environmental scanning could show that interest rates are particularly favorable and that
this may be a good time to go ahead with the purchase.
Political trends include changes in the political climate—local, national, and international—that could
affect a company. For example, the creation of the European Union has had a significant impact on
strategic planning for global companies such as IBM, Hewlett-Packard, and PepsiCo. Similarly,
changes in trade relations with China have opened up opportunities that were not available earlier.
There has been a change in how companies view their environment, a shift from a national to a global
perspective. Companies seek customers and suppliers all over the globe. Many have changed their
strategies in order to take advantage of global opportunities, such as forming partnerships with
international firms, called strategic alliances. For example, companies like Motorola and Xerox want
to take advantage of opportunities in China and are developing strategic alliances to help them break
into that market.
Finally, social trends are changes in society that can have an impact on a business. An example is the
awareness of the dangers of smoking, which has made smoking less socially acceptable. This trend
has had a huge impact on companies in the tobacco industry. In order to survive, many of these
companies have changed their strategy to focus on customers overseas where smoking is still socially
acceptable, or have diversified into other product lines.
c) Core Competencies
The third factor that helps define a business strategy is an understanding of the company’s strengths.
These are called core competencies. In order to formulate a long-term plan, the company’s managers
must know the competencies of their organization.
Core competencies could include special skills of workers, such as expertise in providing customized
services or knowledge of information technology. Another example might be flexible facilities that
can handle the production of a wide array of products. To be successful, a company must compete in
markets where its core competencies will have value. Table 1-2 shows a list of some core
competencies that companies may have.
Highly successful firms develop a business strategy that takes advantage of their core competencies
or strengths. To see why it is important to use core competencies, think of a student developing plans
for a successful professional career. Let’s say that this student is particularly good at mathematics but
not as good in verbal communication and persuasion. Taking advantage of core competencies would
mean developing a career strategy in which the student’s strengths could provide an advantage, such
as engineering or computer science. On the other hand, pursuing a career in marketing would place
the student at a disadvantage because of a relative lack of skills in persuasion.
Increased global competition has driven many companies to clearly identify their core competencies
and outsource those activities considered non-core. Outsourcing means obtaining goods or services
from an outside provider. By outsourcing non-core activities, a company can focus on its core
competencies.
Once a business strategy has been developed, an operations strategy must be formulated. This will
provide a plan for the design and management of the operations function in ways that support the
business strategy. The operations strategy relates the business strategy to the operations function. It
focuses on specific capabilities of the operation that give the company a competitive edge. These
capabilities are called competitive priorities. By excelling in one of these capabilities, a company
can become a winner in its market.
These competitive priorities and their relationship to the design of the operations function are shown
in Figure 1.5.3.1. Each part of this figure is discussed next.
Figure 1.5.3.1: Operations strategy and the design of the operations function
a) Competitive Priorities
Operations managers must work closely with marketing in order to understand the competitive
situation in the company’s market before they can determine which competitive priorities are
important. There are four broad categories of competitive priorities:
1. Cost: Competing based on cost means offering a product at a low price relative to the prices of
competing products. The need for this type of competition emerges from the business strategy. The
role of the operations strategy is to develop a plan for the use of resources to support this type of
competition. Note that a low-cost strategy can result in a higher profit margin, even at a competitive
price. Also, low cost does not imply low quality. Let’s look at some specific characteristics of the
operations function we might find in a company competing on cost.
To develop this competitive priority, the operations function must focus primarily on cutting costs in
the system, such as costs of labor, materials, and facilities.
Companies that compete based on cost study their operations system carefully to eliminate all waste.
They might offer extra training to employees to maximize their productivity and minimize scrap.
Also, they might invest in automation in order to increase productivity. Generally, companies that
compete based on cost offer a narrow range of products and product features, allow for little
customization, and have an operations process that is designed to be as efficient as possible.
Southwest Airlines, for example, competes based on cost and operates only one type of aircraft.
This serves to minimize the costs of scheduling crew changes, maintenance, inventories of parts, and
many administrative costs. Unnecessary costs are completely eliminated: there are no meals, printed
boarding passes, or seat assignments. Employees are trained to perform many functions and use a
team approach to maximize customer service. Because of this strategy, Southwest has been a model
for the airline industry for a number of years.
2. Quality: Many companies claim that quality is their top priority, and many customers say that they
look for quality in the products they buy. Yet quality has a subjective meaning; it depends on who is
defining it. For example, to one person quality could mean that the product lasts a long time, such as
with a Volvo, a car known for its longevity. To another person quality might mean high performance,
such as a BMW. When companies focus on quality as a competitive priority, they are focusing on the
dimensions of quality that are considered important by their customers.
Quality as a competitive priority has two dimensions. The first is high-performance design. This
means that the operations function will be designed to focus on aspects of quality such as superior
features, close tolerances, high durability, and excellent customer service. The second dimension is
goods and services consistency, which measures how often the goods or services meet the exact
design specifications. A strong example of product consistency is McDonald’s, where we know we
can get the same product every time at any location. Companies that compete on quality must deliver
not only high-performance design but goods and services consistency as well.
A company that competes on this dimension needs to implement quality in every area of the
organization. One of the first aspects that need to be addressed is product design quality, which
involves making sure the product meets the requirements of the customer. A second aspect is process
quality, which deals with designing a process to produce error-free products. This includes focusing
on equipment, workers, materials, and every other aspect of the operation to make sure it works the
way it is supposed to. Companies that compete based on quality have to address both of these issues:
the product must be designed to meet customer needs, and the process must produce the product
exactly as it is designed.
To see why product and process quality are both important, let’s say that your favorite fast-food
restaurant has designed a new sandwich called the “Big Yuck.” The restaurant could design a process
that produces a perfect “Big Yuck” every single time. But if customers find the “Big Yuck”
unappealing, they will not buy it. The same would be true if the restaurant designed a sandwich called
the “Super Delicious” to meet the desires of its customers. Even if the “Super Delicious” was exactly
what the customers wanted, if the process did not produce the sandwich the way it was designed,
often making it soggy and cold instead, customers would not buy it. Remember that the product needs
to be designed to meet customer wants and needs, and the process needs to be designed to produce
the exact product that was intended, consistently without error.
3. Time: Time, or speed, is one of the most important competitive priorities today. Companies in all
industries are competing to deliver high-quality products in as short a time as possible. Companies
like FedEx, LensCrafters, United Parcel Service (UPS), and Dell compete based on time. Today’s
customers don’t want to wait, and companies that can meet their need for fast service are becoming
leaders in their industries.
Making time a competitive priority means competing based on all time-related issues, such as rapid
delivery and on-time delivery. Rapid delivery refers to how quickly an order is received; on-time
delivery refers to the number of times deliveries are made on time. When time is a competitive
priority, the job of the operations function is to critically analyze the system and combine or eliminate
processes in order to save time. Often companies use technology to speed up processes, rely on a
flexible workforce to meet peak demand periods, and eliminate unnecessary steps in the production
process.
Practical illustration:
FedEx is an example of a company that competes based on time. The company’s claim is to
“absolutely, positively” deliver packages on time. To support this strategy, the operation function had
to be designed to promote speed. Bar code technology is used to speed up processing and handling,
and the company uses its own fleet of airplanes. FedEx relies on a very flexible part-time workforce,
such as college students who are willing to work a few hours at night. FedEx can call on this part-
time workforce at a moment’s notice, providing the company with a great deal of flexibility. This
allows FedEx to cover workforce requirements during peak periods without having to schedule full-
time workers.
4. Flexibility: Flexibility is the ability to offer a wide variety of goods or services and to customize
them to the specific requirements of customers. Companies that compete based on flexibility often cannot compete based on speed, because it
generally requires more time to produce a customized product. Also, flexible companies typically do
not compete based on cost, because it may take more resources to customize the product. However,
flexible companies often offer greater customer service and can meet unique customer requirements.
To carry out this strategy, flexible companies tend to have more general-purpose equipment that can
be used to make many different kinds of products. Also, workers in flexible companies tend to have
higher skill levels and can often perform many different tasks in order to meet customer needs.
It is important to know that every business must achieve a basic level of each of the priorities, even
though its primary focus is only on some. For example, even though a company is not competing on
low price, it still cannot offer its products at such a high price that customers would not want to pay
for them. Similarly, even though a company is not competing on time, it still has to produce its
product within a reasonable amount of time; otherwise, customers will not be willing to wait for it.
One way that large facilities with multiple products can address the issue of tradeoffs is using the
concept of plant-within-a-plant (PWP), introduced by well-known Harvard professor Wickham
Skinner. The PWP concept suggests that different areas of a facility be dedicated to different products
with different competitive priorities. These areas should be physically separated from one another and
should even have their own separate workforce. As the term suggests, there are multiple plants within
one plant, allowing a company to produce different products that compete on different priorities. For
example, hospitals use PWP to achieve specialization or focus in a particular area, such as the cardiac
unit, oncology, radiology, surgery, or pharmacy.
Order qualifiers are the competitive priorities that a company must meet to be considered a viable
competitor in a market; they are the minimum requirements customers expect. Order winners, on the
other hand, are the competitive priorities that help a company win orders in the market. Consider a
simple restaurant that makes and delivers pizzas. Order qualifiers might be
low price (say, less than $10.00) and quick delivery (say, under 15 minutes), because this is a
standard that has been set by competing pizza restaurants. The order winners may be “fresh
ingredients” and “home-made taste.” These characteristics may differentiate the restaurant from all
the other pizza restaurants. However, regardless of how good the pizza, the restaurant will not
succeed if it does not meet the minimum standard for order qualifiers. Knowing the order winners and
order qualifiers in a particular market is critical to focusing on the right competitive priorities.
It is important to understand that order winners and order qualifiers change over time. Often when
one company in a market is successfully competing using a particular order winner, other companies
follow suit over time. The result is that the order winner becomes an industry standard, or an order
qualifier. To compete successfully, companies then have to change their order winners to differentiate
themselves. An excellent example of this occurred in the auto industry. Prior to the 1970s, the order
winning criterion in the American auto industry was price. Then the Japanese automobile
manufacturers entered the market competing on quality at a reasonable price. The result was that
quality became the new order winner and price became an order qualifier, or an expectation. Then by
the 1980s American manufacturers were able to raise their level of quality to be competitive with the
Japanese. Quality then became an order qualifier, as everyone had the same quality standard.
Together, the structure and infrastructure of the production process determine the nature of the
company’s operations function.
Over the last decade we have seen an unprecedented growth in technological capability. Technology
has enabled companies to share real-time information across the globe, to improve the speed and
quality of their processes, and to design products in innovative ways. Companies can use technology
to help them gain an advantage over their competitors. For this reason technology has become a
critical factor for companies in achieving a competitive advantage. In fact, studies have shown that
companies that invest in new technologies tend to improve their financial position over those that do
not. However, the technologies a company acquires should not be decided on randomly, such as
following the latest fad or industry trend. Rather, the selected technology needs to support the
organization’s competitive priorities, as we learned in the example of FedEx. Also, technology needs
to be selected to enhance the company’s core competencies and add to its competitive advantage.
Types of Technologies
There are three primary types of technologies. They are differentiated based upon their application,
but all three areas of technology are important to operations managers. The first type is product
technology, which is any new technology developed by a firm. Product technology is important as
companies must regularly update their processes to produce the latest types of products.
A second type of technology is process technology. It is the technology used to improve the process
of creating goods and services. Examples of this would include computer aided design (CAD) and
computer-aided manufacturing (CAM). These are technologies that use computers to assist engineers
in the way they design and manufacture products. Process technologies are important to companies as
they enable tasks to be accomplished more efficiently.
The last type of technology is information technology, which enables communication, processing, and
storage of information. Information technology has grown rapidly over recent years and has had a
profound impact on business. Just consider the changes that have occurred due to the Internet. The
Internet has enabled electronic commerce and the creation of the virtual marketplace, and has linked
customers and buyers. Another example of information technology is enterprise resource planning
(ERP), which functions via large software programs used for planning and coordinating all resources
throughout the entire enterprise. ERP systems have enabled companies to reduce costs and improve
responsiveness, but are highly expensive to purchase and implement. Consequently, as with any
technology, investment in ERP needs to be a strategic decision.
The importance of operations management was not always recognized by business. In fact, following
World War II American corporations were dominated by marketing and finance functions. The
United States had just emerged from the war as the undisputed global manufacturing leader due in
large part to efficient operations. At the same time Japan and Europe were in ruins with their
businesses and factories destroyed. U.S. companies were left to fill these markets: the post-World
War II period of the 1950s and 1960s represented the golden era for U.S. business.
The primary opportunities were in the areas of marketing, to develop the large potential markets for
new products, and in finance, to support the growth. Since there were no significant competitors, the
operations function became of secondary importance, because companies could sell what they
produced. Even the distinguished economist John Kenneth Galbraith was noted as saying: “the
production problem has been solved.”
Then in the 1970s and 1980s things changed. American companies experienced large declines in
productivity growth, and international competition began to be a challenge in many markets. In some
markets such as the auto industry, American corporations were being pushed out. It appeared that
U.S. firms had become lax with the lack of competition in the 1950s and 1960s. They had forgotten
about improving their methods and processes, partly due to the lack of competitive challenge. In the
meantime, foreign firms were rebuilding their facilities and designing new production methods. By
the time foreign firms had recovered, many U.S. firms found themselves unable to compete. To
regain their competitiveness companies turned to operations management, a function they had
overlooked and almost forgotten about.
The new focus on operations and competitiveness has been responsible for the recovery of many
corporations, and U.S. businesses experienced a resurgence in the 1980s and 1990s. Operations became
the function at the core of organizational competitiveness. Although U.S. firms have rebounded, they
are fully aware of continued global competition. Companies have learned that to achieve long-run
success they must place much importance on their operations.
Many events helped shape operations management. We will describe some of the most significant of
these historical milestones and explain their influence on the development of operations management.
Later we will look at some current trends in operations management. These historical milestones and
current trends are summarized in Table 1.6.2.1.
A. The Industrial Revolution
The Industrial Revolution began in England in the late eighteenth century with the development of
the steam engine, which provided a new source of power for the factory system that was emerging.
In addition, the steam engine led to advances in transportation, such as railroads, that
allowed for a wider distribution of goods.
About the same time, the concept of division of labor was introduced. First described by Adam Smith
in 1776 in The Wealth of Nations, this important concept would become one of the building blocks of
the assembly line. Division of labor means that the production of a good is broken down into a series
of small, elemental tasks, each of which is performed by a different worker. The repetition of the task
allows the worker to become highly specialized in that task. Division of labor allowed higher
volumes to be produced. This, coupled with advances in transportation, enabled distant markets to be
reached by steam-powered boats and railroads.
A few years later, in 1790, Eli Whitney introduced the concept of interchangeable parts. Prior to that
time, every part used in a production process was unique. With interchangeable parts, parts are
standardized so that any part in a batch will fit any unit of the product. This concept meant that we could
move from one-at-a-time production to volume production, for example, in the manufacture of
watches, clocks, and similar items.
B. Scientific Management
Scientific management was an approach to management promoted by Frederick W. Taylor at the
turn of the twentieth century. Taylor was an engineer with an eye for efficiency. Through scientific
management he sought to increase worker productivity and organizational output. This concept has
two key features. First, it is assumed that workers are motivated only by money and are limited only
by their physical ability. Taylor believed that worker productivity is governed by scientific laws, and
that it is up to management to discover these laws through measurement, analysis, and observation.
Workers are to be paid in direct proportion to how much they produce. The second feature of this
approach is the separation of the planning and doing functions in a company, which means the
separation of management and labor. Management is responsible for designing productive systems
and determining acceptable worker output. Workers have no input into this process—they are
permitted only to work.
Many people did not like the scientific management approach. This was especially true of workers,
who thought that management used these methods to unfairly increase output without paying them
accordingly. Still, many companies adopted the scientific management approach. Today many see
scientific management as a major milestone in the field of operations management, and it has had
many influences on operations management. For example, piece rate incentives, in which workers are
paid in direct proportion to their output, came out of this movement. Also, a widely used method of
work measurement, stopwatch time studies, was introduced by Frederick Taylor. In stopwatch time
studies, observations are made and recorded of a worker performing a task over many cycles. This
information is then used to set a time standard for performing the particular task. This method is still
used today to set a time standard for short, repetitive tasks.
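To make the method concrete, here is a minimal Python sketch of how a time standard is commonly computed from stopwatch observations: the observed times are averaged, adjusted by a performance rating, and then inflated by an allowance for fatigue, delays, and personal time. The observation values, rating, and allowance below are hypothetical assumptions, not figures from this module:

    # Setting a time standard from stopwatch time-study observations.
    # All numbers are hypothetical and used only to illustrate the calculation.
    observed_times = [2.1, 1.9, 2.0, 2.2, 1.8]   # minutes per cycle, recorded over several cycles
    performance_rating = 1.10                    # observer judges the worker 10% faster than normal
    allowance = 0.15                             # 15% allowance for fatigue, delays, and personal time

    average_observed_time = sum(observed_times) / len(observed_times)   # 2.0 minutes
    normal_time = average_observed_time * performance_rating            # 2.2 minutes
    standard_time = normal_time * (1 + allowance)                       # about 2.53 minutes per cycle
    print(round(standard_time, 2))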
The scientific management approach was popularized by Henry Ford, who used the techniques in his
factories. Combining technology with scientific management, Ford introduced the moving assembly
line to produce Ford cars. Ford also combined scientific management concepts with division of labor
and interchangeable parts to develop the concept of mass production. These concepts and innovations
helped him increase production and efficiency at his factories.
C. The Human Relations Movement
In the 1920s and 1930s, a series of studies at Western Electric’s Hawthorne plant found that worker
output improved when workers were given attention, largely regardless of changes in physical
working conditions. Many sociologists and psychologists went to Hawthorne to study these findings,
which led to the human relations movement, an entirely new philosophy based on the recognition that factors other
than money can contribute to worker productivity. The impact of these findings on the development
of operations management has been tremendous. The influence of this new philosophy can be seen in
the implementation of a number of concepts that motivate workers by making their jobs more
interesting and meaningful.
For example, the Hawthorne studies showed that scientific management had made jobs too repetitive
and boring. Job enlargement is an approach in which workers are given a larger portion of the total
task to do. Another approach used to give more meaning to jobs is job enrichment, in which workers
are given a greater role in planning.
Recent studies have shown that environmental factors in the workplace, such as adequate lighting and
ventilation, can have a major impact on productivity. However, this does not contradict the principle
that attention from management is a positive factor in motivation.
D. Management Science
While one movement was focusing on the technical aspects of job design and another on the human
aspects of operations management, a movement called management science was developing that
would make its own unique contribution. Management science focused on developing quantitative
techniques for solving operations problems. The first mathematical model for inventory management
was developed by F.W. Harris in 1913. Shortly thereafter, procedures were developed for statistical
sampling theory and quality control.
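Harris’s model is known today as the economic order quantity (EOQ) model, which balances the cost of placing orders against the cost of holding inventory. The following is a minimal Python sketch of the standard EOQ formula; the demand and cost figures are hypothetical values chosen only for illustration:

    # Economic order quantity (EOQ): the order size that minimizes the sum of
    # annual ordering costs and annual holding costs. All figures are hypothetical.
    from math import sqrt

    annual_demand = 12000    # units demanded per year
    ordering_cost = 50.0     # cost of placing one order, in dollars
    holding_cost = 2.4       # cost of holding one unit in inventory for a year, in dollars

    eoq = sqrt(2 * annual_demand * ordering_cost / holding_cost)
    print(round(eoq))  # about 707 units per order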
World War II created an even greater need for the ability to quantitatively solve complex problems of
logistics control, weapons system design, and deployment of missiles. Consequently, management
science grew during the war and continued to grow after the war was over. Many quantitative tools
were developed to solve problemsin forecasting, inventory control, project management, and other
areas. Management science is a mathematically oriented field that provides operations management
with tools that can be used to assist in decision making. A popular example of such a tool is linear
programming.
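To give a flavor of what such a tool looks like in practice, here is a minimal sketch of a product-mix problem solved with linear programming in Python (using SciPy's linprog). The two products, their profits, and the machine- and labor-hour limits are hypothetical assumptions made up for this example:

    # A hypothetical product-mix linear program solved with SciPy.
    # Maximize weekly profit 40*x1 + 30*x2 subject to limited machine and labor hours.
    from scipy.optimize import linprog

    c = [-40, -30]          # linprog minimizes, so the profit coefficients are negated

    A_ub = [[2, 1],         # machine hours used per unit of product 1 and product 2
            [1, 3]]         # labor hours used per unit of product 1 and product 2
    b_ub = [100, 90]        # machine hours and labor hours available per week

    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(result.x)         # optimal quantities of each product (about 42 and 16 units)
    print(-result.fun)      # maximum weekly profit (about 2,160)

The solver reports how much of each product to make so that the available hours are used where they earn the most profit, which is exactly the kind of resource-allocation decision operations managers face.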
F. Just-in-Time
Just-in-time (JIT) is a major operations management philosophy, developed in Japan in the 1980s,
that is designed to achieve high-volume production using minimal amounts of inventory. This is
achieved through coordination of the flow of materials so that the right parts arrive at the right place
in the right quantity; hence the term, just-in-time. However, JIT is much more than the coordinated
movement of goods. It is an all-inclusive organizational philosophy that employs teams of workers to
achieve continuous improvement in processes and organizational efficiency by eliminating all
organizational waste. Although JIT was first used in manufacturing, it has seen use in the service
sector, for example, in the food service industry. JIT has had a profound impact on changing the way
companies manage their operations. It is credited with helping turn many companies around and is
used by companies including Honda, Toyota, and General Motors. JIT promises to continue to
transform businesses in the future.
G. Total Quality Management
The importance of the quality movement is demonstrated by the number of companies joining the ranks of
those achieving ISO 9000 certification. ISO 9000 is a set of quality standards developed for global
manufacturers by the International Organization for Standardization (ISO) to control trade into the
then-emerging European Economic Community (EEC). Today many companies require their
suppliers to meet these standards as a condition for obtaining contracts.
I. Flexibility
One example of flexibility is mass customization, which is the ability of a firm to highly customize
its goods and services to different customers. Mass customization requires designing flexible
operations and using delayed product differentiation, also called postponement. This means keeping
the product in generic form as long as possible and postponing completions of the product until
specific customer preferences are known.
J. Time-Based Competition
One of the most important trends in companies today is competition based on time. This includes
developing new products and services faster than the competition, reaching the market first, and
meeting customer orders most quickly. For example, two companies may produce the same product,
but if one is able to deliver it to the customer in two days whereas the other delivers it in five days,
the first company will make the sale and win over the customers. Time-based competition requires
specifically designing the operations function for speed.
L. Global Marketplace
Today businesses must think in terms of a global marketplace in order to compete effectively. This
includes the way they view their customers, competitors, and suppliers. Key issues are meeting
customer needs and getting the right product to markets as diverse as the Far East, Europe, or Africa.
Operations management is responsible for most of these decisions. OM decides whether to tailor
products to different customer needs, where to locate facilities, how to manage suppliers, and how to
meet local government standards. Also, global competition has forced companies to reach higher
levels of excellence in the products and services they offer. Regional trading agreements, such as the
North American Free Trade Agreement (NAFTA), the European Union (EU), and the global General
Agreement on Tariffs and Trade (GATT), guarantee continued competition on the international level.
M. Environmental Issues
There is increasing emphasis on the need to reduce waste, recycle, and reuse products and parts.
Society has placed great pressure on business to focus on air and water quality, waste disposal, global
warming, and other environmental issues. Operations management plays a key role in redesigning
processes and products in order to meet and exceed environmental quality standards. The importance
of this issue is demonstrated by a set of standards termed ISO 14000. Developed by the International
Organization for Standardization (ISO), these standards provide guidelines and a certification
program documenting a company’s environmentally responsible actions.
N. Electronic Commerce
Electronic commerce (e-commerce) is the use of the Internet for conducting business activities, such
as communication, business transactions, and data transfer. The Internet developed from a
government network called ARPANET, which was created in 1969 by the U.S. Defense Department.
Since the late 1990s the Internet has become an essential business medium, enabling efficient
communication between manufacturers, suppliers, distributors, and customers. It has allowed
companies to reach more customers at a speed infinitely faster than ever before. It also has
significantly cut costs as it provides direct links between entities.
Electronic commerce can occur between businesses, known as B2B (business to business)
commerce, and makes up the highest percentage of transactions. The most common B2B exchanges
occur between companies and their suppliers, such as General Electric’s Trading Process Network. A
more commonly known type of e-commerce occurs between businesses and their customers, known
as B2C exchange, as seen with on-line retailers such as Amazon.com.
E-commerce can also occur between customers, known as C2C exchange, like consumer auction sites
such as eBay. E-commerce is creating virtual marketplaces that continue to change the way
business functions.
Today’s OM environment is very different from what it was just a few years ago. Customers demand
better quality, greater speed, and lower costs. In order to succeed, companies have to be masters of
the basics of operations management. To achieve this many companies are implementing a concept
called lean systems. Lean systems take a total system approach to creating an efficient operation and
pull together best practice concepts. This includes concepts such as just-in-time (JIT), total quality
management (TQM), continuous improvement, resource planning, and supply chain management
(SCM). The need for increasing efficiency has also led many companies to implement large
information systems called enterprise resource planning (ERP). ERP systems are large,
sophisticated software programs used for identifying and planning the enterprise-wide resources
needed to coordinate all activities involved in producing and delivering products to customers.
Applying best practices to operations management is not enough to give a company a competitive
advantage. The reason is that in today’s information age best practices are quickly passed to
competitors. To gain an advantage over their competitors companies are continually looking for ways
to better respond to customers. This requires companies to have a deep knowledge of their customers
and to be able to anticipate their demands. The development of customer relationship management
(CRM) has made it possible for companies to have this detailed knowledge of their customers. CRM
encompasses software solutions that enable the firm to collect customer-specific data. This type of
information can help the firm identify profiles of its most loyal customers and provide customer-
specific solutions. Also, CRM software can be integrated with ERP software to connect customer
requirements to the entire resource network of the company.
Another characteristic of today’s OM environment is the increased use of cross functional decision
making, which requires coordinated interaction and decision making between the different business
functions of the organization. Until recently employees of a company made decisions in isolated
departments, called “functional silos.” Today many companies bring together experts from different
departments into cross-functional teams to solve company problems. Employees from each function
must interact and coordinate their decisions. This requires employees to understand the roles of other
business functions and the goals of the business as a whole, in addition to their own expertise.
Of all the business functions, operations is the most diverse in terms of the tasks performed. If you
consider all the issues involved in managing a transformation process, you can see that operations
managers are never bored. Who are operations managers and what do they do?
The head of the operations function in a company usually holds the title of vice president of
operations, vice president of manufacturing, or director of supply chain operations, and
generally reports directly to the president or chief operating officer. Below the vice president level are
midlevel managers: manufacturing manager, operations manager, quality control manager, plant
manager, and others. Below these managers are a variety of positions such as quality specialist,
production analyst, inventory analyst, and production supervisor. These people perform a variety of
functions, such as analyzing production problems, developing forecasts, making plans for new
products, measuring quality, monitoring inventory, and developing employee schedules. Thus, there
are many job opportunities in operations management at all levels of the company. In addition,
operations jobs tend to offer high salaries, interesting work, and excellent opportunities for
advancement. Today many corporate CEOs have come through the ranks of operations.
As you can see, all business functions need information from operations management in order to
perform their tasks. At the same time, operations managers are highly dependent on input from other
areas. This process of information sharing is dynamic, requiring that managers work in teams and
understand each other’s roles.
Now that we know the role of the operations management function and the decisions that operations
managers make, let’s look at the relationship between operations and other business functions. As
mentioned previously, most businesses are supported by three main functions: operations, marketing,
and finance. Although these functions involve different activities, they must interact to achieve the
goals of the organization. They must also follow the strategic direction developed at the top level of
the organization. Figure 1-6 shows the flow of information from the top to each business function, as
well as the flow between functions. Many of the decisions made by operations managers are
dependent on information from the other functions. At the same time, other functions cannot be
carried out properly without information from operations. Figure 1.9.1 shows these relationships.
Marketing is not fully capable of meeting customer needs if marketing managers do not understand
what operations can produce, what due dates it can and cannot meet, and what types of customization
operations can deliver. The marketing department can develop an exciting marketing campaign, but if
operations cannot produce the desired product, sales will not be made. In turn, operations managers
need information about customer wants and expectations. It is up to them to design products with
characteristics that customers find desirable, and they cannot do this without regular coordination
with the marketing department.
Finance cannot realistically judge the need for capital investments, make-or-buy decisions, plant
expansions, or relocation if finance managers do not understand operations concepts and needs. On
the other hand, operations managers cannot make large financial expenditures without understanding
financial constraints and methods of evaluating financial investments. It is essential for these two
functions to work together and understand each other’s constraints.
An information system (IS) is a function that enables information to flow throughout the
organization and allows OM to operate effectively. OM is highly dependent on information such as
forecasts of demand, quality levels being achieved, inventory levels, supplier deliveries, and worker
schedules. IS must understand the needs of OM in order to design an adequate information system.
IS and OM usually work together closely, and this collaboration needs to be ongoing. IS must be capable of accommodating the
needs of OM as they change in response to market demands. At the same time, it is up to IS to bring
the latest capabilities in information technology to the organization to enhance the functioning of
OM.
Human resource managers must understand job requirements and worker skills if they are to hire
the right people for available jobs. To manage employees effectively, operations managers need to
understand job market trends, hiring and layoff costs, and training costs.
Figure 1.9.2: information flow between operations and other business functions
Several business trends are currently having a great impact on operations management: the growth of
the service sector; productivity changes; global competitiveness; quality, time, and technological
change; and environmental, ethical, and diversity issues. In this section, we look at these trends and
their implications for operations management.
a. Growth of the service sector
The service sector of the economy is significant. Services may be divided into three main groups:
• government (local, state, and federal);
• wholesale and retail sales; and
• other services (transportation, public utilities, communication, health, financial services, real estate, insurance, repair services, business services, and personal services).
The percentage of the workforce employed in the service sector continues to increase over time
(especially in the U.S. economy).
b. Productivity changes
What is productivity?
Productivity is the value of the outputs (goods and services) produced divided by the value of the input
resources (wages, cost of equipment, and the like) used, or the ratio of outputs (goods and services) to
inputs (e.g., labor and materials). In other words, productivity is a measure of how efficiently inputs
are being converted into outputs. It measures how well resources are used. The more efficiently a
company uses its resources, the more productive it is:
Productivity = Output / Input
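For readers who want to see the ratio computed directly, the short Python sketch below expresses the
same definition. It is a minimal illustration only; the function name productivity and the sample
figures are assumptions, not data from this module.

# Minimal sketch of productivity as the ratio of output to input.
# The function name and the sample figures are illustrative, not from the module.
def productivity(output_value: float, input_value: float) -> float:
    """Return output divided by input; units depend on how each is measured."""
    if input_value == 0:
        raise ValueError("input_value must be non-zero")
    return output_value / input_value

# e.g., a shop producing $12,000 of goods and services using $9,000 of resources
print(round(productivity(12_000, 9_000), 3))  # 1.333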
Productivity and competitiveness
Productivity is essentially a scorecard of how effectively resources are used and a measure of
competitiveness. Productivity is measured on many levels and is of interest to a wide range of people.
As we showed in earlier examples, productivity can be measured for individuals, departments, or
organizations. It can track performance over time and help managers identify problems. Similarly,
productivity can be measured for an entire industry and even a country.
The economic success of a nation and the quality of life of its citizens are related to its
competitiveness in the global marketplace. Increases in productivity are directly related to increases
in a nation’s standard of living. That is why business and government leaders continuously monitor
the productivity at the national level and by industry sectors.
c. Global competition
Today businesses accept the fact that, to prosper, they must view customers, suppliers, facility
locations, and competitors in global terms. Most products today are global composites of materials
and services from throughout the world. Strong global competition affects industries everywhere.
d. Quality, time, and technological change
Another important trend is that more firms are competing on the basis of time: filling orders earlier
than the competition, introducing new products and services quickly, and reaching the market first.
Many businesses now also compete electronically, such as on the Internet. In an electronic world,
businesses may be geographically far from their customers, so a reputation for trust becomes even
more important.
e. Environmental, ethical, and diversity issues
One expert suggests a more ethical approach to business in which firms:
have responsibilities that go beyond producing goods and services at a profit;
help solve important social problems;
respond to a broader constituency than shareholders alone;
have impacts beyond simple marketplace transactions; and
serve a range of human values that go beyond the merely economic.
Environmental issues, such as toxic wastes, poisoned drinking water, poverty, air quality, and global
warming are getting more emphasis. In the past, many people viewed environmental problems as
quality of life issues; in the 2000s, many people see them as survival issues. Interest in a clean,
healthy environment is increasing.
The message is clear: consideration of ethics, workforce diversity, and the environment is
becoming part of every manager’s job. When designing and operating processes, managers should
weigh integrity, respect for the individual, and respect for the customer along with more
conventional performance measures such as productivity, quality, cost, and profit.
Self-Check
Chapter 2
List and briefly discuss the primary ways that business organizations compete
List five reasons for the poor competitiveness of some companies
Define the term strategy and explain why strategy is important for competitiveness
Contrast strategy and tactics
Discuss and compare organization strategy and operations strategy, and explain why it is
important to link the two
2.1 Introduction
In this chapter you will learn what productivity is, why it is important, and some ways organizations
can improve productivity. You will learn about the different ways companies compete, why some
firms perform poorly, and how effective strategies can contribute to productivity and competitiveness.
Business strategy is a long-range plan and vision. Each individual business function needs to develop
a strategy that supports the business strategy. An organization develops its business strategy by doing environmental
scanning and considering its mission and its core competencies. The role of operations strategy is to
provide a long-range plan for the use of the company’s resources in producing the company’s primary
goods and services. The role of business strategy is to serve as an overall guide for the development
of the organization’s operations strategy. The operations strategy focuses on developing specific
capabilities called competitive priorities. There are four categories of competitive priorities: cost,
quality, time, and flexibility. Technology can be used by companies to gain a competitive advantage
and should be acquired to support the company’s chosen competitive priorities. Productivity is a
measure that indicates how efficiently an organization is using its resources. Productivity is computed
as the ratio of organizational outputs to inputs.
2.2 Competitiveness
Companies must be competitive to sell their goods and services in the marketplace. Competitiveness
is an important factor in determining whether a company prospers, barely gets by, or fails. Business
organizations compete through some combination of their marketing and operations functions.
Marketing influences competitiveness in several ways, including identifying consumer wants and
needs, pricing, and advertising and promotion.
1. Identifying consumer wants and/or needs is a basic input in an organization’s decision making
process, and central to competitiveness. The ideal is to achieve a perfect match between those
wants and needs and the organization’s goods and/or services.
2. Price and quality are key factors in consumer buying decisions. It is important to understand the
trade-off decision consumers make between price and quality.
3. Advertising and promotion are ways organizations can inform potential customers about features
of their products or services, and attract buyers.
Operations has a major influence on competitiveness through product and service design, cost,
location, quality, response time, flexibility, inventory and supply chain management, and service.
Many of these are interrelated.
1. Product and service design should reflect joint efforts of many areas of the firm to achieve a
match between financial resources, operations capabilities, supply chain capabilities, and consumer
wants and needs. Special characteristics or features of a product or service can be a key factor in
consumer buying decisions. Other key factors include innovation and the time-to-market for new
products and services.
2. Cost of an organization’s output is a key variable that affects pricing decisions and profits. Cost-
reduction efforts are generally ongoing in business organizations. Productivity is an important
determinant of cost. Organizations with higher productivity rates than their competitors have a
competitive cost advantage. A company may outsource a portion of its operation to achieve lower
costs, higher productivity, or better quality.
3. Location can be important in terms of cost and convenience for customers. Location near inputs
can result in lower input costs. Location near markets can result in lower transportation costs and
quicker delivery times. Convenient location is particularly important in the retail sector.
4. Quality refers to materials, workmanship, design, and service. Consumers judge quality in terms of
how well they think a product or service will satisfy its intended purpose. Customers are generally
willing to pay more for a product or service if they perceive the product or service has a higher
quality than that of a competitor.
5. Quick response can be a competitive advantage. One way is quickly bringing new or improved
products or services to the market. Another is being able to quickly deliver existing products and
services to a customer after they are ordered, and still another is quickly handling customer
complaints.
6. Flexibility is the ability to respond to changes. Changes might relate to alterations in design
features of a product or service, or to the volume demanded by customers, or the mix of products
or services offered by an organization. High flexibility can be a competitive advantage in a
changeable environment.
8. Supply chain management involves coordinating internal and external operations (buyers and
suppliers) to achieve timely and cost-effective delivery of goods throughout the system.
9. Service might involve after-sale activities customers perceive as value-added, such as delivery,
setup, warranty work, and technical support. Or it might involve extra attention while work is in
progress, such as courtesy, keeping the customer informed, and attention to details. Service quality
can be a key differentiator; and it is one that is often sustainable. Moreover, businesses rated
highly by their customers for service quality tend to be more profitable, and grow faster, than
businesses that are not rated highly.
10. Managers and workers are the people at the heart and soul of an organization, and if they are
competent and motivated, they can provide a distinct competitive edge by their skills and the
ideas they create. One often overlooked skill is answering the telephone.
How complaint calls or requests for information are handled can be a positive or a negative. If a
person answering is rude or not helpful, that can produce a negative image. Conversely, if calls are
handled promptly and cheerfully, that can produce a positive image and, potentially, a competitive
advantage.
Organizations fail, or perform poorly, for a variety of reasons. Being aware of those reasons can help
managers avoid making similar mistakes. Among the chief reasons are the following:
3. Putting too much emphasis on short-term financial performance at the expense of research and
development.
4. Placing too much emphasis on product and service design and not enough on process design and
improvement.
6. Failing to establish good internal communications and cooperation among different functional
areas.
The key to successfully competing is to determine what customers want and then directing efforts
toward meeting (or even exceeding) customer expectations. Two basic issues must be addressed.
First: What do the customers want? (Which items on the preceding list of the ways business
organizations compete are important to customers?) Second: What is the best way to satisfy those
wants?
Operations must work with marketing to obtain information on the relative importance of the various
items to each major customer or target market.
An organization’s mission is the reason for its existence. It is expressed in its mission statement. For a
business organization, the mission statement should answer the question “What business are we in?”
Missions vary from organization to organization, depending on the nature of their business. A
mission statement states the purpose of an organization.
A mission statement serves as the basis for organizational goals, which provide more detail and
describe the scope of the mission. The mission and goals often relate to how an organization wants to
be perceived by the general public, and by its employees, suppliers, and customers.
Goals serve as a foundation for the development of organizational strategies. These, in turn, provide
the basis for strategies and tactics of the functional units of the organization. Organizational strategy
is important because it guides the organization by providing direction for, and alignment of, the goals
and strategies of the functional units. Moreover, strategies can be the main reason for the success or
failure of an organization. There are three basic business strategies: low cost, responsiveness, and
differentiation from competitors.
If you think of goals as destinations, then strategies are the roadmaps for reaching the destinations.
Strategies provide focus for decision making. Generally speaking, organizations have overall
strategies called organizational strategies, which relate to the entire organization. They also have
functional strategies, which relate to each of the functional areas of the organization. The functional
strategies should support the overall strategies of the organization, just as the organizational strategies
should support the goals and mission of the organization.
Tactics are the methods and actions used to accomplish strategies. They are more specific than
strategies, and they provide guidance and direction for carrying out actual operations, which need the
most specific and detailed plans and decision making in an organization. You might think of tactics as
the “how to” part of the process (e.g., how to reach the destination, following the strategy roadmap)
and operations as the actual “doing” part of the process.
It should be apparent that the overall relationship that exists from the mission down to actual
operations is hierarchical.
Example: Rita is a high school student in Southern California. She would like to have a career in
business, have a good job, and earn enough income to live comfortably. A possible scenario for
achieving her goals might look something like this:
Strategy Formulation
Strategy formulation is almost always critical to the success of a strategy. Generally speaking,
strategy formulation takes into account the way organizations compete and a particular organization’s
assessment of its own strengths and weaknesses in order to take advantage of its core competencies
—those special attributes or abilities possessed by an organization that give it a competitive edge. The
most effective organizations use an approach that develops core competencies based on customer
needs as well as on what the competition is doing. Marketing and operations work closely to match
customer needs with operations capabilities.
To formulate an effective strategy, senior managers must take into account the core competencies of
the organizations, and they must scan the environment. They must determine what competitors are
doing, or planning to do, and take that into account. They must critically examine other factors that
could have either positive or negative effects. This is sometimes referred to as the SWOT approach
(strengths, weaknesses, opportunities, and threats). Strengths and weaknesses have an internal focus
and are typically evaluated by operations people. Threats and opportunities have an external focus
and are typically evaluated by marketing people. SWOT is often regarded as the link between
organizational strategy and operations strategy.
In formulating a successful strategy, organizations must take into account both order qualifiers and
order winners. Order qualifiers are those characteristics that potential customers perceive as
minimum standards of acceptability for a product to be considered for purchase.
However, that may not be sufficient to get a potential customer to purchase from the organization.
Order winners are those characteristics of an organization’s goods or services that cause them to be
perceived as better than the competition.
Characteristics such as price, delivery reliability, delivery speed, and quality can be order qualifiers or
order winners. Thus, quality may be an order winner in some situations, but in others only an order
qualifier. Over time, a characteristic that was once an order winner may become an order qualifier,
and vice versa.
Environmental scanning is the monitoring of events and trends that present either threats or
opportunities for the organization. Generally these include competitors’ activities; changing consumer
needs; legal, economic, political, and environmental issues; the potential for new markets; and the
like.
Another key factor to consider when developing strategies is technological change, which can present
real opportunities and threats to an organization. Technological changes occur in products (high-
definition TV, improved computer chips, improved cellular telephone systems, and improved designs
for earthquake-proof structures); in services (faster order processing, faster delivery); and in
processes (robotics, automation, computer-assisted processing, point-of-sale scanners, and flexible
manufacturing systems). The obvious benefit is a competitive edge; the risk is that incorrect choices,
poor execution, and higher-than-expected operating costs will create competitive disadvantages.
Important factors may be internal or external. The following are key external factors:
1. Economic conditions. These include the general health and direction of the economy, inflation and
deflation, interest rates, tax laws, and tariffs.
2. Political conditions. These include favorable or unfavorable attitudes toward business, political
stability or instability, and wars.
3. Legal environment. This includes antitrust laws, government regulations, trade restrictions,
minimum wage laws, product liability laws and recent court experience, labor laws, and patents.
4. Technology. This can include the rate at which product innovations are occurring, current and
future process technology (equipment, materials handling), and design technology.
5. Competition. This includes the number and strength of competitors, the basis of competition
(price, quality, special features), and the ease of market entry.
6. Markets. This includes size, location, brand loyalties, ease of entry, potential for growth, long-
term stability, and demographics.
The organization also must take into account various internal factors that relate to possible strengths
or weaknesses. Among the key internal factors are the following:
1. Human resources. These include the skills and abilities of managers and workers; special talents
(creativity, designing, problem solving); loyalty to the organization; expertise; dedication; and
experience.
2. Facilities and equipment. Capacities, location, age, and cost to maintain or replace can have a
significant impact on operations.
3. Financial resources. Cash flow, access to additional funding, existing debt burden, and cost of
capital are important considerations.
4. Customers. Loyalty, existing relationships, and understanding of wants and needs are important.
5. Products and services. These include existing products and services, and the potential for new
products and services.
6. Technology. This includes existing technology, the ability to integrate new technology, and the
probable impact of technology on current and future operations.
7. Suppliers. Supplier relationships, dependability of suppliers, quality, flexibility, and service are
typical considerations.
8. Other. Other factors include patents, labor relations, company or product image, distribution
channels, relationships with distributors, maintenance of facilities and equipment, access to
resources, and access to markets.
After assessing internal and external factors and an organization’s distinctive competence, a strategy
or strategies must be formulated that will give the organization the best chance of success.
Operations strategy is the approach, consistent with the organization strategy, that is used to guide
the operations function. It is narrower in scope, dealing primarily with the operations aspect of the
organization. Operations strategy relates to products, processes, methods, operating resources,
quality, costs, lead times, and scheduling.
In order for operations strategy to be truly effective, it is important to link it to organization strategy;
that is, the two should not be formulated independently. Rather, formulation of organization strategy
should take into account the realities of operations’ strengths and weaknesses, capitalizing on
strengths and dealing with weaknesses. Similarly, operations strategy must be consistent with the
overall strategy of the organization, and with the other functional units of the organization.
Companies often do not understand the differences between operational efficiency and strategy.
Operational efficiency is performing tasks well, even better than competitors, whereas strategy is a
plan for competing in the marketplace. The role of operations strategy is to ensure that all tasks
performed are the right tasks.
A business strategy is developed after taking many factors into account and making some strategic
decisions, such as: what business the company is in (mission), analyzing and understanding the
market (environmental scanning), and identifying the company’s strengths (core competencies).
Operations strategy is a plan for the design and management of the operations function. Operations
strategy is developed after the business strategy. It focuses on the specific capabilities that give the
company a competitive edge, called competitive priorities.
Four important operations questions: Will you compete on cost? On quality? On time? On
flexibility? Will you compete on all of these, on only some, or by making trade-offs?
Competing on Cost?
Competing on Quality?
Competing on Time?
Time, or speed, is one of the most important competitive priorities; the first company that can
deliver often wins the race. On-time delivery means delivering the product exactly when it is needed,
every time.
Competing on Flexibility?
The company’s environment changes rapidly, so the company must accommodate change by being
flexible. Product flexibility is the ability to easily switch production from one item to another and to
easily customize a product or service to meet the specific requirements of a customer. Volume
flexibility is the ability to ramp production up and down to match market demand.
Decisions must emphasize the priorities that support the business strategy, which often requires
trade-offs, and they must focus on order qualifiers and order winners. Which priorities are order
qualifiers? For example, a company must have excellent quality because everyone expects it. Which
priorities are order winners? For example, Dell competes on all four priorities, Southwest Airlines
competes on cost, McDonald’s competes on consistency, FedEx competes on speed, and custom
tailors compete on flexibility.
Technology should support the competitive priorities. It has three applications: product technology,
process technology, and information technology.
On the positive side, technology can improve processes, maintain up-to-date standards, and enable a
company to obtain a competitive advantage.
Because adopting technology can require changes to strategic plans and to the operations strategy, it
is an important strategic decision.
2.6 Productivity
Productivity = Output/Input
Although productivity is important for all business organizations, it is particularly important for
organizations that use a strategy of low cost, because the higher the productivity, the lower the cost of
the output. A productivity ratio can be computed for a single operation, a department, an
organization, or an entire country. In business organizations, productivity ratios are used for planning
workforce requirements, scheduling equipment, financial analysis, and other important tasks.
Productivity has important implications for business organizations and for entire nations.
For nonprofit organizations, higher productivity means lower costs; for profit-based organizations,
productivity is an important factor in determining how competitive a company is. For a nation, the
rate of productivity growth is of great importance. Productivity growth is the increase in productivity
from one period to the next relative to the productivity in the preceding period. Thus,
Productivity growth = (Current period productivity − Previous period productivity) / Previous period productivity × 100%
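The growth calculation can be checked with a small sketch such as the following; the function name
and the sample productivity values are illustrative assumptions rather than figures from the module.

# Minimal sketch of the productivity growth formula above.
# Function name and sample values are illustrative assumptions.
def productivity_growth(previous: float, current: float) -> float:
    """Percentage change in productivity from the previous period to the current one."""
    if previous == 0:
        raise ValueError("previous productivity must be non-zero")
    return (current - previous) / previous * 100

# e.g., productivity rises from 1.25 to 1.31 units per labor-hour
print(round(productivity_growth(1.25, 1.31), 1))  # 4.8 (percent)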
Measuring Productivity
Productivity measures can be based on a single input (partial productivity), on more than one input
(multifactor productivity), or on all inputs (total productivity).
The choice of productivity measure depends primarily on the purpose of the measurement. If the
purpose is to track improvements in labor productivity, then labor becomes the obvious input
measure.
Productivity measures:
Total productivity measure = Output produced / All inputs used
Multifactor productivity measure = Output / (Labor + Machines), or Output / (Labor + Materials), or
Output / (Labor + Capital + Energy)
When we compute productivity for all inputs, such as labor, machines, and capital, we are measuring
total productivity. Total productivity describes the productivity of an entire organization.
For example, let’s say that the weekly dollar value of a company’s output, such as finished goods
and work in progress is $10,200 and that the value of its inputs such as labor, materials, and capital is
$8,600. The company’s total productivity would be computed as follows:
Total productivity = Output / Input = $10,200 / $8,600 = 1.186
Often it is much more useful to measure the total productivity of one input variable at a time in order
to identify how efficiently each is being used. When we compute productivity as the ratio of output
relative to a single input, we obtain a measure of partial productivity also called single- factor
productivity. Following are two examples of the calculation of partial productivity:
Labor productivity = 22 tables / (2 workers × 8 hours) = 22 tables / 16 labor-hours = 1.375 tables per labor-hour
Sometimes we need to compute productivity as the ratio of output relative to a group of inputs, such
as labor and materials. This is a measure of multifactor Productivity. For example, let’s say that
output is worth $382 and labor and materials costs are $168 and $98, respectively. A multifactor
productivity measure of our use of labor and materials would be
Multifactor productivity = Output / (Labor + Materials) = $382 / ($168 + $98) = 1.436
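The three measures worked through above can be verified with a few lines of Python. This is a sketch
only: the variable names are illustrative, while the figures are the ones given in the examples.

# Total productivity: weekly output value / value of all inputs
total_productivity = 10_200 / 8_600
print(round(total_productivity, 3))        # 1.186

# Partial (labor) productivity: 22 tables made by 2 workers in 8 hours
labor_hours = 2 * 8
print(22 / labor_hours)                    # 1.375 tables per labor-hour

# Multifactor productivity: output value / (labor cost + materials cost)
print(round(382 / (168 + 98), 3))          # 1.436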
Interpreting productivity measures
Productivity measures must be compared to something, for example another year or a different
company. Raw productivity calculations do not tell the complete story unless there are no major
structural differences. A productivity measure provides information on how the firm is doing relative
to what is critical to the firm.
To interpret the meaning of a productivity measure, it must be compared with a similar productivity
measure. For example, if one worker at a pizza shop produces 17 pizzas in 2 hours, the productivity
of that worker is 8.5 pizzas per hour. This number by itself does not tell us very much. However, if
we compare it to the productivity of two other workers, one who produces 7.2 pizzas per hour and
another 6.8 pizzas per hour, it is much more meaningful. We can see that the first worker is much
more productive than the other two workers. But how do we know whether the productivity of all
three workers is reasonable? What we need is a standard. Later in this module we will discuss ways to set
standards and how those standards can help in evaluating the performance of our workers.
When evaluating productivity and setting standards for performance, we also need to consider our
strategy for competing in the marketplace—namely, our competitive priorities. A company that
competes based on speed would probably measure productivity in units produced over time.
However, a company that competes based on cost might measure productivity in terms of costs of
inputs such as labor, materials, and overhead. The important thing is that our productivity measure
provides information on how we are doing relative to the competitive priority that is most important
to us.
b. A machine produced 70 pieces in two hours, but two pieces were unusable.
Machine productivity = (70 − 2) usable pieces / 2 hours = 68 pieces / 2 hours = 34 pieces per hour
Calculations of multifactor productivity measure inputs and outputs using a common unit of
measurement, such as cost. For instance, the measure might use cost of inputs and units of the output:
Multifactor productivity = Quantity of production / (Labor cost + Materials cost + Overhead cost)
Note: The unit of measure must be the same for all factors in the denominator.
Example 3: Determine the multifactor productivity for the combined input of labor and machine time
using the following data:
Output: 7,040 units
Input
Labor: $1,000
Materials: $520
Overhead: $2,000
Multifactor productivity = 7,040 units / ($1,000 + $520 + $2,000) = 7,040 units / $3,520 = 2.00 units per dollar of input
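As a quick check on the two calculations above, a short sketch such as the following reproduces the
34 pieces per hour and the 2.00 units per dollar of input. The variable names are illustrative; the data
come from the text.

# b. Machine productivity: 70 pieces in 2 hours, 2 of them unusable
usable_pieces = 70 - 2
print(usable_pieces / 2)                   # 34.0 pieces per hour

# Example 3: multifactor productivity in units of output per dollar of input
output_units = 7_040
input_cost = 1_000 + 520 + 2_000           # labor + materials + overhead, in dollars
print(output_units / input_cost)           # 2.0 units per dollar of input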
Measuring service sector productivity is a unique challenge. Traditional measures focus on tangible
outcomes. Service industries primarily produce intangible outcomes, and measuring intangibles is
challenging.
Self-Check
8. A health club has two employees who work on lead generation. Each employee works 40
hours a week and is paid $20 an hour. Each employee identifies an average of 400 possible
leads a week from a list of 8,000 names. Approximately 10 percent of the leads become
members and pay a one-time fee of $100. Material costs are $130 per week, and overhead costs
are $1,000 per week. Calculate the multifactor productivity for this operation in fees generated
per dollar of input.
Solution
Weekly fees generated (output) = 2 employees × 400 leads × 0.10 × $100 = $8,000
Weekly inputs = labor (2 × 40 hours × $20 = $1,600) + materials ($130) + overhead ($1,000) = $2,730
Multifactor productivity = $8,000 / $2,730 = 2.93 in fees generated per dollar of input
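A small sketch such as the one below reproduces this solution. The variable names are illustrative
assumptions; the data are taken from the problem statement.

# Self-check problem 8: multifactor productivity in fees generated per dollar of input
employees = 2
hours_per_week = 40
wage = 20                     # dollars per hour
leads_per_employee = 400
conversion_rate = 0.10
fee = 100                     # one-time membership fee, dollars
materials = 130               # dollars per week
overhead = 1_000              # dollars per week

weekly_fees = employees * leads_per_employee * conversion_rate * fee       # $8,000
weekly_inputs = employees * hours_per_week * wage + materials + overhead   # $2,730
print(round(weekly_fees / weekly_inputs, 2))                               # 2.93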
Chapter 3
3.1 Introduction
The essence of any organization is the product or service it offers. There is an obvious link between
the design of those products or services and the success of the organization. Organizations that have
well designed products or services are more likely to realize their goals than those with poorly
designed products or services. Hence, organizations have a vital stake in achieving good product and
service design.
Product and service design plays a strategic role in the degree to which an organization is able to
achieve its goals. It is a major factor in customer satisfaction, product and service quality, and
production costs. The customer connection is obvious: the main concern of the customer is the
organization’s product or services, which become the ultimate basis for judging the organization. The
quality connection is twofold. Quality is obviously affected not only by design but also, during
production, by the degree to which production conforms to the intent of design. A key factor is
manufacturability, which refers to the ease with which design features can be achieved by production.
Similarly, design affects cost: the cost of materials specified by the design, as well as labor and
equipment costs.
Product design – the process of defining all of the company’s product characteristics. Product design
must support product manufacturability (the ease with which a product can be made). Product design
defines a product’s characteristics of:
- Dimensions
Process Selection – the development of the process necessary to produce the designed product.
Organizations become involved in product or service design for a variety of reasons. An obvious one
is to be competitive by offering new products or services. Another one is to make the business grow
and increase in profits. Furthermore, the best organizations try to develop new products or services as
an alternative to downsizing. When productivity gains result in the need for fewer workers,
developing new products or services can mean adding jobs and retaining people instead of letting
them go.
Sometimes product or service design is actually redesign. This occurs for a number of reasons such as
customer complaints, accidents or injuries, excessive warranty claims, or low demand. The desire to
achieve cost reductions in labor or materials can be a motivating factor.
The objectives of product design and service design differ somewhat, but not as much as you might
imagine. The overall objectives for both are to satisfy the customer while making a reasonable profit.
Beyond that, it is crucial for designers to take into account the capability of the organization to
produce a given product or service. In manufacturing this is referred to as design for manufacturing
(DFM). A more general term that encompasses both manufacturing and service is design for
operations.
Whatever the term, the implication is that operations people must be involved early in the design
process to ensure that the design will be compatible with the organization’s capabilities. Furthermore,
some design alternatives are more difficult to work with than others. With some, it is difficult to
achieve a high-quality output. Again, production/operations people can provide the necessary inputs
that make these things apparent before problems arise in production or operations. Likewise, it is
important to involve marketing to ensure that customer requirements will be achieved. In sum, the
key objective for designers is to design products or services that meet (or exceed) customer
expectations within cost or budget, while taking into account the capabilities of operations and the
fact that alternative designs may be more or less difficult to produce or provide.
Idea development: all products begin with an idea, whether from customers, competitors, or suppliers.
The idea selected affects product quality, product cost, customer satisfaction, and overall
manufacturability – the ease with which the product can be made.
Step 1 - Idea Development - Someone thinks of a need and a product/service design to satisfy it:
customers, marketing, engineering, competitors, benchmarking, reverse engineering
Step 2 - Product Screening - Every business needs a formal/structured evaluation process: fit with
facility and labor skills, size of market, contribution margin, break-even analysis, return on sales
Step 3 – Preliminary Design and Testing - Technical specifications are developed, prototypes built,
testing starts.
Step 4 – Final Design - Final designs based on test results, facility, equipment, material, & labor
skills defined, suppliers identified.
Ideas for new products and services should be sought from a variety of sources including market
research, customer viewpoints, the organization’s research and development (R&D) department if
one exists, competitors or relevant developments in new technology. Competitors can provide a good
source of ideas and it is important that the organization analyses any new products they introduce to
the market and make an appropriate response. Reverse Engineering is a systematic approach to
dismantling and inspecting a competitor’s product to look for aspects of design that could be
incorporated into the organization’s own product. This is especially prevalent when the product is a
complex assembly such as a car, where design choices are myriad. Benchmarking compares a product
against what is considered the best in that market segment and then makes recommendations on how
the product can be improved to meet that standard. Although a reactive strategy, benchmarking can
be useful to organizations that have lost ground to innovative competitors.
The design process begins with the motivation for design. For a new business or new product, the
motivation may be obvious: to achieve the goals of the organization. For an existing business, in
addition to the general motivation, there are more specific factors to consider, such as government
regulations, the appearance of new technologies that have products or process applications,
competitive pressures and customer needs.
Ultimately, customers are the driving force behind product or service design. Failure to satisfy customers
can result in customer complaints, returns, warranty claims, and so on. Loss of market share becomes
a potential problem if customer satisfaction is not achieved. Customers are therefore the most obvious
source of ideas for new or improved designs, and they often trigger the design process.
Some organizations have research and development departments that also generate ideas for new or
improved products or services.
Competitors are another important source of ideas. By studying a competitor’s products or services,
and how a competitor operates (e.g., pricing policies, return policies, warranties), an organization
can learn a great deal about achieving design improvements. Beyond that, some companies buy a
competitor’s newly designed product the moment it appears on the market and, using a procedure
called reverse engineering, carefully dismantle and inspect the product. This may uncover product
improvements that can be incorporated in their own products. Hence, reverse engineering is the
process of dismantling and inspecting a competitor’s product to discover product
improvements. Sometimes reverse engineering can lead to a product that is superior to the one being
examined; that is, designers conceive an improved design, which enables them to “leapfrog” the
competition by quickly introducing an improved version of a competitor’s product. This could deny
the competitors some of the rewards that normally accrue to the first to introduce a new product or
feature.
Ideas for new or improved products and services cannot be entertained in a vacuum; production
capabilities must be a basic consideration. In order to assure this, design and production must work
together; design needs to clearly understand the capabilities of production (e.g. equipment, skills, type
of materials, schedules, technologies, special abilities). The design of a product or service must take
into account its cost, its target market, and its function. Manufacturability is a key concern for
manufactured products.
Research and development (R & D) refers to organized efforts that are directed toward increasing
scientific knowledge or product (and process) innovation. The benefits of successful R & D can be
tremendous. Some research leads to patents, with the potential of licensing and royalties. However,
many discoveries are not patentable, or companies do not wish to divulge the details of their ideas by
following the patent route. Even so, the first organization to bring a new product or service to the
market generally stands to profit from it before the others can catch up. The early product may be priced
higher because a temporary monopoly exists until competitors bring their versions out. On the other hand,
the cost of R & D can be high. Kodak, for example, spends an average of $2 million a day on R & D.
3.2.1.2 Product Screening
The screening process consists of market analysis, economic analysis, and technical analysis.
a. Market analysis
Market analysis consists of evaluating the product concept with potential customers through
interviews, focus groups and other data collection methods. The physical product may be tested by
supplying a sample for customer evaluation. The market analysis should identify whether sufficient
demand for the proposed product exists and its fit with the existing marketing strategy. At a strategic
level the organization can use the product life cycle to determine the likely cost and volume
characteristics of the product. The product life cycle describes the product sales volume over time. In
the early introduction phase production costs are high and design changes may be frequent. However,
there should be little or no competition for the new product and so a premium price can be charged to
customers attracted to innovative products. The growth phase sees a rapid increase in volumes and the
possibility of competitors entering the market. At this stage it is important to establish the product in
the market as firmly as possible in order to secure future sales. Production costs should be declining
as process improvements and standardization takes place. In the mature phase competitive pressures
will increase and it is important that sales are secured through a branded product to differentiate it
from competitors and a competitive price. There should be a continued effort at design improvement
to both product and process. Some products, such as consumer durables, may stay in the mature phase
almost indefinitely, and techniques such as advertising are used to maintain interest and market share.
b. Economic Analysis
Economic analysis consists of developing estimates of production costs and comparing
them with estimates of demand. Performing the analysis requires as accurate an estimate of
demand as possible, derived from statistical forecasts of industry sales and estimates of market share in
the sector in which the product is competing. These estimates will be based on a predicted price range for
the product which is compatible with the position of the new product in the market. In order to assess
feasibility, the projected product costs in terms of such factors as materials,
equipment and personnel must be estimated. Techniques such as cost/benefit analysis, decision theory
and accounting measures such as net present value (NPV) and internal rate of return (IRR) may be
used to calculate the profitability of a product. Another tool that can be used is the cost-volume-profit
model that provides a simplified representation that can be used to estimate the profit level generated
by a product at a certain product volume.
Assuming all products made are sold, the volume required to achieve a certain profit is given by the
following formula:
X = (P + FC) / (SP − VC)
Where;
X = volume (units)
P = profit
FC = fixed costs
SP = selling price
VC = variable costs
When profit = 0 (i.e., when total revenue equals total cost), this volume is termed the break-even point
and is given by the following formula:
X = FC / (SP − VC)
The break-even point, the point of no profit and no loss, provides managers with insights into profit
planning. Break-even analysis considers two functions of the quantity Q: total cost (FC + VC × Q) and
total revenue (SP × Q).
Graphical approach: when both functions are plotted, their intersection is the break-even point.
Sensitivity analysis can be done to examine changes in all of the assumptions made.
Example: A firm estimates that the fixed cost of producing a line of footwear is $52,000 with a $9
variable cost for each pair produced. They want to know: If each pair sells for $25, how many pairs
must they sell to break-even? If they sell 4000 pairs at $25 each, how much money will they make?
Solution
Break-even quantity Q = FC / (SP − VC) = $52,000 / ($25 − $9) = 3,250 pairs
Profit on 4,000 pairs = 4,000 × ($25 − $9) − $52,000 = $64,000 − $52,000 = $12,000
Example 3: The Carey Company sold 100,000 units of its product at $20 per unit. Variable costs are
$14 per unit (manufacturing costs of $11 and selling expenses of $3). Fixed costs are incurred
uniformly throughout the year and amount to $792,000 (manufacturing costs of $500,000 and selling
expenses of $292,000). Calculate the break-even point in units and in dollars.
Solution
Break-even point in units = FC / (SP − VC) = $792,000 / ($20 − $14) = 132,000 units
Break-even point in dollars = 132,000 units × $20 = $2,640,000
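Both worked examples can be checked against the cost-volume-profit formulas with a short Python
sketch. The function names breakeven_units and profit are illustrative assumptions, not part of the
module.

# Minimal sketch of the formulas above: X = FC / (SP - VC), P = X * (SP - VC) - FC
def breakeven_units(fixed_cost: float, price: float, variable_cost: float) -> float:
    """Volume at which total revenue equals total cost."""
    return fixed_cost / (price - variable_cost)

def profit(volume: float, fixed_cost: float, price: float, variable_cost: float) -> float:
    """Profit earned at a given sales volume."""
    return volume * (price - variable_cost) - fixed_cost

# Footwear example: FC = $52,000, SP = $25, VC = $9
print(breakeven_units(52_000, 25, 9))      # 3250.0 pairs
print(profit(4_000, 52_000, 25, 9))        # 12000.0 dollars

# Carey Company: FC = $792,000, SP = $20, VC = $14
units = breakeven_units(792_000, 20, 14)
print(units, units * 20)                   # 132000.0 units, i.e. $2,640,000 at break-even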
c. Technical Analysis
Technical analysis consists of determining whether the organization has the technical capability to manufacture the product.
This covers such issues as ensuring materials are available to make the product to the specification
required, and ensuring the appropriate machinery and skills are available to work with these
materials. The technical analysis must take into account the target market and so product designers
have to consider the costs of manufacturing and distributing the product in order to ensure it can be
sold at a competitive price. Strategic analysis involves ensuring that the product provides a
competitive edge for the organization, drawing on its competitive strengths and is compatible with the
core business.
Product concepts that pass the feasibility stage enter preliminary design. The specification of the
components of the package requires a product /service structure which describes the relationship
between the components and a bill of materials or list of component quantities derived from the
product structure. The process by which the package is created must also be specified in terms of
mapping out the sequence of activities which are undertaken. This can be achieved with the aid of
such devices as process flow charts.
General performance characteristics are translated into technical specifications. Prototypes are built &
tested (maybe offered for sale on a small scale). Bugs are worked out & designs are refined.
The final design stage involves the use of a prototype to test the preliminary design until a final
design can be chosen. Computer Aided Design (CAD) and Simulation Modeling can be used to
construct a computer-based prototype of the product design.
Service design begins with the choice of service strategy, which determines the nature and focus of
the service, and the target market. This requires an assessment by top management of the potential
market and profitability of a particular service, and an assessment of the organization’s ability to
provide the service. Once decisions on the focus of the service and the target market have been made,
the customer requirements and expectations of the target market must be determined. That
information is then used to design the service delivery system (i.e. facilities, processes, and personnel
requirements used to provide the service). Examples of possible service delivery systems include mail,
telephone, electronic service (computer, network, fax), and face-to-face contact.
Two key issues in service design are the degree of variations in service requirements and the degree
of customer contacts and customer involvements in delivery system. These have an impact on the
degree to which the service can be standardized or must be customized. The lower the degree of
customer contact and service requirements variability, the more standardized the service can be.
Service design with no contact and little or no processing variability is very much like product design.
Conversely, high variability and high customer contact generally means the service must be highly
customized.
1. Products are generally tangible; services are generally intangible. Consequently, service design
often focuses more on intangible factors (e.g., peace of mind, ambiance) than does product design.
2. Services are often produced and received at the same time (e.g. haircut, a car wash). Thus, there
is less latitude in finding and correcting errors before the customer has the chance to discover
them. Consequently, training, process design, and customer relations are particularly important.
3. Service cannot be inventoried. This poses restrictions on flexibility and makes capacity design
very important.
4. Services are highly visible to consumers and must be designed with that in mind; this adds an
extra dimension to process design, one that usually is not present in product design.
5. Some services have low barriers to entry and exit. This places additional burden on service
design to be innovative and cost effective.
6. Location is often important to service design, with convenience as a major factor. Hence, design
of services and choice of location are often closely linked.
A number of methods are available that help to improve the product design process. Five aspects of
product design are as follows:
a. Product Life Cycle
Product life cycle stages:
Introduction
Growth
Maturity
Decline
Facility & process investment depends on the life cycle stage.
c. Concurrent Engineering
Concurrent engineering is when contributors to the design effort provide work throughout the design
process as a team. This differs from the traditional design process when work is undertaken separately
within functional areas such as engineering and operations. The problem with the traditional approach
is the cost and time involved in bringing the product to market. In a traditional approach time is
wasted when each stage in the design process waits for the previous stage to finish completely before
it can commence, and there may be a lack of communication between functional areas involved in the
different stages of design. This can lead to an attitude of “throwing the design over the wall” without
any consideration of problems that may be encountered by later stages. An example of this is
decisions made at the preliminary design stage that adversely affect choices at the product build stage.
This can cause the design to be repeatedly passed between departments to satisfy everyone’s needs,
increasing time and costs. By facilitating communication through the establishment of a project team
problems of this type can be reduced.
d. Robust Design
The more robust a product (or service), the less likely it is to fail due to a change in the environment
in which it is used or performed. Hence, the more designers can build robustness into the
product or service, the better it should hold up, resulting in a higher level of customer satisfaction.
Taguchi’s Approach. The approach of Japanese engineer Genichi Taguchi is based on robust design.
His premise is that it is often easier to design a product that is insensitive to environmental factors,
either in manufacturing or in use, than to control the environmental factors. The central feature of the
Taguchi approach, and the feature used most often by U.S. companies, is parameter design. This
involves determining the specification setting for both the product and the process that will result in
robust design in terms of manufacturing variations or product deterioration, and conditions during
use. The Taguchi approach modifies the conventional statistical method of experimental design. It
involves determining which factors are controllable and which are not controllable (or are too
expensive to control), and then determining the optimal levels of controllable factors relative to
product performance. The value of this approach is its ability to achieve major advances in product
or process design fairly quickly, using relatively small numbers of experiments.
Critics charge that Taguchi’s methods are inefficient and incorrect, and often lead to non-optimal
solutions. Nonetheless, his methods are widely used and have been credited with helping to achieve
significant improvements in product and process quality.
e. Computer-Aided Design (CAD)
In construction, rather than producing drawings by hand, computer-aided design (CAD) allows the
designer to work on-screen with the details stored in an electronic database. This not only speeds the
production of initial drawings but also greatly facilitates changes to the drawings, which can be a
very lengthy process (which deters changes) when done manually. Once the basic geometric
information has been stored, it is possible for the designer to construct views of what the final
building will look like and even to allow a virtual walk-through. This helps customers to envision
the final product, reducing the changes during construction, since altering a computer model is far
easier and cheaper. Such CAD systems can also improve subcomponent design, since interfaces can
be designed and problems resolved before construction starts. Ensuring that services such as
electricity and heating can be installed without major alteration to structural elements is a benefit.
Having drawn the components on a CAD system and checked their fit with other parts, the geometric
data can be processed through a computer aided manufacturing (CAM) system to generate machine
instructions to make the part. Alternatively, CAD data can be used to produce rapid prototypes,
where the part is made in a resin material. CAD enables a number of tests to take place, including
simulations of loads and stress details on products.
f. Modular Design
Modular design is another form of standardization. Modules represent groupings of component parts
into subassemblies, usually to the point where the individual parts lose their separate identity. One
familiar example of modular design is easily removed control panels. Computers, too, have
modular parts that can be replaced if they become defective.
One advantage of modular design over non-modular design is that failures are often easier to
diagnose and remedy because there are fewer pieces to investigate. The manufacture and assembly
of modules generally involve simplifications: fewer parts are involved, so purchasing and inventory
control become more routine, fabrication and assembly operations become more standardized, and
training costs are often less.
The main disadvantage of modular design stems from the decrease in variety: the number of possible
configurations of modules is much less than the number of possible configurations based on individual
components. Another disadvantage that is sometimes encountered is the inability to disassemble a
module in order to replace a faulty part: the entire module must be scrapped, usually a more costly
procedure.
Process selection refers to the way an organization chooses to produce its goods or provide its
services. Essentially it involves the choice of technology and related issues, and it has major implications
for capacity planning, layout of facilities, equipment, and design of work systems.
Make or Buy?
The very first step in process planning is to consider whether to make or buy some or all of a product
or to subcontract some or all of a service. A manufacturer might decide to purchase certain parts
rather than make them; sometimes all parts are purchased, with the manufacturer simply performing
assembly operations. Many firms contract out janitorial services, and some contract out repair services.
If a decision is made to buy or contract, this lessens or eliminates the need for process selection. In
make-or-buy decisions, a number of factors are usually considered:
1. Available capacity. If an organization has available capacity, it often makes sense to produce all
items or perform a service in house. The additional costs would be relatively small compared with
those required to buy items or subcontract services.
2. Expertise. If a firm lacks the expertise to do a job satisfactorily, buying might be a reasonable
alternative.
3. Quality considerations. Firms that specialize can usually offer higher quality than an organization
can obtain itself. Conversely special quality requirements or the ability to closely monitor quality
may cause a firm to perform the work itself.
4. The nature of demand. When demand for an item is high and steady, the organization is better off
doing the work itself. However, wide fluctuations in demand or small orders are better handled by
others who are able to combine orders from multiple sources, which results in higher volume and
tends to offset individual buyer fluctuations.
5. Cost. Any cost saving achieved from buying or making must be weighed against the preceding
factors. Cost saving might come from the item itself or from transportation cost savings.
The design of processes is different in all organizations and should be related to the volume and
variety of the demand for the product in the market. In order to assist in selecting the appropriate
process, process designs can be categorized under five process types: project, jobbing, batch, mass,
and continuous.
3.5.1. Project
Processes that produce products of high variety and low volume are termed projects. Project
processes are used to make a one-off product to a customer specification. Normally transforming
resources such as staff and equipment that make the product must move or be moved to the location
of the product. Other characteristics of projects are that they may require the coordination of many
individuals and activities, demand a problem-solving approach to ensure they are completed on time
and have a comparatively long duration of manufacture. The timescale of the completion of the
project is an important performance measure. Because each project is unique it is likely that
transforming resources will comprise general purpose equipment which can be used on a number of
projects. Examples of the use of a project process include building construction, interior design and
custom-built furniture.
3.5.2 Jobbing
Jobbing processes are used to make a one-off or low volume product to a customer specification. A
feature of a jobbing process is that the product moves to the location of transforming resources such
as equipment. Thus resources such as staff and equipment can be shared between many products.
Other characteristics of jobbing processes are the use of skilled labour in order to cope with the need
for customization (i.e. variety) and the use of general purpose equipment which is shared between the
products. There tends to be low utilization of equipment in jobbing processes due to the need to
undertake frequent setting up of the machinery when moving from processing one product to
another. Examples of the use of a jobbing process include bespoke tailors and precision engineers.
3.5.3 Batch
Processes that produce products of medium variety and medium volume are termed batch processes,
which denotes that the products are grouped into batches as they move through the production
process. In a batch process the
product moves to the location of transforming resources such as equipment and so resources are
shared between the batches. Instead of setting up machinery between each product, as in a jobbing
(Job-shop) process, setups occur between batches, leading to a higher utilization of equipment.
Because of the relatively high volumes involved in batch it can be cost-effective to use specialized
labour and equipment dedicated to certain product batches. A feature of batch processes is that,
because it is difficult to predict when a batch of work will arrive at a machine, a lack of coordination
can lead to many products waiting for that machine at any one time. These queues of work may
dramatically increase the time the product takes to progress through the process. Examples of the use
of a batch process include book printing, university classes and clothing manufacture.
3.5.4 Mass (Line)
Processes that produce products of high volume and low variety are termed line or mass processes.
Although there may be variants within the product design the production process will essentially be
the same for all the products. Because of the high volumes of product it is cost effective to use
specialized labour and equipment. A feature of line processes is that the movement of the product
may be automated using a conveyor system and the production process broken down into a number of
small, simple tasks. In order to ensure a smooth flow of product the process times per unit must be
equalized at each stage of production using a technique called line balancing. Because of the low
product variety, setting up of equipment is minimized and utilization of equipment is high.
Examples of the use of a mass process include cars, consumer durables such as televisions and food
items.
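The line-balancing idea mentioned above can be sketched with a small example. The code below computes the takt time from an assumed demand rate and then groups assumed task times into stations so that no station exceeds the takt time; a real balancing exercise would also respect precedence constraints between tasks.

```python
# Minimal line-balancing sketch (all task times and demand figures are assumed).
demand_per_shift = 240                           # assumed units required per shift
shift_minutes = 480                              # assumed productive minutes per shift
takt_time = shift_minutes / demand_per_shift     # minutes available per unit produced

tasks = {"A": 0.8, "B": 1.1, "C": 0.6, "D": 1.3, "E": 0.9}   # task times in minutes (assumed)

stations, current, load = [], [], 0.0
for name, time in tasks.items():
    if load + time > takt_time:                  # this station would exceed the takt time
        stations.append(current)
        current, load = [], 0.0
    current.append(name)
    load += time
stations.append(current)

print(f"Takt time: {takt_time:.2f} minutes per unit")
print("Station assignments:", stations)
```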
3.5.5 Continuous
Processes that operate continually to produce a very high volume of a standard product are termed
continuous. The products produced by a continuous operation are usually a continuous flow such as
oil and gas. Continuous processes use a large amount of equipment specialized and dedicated to
producing a single product (such as an oil refinery for example). To make this large investment in
dedicated equipment cost effective continuous processes are often in constant operation, 24 hours a
day. The role of labour in the operation of the processes is mainly one of monitoring and control of
the process equipment with little contact with the product itself. Examples of a continuous process
include water treatment plants, electricity production and steel making.
a. Intermittent processes:
Processes used to produce a variety of products with different processing requirements in lower volumes (such as a healthcare facility).
b. Repetitive processes:
Processes used to produce one or a few standardized products in high volume (such as a cafeteria or car wash).
c. Product-Process Grid
A key concept in process selection is the need to match product requirements with process
capabilities. The difference between success and failure in production can sometimes be traced to
choice of process. Products range from highly customized to highly standardized. Generally, volume
requirements tend to increase as standardization increases; customized products tend to be low
volume, and standardized products tend to be high volume. These factors should be considered in
determining which process to use.
Certain processes are amenable to low-volume, customized products, while others are more suited to
moderate-variety products, and still others to higher volume, highly standardized products. By
matching product requirements with process choices, producers can achieve the greatest degree of
efficiency in their operations. The table below illustrates this relationship between product variety, volume, and process choice.
a. Automation
Automation is the substitution of machinery for human labor. The machinery includes sensing
and control devices that enable it to operate automatically. A key question in process planning
is whether to automate. If the decision is made to automate, the next question is how much.
Automation can range from factories that are completely automated to a single automated
operation.
Automation offers a number of advantages over human labor. It has low variability; it is difficult for humans to perform a task in exactly the same way, in the same amount of time, and on a repetitive basis. In a production setting, variability is detrimental to quality and to meeting schedules. Moreover, machines do not get bored or distracted, nor do they go out on strike, ask for higher wages, or file labor grievances.
Although these are important benefits, a flexible manufacturing system (FMS) also has certain limitations. One is that this type of system can handle a relatively narrow range of part variety, so it must be used for a family of similar parts, which all require similar machining. Also, an FMS requires longer planning and development times than more conventional processing equipment because of its increased complexity and cost. Furthermore, companies sometimes prefer a gradual approach to automation, and an FMS represents a sizable chunk of technology.
d. Computer-Integrated Manufacturing
Computer-Integrated Manufacturing (CIM) is a system for linking a broad range of
manufacturing activities through an integrating computer system, including engineering
design, flexible manufacturing systems, and production planning and control.
The overall goal of using CIM is to link various parts of an organization to achieve rapid
response to customer orders and/or product changes, to allow rapid production, and to reduce
indirect labor costs.
Self-Check
Chapter 4
E-Commerce and Supply chain management
4.1 Introduction
Supply Chain Management is the management of the interconnection of organizations that relate to
each other through upstream and downstream linkages between the processes that produce value to
the ultimate consumer in the form of products and services (Slack et al., 2010). Activities in the
supply chain include sourcing materials and components, manufacturing products, storing products in
warehousing facilities and distributing products to customers. The management of the supply chain
involves the coordination of the products through this process which will include the sharing of
information between interested parties such as suppliers, distributors and customers.
A supply chain is the network of all the activities involved in delivering a finished product or service to the customer: sourcing of raw materials, assembly, warehousing, order entry, distribution, and delivery. Supply chain execution means managing and coordinating the movement of materials, information and funds across the supply chain; the flow is bi-directional. SCM applications provide real-time analytical systems that manage the flow of products and information throughout the supply chain network. Interconnected or interlinked networks, channels and node businesses are involved in the provision of products and services required by end customers in a supply chain.
Supply Chain Management is the vital business function that coordinates all of the network links. It coordinates the movement of goods through the supply chain from suppliers to manufacturers to distributors, and it promotes information sharing along the chain, such as forecasts, sales data and promotions. It is the management of the flow of goods, including the movement and storage of raw materials, work-in-process inventory, and finished goods from point of origin to point of consumption.
SCM draws heavily from the areas of operations management, logistics, procurement, and information technology, and strives for an integrated approach. Key decision areas in supply chain management include the following:
Distribution network configuration: the number, location, and network missions of suppliers,
production facilities, distribution centers, warehouses, cross-docks, and customers.
Distribution strategy: questions of operating control (e.g., centralized, decentralized, or
shared); delivery scheme (e.g., direct shipment, pool point shipping, cross docking, direct
store delivery, or closed loop shipping); mode of transportation (e.g., motor carrier, including
truckload, less than truckload (LTL), parcel, railroad, intermodal transport, including trailer
on flatcar (TOFC) and container on flatcar (COFC), ocean freight, airfreight); replenishment
strategy (e.g., pull, push, or hybrid); and transportation control (e.g., owner operated, private
carrier, common carrier, contract carrier, or third-party logistics (3PL)).
Trade-offs in logistical activities: The above activities must be coordinated in order to achieve
the lowest total logistics cost. Trade-offs may increase the total cost if only one of the
activities is optimized. For example, full truckload (FTL) rates are more economical on a cost-
per-pallet basis than are LTL shipments. If, however, a full truckload of a product is ordered
to reduce transportation costs, there will be an increase in inventory holding costs, which may
increase total logistics costs. The planning of logistical activities therefore takes a systems
approach. These trade-offs are key to developing the most efficient and effective logistics and SCM strategy (a simple cost comparison is sketched after this list).
Information: The integration of processes through the supply chain in order to share valuable
information, including demand signals, forecasts, inventory, transportation, and potential
collaboration.
Inventory management: Management of the quantity and location of inventory, including raw
materials, work in process (WIP), and finished goods.
Cash flow: Arranging the payment terms and methodologies for exchanging funds across
entities within the supply chain.
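To make the truckload trade-off mentioned above concrete, the sketch below compares total annual logistics cost (transport plus inventory holding) for full-truckload and less-than-truckload ordering. All rates, order sizes and holding costs are assumed for illustration; with a high-value product, the cheaper per-pallet FTL rate can be outweighed by the extra holding cost of larger orders.

```python
# Hypothetical FTL vs LTL total logistics cost comparison (illustrative figures only).
annual_demand_pallets = 520
holding_cost_per_pallet_year = 1_200.0            # assumed (a high-value product)
ftl_order_size, ftl_rate_per_pallet = 26, 20.0    # assumed full-truckload terms
ltl_order_size, ltl_rate_per_pallet = 4, 45.0     # assumed less-than-truckload terms

def total_logistics_cost(order_size, rate_per_pallet):
    transport = annual_demand_pallets * rate_per_pallet
    holding = (order_size / 2) * holding_cost_per_pallet_year   # average cycle stock
    return transport + holding

print("FTL total cost:", total_logistics_cost(ftl_order_size, ftl_rate_per_pallet))
print("LTL total cost:", total_logistics_cost(ltl_order_size, ltl_rate_per_pallet))
```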
a. Plan
Every company needs a strategy for how to manage its resources in order to meet customer demand for its products and services. A large part of supply chain management is developing a set of metrics to monitor the supply chain so that it delivers high quality and value to customers.
b. Source
To create their products, companies need to be very careful when choosing the suppliers that will deliver the goods and services they need. Managers need to develop a set of pricing and delivery systems in the supply chain. They can also put in place processes for managing their goods and inventory, for example, receiving shipments.
c. Make
In manufacturing, the supply chain manager should always schedule the activities needed for production, packaging, testing and preparation for delivery. This is the most metric-intensive portion of the supply chain, where production output and quality levels are measured. Internal functions include processing, purchasing, planning, quality and shipping.
d. Deliver
This part is mainly referred to as logistics in supply chain management. Here companies coordinate the receipt of orders, pick carriers to get products to customers and develop a network of warehouses. Logistics managers are responsible for traffic management (arranging the method of shipment for both incoming and outgoing products or materials) and distribution management (the movement of material from the manufacturer to the customer).
e. Return
In many companies this is usually where the problem lies in the supply chain. Planners should create a flexible and responsive network for receiving flawed and excess products sent back to them by customers.
Supply chain management is a cross-functional approach that includes managing the movement of
raw materials into an organization, certain aspects of the internal processing of materials into finished
goods, and the movement of finished goods out of the organization and toward the end consumer. As
organizations strive to focus on core competencies and become more flexible, they reduce their
ownership of raw materials sources and distribution channels. These functions are increasingly being
outsourced to other firms that can perform the activities better or more cost effectively. The effect is
to increase the number of organizations involved in satisfying customer demand, while reducing
managerial control of daily logistics operations. Less control and more supply chain partners led to
the creation of the concept of supply chain management. The purpose of supply chain management is
to improve trust and collaboration among supply chain partners, thus improving inventory visibility
and the velocity of inventory movement.
The bullwhip effect occurs when demand order variability in the supply chain is amplified as it moves up the supply chain. Distorted information from one end of a supply chain to the other
can lead to tremendous inefficiencies. Companies can effectively counteract the bullwhip effect by
thoroughly understanding its underlying causes.
Bullwhip effect: inaccurate or distorted demand information from one end of a supply chain to the other can lead to tremendous inefficiencies, such as excessive inventory investment, poor customer service, lost revenues, misguided capacity plans, ineffective transportation, and missed production schedules. How do exaggerated order swings occur? What can companies do to mitigate them?
There are four major causes of the bullwhip effect:
1. Demand forecast updating
2. Order batching
3. Price fluctuation
4. Rationing and shortage gaming
Demand Forecast Updating
Every company in a supply chain usually does product forecasting for its production scheduling,
capacity planning, inventory control, and material requirements planning. Forecasting is often based
on the order history from the company's immediate customers.
For example, if you are a manager who has to determine how much to order from a supplier, you use
a simple method to do demand forecasting, such as exponential smoothing. With exponential
smoothing, future demands are continuously updated as the new daily demand data become available.
The order you send to the supplier reflects the amount you need to replenish the stocks to meet the
requirements of future demands, as well as the necessary safety stocks. The future demands and the
associated safety stocks are updated using the smoothing technique. With long lead times, it is not
uncommon to have weeks of safety stocks. Because the amount of safety stock contributes to the
bullwhip effect, it is intuitive that, when the lead times between the resupply of the items along the
supply chain are longer, the fluctuation is even more significant.
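A minimal sketch of this ordering logic is shown below: exponential smoothing updates the forecast each period, and the order placed restores the inventory position to forecast demand over the lead time plus safety stock. The smoothing constant, lead time, safety factor and demand figures are all assumed; the point of the sketch is that the resulting orders swing more widely than the demand that drives them, which is the bullwhip effect.

```python
import math

# Assumed parameters: smoothing constant, replenishment lead time (periods),
# safety factor z, and the standard deviation of daily demand.
alpha, lead_time, z, demand_std = 0.3, 4, 1.65, 8.0
forecast, inventory_position = 100.0, 500.0            # assumed starting state
safety_stock = z * demand_std * math.sqrt(lead_time)   # safety stock over the lead time

for demand in [100, 100, 120, 90, 130]:                # assumed daily demand
    forecast = alpha * demand + (1 - alpha) * forecast # exponential smoothing update
    target = forecast * lead_time + safety_stock       # order-up-to level
    inventory_position -= demand                       # stock consumed this period
    order = max(0.0, target - inventory_position)      # replenishment order placed
    inventory_position += order                        # the order is now on its way
    print(f"demand={demand:4d}  forecast={forecast:7.2f}  order={order:7.2f}")
```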
Order Batching
In a supply chain, each company places orders with an upstream organization using some inventory
monitoring or control. Demands come in, depleting inventory, but the company may not immediately
place an order with its supplier. It often batches or accumulates demands before issuing an order.
There are two forms of order batching: periodic ordering and push ordering.
In push ordering, a company experiences regular surges in demand. The company has orders
"pushed" on it from customers periodically because salespeople are regularly measured, sometimes
quarterly or annually, which causes end-of-quarter or end-of-year order surges. Salespersons who
need to fill sales quotas may "borrow" ahead and sign orders prematurely.As a result, the surge in
demand is even more pronounced, and the variability from the bullwhip effect is at its highest.
MRP systems are often run monthly, resulting in monthly ordering with suppliers. If the majority of
companies that do MRP or distribution requirement planning (DRP) generate purchase orders at the
beginning of the month (or end of the month), order cycles overlap. Periodic execution of MRPs or
periodic ordering amplifies variability and contributes to the bullwhip effect.
Price Fluctuation
When a product's price is low (through direct discount or promotional schemes), a customer buys in
bigger quantities than needed. When the product's price returns to normal, the customer stops buying
until it has depleted its inventory. As a result, the customer's buying pattern does not reflect its
consumption pattern, and the variation of the buying quantities is much bigger than the variation of
the consumption rate - the bullwhip effect.
Rationing and Shortage Gaming
When product demand exceeds supply, a manufacturer often rations its product to customers. In one
scheme, the manufacturer allocates the amount in proportion to the amount ordered. For example, if
the total supply is only 50 percent of the total demand, all customers receive 50 percent of what they
order. Knowing that the manufacturer will ration when the product is in short supply, customers
exaggerate their real needs when they order. Later, when demand cools, orders will suddenly disappear and cancellations pour in. This seeming overreaction by customers anticipating shortages results when organizations and individuals make sound, rational economic decisions and "game" the potential rationing.
During the Christmas shopping seasons in 1992 and 1993, Motorola could not meet consumer
demand for handsets and cellular phones, forcing many distributors to turn away business.
Distributors like Air Touch Communications and the Baby Bells, anticipating the possibility of
shortages and acting defensively, drastically over ordered toward the end of 1994. Because of such
overzealous ordering by retail distributors, Motorola reported record fourth-quarter earnings in
January 1995. Once Wall Street realized that the dealers were swamped with inventory and new
orders for phones were not as healthy as before, Motorola's stock tumbled almost 10 percent.
Understanding the causes of the bullwhip effect can help managers find strategies to mitigate it.
Indeed, many companies have begun to implement innovative programs that partially address the
effect. Next we examine how companies tackle each of the four causes. We categorize the various
initiatives and other possible remedies based on the underlying coordination mechanism, namely,
information sharing, channel alignment, and operational efficiency. With information sharing,
demand information at a downstream site is transmitted upstream in a timely fashion. Channel
alignment is the coordination of pricing, transportation, inventory planning, and ownership between
the upstream and downstream sites in a supply chain. Operational efficiency refers to activities that
improve performance, such as reduced costs and lead-time. The following are ways to control the
bullwhip effect.
Avoid Multiple Demand Forecast Updates
Ordinarily, every member of a supply chain conducts some sort of forecasting in connection with its planning (e.g., the manufacturer does the production planning, the wholesaler the logistics planning, and so on). Bullwhip effects are created when supply chain members process the demand input from their immediate downstream member in producing their own forecasts.
Demand input from the immediate downstream member, of course, results from that member's
forecasting, with input from its own downstream member.
One remedy to the repetitive processing of consumption data in a supply chain is to make demand data at a downstream site available to the upstream site. Hence, both sites can update their forecasts with the same raw data.
Break Order Batches
Since order batching contributes to the bullwhip effect, companies need to devise strategies that lead to smaller batches or more frequent resupply. One reason that order batches are large or order frequencies low is the relatively high cost of placing an order and replenishing it. EDI can reduce the cost of the paperwork in generating an order. Using EDI, companies such as Nabisco perform paperless, computer-assisted ordering (CAO), and, consequently, customers order more frequently.
Stabilize Prices
The simplest way to control the bullwhip effect caused by forward buying and diversions is to reduce both the frequency and the level of wholesale price discounting. The manufacturer can reduce the incentives for retail forward buying by establishing a uniform wholesale pricing policy.
Eliminate Gaming in Shortage Situations
When a supplier faces a shortage, instead of allocating products based on orders, it can allocate in proportion to past sales records. Customers then have no incentive to exaggerate their orders. The sharing of capacity and inventory information helps to alleviate customers' anxiety and, consequently, lessen their need to engage in gaming. But sharing capacity information is insufficient when there is a genuine shortage. Some manufacturers work with customers to place orders well in advance of the sales season. Thus they can adjust production capacity or scheduling with better knowledge of product demand.
Supply chain management (SCM) is the integration of all activities associated with the flow and
transformation of goods from the raw materials stage through to the end-user, as well as the
associated information flows. Materials and information flow both up and down the supply chain. A
Web-based supply chain management system (WSCMS) is defined as an Internet-enabled SCM
system that integrates networks of suppliers, factories, warehouses, distribution centers and retailers,
through which the whole chain of logistic processes is managed so that faster and more flexible
coordination can be achieved between a company and its customers and suppliers along the supply
chain.
The role of information technology (IT) in SCM has changed dramatically in recent years,
transforming business operations from electronic data interchange (EDI) systems and enterprise
resource planning (ERP) systems to Internet/Intranet for supporting SCM. Due to the popularity and
functionality of the Internet/Intranet, many researchers have realized that benefits can be derived from
applying Internet technology to communications and systems management in supply chains.
Although the Internet/Intranet in SCM can add value in a number of ways such as saving costs,
improving quality, delivery and support, and offering greater competitive advantages, implementing a
WSCMS is much more complex than implementing an EDI or ERP system.
With the rapid growth of IT, many companies are taking advantage of Internet technology to better
manage their supply chains. The Web-based SCM system has provided an alternative means of
managing an ever-increasing number of suppliers and customers and creating the necessary links
among data, information and effective communication. White (1996) pointed out that the combined
use of the Internet with SCM allows customers and suppliers to share mission critical information on
a timely basis to enable effective, real-time decision-making. Web-based SCM applications are
mission-critical business applications that are used by companies to run their businesses, such as
taking customer orders and order management; planning the distribution of inventory and forecasting
demand; accounting; and managing the flow of materials.
Types of E-Commerce
1. B2C (Business to Consumer) – This is the most common form of e-commerce. These systems allow businesses to sell goods and services to consumers via the Internet. Groups of these online shop-fronts are called e-malls and are essentially online shopping centers.
2. Consumer-to-consumer (C2C). In this case an individual sells products (or services) to other
individuals.
3. B2B (Business to Business) –
These systems are designed for businesses to collaborate or sell goods and services to each
other.
4. B2B2C (Business to Business to Consumer)
These systems are merely combinations of B2B and B2C systems designed to manage the whole supply chain from the consumer through to raw material providers. They are designed to process orders from consumers and then use this information to place orders with wholesalers and ultimately manufacturers.
5. G2B or G2C (Government to Business or Government to Consumer)
These systems involve the government providing services to business and consumers. These services may include, for example, the online provision of information.
6. Mobile commerce (m-commerce). E-commerce that is done in a wireless environment, such as using cell phones to access the Internet.
7. Intra-business (intra-organizational) commerce. In this case an organization uses EC internally to improve its operations. A special case of this is known as B2E (business to its employees) EC.
Supply chain management integrates the management of supply and demand. According to the
Council of Supply Chain Management Professionals, it encompasses “the planning and management
of all activities involved in sourcing and procurement, conversion and logistics.” Supply chain
management also covers coordination and collaboration with channel partners, such as customers,
suppliers, distributors and service providers.
Demand Management
Demand management is an essential element in supply chain management, focusing companies and
their partners on meeting the needs of customers, rather than the production process. The lead
company in the supply chain makes partners aware of customers’ needs, encouraging them to
maximize component or supply quality and add value to the finished product. By raising awareness of
customers’ needs and increasing collaboration, companies can improve the competitiveness of the
whole supply chain and increase business opportunities for all members.
Communication
Effective communication helps the entire supply chain improve the efficiency and productivity of its
operations by enabling all members to share the same demand and operational information.
Communication keeps all members informed of developments that affect their contribution to the
supply chain, enabling them to quickly adjust their operations in line with changing demand
conditions. Effective communication also enables members to respond rapidly to new business
opportunities, helping to get new products to market quickly or increasing supply levels following a
successful marketing campaign.
Integration
Integrating supply chain processes helps each member reduce its inventory costs. Suppliers share up-
to-date information on demand to route their products to the company's warehouses for onward shipment to stores with minimum time in inventory. This reduces the company's costs significantly, enabling it to offer customers highly competitive pricing. To achieve this level of integration, companies develop
single information networks that enable all members to access and share supply and demand data
securely. The networks are based on open standards, such as Internet Protocol, so all members can
communicate, even if they have different internal networks.
Collaboration
Collaboration in the supply chain strengthens relationships between members, improving teamwork
and helping all members increase their business. Lead companies run business development and
training programs to improve supply chain partners’ market and product knowledge. They also
undertake joint new product development programs with partners contributing specialist knowledge
of components and materials.
Several factors enable stronger capabilities in supply-chain management and risk management. These include:
Alignment between partners in the supply chain. Partners can align strategically on key
value dimensions, identification of emerging patterns and advancement toward higher value
propositions.
Upstream and downstream supply-chain integration. Information sharing, visibility and
collaboration with supply-chain partners is a key.
Alignment between internal business functions. Companies can align and integrate their
value-chain functions on strategic, tactical and operational levels.
Complexity management/rationalization. Companies can standardize and simplify
networks and processes, interfaces, product architectures, product portfolios and operating
models.
Data, models and analytics. Intelligence and analytical capabilities can support supply-chain
and risk-management functions.
A key supply chain design decision is which products to produce in-house and which to obtain from other supply chain members.
Vertical integration – a measure of how much of the supply chain is owned by the
manufacturer
Backward integration – owning or controlling of sources of raw material and component parts
Forward integration – owning or control the channels of distribution
Vertical integration is related to the level of insourcing or outsourcing of products or services.
Outsourcing can be defined as the contracting-out of services that were previously performed in-
house. Outsourcing is a supply strategy often chosen as a means of increasing organizational
efficiency and effectiveness. Although some short-term benefits for organizations can be achieved
through outsourcing, there is a growing recognition that there may be longer-term costs not fully
assessed by them. Outsourcing can impact on the size, structure, and competitiveness of purchaser
and vendor sectors. Outsourcing also has an effect on employment levels, patterns, and conditions.
Social issues may be affected in respect of growth in earnings inequality because the contracts offered
little scope to compete other than by worsening employees’ terms and conditions of employment.
Risks of outsourcing include losing in-house expertise and knowledge, unintentional loss of control,
and reductions in quality.
Warehouses involved in supply chain distribution include plant warehouses, regional warehouses, and local warehouses. Warehouses can be either general (used for long-term storage) or distribution (used for short-term storage, consolidation, and product mixing).
Partnerships require sharing information, risks, technologies, and opportunities. Impact, intimacy, and vision are critical to successful partnering. Supply chain distribution requires effective warehousing operations. Implementing integrated SCM requires analyzing the whole supply chain, starting by integrating internal functions first, and then integrating external suppliers through partnerships.
Manufacturer's goals: reduce costs, reduce duplication of effort, improve quality, reduce lead time, implement cost reduction programs, involve suppliers early, and reduce time to market.
Supplier's goals: increase sales volume, increase customer loyalty, reduce cost, improve demand data, and improve profitability.
In today’s world, supply chain management (SCM) is a key strategic factor for increasing
organizational effectiveness and for better realization of organizational goals such as enhanced
competitiveness, better customer care and increased profitability. The era of both globalization of
markets and outsourcing has begun, and many companies select supply chain and logistics to manage
their operations. Most of these companies realize that, in order to evolve an efficient and effective
supply chain, SCM needs to be assessed for its performance.
Traditional measures include return on investment, profitability, market share, and revenue growth. Additional measures include customer service levels (warranty costs, product returns and allowances, cost reductions allowed because of product defects, company response times, and transaction costs), inventory turns, weeks of supply, and inventory obsolescence.
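Two of the measures named above, inventory turns and weeks of supply, can be computed from their standard definitions, as the sketch below shows with assumed figures.

```python
# Assumed annual figures for illustration.
annual_cost_of_goods_sold = 5_200_000.0
average_inventory_value = 650_000.0

inventory_turns = annual_cost_of_goods_sold / average_inventory_value
weeks_of_supply = average_inventory_value / (annual_cost_of_goods_sold / 52)

print(f"Inventory turns: {inventory_turns:.1f} per year")
print(f"Weeks of supply: {weeks_of_supply:.1f} weeks")
```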
Customer demands for better quality require companies to develop ways to measure improvement. Supply chain velocity can also decrease when greater distances introduce greater uncertainty and make operations generally less efficient.
Chapter 5
5.1 Introduction
TQM is different from the old concept of quality in that its focus is on serving customers, identifying the causes of quality problems, and building quality into the production process. The four categories of quality cost are prevention, appraisal, internal failure, and external failure costs. Seven notable individuals who shaped TQM are Walter A. Shewhart, W. Edwards Deming, Joseph M. Juran, Armand V. Feigenbaum, Philip B. Crosby, Kaoru Ishikawa, and Genichi Taguchi.
Seven features combine to create the TQM philosophy: customer focus, continuous improvement, employee empowerment, use of quality tools, product design, process management, and managing supplier quality. QFD is a tool used to translate customer needs into specific engineering requirements. Reliability is the probability that a product will function as expected. The Malcolm Baldrige Award is given to companies to recognize excellence in quality management.
The story goes back to the period after World War II, when the United States was the only industrialized nation whose manufacturing plants were left intact. The United States set the quality standard for the world; the assumption was that whatever was made would sell, and indeed American firms were selling everything they made. There was really no pressure to do anything differently.
In the early twentieth century, quality management meant inspecting products to ensure that they met
specifications. In the 1940s, during World War II, quality became more statistical in nature. Statistical
sampling techniques were used to evaluate quality, and quality control charts were used to monitor
the production process. Later, in a global marketplace, it no longer mattered where a product came from: people wanted quality, and they wanted more of it at less cost.
With the help of W. Edwards Deming and other so-called "quality gurus," the concept took on a broader meaning. Quality began to be viewed as something that encompassed the entire organization, not only the production process. Since all functions were responsible for product quality and all shared the costs of poor quality, quality was seen as a concept that affected the entire organization. At that time, because American products faced no problem finding markets, American firms refused to accept this concept. As the saying goes, no man is a prophet in his own land: Deming's work and his original recommendations on quality were ignored in his homeland before Japanese business imported his ideas and made them work in Japan.
In 1950, W. Edwards Deming was invited by Ichiro Ishikawa to lecture on the concept of quality to senior members of the Union of Japanese Scientists and Engineers and government officials devoted to improving Japanese productivity and enhancing post-war quality of life.
Deming encouraged the Japanese to adopt a systematic approach to problem solving. He also
encouraged senior managers to become actively involved in their company's quality improvement
programs. His greatest contribution was the concept that the consumers are the most important part of
a production line. Meeting and exceeding the customers' needs and requirements is the task that
everyone within an organization has to accomplish.
During the period 1955—60, following the visits of Deming and Joseph M. Juran to Japan, the
Company-wide Quality Control (CWQC) movement started to develop.
Kaoru Ishikawa was its leader, and this movement asserts that quality refers to more than product
quality alone. It also takes in after-sales service, quality of management, the company itself, and
human life. Ishikawa also made a significant contribution to the development of Total Quality
Management (TQM).
Until around 1950, Japanese products were perceived worldwide as being very inexpensive, but with
poor quality. By the 1980s, products made in Japan were known all over the world for their high
quality and reliability. In contrast to the specialized approach traditionally used in the United States, a
number of Japanese companies, rebuilding from post-war devastation, adopted an innovative,
integrated approach to achieving quality.
Perhaps the main reason for the origin of the term TQM could be the substitution, in the previously used term Total Quality Control (TQC), of the word "control" by "management," with the reasoning that quality is not just a matter of control; it has to be managed. This is reinforced by Deming's
(1982) view that sampling inspection should be suppressed and also by Crosby (1979) who makes the
point that control is not necessary when a zero defects level is achieved. The term “control” is
sometimes understood as meaning control over the workforces’ activities, and this is clearly not the
aim of TQM.
Quality Gurus
To fully understand the TQM movement, we need to look at the philosophies of notable individuals
who have shaped the evolution of TQM. Their philosophies and teachings have contributed to our
knowledge and understanding of quality today. Their individual contributions are summarized in
Table 5.2.1 below.
The definition of quality depends on the role of the people defining it. Today, there is no single
universal definition of quality. Some people view quality as “performance to standards.”Others view
it as “meeting the customer’s needs” or “satisfying the customer.” Let’s look at some of the more
common definitions of quality.
Conformance to specifications measures how well the product or service meets the targets
and tolerances determined by its designers.
Fitness for use focuses on how well the product performs its intended function or use.
Value for price paid is a definition of quality that consumers often use for product or service
usefulness. This is the only definition that combines economics with consumer criteria; it
assumes that the definition of quality is price sensitive.
Support services provided are often how the quality of a product or service is judged. Quality
does not apply only to the product or service itself; it also applies to the people, processes, and
organizational environment associated with it.
Psychological criteria: - A way of defining quality that focuses on judgmental evaluations of
what constitutes product or service excellence. Similarly, we commonly associate certain
products with excellence because of their reputation
Management
Management is concerned with five basic activities, namely planning, organizing, directing, controlling, and improving.
There is no universally accepted unique definition of TQM. Some of the definitions are
An approach to improving the effectiveness and flexibility of business as a whole. It is
essentially a way of organizing and involving the whole organization, every department, every
activity, every single person at every level.
An approach for continuously improving the quality of goods and services delivered through
the participation of all levels and functions of the organization
Totally integrated approach to produce the best product and service possible through constant
innovation.
A system that puts customer satisfaction before profit. It is a system that comprises a set of
integrated philosophies, tools and processes used to accomplish business objectives by
creating delighted customers and happy employees.
A management philosophy that builds a customer-driven, learning organization dedicated to
total customer satisfaction through continuous improvement in the effectiveness and
efficiency of the organization and its processes.
Total Quality Management (TQM) is a structured system for meeting and exceeding customer
needs and expectations by creating organization-wide participation in the planning and
implementation of breakthrough and continuous improvement processes. It integrates with the
business plan of the organization and can positively influence customer satisfaction and
market share growth. It is also a system of management and a way of working, not a program
that an organization simply sets in motion and then walks away from.
• Plan The first step in the PDCA cycle is to plan. Managers must evaluate the current process and
make plans based on any problems they find. They need to document all current procedures, collect
data, and identify problems. This information should then be studied and used to develop a plan for
improvement as well as specific measures to evaluate performance.
• Do The next step in the cycle is implementing the plan (do). During the implementation process
managers should document all changes made and collect data for evaluation.
• Study The third step is to study the data collected in the previous phase. The data are evaluated to
see whether the plan is achieving the goals established in the plan phase.
• Act The last phase of the cycle is to act on the basis of the results of the first three phases. The best
way to accomplish this is to communicate the results to other members in the company and then
implement the new procedure if it has been successful. Note that this is a cycle; the next step is to
plan again. After we have acted, we need to continue evaluating the process, planning, and
repeating the cycle again.
1. Customer Focus
The first, and overriding, feature of TQM is the company's focus on its customers. Quality is defined as meeting or exceeding customer expectations. The goal is to first identify and then meet customer needs. TQM recognizes that a perfectly produced product has little value if it is not what the customer wants. Therefore, we can say that quality is customer driven. However, it is not always easy to determine what the customer wants, because tastes and preferences change. Also, customer expectations often vary from one customer to the next. Companies need to continually gather information by means of focus groups, market surveys, and customer interviews in order to stay in tune with what customers want.
2. Continuous Improvement
Another concept of the TQM philosophy is the focus on continuous improvement. Traditional systems operated on the assumption that once a company achieved a certain level of quality, it was successful and needed no further improvements. We tend to think of improvement in terms of plateaus that are to be achieved, such as passing a certification test or reducing the number of defects to a certain level.
Traditionally, change for American managers involves large magnitudes, such as major
organizational restructuring. The Japanese, on the other hand, believe that the best and most lasting
changes come from gradual improvements. To use an analogy, they believe that it is better to take
frequent small doses of medicine than to take one large dose.
Continuous improvement, called kaizen by the Japanese, requires that the company continually strive
to be better through learning and problem solving. Because we can never achieve perfection, we must
always evaluate our performance and take measures to improve it.
3. Employee Empowerment
Part of the TQM philosophy is to empower all employees to seek out quality problems and correct them. With the old concept of quality, employees were afraid to identify problems for fear that they would be reprimanded. Often poor quality was passed on to someone else, in order to make it "someone else's problem." The new concept of quality, TQM, provides incentives for employees to identify quality problems. Employees are rewarded for uncovering quality problems, not punished.
In TQM, the role of employees is very different from what it was in traditional systems. Workers are empowered to make decisions relative to quality in the production process. They are considered a vital element of the effort to achieve high quality. Their contributions are highly valued, and their suggestions are implemented. In order to perform this function, employees are given continual and extensive training in quality measurement tools.
4. Team Approach
TQM stresses that quality is an organizational effort. To facilitate the solving of quality problems, it places great emphasis on teamwork. The use of teams is based on the old adage that "two heads are better than one." Using techniques such as brainstorming, discussion, and quality control tools, teams work regularly to correct problems. The contributions of teams are considered vital to the success of the company. For this reason, companies set aside time in the workday for team meetings. Teams vary in their degree of structure and formality, and different types of teams solve different types of problems. One of the most common types of teams is the quality circle, a team of volunteer production employees and their supervisors whose purpose is to solve quality problems. The circle is usually composed of eight to ten members, and decisions are made through group consensus. The teams usually meet weekly during work hours in a place designated for this purpose. They follow a preset process for analyzing and solving quality problems. Open discussion is promoted, and criticism is not allowed.
The quality improvement process emphasizes preventing problems by building quality into the products and services during the design process. This must be done if cycle times are to be reduced.
Long Range Outlook.
Achieving quality and market leadership requires a long term outlook. The goals, long-term
plans, short-term plans, and measures must be effectively linked together to align the
employees with the corporate resources to meet the goals.
Management by Fact.
Corporations must be managed “by facts not gut feelings. ” The information used for
decisions must be based on reliable data and analysis, and must be linked to customer
satisfaction.
Partnership Development.
Companies should seek to build partnerships with all stakeholders of the company. The
stakeholders include customers, suppliers, employees, stockholders, the community,
universities, and others.
Public Responsibility
The company needs to address areas of corporate citizenship and responsibility. Included in
this value is the sharing of nonproprietary quality related information.
Rewards and recognition
People should be rewarded based on their performance and motivated to enhance that performance. This supports the paradigm shift.
You can see that TQM places a great deal of responsibility on all workers. If employees are to
identify and correct quality problems, they need proper training. They need to understand how to
assess quality by using a variety of quality control tools, how to interpret findings, and how to correct
problems. In this section we look at seven different quality tools.
Cause-and-effect diagram
A chart that identifies potential causes of particular quality problems. It is also called an Ishikawa or fishbone chart; it identifies many possible causes for an effect or problem and sorts ideas into useful categories. The "head" of the fish is the quality problem. The diagram is drawn so that the "spine" of the fish connects the "head" to the possible causes of the problem. These causes could be related to the machines, workers, measurement, suppliers, materials, and many other aspects of the production process. Each of these possible causes can then have smaller "bones" that address specific issues that relate to each cause. For example, a problem with machines could be due to a need for adjustment, old equipment, or tooling problems. Similarly, a problem with workers could be related to lack of training, poor supervision, or fatigue.
Flowchart
A flowchart is a schematic diagram of the sequence of steps in a process. It is a simple visual tool that helps everyone involved understand how the process works and where quality problems may arise.
Checklists
A checklist is a list of common defects and the number of observed occurrences of these defects. It is a simple yet effective fact-finding tool that allows the worker to collect specific information regarding the defects observed. A check sheet is a structured, prepared form for collecting and analyzing such data.
Control charts
Charts used to evaluate whether a process is operating within set expectations. They are graphs used to study how a process changes over time and to evaluate whether the process is operating within expectations relative to some measured value.
The chart has a line down the center representing the average value of the variable we are measuring.
Above and below the center line are two lines, called the upper control limit (UCL) and the lower
control limit (LCL). As long as the observed values fall within the upper and lower control limits, the
process is in control and there is no problem with quality. When a measured observation falls outside
of these limits, there is a problem.
Scatter diagrams
Graphs that show how two variables are related to each other. They are particularly useful in
detecting the amount of correlation, or the degree of linear relationship, between two variables. For
example, increased production speed and number of defects could be correlated positively; as
production speed increases, so does the number of defects. Two variables could also be correlated
negatively, so that an increase in one of the variables is associated with a decrease in the other. For
example, increased worker training might be associated with a decrease in the number of defects
observed.
The greater the degree of correlation, the more linear the observations appear in the scatter diagram. On the other hand, the more scattered the observations in the diagram, the less correlation exists between the variables.
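The degree of linear relationship a scatter diagram suggests can be quantified with the correlation coefficient. The sketch below computes it for an assumed set of production speeds and defect counts; a value near +1 indicates a strong positive relationship, near -1 a strong negative one, and near 0 little linear relationship.

```python
import statistics

production_speed = [50, 55, 60, 65, 70, 75, 80]          # units per hour (assumed)
defects_per_100 = [1.0, 1.2, 1.5, 1.9, 2.4, 2.8, 3.5]    # defects per 100 units (assumed)

mean_x = statistics.mean(production_speed)
mean_y = statistics.mean(defects_per_100)
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(production_speed, defects_per_100))
var_x = sum((x - mean_x) ** 2 for x in production_speed)
var_y = sum((y - mean_y) ** 2 for y in defects_per_100)

r = cov / (var_x ** 0.5 * var_y ** 0.5)                  # Pearson correlation coefficient
print(f"Correlation coefficient r = {r:.2f}")
```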
Pareto Analysis
Pareto analysis is a technique used to identify quality problems based on their degree of importance.
The logic behind Pareto analysis is that only a few quality problems are important, whereas many
others are not critical. The technique was named after Vilfredo Pareto, a nineteenth-century Italian
economist who determined that only a small percentage of people controlled most of the wealth.
This concept has often been called the 80–20 rule and has been extended to many areas. In quality
management the logic behind Pareto’s principle is that most quality problems are a result of only a
few causes. The trick is to identify these causes.
One way to use Pareto analysis is to develop a chart that ranks the causes of poor quality in
decreasing order based on the percentage of defects each has caused. For example, a tally can be
made of the number of defects that result from different causes, such as operator error, defective
parts, or inaccurate machine calibrations.
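A tally of this kind can be turned into a Pareto ranking directly, as the sketch below shows with an assumed set of defect counts; the cumulative percentages make the "vital few" causes stand out.

```python
# Assumed tally of defects by cause (illustrative figures only).
defect_tally = {
    "operator error": 58,
    "defective parts": 24,
    "machine calibration": 9,
    "wrong dimensions": 5,
    "surface abrasion": 4,
}

total = sum(defect_tally.values())
cumulative = 0
for cause, count in sorted(defect_tally.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    share = 100 * count / total
    running = 100 * cumulative / total
    print(f"{cause:22s} {count:3d}   {share:5.1f}%   cumulative {running:5.1f}%")
```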
Histograms
A histogram is a chart that shows the frequency distribution of observed values of a variable, or how often each different value in a set of data occurs.
Manufacturing organizations produce a tangible product that can be seen, touched, and directly
measured. Examples include cars, CD players, clothes, computers, and food items. Therefore, quality
definitions in manufacturing usually focus on tangible product features.
The most common quality definition in manufacturing is conformance, which is the degree to which a
product characteristic meets preset standards. Other common definitions of quality in manufacturing
include performance—such as acceleration of a vehicle; reliability—that the product will function as
expected without failure; features—the extras that are included beyond the basic characteristics;
durability— expected operational life of the product; and serviceability—how readily a product can
be repaired. It is easy to see how different customers can have different definitions in mind when they
speak of high product quality.
In contrast to manufacturing, service organizations produce a product that is intangible. Usually, the
complete product cannot be seen or touched. Rather, it is experienced.
The intangible nature of the product makes defining quality difficult. Also, since a service is
experienced, perceptions can be highly subjective. In addition to tangible factors, quality of services
is often defined by perceptual factors. These include responsiveness to customer needs, courtesy and
friendliness of staff, promptness in resolving complaints, and atmosphere. Other definitions of quality
in services include time—the amount of time a customer has to wait for the service; and consistency
—the degree to which the service is the same each time. For these reasons, defining quality in
services can be especially challenging. Dimensions of quality for manufacturing versus service
organizations are shown in Table 5.8.1 below.
The reason quality has gained such prominence is that organizations have gained an understanding of
the high cost of poor quality. Quality affects all aspects of the organization and has dramatic cost
implications. The most obvious consequence occurs when poor quality creates dissatisfied customers
and eventually leads to loss of business.
However, quality has many other costs, which can be divided into two categories. The first category consists of costs necessary for achieving high quality, which are called quality control costs. These are of two types: prevention costs and appraisal costs. The second category consists of the cost consequences of poor quality, which are called quality failure costs. These include external failure costs and internal failure costs. The first two costs are incurred in the hope of preventing the second two.
Prevention costs are all costs incurred in the process of preventing poor quality from occurring. They
include quality planning costs, such as the costs of developing and implementing a quality plan. Also
included are the costs of product and process design, from collecting customer information to
designing processes that achieve conformance to specifications. Employee training in quality
measurement is included as part of this cost, as well as the costs of maintaining records of
information and data related to quality.
Appraisal costs are incurred in the process of uncovering defects. They include the cost of quality
inspections, product testing, and performing audits to make sure that quality standards are being met.
Also included in this category are the costs of worker time spent measuring quality and the cost of
equipment used for quality appraisal.
Internal failure costs are associated with discovering poor product quality before the product
reaches the customer site. One type of internal failure cost is rework, which is the cost of correcting
the defective item. Sometimes the item is so defective that it cannot be corrected and must be thrown
away. This is called scrap, and its costs include all the material, labor, and machine cost spent in
producing the defective product. Other types of internal failure costs include the cost of machine
downtime due to failures in the process and the costs of discounting defective items for salvage value.
External failure costs are associated with quality problems that occur at the customer site. These
costs can be particularly damaging because customer faith and loyalty can be difficult to regain. They
include everything from customer complaints, product returns, and repairs, to warranty claims,
recalls, and even litigation costs resulting from product liability issues. A final component of this cost
is lost sales and lost customers. External failure can sometimes put a company out of business almost
overnight.
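The four cost categories can be pulled together into a simple cost-of-quality summary, as sketched below with assumed figures; in practice the individual cost elements would come from the company's accounting and quality records.

```python
# Assumed cost-of-quality figures (illustrative only), grouped by the four categories.
costs = {
    "prevention": {"quality planning": 40_000, "training": 25_000},
    "appraisal": {"inspection": 55_000, "product testing": 30_000},
    "internal failure": {"scrap": 70_000, "rework": 45_000},
    "external failure": {"warranty claims": 90_000, "returns": 35_000},
}

grand_total = 0
for category, items in costs.items():
    subtotal = sum(items.values())
    grand_total += subtotal
    print(f"{category:18s} {subtotal:10,d}")
print(f"{'total cost of quality':18s} {grand_total:10,d}")
```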
The Malcolm Baldrige National Quality Award was established in 1987, when Congress passed the
Malcolm Baldrige National Quality Improvement Act. The award is named after the former Secretary
of Commerce, Malcolm Baldrige, and is intended to reward and stimulate quality initiatives. It is
designed to recognize companies that establish and demonstrate high quality standards. The award is
given to no more than two companies in each of three categories: manufacturing, service, and small
business. Past winners include Motorola Corporation, Xerox, FedEx, 3M, IBM, and the Ritz-Carlton.
To compete for the Baldrige Award, companies must submit a lengthy application, which is followed
by an initial screening. Companies that pass this screening move to the next step, in which they
undergo a rigorous evaluation process conducted by certified Baldrige examiners.
The examiners conduct site visits and examine numerous company documents. They base their
evaluation on seven categories.
The Baldrige criteria have evolved from simple award criteria to a general framework for quality
evaluation. Many companies use these criteria to evaluate their own performance and set quality
targets even if they are not planning to formally compete for the award
Chapter 6
6.1 Introduction
Inspecting all items is a very time-consuming and costly affair. An alternative to this is statistical quality control, which is a type of inspection based on probability and mathematical techniques. In many instances it may be used in place of ordinary inspection procedures. Its objective, like that of inspection, is to control the quality level of the product without doing 100 percent inspection.
SQC uses statistical methods to gather and analyze data in the determination and control of quality. It
is based on sampling, probability, and statistical inference, i.e., judging an entire lot by the
characteristics of a sample.
The question often raised is ‘whether a sample always reflects the true characteristics of the
production lot’. The answer is no. However, the sampling may be the only way to estimate the quality
of a lot.
Statistical process control (SPC) is the application of statistical techniques to determine whether the
output of a process conforms to the product or service design. It aims at achieving good quality
during manufacture or service through prevention rather than detection. It is concerned with
controlling the process that makes the product because if the process is good then the product will
automatically be good.
SPC is implemented through control charts that are used to monitor the output of the process and
indicate the presence of problems requiring further action. Control charts can be used to monitor
processes where output is measured as either variables or attributes. There are two types of control
charts: Variable control chart and attribute control chart.
1. Variable control charts: used when it is possible to measure the quality characteristics of a
product. The variable control charts are the X-bar chart, the R chart, and the sigma chart.
2. Attribute control charts: used when it is not possible to measure the quality characteristics of a
product, i.e., the judgment is based on visual inspection only, such as good or bad, success or
failure, accepted or rejected. The attribute control charts are p-charts, np-charts, c-charts, and u-
charts. They require only a count of observations on a characteristic, e.g., the number of
nonconforming items in a sample.
A control chart has three basic components:
1. A center line representing the average value of the quality characteristic when the process is in
control.
2. Two control limits used to judge whether action is required, an upper control limit (UCL) and a
lower control limit (LCL).
3. Data points, each consisting of the average measurement calculated from a sample taken from
the process, ordered over time.
By the Central Limit Theorem, regardless of the distribution of the underlying individual
measurements, the distribution of the sample means will follow a normal distribution. The control
limits are set based on the sampling distribution of the quality measurement.
1. A control chart indicates when something may be wrong, so that corrective action can be taken.
2. The patterns of the plotted points on a control chart help diagnose possible causes and hence
indicate possible remedial actions.
As the name indicates, these charts use variable data from a process. The X-bar chart gives an idea of
the central tendency of the observations and reveals the variation between sample averages. The R
chart gives an idea of the spread (dispersion) of the observations and shows the variation within the
samples.
X-Chart and R-Chart: The formulas used to establish the various control limits are as follows:
R-Chart: To calculate the range of the data, subtract the smallest from the largest measurement in
the sample. The control limits are
UCL_R = D4 × R̄ and LCL_R = D3 × R̄
where R̄ = the average of several past R values (the central line of the control chart), and
D3, D4 = constants that provide three-standard-deviation (three-sigma) limits for a given sample
size.
X̄-Chart (based on the sample range): the control limits are
UCL_X̄ = X̿ + A2 × R̄ and LCL_X̄ = X̿ − A2 × R̄
where X̿ = the central line of the chart and the average of past sample means, and A2 = a constant
that provides three-sigma limits for a given sample size.
Control charts for variables with the standard deviation of the process, σ, known monitor the
mean, X̄, of the process distribution. In that case the control limits are
UCL_X̄ = X̿ + Z × σ_X̄ and LCL_X̄ = X̿ − Z × σ_X̄
where X̿ = the center line of the chart and the average of several past sample means, Z = the
standard normal deviate (number of standard deviations from the average), σ_X̄ = σ / √n is the
standard deviation of the distribution of sample means, and n is the sample size.
3. Decide a suitable sample size (n) and number of samples to be collected (k).
5. Find the measurement of interest for each piece within the sample.
P-charts and c-charts are the charts used for attributes. These charts are based on counts of a quality
characteristic rather than on measurements.
P-CHART
A p-chart is a commonly used control chart for attributes, whereby the quality characteristic is
counted rather than measured, and the entire item or service can be declared good or defective.
The standard deviation of the proportion defective is
σ_p = √(p̄(1 − p̄)/n), where n = sample size and p̄ = the average of several past p values (the central
line on the chart).
Using the normal approximation to the binomial distribution (which is the actual distribution of p),
the control limits are
UCL_p = p̄ + z σ_p and LCL_p = p̄ − z σ_p
where z is the standard normal deviate (number of standard deviations from the average).
ILLUSTRATION 1: Several samples of size n = 8 have been taken from today’s production of
fence posts. The average post was 3 yards in length and the average sample range was 0.015 yard.
Find the 99.73% upper and lower control limits.
SOLUTION: X̿ = 3 yd and R̄ = 0.015 yd; the 99.73% (three-sigma) control limits follow from the
X̄-chart and R-chart formulas above (see the sketch below).
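The calculation can be completed with the control chart factors for a sample size of n = 8. The factor values used below (A2 ≈ 0.373, D3 ≈ 0.136, D4 ≈ 1.864) are assumed from the standard three-sigma factor table and should be checked against the table actually used in class; the Python sketch simply applies the X̄-chart and R-chart formulas given above.

```python
# Minimal sketch for Illustration 1, assuming the standard three-sigma
# factors for a sample size of n = 8 (A2 ~ 0.373, D3 ~ 0.136, D4 ~ 1.864).

x_double_bar = 3.0   # average post length (yards) -- central line of the X-bar chart
r_bar = 0.015        # average sample range (yards) -- central line of the R chart
A2, D3, D4 = 0.373, 0.136, 1.864   # assumed factor-table values for n = 8

ucl_x = x_double_bar + A2 * r_bar
lcl_x = x_double_bar - A2 * r_bar
ucl_r = D4 * r_bar
lcl_r = D3 * r_bar

print(f"X-bar chart: UCL = {ucl_x:.4f} yd, LCL = {lcl_x:.4f} yd")
print(f"R chart:     UCL = {ucl_r:.4f} yd, LCL = {lcl_r:.4f} yd")
```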
ILLUSTRATION 2 (Problem on X and R Chart): The results of inspection of 10 samples with its
average and range are tabulated in the following table. Compute the control limit for the X and R-
chart and draw the control chart for the data.
X̿ = ΣX̄ / number of samples = 76 / 10 = 7.6
R̄ = ΣR / number of samples = 26 / 10 = 2.6
For the X̄ chart: UCL = X̿ + A2 R̄ and LCL = X̿ − A2 R̄.
For the R chart: UCL = D4 R̄ and LCL = D3 R̄.
The values of the factors A2, D4 and D3, which are based on the normal distribution, can be read
from the standard control chart factor table for the given sample size. Here D3 = 0, so
LCL_R = D3 × R̄ = 0 × R̄ = 0
These control limits are marked on the graph paper on either side of the mean value (center line).
The X̄ and R values are plotted on the graph and joined, thus producing the control chart.
From the X̄ chart, it appears that the process went completely out of control at the 4th sample.
ILLUSTRATION 3: Twenty-five engine mounts are sampled each day and found to have an average
width of 2 inches, with a standard deviation of 0.1 inches. What are the control limits that include
99.73% of the sample means (z = 3)?
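A short sketch of the computation for Illustration 3, applying the σ-known control limit formula UCL/LCL = X̿ ± zσ/√n; all numbers come directly from the problem statement.

```python
# Control limits when sigma is known: x-double-bar +/- z * sigma / sqrt(n).
import math

x_bar = 2.0     # average width (inches)
sigma = 0.1     # process standard deviation (inches)
n = 25          # sample size
z = 3           # 99.73% (three-sigma) limits

sigma_xbar = sigma / math.sqrt(n)
ucl = x_bar + z * sigma_xbar
lcl = x_bar - z * sigma_xbar
print(f"UCL = {ucl:.2f} in, LCL = {lcl:.2f} in")   # 2.06 and 1.94
```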
ILLUSTRATION 4 (Problem on p-Chart): The following are the inspection results of 10 lots, each
lot being 300 items. Number defective in each lot is 25, 30, 35, 40, 45, 35, 40, 30, 20 and 50.
Calculate the average fraction defective and three sigma limit for P-chart and state whether the
process is in control.
p̄ = 350 / 3000 = 0.1167 and n = 300.
Therefore,
σ_p = √(p̄(1 − p̄)/n) = √(0.1167 × 0.8833 / 300) ≈ 0.0185
and 3σ_p ≈ 3 × 0.0185 = 0.0556.
The control limits are therefore UCL_p = 0.1167 + 0.0556 = 0.1723 and LCL_p = 0.1167 − 0.0556 = 0.0611.
Conclusion: All the sample fraction defectives fall within these control limits, so we can say the
process is under control.
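The same check can be run with a short script. This is only an illustrative sketch of the p-chart formulas above, using the lot data from Illustration 4; the lower limit is truncated at zero, a common convention.

```python
# Sketch of the p-chart computation for Illustration 4.
import math

defectives = [25, 30, 35, 40, 45, 35, 40, 30, 20, 50]
n = 300                                              # items per lot
p_bar = sum(defectives) / (n * len(defectives))      # 350 / 3000 = 0.1167
sigma_p = math.sqrt(p_bar * (1 - p_bar) / n)         # ~0.0185

ucl = p_bar + 3 * sigma_p                # ~0.172
lcl = max(p_bar - 3 * sigma_p, 0.0)      # ~0.061 (never below zero)

fractions = [d / n for d in defectives]
in_control = all(lcl <= p <= ucl for p in fractions)
print(f"p-bar = {p_bar:.4f}, UCL = {ucl:.4f}, LCL = {lcl:.4f}")
print("Process in control:", in_control)
```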
There are two types of errors, type-I and type-II, that can occur when making inferences from a
control chart.
Type-I error: This results from inferring that a process is out of control when it is actually in control.
The probability of a type-I error is denoted by α. Suppose a process is in control. If a point on the
control chart falls outside the control limits, we assume that the process is out of control.
However, since the control limits are a finite distance (3σ) from the mean, there is a small chance
(about 0.0027) of a sample mean falling outside the control limits even when the process is in control.
In such instances, inferring that the process is out of control is the wrong conclusion.
The control limits could be placed sufficiently far apart, say 4 or 5 standard deviations on each side of
the central line, to reduce the probability of a type-I error.
Type-II error: This results from inferring that a process is in control when it is really out of control.
If no observations fall outside the control limits, we conclude that the process is in control while in
reality it is out of control; for example, the process mean may have changed.
The objective of acceptance sampling is to decide whether to accept or reject a lot based on a
sample's characteristics. The lot may be incoming raw materials or finished parts.
An accurate method to check the quality of lots is 100% inspection, but, as noted earlier, 100%
inspection is time consuming and costly.
Assessing capability involves evaluating process variability relative to preset product or service
specifications. The process capability index is Cp = (USL − LSL) / 6σ. When Cp ≤ 1, as in Fig. (b),
the process is not capable of producing within specifications.
One shortcoming is that Cp assumes the process is centered on the specification range; Cp = Cpk
when the process is centered.
Example
Computing the Cp Value at Cocoa Fizz: three bottling machines are being evaluated for possible use
at the Fizz plant. The machines must be capable of meeting the design specification of 15.8-16.2 oz.
with at least a process capability index of 1.0 (Cp≥1)
The table below shows the information gathered from production runs on each machine. Are they all
acceptable?
Machine    σ       USL − LSL    6σ
A          0.05    0.4          0.3
B          0.1     0.4          0.6
C          0.2     0.4          1.2
Solution:
Machine A: Cp = (USL − LSL) / 6σ = 0.4 / (6 × 0.05) = 1.33
Machine B: Cp = 0.4 / (6 × 0.1) = 0.67
Machine C: Cp = 0.4 / (6 × 0.2) = 0.33
Only Machine A meets the requirement Cp ≥ 1, so only Machine A is acceptable.
Design specifications call for a target value of 16.0 ± 0.2 oz. (USL = 16.2 and LSL = 15.8).
Observed process output has now shifted and has a µ of 15.9 oz. and a σ of 0.1 oz.
Cpk = min( (16.2 − 15.9) / 3(0.1), (15.9 − 15.8) / 3(0.1) ) = min(1.0, 0.33)
Cpk = 0.1 / 0.3 = 0.33
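As an illustration of the Cp and Cpk formulas used in this example, the following sketch reproduces the calculations for the three bottling machines and for the shifted process. It is a minimal helper written for this module, not part of the original example.

```python
# Sketch of the Cp and Cpk calculations for the Cocoa Fizz example.
def cp(usl, lsl, sigma):
    """Process capability index (assumes the process is centered)."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mu, sigma):
    """Capability index adjusted for the actual process mean."""
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

USL, LSL = 16.2, 15.8
# Cp for the three bottling machines
for name, sigma in [("A", 0.05), ("B", 0.1), ("C", 0.2)]:
    print(f"Machine {name}: Cp = {cp(USL, LSL, sigma):.2f}")
# Cpk after the process has shifted to mu = 15.9 oz with sigma = 0.1 oz
print(f"Cpk = {cpk(USL, LSL, 15.9, 0.1):.2f}")   # 0.33 -> process not capable
```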
Service organizations have lagged behind manufacturers in the use of statistical quality control,
partly because it is more difficult to measure the quality of a service. One way to deal with service
quality is to devise quantifiable measurements of the service elements.
Example
Service at a bank: The Dollars Bank competes on customer service and is concerned about service
time at their drive-by windows. They recently installed new system software which they hope will
meet service specification limits of 5±2 minutes and have a Capability Index (Cpk) of at least 1.2.
They want to also design a control chart for bank teller use.
They have done some sampling recently (sample size of 4 customers) and determined that the process
mean has shifted to 5.2 with a Sigma of 1.0 minutes.
Solution
σ_x̄ = σ / √n = 1.0 / √4 = 0.5 minutes
Cp = (USL − LSL) / 6σ_x̄ = (7 − 3) / (6 × 0.5) = 1.33
Cpk = min( (7 − 5.2) / (3 × 0.5), (5.2 − 3) / (3 × 0.5) ) = 1.8 / 1.5 = 1.2
Chapter 7
7.1 Introduction
JIT is a philosophy that was developed by the Toyota Motor Company in the mid-1970s. It is a
manufacturing system whose goal is to optimize processes and procedures by continuously pursuing
waste reduction. It has since become the standard of operation for many industries. It focuses on
simplicity, eliminating waste, taking a broad view of operations, visibility, and flexibility. Three key
elements of this philosophy are JIT manufacturing, total quality management, and respect for people.
JIT views waste as anything that does not add value.
Traditional manufacturing systems use “push” production, whereas JIT uses “pull” production. Push
systems anticipate future demand and produce in advance in order to have products in place when
demand occurs. This system usually results in excess inventory. Pull systems work backwards.
The last workstation in the production line requests the precise amount of materials required.
JIT manufacturing is a coordinated production system that enables the right quantities of parts to
arrive when they are needed precisely where they are needed. Key elements of JIT manufacturing are
the pull system and kanban production, small lot sizes and quick setups, uniform plant loading,
flexible resources, and streamlined layout.
TQM creates an organizational culture that defines quality as seen by the customer. The concepts of
continuous improvement and quality at the source are integral to allowing for continual growth and
the goal of identifying the causes of quality problems.
JIT considers people to be the organization’s most important resource. JIT is equally applicable in
service organizations, particularly with the push toward time-based competition and the need to cut
costs. JIT success is dependent on inter-functional coordination and effort.
Whilst we may think today that Japan has harmonious industrial relations with management and
workers working together for the common good, the fact is that, in the past, this has not been true. In
the immediate post Second World War period, for example, Japan had one of the worst strike records
in the world. In 1953, the car maker Nissan suffered a four month strike - involving a lockout and
barbed wire barricades to prevent workers returning to work. That dispute ended with the formation
of a company backed union, formed initially by members of the Nissan accounting department.
Striking workers who joined this new union received payment for the time spent on strike, a powerful
financial incentive to leave their old union during such a long dispute. The slogan of this new union
was ‘Those who truly love their union love their company’.
In order to have a method of controlling production (the flow of items) in this new environment
Toyota introduced the kanban. The kanban is essentially information as to what has to be done.
Within Toyota the most common form of kanban was a rectangular piece of paper within a
transparent vinyl envelope.
The information listed on the paper basically tells a worker what to do - which items to collect or
which items to produce. In Toyota two types of kanban are distinguished for controlling the flow of
items:
• A withdrawal kanban, which details the items that should be withdrawn from the preceding step
in the process.
• A production-ordering kanban, which details the items that the preceding step should produce.
All movement throughout the factory is controlled by these kanbans. In addition, since the kanbans
specify item quantities precisely, no defects can be tolerated; for example, if a defective component is
found when processing a production-ordering kanban, then the quantity specified on the kanban
cannot be produced. Hence the importance of automation (as referred to above): the system must
detect and highlight defective items so that the problem that caused the defect can be resolved.
Another aspect of the Toyota Production System is the reduction of setup time. Machines and
processes must be re-engineered so as to reduce the setup time required before processing of a
new item can start. In the Western world, JIT only began to have an impact on manufacturing in the
late 1970s and early 1980s. Even then it went under a variety of names; for example, Hewlett-Packard
called it ‘stockless production’. Such adaptation by Western industry was based on informal analysis
of the systems being used in Japanese companies.
Just-In-Time (JIT) is a very popular term these days among the managers of various industries. No
conference on Operations Management seems complete unless some topics or papers on JIT are
included in the deliberations. It is seen in various ways by practitioners in the manufacturing, services
and administrative sectors: JIT is a system, a concept, a philosophy, a set of tools, a way of life and so
on. No two JIT implementations are the same; they vary according to the places and conditions in
which they are applied.
JIT is both a philosophy and a set of methods for manufacturing. JIT emphasizes waste reduction,
total quality control, and devotion to the customer. It strives to eliminate sources of manufacturing
waste by producing the right part in the right place at the right time. Waste results from any activity
that adds cost without adding value, such as moving and storing of an item. It tries to provide the
right part at the right place and at the right time.
JIT is also known as lean production or stockless production system. It should improve profits and
return on investment by reducing inventory levels (or increasing the inventory turnover rate),
improving product quality, reducing production and delivery lead times, and reducing other costs
(such as those associated with machine setup and equipment breakdown). In a JIT system,
underutilized (or excess) capacity is used instead of buffer inventories to hedge against problems that
may arise. JIT applies primarily to repetitive manufacturing processes in which the same products and
components are produced over and over again. The general idea is to establish flow processes (even
when the facility uses a jobbing or batch process layout) by linking work centers so that there is an
even, balanced flow of materials throughout the entire production process, similar to that found in an
assembly line. To accomplish this, an attempt is made to reach the goals of driving all queues toward
zero and achieving the ideal lot size of one unit.
Just-In-Time (JIT) has assumed a kind of mystique of an oriental philosophy. Much of it is plain
common sense, as more American and European companies are discovering to their benefit. General
Motors, IBM, Hewlett-Packard, General Electric, and Black and Decker are among the big US
companies that have adopted JIT production methods. European companies are joining them,
Britain’s state-owned Rover Group is the latest recruit. Its car division has announced that ‘preferred
suppliers’ will get long-term contracts to provide the parts which make up more than half of its
production costs.
One misconception of JIT is that it is limited to the flow-line/large-batch environment of the
automotive industry. Once an automotive company has started along the JIT route, there seems to be
no area which does not benefit from JIT principles such as the elimination of waste. JIT applies as
well to the tool-room (job shop) as it does to the assembly line. Techniques for eliminating waste can
be applied to good effect outside manufacturing as well, such as in sales and distribution.
Most successful JIT applications have been in repetitive manufacturing, where batches of standard
products are produced at high speeds and high volumes. Smaller, less complex job shops have used
JIT, but operations have been changed so that they behave somewhat like repetitive manufacturing.
JIT concepts, which started in manufacturing, have spread to all functions of a business. In Japan,
JIT has developed into a total management system from marketing to delivery. It has diffused
through suppliers and distributors. It has provided Japanese companies with a formidable competitive
advantage over their Western rivals. If we are competing against a Japanese company, we are
competing against JIT.
Putting this concept into practice means a reversal of the traditional thinking process. In conventional
production processes, units are transported to the next production stage as soon as they are ready. In
JIT, each stage is required to go back to the previous stage to pick up the exact number of units
needed.
Toyota is credited with systematizing JIT. The Japanese carmaker defines it as the ‘reduction of
cost through the elimination of waste’. It spread throughout Japan in the 1970s as a logical way to
manage a large flow of materials. Materials do not increase in value unless they are being processed.
So profits are increased when inventory and safety stocks are reduced or replaced by small, frequent
deliveries.
Unlike automation, JIT is not capital intensive. Prof. Voss of the UK observes that the average
manufacturing company puts 75% of its effort into reducing labor costs, which often represent about
10% of its total costs, instead of concentrating on materials, which can represent more than half of its
costs.
The volume of materials flowing through a factory is reduced by JIT, making bottlenecks and other
problems more visible. A favorite analogy is with water in a river. When the level of water falls,
rocks start to appear. The rocks can then be removed rather than hit. It can, for instance, become plain
that it is pointless automating a warehouse because the warehouse itself is unnecessary.
The results of just-in-time inventory management are apparent: cost reduction, increased speed to
market and identification of bottlenecks in the workflow. Effective implementation, however,
requires a different way of thinking about relationships with suppliers, bringing them into a
cooperative endeavor with the recognition of mutual goals.
Corporate culture must promote an inquiring attitude and an interest in finding better ways to do
things through communication and cooperation. The Ford and Toyota examples illustrate a final
important point for knowledge management: Some of the best ideas for process improvement can
come from tapping the brains of those closest to the situation.
It is insufficient for firms just to be high-quality and low-cost producers. Today, they must also be
first in getting products and services to customers fast. To compete in this new environment, the
order-to-delivery cycle must be drastically reduced, and JIT is the weapon of choice today to reduce
the elapsed time of this cycle.
A JIT company adds value with every activity. Where JIT has been introduced, there have been
dramatic increases in the proportion of actual value-adding time to total cycle time, often to more
than 70%. Since non-JIT companies usually report about 15%, JIT improves operating efficiency
significantly. By eliminating non-value-added costs, such as defective materials, in-process
inventories, and delays, JIT simplifies the entire manufacturing system and improves long-term
productivity.
JIT has been found to be so effective that it increases productivity, work performance and product
quality while saving costs, and it helps companies spotlight those areas that are falling behind and
need improvement. It also slashes inventory, frees up space on the factory floor and shines a
spotlight on the delivery and quality performance of parts suppliers. The result of JIT has been
smaller inventories of both parts and final products; with smaller inventories, billions of dollars were
freed up for investment purposes.
There are some prerequisites for successful JIT implementation. Industries need to do the following:
• Stabilize and level the Master Production Schedule (MPS) with uniform plant loading: create a
uniform load on all work centers through constant daily production (establish freeze windows to
prevent changes in the production plan for some period of time) and mixed-model assembly (produce
roughly the same mix of products each day, using a repeating sequence if several products are
produced on the same line). Meet demand fluctuations through end-item inventory rather than
through fluctuations in the production level.
• Reduce or eliminate setup times: the ideal of JIT is to produce parts in a lot size of one. In many
cases this is not economically feasible because of the cost of setup compared with the inventory
carrying cost. The JIT solution is to reduce the setup time as much as possible, ideally to zero.
Bringing down the setup time of a machine is the key factor in implementing a JIT system. This
concept is popularly known as ‘Single Minute Exchange of Dies (SMED)’, meaning the maximum
time taken in changing a die to switch over from one type of component to another should be in
single digits (0 to 9 minutes). This is made possible by off-line setup of the dies. Aim for ‘one-touch’
setup, which is possible through better planning, process redesign, and product redesign.
• Reduce lot sizes (manufacturing and purchase): reducing setup times allows economical
production of smaller lots; close cooperation with suppliers is necessary to achieve reductions in
order lot sizes for purchased items, since this will require more frequent deliveries.
• Reduce lead times (production and delivery): production lead times can be reduced by moving
work stations closer together, applying group technology and cellular manufacturing concepts,
reducing queue length (reducing the number of jobs waiting to be processed at a given machine), and
improving the coordination and cooperation between successive processes; delivery lead times can be
reduced through close cooperation with suppliers, possibly by inducing suppliers to locate closer to
the factory.
• Preventive maintenance: use machine and worker idle time to maintain equipment and prevent
breakdowns.
• Flexible work force: workers should be trained to operate several machines, to perform
maintenance tasks, and to perform quality inspections. In general, the attitude of respect for people
leads to giving workers more responsibility for their own work.
• Require supplier quality assurance and implement a zero-defects quality program: errors leading
to defective items must be eliminated, since there are no buffers of excess parts. A quality-at-the-
source (jidoka) program must be implemented to give workers personal responsibility for the
quality of the work they do, and the authority to stop production when something goes wrong.
• Small-lot (single-unit) conveyance: use a control system such as a kanban (card) system to convey
parts between work stations in small quantities (ideally, one unit at a time). In its largest sense, JIT is
not the same thing as a kanban system, and a kanban system is not required to implement JIT (some
companies have instituted a JIT program along with an MRP system), although JIT is required to
implement a kanban system, and the two concepts are frequently equated with one another.
The Kanban system was developed by Toyota in the early stages of JIT improvement campaign. The
particular feature of a Kanban system is that it short-circuits normal ordering procedures: as supplies
of a Kanban-controlled material are used up, new supplies are requested simply by releasing a re-order
card which is sent directly to the supply point (i.e. the manufacturer or stockist). It is often
described as a ‘pull’ system, in contrast with traditional ordering procedures, which ‘push’ orders into
the system.
The term ‘Kanban’ simply means ‘card’. To explain the Kanban concept, consider the case of an
assembler who is drawing a particular component from a pallet which, when full, contains 100 pieces.
As the last piece is drawn, the assembler takes an identifying card from the empty pallet and sends it
back down the line to the earlier work center where that part (among others) is made. On receiving
the Kanban card, the work center responsible for supplying the component makes a new batch of 100
and sends it to the assembly post (so that the assembler isn’t kept waiting, there will probably be an
extra pallet in the system to maintain the supply while the new batch is being made). This means that
there is a minimum of paperwork, and the order cycle is generated on a ‘pull’ basis, the components
only being made when there is an immediate need for them, thus keeping work-in-progress to a
minimum.
A kanban is a card that is attached to a storage and transport container. It identifies the part number
and container capacity, along with other information. There are two main types of kanban (some
other variations are also used):
• Production Kanban (P-kanban): This signals the need to produce more parts.
• Conveyance Kanban (C-kanban): This signals the need to deliver more parts to the next work center
(also called a “move kanban” or a “withdrawal kanban”).
A kanban system is a pull system, in which the kanban is used to pull parts to the next production
stage when they are needed; an MRP system (or any schedule-based system) is a push system, in
which a detailed production schedule for each part is used to push parts to the next production stage
when scheduled. The weakness of a push system (MRP) is that customer demand must be forecast
and production lead times must be estimated. Bad guesses (forecasts or estimates) result in excess
inventory, and the longer the lead time, the more room for error. The weakness of a pull system
(kanban) is that following the JIT production philosophy is essential, especially concerning the
elements of short setup times and small lot sizes.
The main idea behind the principle of JIT is to exclude the roots of manufacturing waste by getting
just the right quantity of raw materials and generating just the right quantity of products in the right
place at the right time. In other words JIT is a process aimed at increasing value added and
eliminating waste by providing the environment to perfect and simplify the processes.
JIT works as a pull system and applies to generally every level in a multi-level production system. In
a pull system, “the subsequent process pulls its requirements from the preceding processes in
question”. One useful and effective way to implement this “pull” production is a kanban system.
Companies are beginning to turn to internet-based technologies to communicate with their suppliers,
making the JIT ordering and delivery process speedier and more flexible. Although applied mostly
to manufacturing, the concepts are not limited to this area of the business. Indeed, in excellent
companies JIT concepts are applied to non-manufacturing areas in the same way as to manufacturing
areas.
The philosophy of JIT is a continuous improvement that puts emphasis on prevention rather than
correction, and demands a companywide focus on quality. It is about developing competence and
simplification in the way we do things by squeezing out waste every step of the way.
But there are no short cuts to excellence. We can learn from, and so avoid the pitfalls of, companies
which have already embarked on the JIT journey. It is not necessary to make the same mistakes.
The requirement of JIT is that equipment, resources and labor are made available only in the amount
required and at the time required to do the job. It is based on producing only the necessary units in the
necessary quantities at the necessary time by bringing production rates exactly in line with market
demand. In short, JIT means making what the market wants, when it wants it, while using a minimum
of facilities, equipment, materials, and human resources.
The relationship of JIT to manufacturing strategy development can be considered in terms of both its
impact on customer needs and of matching or improving on competitor activities. The table below
shows how JIT benefits can be used to provide different forms of competitive advantage. For example, an
improvement in flexibility helps to make the facility more responsive to changes to customer demand,
and shortens lead time.
A JIT system is designed to expose errors and get them corrected rather than covering them up with
inventory, because perfect quality is required for the successful functioning of a JIT system.
Shigeo Shingo, a recognized JIT authority and engineer at the Toyota Motor Company, identifies
seven wastes as the targets of continuous improvement in production processes.
Waste of stocks: Reduce by shortening setup times and reducing lead times, by
synchronizing work flows and improving work skills, and even by smoothing fluctuations in
demand for the product. Reducing all the other wastes reduces the waste of stocks.
Waste of motion: Study motion for economy and consistency. Economy improves
productivity, and consistency improves quality. First improve the motions, then mechanize or
automate; otherwise there is a danger of automating waste.
Waste of making defective products: Develop the production process to prevent defects
from being made so as to eliminate inspection. At each process, accept no defects and make
no defects.
7.6. Value-Added Manufacturing
A method of manufacturing that seeks to eliminate waste in processing. Any step in the
manufacturing process that does not add value to the product for the customer is wasteful. Some
examples of wasteful steps are process delays, material transport, storage, work-in-process (WIP)
inventories, finished goods inventories, and excessive paper processing, none of which add value to
the product.
A JIT system cannot be implemented overnight; it should be a gradual process, and it may be
practical to have a hybrid model in the early phase. According to Shingo, the Toyota Motor Company
took 20 years to implement its JIT system. We also need to note the following points:
The terms push and pull are used to describe two different systems for moving work through a
production process.
Push System
When work is finished at a workstation, the output is pushed to the next station; or, in the case of the
final operation, it is pushed on to final inventory. In this system work is pushed on as it is completed,
with no regard for whether the next station is ready for the work. In a totally predictable environment,
demand forecasts would always be realized; bills of materials would be absolutely accurate; suppliers
would ship on time and with total accuracy; nothing would be misplaced or miscounted; machines
would never fail; all personnel would be present when expected; and all intervals in the process
would be totally predictable. In such an environment, all manufacturing activities may be scheduled
using the push MRP system.
Pull system
Control of moving the work rests with the following operation; each work station pulls the output
from the preceding station as it is needed. Output of the final operation is pulled by customer demand
or the master schedule. Thus in pull system, work is moved in response to demand from the next
stage in the process.
3. A hybrid model (traditional + JIT) is the most appropriate. If JIT fails, the traditional model will be
used as a fallback position.
4. A flexible management system is essential. The private sector is usually more compatible.
(ii) Low inventories of raw materials, work-in-process inventories and finished goods.
(iii) Appropriate material handling system, so that there won’t be work-in-process inventories.
JIT is about doing the simple things well, and gradually doing them better and it is about developing
competence and simplification in the way we do things. In addition to this, it is also about squeezing
out waste every step of the way. Generally, JIT manufacturing seeks to achieve the following goals:
1. To produce the required quality with zero defects. Traditionally, people in manufacturing thought
that producing with zero defects was not possible, because at some level of production it would no
longer be possible to produce without defects; and even when there were defects, the product still
met customer expectations. With JIT the aim is that there will no longer be any cause of a defect,
and therefore all products will meet or exceed expectations. This is also related to quality
management.
An approach to quality control that starts from the premise that if quality cannot be built into the
process, then the only way to ensure that no defective products are passed on to the customer
(the downstream process) is to inspect every part made.
In a just-in-time environment, where waiting for an inspector would be intolerable, the alternatives
are self-inspection and error proofing. Inspection at the source also improves the likelihood of
discovering defects at the point where they occur.
Each individual and function involved in the manufacturing system must, therefore, accept
responsibility for the quality level of its products. Traditional companies believe that quality is costly,
that defects are caused by workers, and that the minimum level of quality that can satisfy the
customer is enough.
Companies practicing JIT believe that quality leads to lower costs, that systems cause most defects,
and that quality can be improved within the Kaizen framework. This concept introduces the
correction of a problem before many other defective units have been completed.
2. Zero setup time. Reducing setup times leads to more predictable production. Zero setup time
also leads to a shorter production time/production cycle and lower inventories.
To effectively implement a low inventory system, the common practice of lot sizing through the
economic order quantity model must be forgotten. Therefore, the time to set up for a different product
in the line needed to be significantly reduced. Innovative designs and changeover techniques are
critical.
4. Zero handling. Zero handling in JIT means eliminating all non-value adding activities. So, zero-
handling means reducing (by redesigning) non-value adding activities.
5. Zero lead-time. Lead time is the time between ordering a product and receiving it. The time taken
to process orders, order parts, manufacture goods, store, pick and dispatch goods all impact the
customer lead-time.
Zero lead time is a result of using small lots and increases the flexibility of the system.
As lead times approach zero, planning that does not rely on forecasts becomes more and more
feasible. The JIT philosophy recognizes that in some markets it is impossible to
have zero lead times, but makes clear that a firm that focuses on reducing lead times can still
compete effectively in that market.
7. Reducing manufacturing cost. Designing products that facilitate and ease manufacturing
processes helps to reduce the cost of manufacturing and of building the product to specifications.
One aspect of designing products for manufacturability is the need to establish a good employer-
employee relationship, in order to cultivate and tap the knowledge of the production experts (the
production floor employees) and the line employees to develop cost-saving solutions.
Advantages of JIT
Advocates of JIT claim it is a revolutionary concept that all manufacturers will have to adopt in order
to remain competitive. Its benefits are many:
3. Eliminates waste and rework and consequently reduces requirements for raw materials,
manpower and machine capacity.
5. Reduced inventory. As a result: Frees up working capital for other projects, Less space is needed,
and Customer responsiveness increases.
7. Reduce lot sizes (manufacturing and purchase): reducing setup times allows economical production
of smaller lots.
8. Problem clarification.
9. Cost savings
(a) Materials Cost Savings: Materials cost savings is basically the reduction of costs associated
with purchasing, receiving, inspection, and stockroom activities.
• Reduction of Suppliers
• Long-term Contracts
• Eliminate unpacking
• Eliminate Inspection
(b) Manufacturing Cost Savings: Manufacturing cost savings identifies savings in the engineering,
production, and quality control activities. A major part of manufacturing cost savings is keeping a
high level of quality; quality reduces cost and increases revenue.
(c) Sales Cost Savings: Sales cost savings come in the form of reducing overlap between the supplier
and the customer, such as duplicated inspection and testing. The most effective situation the sales
department can establish is finding customers that also use JIT systems.
Disadvantages of JIT
There are often a number of barriers that also have to be overcome to achieve the final goal.
The JIT method demands a highly disciplined assembly-line process. The entire factory has to
be in sync to successfully exploit its methods. Manufacturers can afford fewer errors in the
delivery of suppliers' components; if a part isn't there, the assembly line stops, and that can
result in the loss of manpower and cash.
Changes in production planning, inaccurate forecasting procedures resulting in under- or
over-forecasting of demand, equipment failures creating capacity problems and employee
absenteeism all create problems in implementing JIT.
JIT requires special training and the reorganization of policies and procedures.
The organizational culture varies from firm to firm. Some cultures support JIT
success, but it is difficult for an organization to change its culture within a short time.
Differences in the implementation of JIT. Because JIT was originally established in Japan,
the benefits may vary.
Resistance to change. JIT involves a change throughout the whole organization, but
human nature resists change. The most common forms of resistance are emotional resistance and
rational resistance. Emotional resistance covers the psychological feelings which hinder
performance, such as anxiety. Rational resistance is the lack of the needed information for
the workers to perform the job well.
JIT requires workers to be multi-skilled and flexible to change.
Product cost: greatly reduced due to the reduction of manufacturing cycle time, reduction of
waste and inventories, and elimination of non-value-added operations.
Quality: improved because of continuous quality improvement programmes.
Design: due to fast response to engineering changes, alternative designs can be quickly evaluated.
Chapter 8
Forecasting
8.1 Introduction
Forecasting is an estimate of the demand that will occur in the future. Since it is only an estimate
based on past demand, proper care must be taken while preparing it. Given the sales forecast, the
factory capacity, the aggregate inventory levels and the size of the work force, the manager must
decide at what rate of production to operate the plant over an intermediate planning horizon.
There are many types of forecasting models, which differ in complexity and in the amount of data
required. Forecasts are rarely perfect; they are more accurate for grouped data than for individual
items, and more accurate for shorter than for longer time periods.
Forecasting Steps
Decide what needs to be forecast; i.e. Level of detail, units of analysis & time horizon
required
Evaluate and analyze appropriate data
o Identify needed data & whether it’s available
o Select and test the forecasting model
o Cost, ease of use & accuracy
Generate the forecast
Monitor forecast accuracy over time
Qualitative forecasting is an estimating method that relies on expert human judgment combined with
a rating scale, instead of on hard (measurable and verifiable) data. Forecasts are generated
subjectively by the forecaster; they are educated guesses.
This type of forecasting uses factors that cannot be directly measured. The estimates are made with a
system of ratings to produce a figure; no verifiable data is used, only human judgment and the rating
system.
These techniques are primarily based upon judgment and intuition and especially when sufficient
information and data is not available so that complex quantitative techniques cannot be used. The
widely used qualitative methods are:
This is a method by which the relevant opinions of experts are taken, combined and averaged. These
opinions could be taken on an individual basis or there could be a brain storming group session in
which all members participate in generating new ideas that can later be evaluated for their feasibility
and profitability.
The sales people, being closer to consumers, can estimate future sales in their own territories more
accurately. Based on these estimates and the opinions of sales managers, reasonable trends of future
sales can be calculated. These forecasts are good for short-range planning, since sales people are not
sufficiently sophisticated to predict long-term trends.
This method involves a survey of the customers as to their future needs. This method is especially
useful where the industry serves a limited market. Based on the future needs of the customers a
general overall forecast for the demand can be made.
The Delphi method, originally developed by the Rand Corporation in 1969 for forecasting military
events, has become a useful tool in other areas also. It is basically a more formal version of the jury
of opinion method. A panel of experts is given a situation and asked to make initial predictions; on
the basis of a prescribed questionnaire, these experts develop written opinions. The responses are
analyzed, summarized and submitted back to the panel for further consideration. All responses are
anonymous, so that no member is influenced by the others' opinions. This process is repeated until a
consensus is obtained.
Managers use forecasts to inform and support their decisions. A small business owner can use sales
forecasts to determine if he should hire new employees, while the chief executive of a large company
can use customer research surveys to plan marketing campaigns. Unlike quantitative forecasting,
numbers are not at the core of qualitative forecasting, which relies on judgment, experience and
opinions.
A statistical technique for making projections about the future which uses numerical facts and prior
experience to predict upcoming events. The two main types of quantitative forecasting used by
business analysts are the explanatory method that attempts to correlate two or more variables and the
time series method that uses past trends to make forecasts.
Quantitative forecasting techniques typically call for the analysis of statistics and raw data. The
simple moving average method, the weighted moving average method, exponential smoothing, and
time series analysis are quantitative forecasting techniques usually used by economists and data
analysts. These techniques evaluate numerical data while considering changes in trends. Accurate
forecasting helps businesses make sound business decisions.
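As a brief illustration of two of the techniques named above, the following sketch implements a simple moving average and exponential smoothing. The demand series and the smoothing constant alpha = 0.3 are hypothetical, chosen only for demonstration.

```python
# Illustrative sketches of two common quantitative forecasting methods.
# The demand series below is hypothetical, purely for demonstration.

def moving_average(demand, periods=3):
    """Simple n-period moving average forecast for the next period."""
    return sum(demand[-periods:]) / periods

def exponential_smoothing(demand, alpha=0.3):
    """Exponential smoothing: F(t+1) = alpha*A(t) + (1 - alpha)*F(t)."""
    forecast = demand[0]              # initialize with the first actual value
    for actual in demand:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

demand = [120, 130, 110, 140, 125, 135]   # hypothetical weekly demand
print("3-period moving average forecast:", round(moving_average(demand), 1))
print("Exponential smoothing forecast:  ", round(exponential_smoothing(demand), 1))
```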
Forecasts are never perfect; we need to know how much we should rely on our chosen forecasting
method. Measuring forecast error: E_t = A_t − F_t (actual minus forecast). Note that over-forecasts
give negative errors and under-forecasts give positive errors.
Mean Absolute Deviation (MAD) measures the total error in a forecast without regard to sign:
MAD = Σ|actual − forecast| / n
The mean of the absolute values of all forecast errors is calculated, and the forecasting method
or parameter(s) which minimize this measure is selected. The mean absolute deviation measure is
less sensitive to individual large forecast errors than the mean squared error measure.
Cumulative Forecast Error (CFE) measures any bias in the forecast:
CFE = Σ(actual − forecast)
Mean Squared Error (MSE):
MSE = Σ(actual − forecast)² / n
The average of the squared forecast errors for the historical data is calculated. The forecasting
method or parameter(s) which minimize this mean squared error is then selected. It is a
traditional error measure.
Tracking Signal (TS) measures whether your forecasting model is working:
TS = CFE / MAD
Accuracy & Tracking Signal Problem: A company is comparing the accuracy of two forecasting
methods. Forecasts using both methods are shown below along with the actual values for January
through May. The company also uses a tracking signal with ±4 limits to decide when a forecast
should be reviewed. Which forecasting method is best?
Month    Actual    Method A                              Method B
                   Forecast  Error  CFE   TS             Forecast  Error  CFE   TS
Jan.     30        28        2      2     2              27        2      2     1
Feb.     26        25        1      3     3              25        1      3     1.5
March    32        32        0      3     3              29        3      6     3
April    29        30        -1     2     2              27        2      8     4
May      31        30        1      3     3              29        2      10    5
MAD: Method A = 1, Method B = 2
(CFE is the running cumulative forecast error; TS = CFE / MAD.)
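The error measures defined above can be applied to this problem with a short script. The forecast columns are read from the table as printed, so a small difference from the table's rounded values (for example Method B's MAD) may appear; the conclusion is unchanged: Method A stays within the ±4 tracking-signal limits, while Method B's signal reaches about 5, so Method A is preferred.

```python
# Sketch of MAD, CFE, MSE and tracking signal for the two methods above.
actual     = [30, 26, 32, 29, 31]
forecast_a = [28, 25, 32, 30, 30]   # Method A forecasts (from the table)
forecast_b = [27, 25, 29, 27, 29]   # Method B forecasts (from the table)

def accuracy(actual, forecast):
    errors = [a - f for a, f in zip(actual, forecast)]
    mad = sum(abs(e) for e in errors) / len(errors)
    cfe = sum(errors)
    mse = sum(e * e for e in errors) / len(errors)
    ts  = cfe / mad
    return mad, cfe, mse, ts

for name, f in [("A", forecast_a), ("B", forecast_b)]:
    mad, cfe, mse, ts = accuracy(actual, f)
    print(f"Method {name}: MAD={mad:.1f}  CFE={cfe}  MSE={mse:.1f}  TS={ts:.1f}")
```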
Focus Forecasting
o Developed by Bernie Smith
o Relies on the use of simple rules
o Test rules on past data and evaluate how they perform
Combining Forecasts
o Combining two or more forecasting methods can improve accuracy
Collaborative Planning Forecasting and Replenishment (CPFR)
o Establish collaborative relationships between buyers and sellers
o Create a joint business plan
o Create a sales forecast
o Identify exceptions for sales forecast
o Resolve/collaborate on exception items
o Create order forecast
o Identify exceptions for order forecast
o Resolve/collaborate on exception items
o Generate order
Self-Check
1. What is forecasting?
2. What are the steps in forecasting?
3. Describe the time series forecasting methods and explain each.
4. Forecast week 11 sales using the exponential smoothing method, based on the following data.
Chapter 9
Design of the production system involves planning for the inputs, conversion process and outputs of
production operation. The effective management of capacity is the most important responsibility of
production management. The objective of capacity management (i.e., planning and control of
capacity) is to match the level of operations to the level of demand.
Capacity planning is to be carried out keeping in mind future growth and expansion plans, market
trends, sales forecasting, etc. It is a simple task to plan capacity in the case of stable demand, but in
practice demand is seldom stable. The fluctuation of demand creates problems regarding the
procurement of resources to meet customer demand.
Capacity decisions are strategic in nature. Capacity is the rate of productive capability of a facility,
and is usually expressed as volume of output per period of time.
Production managers are concerned about capacity for several reasons. Capacity planning is the first
step when an organization decides to produce more or new products. The capacity of a
manufacturing unit can be expressed as the number of units of output per period. In some situations
measuring capacity is more complicated, for example when multiple products are manufactured; in
such situations, capacity is expressed in man-hours or machine hours. The relationship between
capacity and output is shown in the figure below.
1. Design capacity: the planned or engineered rate of output of goods or services under normal or
full-scale operating conditions. For example, the design capacity of the cement plant is 100 TPD
(tonnes per day), and the capacity of the sugar factory is 150 tonnes of sugarcane crushed per day.
Design capacity is the maximum output that can possibly be attained.
2. System (effective) capacity: the maximum output of the specific product or product mix that the
system of workers and machines is capable of producing as an integrated whole. System capacity
is less than design capacity, or at most equal to it, because of the limitations of product mix,
quality specifications, and breakdowns. Actual output is even less because of many factors
affecting output, such as actual demand, downtime due to machine/equipment failure, and
unauthorized absenteeism. Effective capacity is thus the maximum possible output given a product
mix, scheduling difficulties, machine maintenance, quality factors, and so on.
3. Actual output: the rate of output actually achieved. It cannot exceed effective capacity, and is
often less than effective capacity due to breakdowns, defective output, shortages of materials,
and similar factors.
These different measures of capacity are useful in defining two measures of system effectiveness:
efficiency and utilization. Efficiency is the ratio of actual output to effective capacity. Utilization is
the ratio of actual output to design capacity.
It is common for managers to focus exclusively on efficiency, but in many instances this emphasis
can be misleading. This happens when effective capacity is low compared with design capacity. In
those cases, high efficiency would seem to indicate effective use of resources when it does not. The
following example illustrates this point.
Given the following information, compute the efficiency and the utilization of the vehicle repair
department: design capacity = 50 units per day, effective capacity = 40 units per day, actual
output = 36 units per day.
Solution
Efficiency = actual output/effective capacity = 36 units per day/40 units per day = 90%
Utilization = actual output/design capacity = 36 units per day/50 units per day = 72%
Thus, compared with the effective capacity of 40 units per day, 36 units per day looks pretty good.
However, compared with the design capacity of 50 units per day, 36 units per day is much less
impressive although probably more meaningful.
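For completeness, the same efficiency and utilization calculation expressed as a small sketch, with the values taken from the example above:

```python
# Efficiency and utilization for the vehicle repair department example.
actual_output      = 36   # units per day
effective_capacity = 40   # units per day
design_capacity    = 50   # units per day

efficiency  = actual_output / effective_capacity   # 0.90
utilization = actual_output / design_capacity      # 0.72
print(f"Efficiency = {efficiency:.0%}, Utilization = {utilization:.0%}")
```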
4. Installed capacity: The capacity provided at the time of installation of the plant is called installed
capacity.
5. Rated capacity: Capacity based on the highest production rate established by actual trials is
referred to as rated capacity.
Many decisions made concerning system design have an impact on capacity. The same is true for
many operating decisions. The main factors are the following:
1. Facilities
2. Products or services
3. Processes
4. Human considerations
5. Operations
6. External forces
Facilities factors: the design of facilities, including size and provision for expansion, is very
important. Locational factors, such as transportation costs, distance to market, labor supply, energy
sources, and room for expansion, are also important. Likewise, the layout of work areas often
determines how smoothly work can be performed, and environmental factors such as heating,
lighting, and ventilation also play an important role in determining whether personnel can perform
effectively or must struggle to overcome poor design characteristics.
Product/service factors: product or service design can have a tremendous influence on capacity.
When items are similar, the ability of the system to produce them is generally much greater than
when successive items differ. For example, a restaurant that offers a limited menu can usually
prepare and serve meals at a faster rate than a restaurant with an extensive menu. Generally
speaking, the more uniform the output, the more opportunities there are for standardization of
methods and materials, which leads to greater capacity. The particular mix of products or services
rendered must also be considered, since different items will have different rates of output.
Process factors: the quantity capability of a process is an obvious determinant of capacity. A more
subtle determinant is the influence of output quality. For instance, if the quality of output does not
meet standards, the rate of output will be slowed by the need for inspection and rework activities.
Human factors: the tasks that make up a job, the variety of activities involved, and the training, skill,
and experience required to perform the job all have an impact on potential and actual output. In
addition, employee motivation has a very basic relationship to capacity, as do absenteeism and labor
turnover.
Operations factors: scheduling problems may occur when an organization has differences in
equipment capabilities among alternative pieces of equipment or differences in job requirements.
Inventory stocking decisions, late deliveries, the acceptability of purchased materials and parts, and
quality inspection and control procedures can also have an impact on effective capacity.
External factors: product standards, especially minimum quality and performance standards, can
restrict management's options for increasing and using capacity. Thus, pollution standards on
products and equipment often reduce effective capacity, as does the paperwork required by
government regulatory agencies, by engaging employees in nonproductive activities. A similar effect
occurs when a union contract limits the number of hours and the type of work an employee may do.
Example: A department works one 8 hour shift, 250 days a year, and has these figures for usage of a
machine that is currently being considered.
Product    Annual demand    Standard processing time per unit (hr)    Processing time needed (hr)
#1         400              5.0                                        2,000
#2         300              8.0                                        2,400
#3         700              2.0                                        1,400
Total                                                                  5,800
How many machines are needed for the given 250 working days per year?
Solution
Working one 8-hour shift, 250 days a year, provides an annual capacity of 8 × 250 = 2,000 hours per
machine per year. Since 5,800 hours / 2,000 hours per machine = 2.90 machines, three machines
would be needed to handle the required volume.
Example 2: A manager has the option of purchasing one, two, or three machines. Fixed costs and
potential volumes are as follows.
Solution
a. Compute the breakeven point for each range using the formula Q_BE = FC / (R − VC):
For one machine: Q_BE = $9,600 / ($40 per unit − $10 per unit) = 320 units (not in range).
For two machines: Q_BE = $15,000 / ($40 per unit − $10 per unit) = 500 units.
For three machines: Q_BE = $20,000 / ($40 per unit − $10 per unit) = 666.67 units.
b. Comparing the projected range of demand with the two ranges for which a breakeven point
occurs, you can see that the breakeven point of 500 units falls in the range 301 to 600. This means
that even if demand is at the low end of the projected range (i.e., 580 units), it would be above the
breakeven point and thus yield a profit. That is not true of the range 601 to 900: at the top end of
projected demand (i.e., 660 units), the volume would still be less than the breakeven point for that
range, so there would be no profit. Hence, the manager should choose two machines.
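A small sketch of the breakeven comparison follows. The revenue of $40 per unit, the variable cost of $10 per unit, and the fixed costs come from the solution text; the volume ranges and the projected demand of roughly 580 to 660 units are inferred from the discussion and should be treated as assumptions.

```python
# Sketch of the breakeven comparison in Example 2. Volume ranges and the
# projected demand are inferred from the solution text (assumptions).
revenue, var_cost = 40, 10             # $ per unit
options = [                            # (machines, fixed cost, volume range)
    (1, 9_600,  (0, 300)),
    (2, 15_000, (301, 600)),
    (3, 20_000, (601, 900)),
]
projected_demand = (580, 660)          # assumed projected demand range (units)

for machines, fc, (lo, hi) in options:
    q_be = fc / (revenue - var_cost)   # breakeven quantity
    in_range = lo <= q_be <= hi
    print(f"{machines} machine(s): Q_BE = {q_be:.1f} units "
          f"({'in' if in_range else 'not in'} range {lo}-{hi})")

# Two machines break even at 500 units, below the low end of projected demand
# (580), so that option is profitable across the range; three machines break
# even at 666.7 units, above the top end of projected demand (660).
```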
Capacity planning is concerned with defining the long-term and the short-term capacity needs of an
organization and determining how those needs will be satisfied. Capacity planning decisions are taken
based upon the consumer demand and this is merged with the human, material and financial resources
of the organization.Capacity requirements can be evaluated from two perspectives—long-term
capacity strategies and short-term capacity strategies.
Long-term capacity requirements are dependent on marketing plans, product development and the
life cycle of the product. Long-term capacity planning is concerned with accommodating major
changes that affect the overall level of output in the long term. Marketing environmental assessment
and implementing long-term capacity plans in a systematic manner are the major responsibilities of
management. The following parameters affect long-range capacity decisions.
A. Multiple products: Companies produce more than one product using the same facilities in order
to increase profit. Manufacturing multiple products reduces the risk of failure. Having more than
one product also helps capacity planners do a better job: because products are in different stages of
their life cycles, it is easier to schedule them to obtain maximum capacity utilization.
B. Phasing in capacity: In high-technology industries, and in industries where technological
developments are very fast, the rate of obsolescence is high. Products must be brought to market
quickly, yet the time needed to construct facilities is long. Here the solution is to phase in capacity
on a modular basis: some commitment of funds and people toward facilities is made over a period
of 3 to 5 years. This is an effective way of capitalizing on technological breakthroughs.
C. Phasing out capacity: Outdated manufacturing facilities cause excessive plant closures and
downtime. The impact of closures is not limited to the fixed costs of plant and machinery; phasing
out should therefore be done in a humane way, without harming the community. The phasing-out
options include making alternative arrangements for the people affected, such as shifting them to
other jobs or other locations, compensating employees, and so on.
Controllable Factors
1. Proximity to markets: Every company is expected to serve its customers by providing goods and services at the time needed and at a reasonable price. Organizations may choose to locate facilities close to the market or away from it, depending upon the product. When the buyers of the product are concentrated, it is advisable to locate the facilities close to the market.
Nearness to the market ensures a consistent supply of goods to customers and reduces the cost of transportation.
2. Supply of raw material: It is essential for the organization to get raw material of the right quality and at the right time in order to have uninterrupted production. This factor becomes very important if the materials are perishable or the cost of transportation is very high.
• When a single raw material is used without loss of weight, locate the plant at the raw material source, at the market, or at any point in between.
• When a weight-losing raw material is used, locate the plant at the raw material source.
• When the raw material is universally available, locate the plant close to the market area.
• If the raw materials are processed from a variety of locations, the plant may be situated so as to minimize total transportation costs.
Nearness to raw material is important in case of industries such as sugar, cement, jute and cotton
textiles.
3. Transportation facilities: Speedy transport facilities ensure timely supply of raw materials to the company and of finished goods to the customers; the transport facility is therefore a prerequisite for the location of the plant. There are five basic modes of physical transportation: air, road, rail, water and pipeline. Goods that are mainly intended for export demand a location near a port or a large airport. The choice of transport method, and hence the location, will depend on relative costs, convenience, and suitability. Thus the ratio of transportation cost to value added is one of the criteria for plant location.
4. Infrastructure availability: Basic infrastructure facilities like power, water and waste disposal become prominent factors in deciding the location. Certain types of industries are power hungry, e.g., aluminium and steel, and they should be located close to a power station or in a location where an uninterrupted power supply is assured throughout the year; the non-availability of power may become a survival problem for such industries. Process industries like paper, chemicals and cement require a continuous supply of water in large quantities and of good quality, and the mineral content of the water becomes an important factor. A waste disposal facility for process industries is also an important factor that influences plant location.
5. Labour and wages: The problem of securing an adequate number of workers with the required skills is a factor to be considered both at the territorial and at the community level during plant location. Importing labour is usually costly and involves administrative problems. The history of labour relations in a prospective community should be studied, and the productivity of labour is also an important factor. The prevailing wage pattern, cost of living, industrial relations and the bargaining power of unions form important considerations.
6. External economies of scale: External economies of scale can be described as urbanization economies and locational economies of scale. The first refers to the advantages a company gains by setting up operations in a large city, while the second refers to "settling down" among other companies of related industries. In the case of urbanization economies, firms benefit from locating in larger cities rather than in smaller ones because they gain access to a large pool of labour and to transport facilities, can enlarge the markets for selling their products, and have access to a much wider range of business services.
Locational economies of scale in the manufacturing sector have evolved over time and have mainly increased competition through shared production facilities and lower production costs resulting from lower transportation and logistical costs. This has led to manufacturing districts where many companies of related industries are located more or less in the same area. As large corporations realized that inventories and warehouses had become a major cost factor, they tried to reduce inventory costs by launching the "just-in-time" production system (the so-called Kanban system). This highly efficient production system was one main factor in the success of the Japanese car industry. Just-in-time requires that parts arrive from suppliers within just a few hours of being ordered; to meet this requirement, suppliers have to be located in the same area, which increases the market and the level of service available to large corporations.
7. Capital: Financial capital is highly mobile and therefore seldom constrains location decisions. For example, large multinational corporations such as Coca-Cola operate in many different countries and can raise capital where interest rates are lowest and conditions are most suitable. Capital becomes a main factor when it comes to venture capital: young, fast-growing high-tech firms, which usually do not have many fixed assets, particularly need access to financial capital as well as to skilled, educated employees.
Uncontrollable Factors
1. Government policy: The policies of state governments and local bodies concerning labour laws, building codes, safety, etc., are factors that demand attention. In order to achieve balanced regional growth of industries, both central and state governments in our country offer packages of incentives to entrepreneurs in particular locations. The incentive package may be in the form of exemption from sales tax and excise duties for a specific period, soft loans from financial institutions, subsidies on electricity charges and investment subsidies. Some of these incentives may tempt a firm to locate its plant in a particular location in order to avail of these facilities.
2. Climatic conditions: The geology of the area needs to be considered together with climatic conditions (humidity, temperature). Climate greatly influences human efficiency and behaviour, and some industries require specific climatic conditions, e.g., a textile mill requires high humidity.
3. Supporting industries and services: Nowadays a manufacturing organization does not make all its components and parts itself; it subcontracts the work to vendors. So the source of supply of component parts is one of the factors that influence the location. Various services such as communications, banking, professional consultancy and other civic amenities also play a vital role in the selection of a location.
4. Community and labour attitudes: The community's attitude towards work and towards prospective industries can make or mar an industry, and its attitude towards supporting trade union activities is an important criterion. A facility location may not be desirable, even though all other factors are favourable, if the labour attitude towards management is poor, as this very often leads to strikes and lockouts.
5. Community infrastructure: Infrastructure also needs to be considered in location decisions, since it is enormously expensive to build, and for most manufacturing activities the existing stock of infrastructure places physical restrictions on location possibilities.
Dominant Factors
Factors dominating location decisions for new manufacturing plants can be broadly classified into the following groups, listed in order of their importance.
1. Favorable labour climate: A favorable labour climate may be the most important factor in
location decisions for labor-intensive firms in industries such as textiles, furniture, and consumer
electronics. Labour climate includes wage rates, training requirements, attitudes toward work, worker
productivity, and union strength. Many executives consider weak unions, or a low probability of union organizing efforts, to be a distinct advantage.
2. Proximity to markets: After determining where the demand for goods and services is greatest,
management must select a location for the facility that will supply that demand. Locating near
markets is particularly important when the final goods are bulky or heavy and outbound
transportation rates are high. For example, manufacturers of products such as plastic pipe and heavy
metals all emphasize proximity to their markets.
3. Quality of life: Good schools, recreational facilities, cultural events, and an attractive lifestyle
contribute to quality of life. This factor is relatively unimportant on its own, but it can make the
difference in location decisions.
4. Proximity to suppliers and resources: In many companies, plants supply parts to other facilities
or rely on other facilities for management and staff support. These require frequent coordination and
communication, which can become more difficult as distance increases.
5. Utilities, taxes, and real estate costs: Other important factors that may emerge include utility
costs (telephone, energy, and water), local and state taxes, financing incentives offered by local or
state governments, relocation costs, and land costs.
Secondary Factors
There are some other factors that need to be considered, including room for expansion, construction
costs, accessibility to multiple modes of transportation, the cost of shuffling people and materials
between plants, competition from other firms for the workforce, community attitudes, and many
others. For global operations, firms are emphasizing local employee skills and education and the local
infrastructure.
Dominant Factors in Services
The factors considered for manufacturers also apply to service providers, with one important addition — the impact of location on sales and customer satisfaction. Customers usually consider how close a service facility is, particularly if the process requires considerable customer contact.
Proximity to Customers
Location is a key factor in determining how conveniently customers can carry on business with a
firm. For example, few people would like to go to a remotely located dry cleaner or supermarket if
another is more convenient. Thus the influence of location on revenues tends to be the dominant
factor.
For warehousing and distribution operations, transportation costs and proximity to markets are
extremely important. With a warehouse nearby, many firms can hold inventory closer to the
customer, thus reducing delivery time and promoting sales.
Location of Competitors
One complication in estimating the sales potential at different locations is the impact of competitors. Management must not only consider the current locations of competitors but also try to anticipate their reaction to the firm's new location. Avoiding areas where competitors are already well established often pays. However, in some industries, such as new-car sales showrooms and fast-food chains, locating near competitors is actually advantageous. The strategy is to create a critical mass, whereby several competing firms clustered in one location attract more customers than the total number who would shop at the same stores in scattered locations. Recognizing this effect, some firms use a follow-the-leader strategy when selecting new sites.
Secondary Factors
Retailers must also consider the level of retail activity, residential density, traffic flow, and site visibility. Retail activity in the area is important, as shoppers often decide on impulse to go shopping or to eat in a restaurant. Traffic flow and visibility are important because many customers arrive by car; visibility involves the distance from the street and the size of nearby buildings and signs. High residential density ensures nighttime and weekend business when the population in the area fits the firm's competitive priorities and target market segment.
Alfred Weber (1868–1958), with the publication of Theory of the Location of Industries in 1909, put forth the first developed general theory of industrial location. His model took into account several spatial factors for finding the optimal location and minimal cost for manufacturing plants. Finding the point for locating an industry that minimizes the costs of transportation and labour requires analysis of three factors:
1. The point of optimal transportation, based on the costs of distance weighted by the 'material index'—the ratio of the weight of intermediate products (raw materials) to that of the finished product.
2. The labour distortion, in which more favourable sources of lower-cost labour may justify greater transport distances.
3. Agglomeration and deglomeration forces.
Agglomeration, or concentration of firms in a locale, occurs when there is sufficient demand for support services for the company and labour force, including new investments in schools and hospitals. Supporting companies, such as facilities that build and service machines and financial services, also prefer closer contact with their customers.
Deglomeration occurs when companies and services leave because of over-concentration of industries or of the wrong types of industries, or because of shortages of labour, capital, affordable land, etc.
Weber also examined factors leading to the diversification of an industry in the horizontal relations
between processes within the plant.
The issue of industry location is increasingly relevant to today’s global markets and transnational
corporations. Focusing only on the mechanics of the Weberian model could justify greater transport
distances for cheap labour and unexploited raw materials. When resources are exhausted or workers
revolt, industries move to different countries.
Various models are available to help identify the ideal location. Some of the popular models are:
1. Factor rating method
2. Weighted factor rating method
3. Load-distance method
4. Centre of gravity method
5. Break-even analysis
Factor Rating Method
The process of selecting a new facility location by factor rating involves the following steps:
1. List the most relevant location factors.
2. Rate each factor according to its relative importance, i.e., a higher rating indicates a more prominent factor.
3. Rate each location according to its merits on each factor.
4. Calculate the rating for each location by multiplying the factor rating by the location rating for each factor.
5. Sum these products for each location and select the best location as the one with the highest total score.
ILLUSTRATION 1: Let us assume that a new medical facility, Health-care, is to be located in Delhi.
The location factors, factor rating and scores for two potential sites are shown in the following table.
Which is the best location based on factor rating method?
Weighted Factor Rating Method
In this method, to merge quantitative and qualitative factors, factors are assigned weights based on their relative importance, and a weighted score for each site is calculated using a preference matrix. The site with the highest weighted score is selected as the best choice.
ILLUSTRATION 2: Let us assume that a new medical facility, Health-care, is to be located in Delhi.
The location factors, weights, and scores (1 = poor, 5 = excellent) for two potential sites are shown in
the following table. What is the weighted score for these sites? Which is the best location?
SOLUTION: The weighted score for each site is calculated by multiplying each factor's weight by its score and adding the results:
Weighted score for location 1 = 75 + 100 + 75 + 15 + 50 = 315
Weighted score for location 2 = 125 + 75 + 75 + 30 + 30 = 335
Location 2 is the better site because it has the higher weighted score (335 versus 315).
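A minimal Python sketch of the preference-matrix calculation follows. The factor table for Illustration 2 is not reproduced above, so the weights and scores below are hypothetical placeholders chosen only to be consistent with the weighted-score products shown (75, 100, 75, 15, 50 and 125, 75, 75, 30, 30); substitute the actual table when applying the method.

# Hypothetical weights and 1-to-5 scores for two candidate sites.
weights           = [25, 25, 25, 15, 10]   # relative importance of each factor
scores_location_1 = [3, 4, 3, 1, 5]        # 1 = poor, 5 = excellent
scores_location_2 = [5, 3, 3, 2, 3]

def weighted_score(weights, scores):
    # Preference-matrix total: sum of (weight x score) over all factors.
    return sum(w * s for w, s in zip(weights, scores))

for name, scores in (("Location 1", scores_location_1),
                     ("Location 2", scores_location_2)):
    print(f"{name}: weighted score = {weighted_score(weights, scores)}")

# The site with the higher weighted score (here Location 2, 335 vs 315)
# would be selected as the better choice.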
Load-Distance Method
The load-distance method is a mathematical model used to evaluate locations based on proximity
factors. The objective is to select a location that minimizes the total weighted loads moving into and
out of the facility. The distance between two points is expressed by assigning the points to grid
coordinates on a map. An alternative approach is to use time rather than distance.
DISTANCE MEASURES
Suppose that a new warehouse is to be located to serve Delhi. It will receive inbound shipments from
several suppliers, including one in Ghaziabad. If the new warehouse were located at Gurgaon, what
would be the distance between the two facilities? If shipments travel by truck, the distance depends
on the highway system and the specific route taken. Computer software is available for calculating the actual mileage between any two locations in the same country.
However, for load-distance method, a rough calculation that is either Euclidean or rectilinear distance
measure may be used. Euclidean distance is the straight-line distance, or shortest possible path,
between two points.
The point A on the grid represents the supplier's location in Ghaziabad, and the point B represents the possible warehouse location at Gurgaon. The distance between points A and B is the length of the hypotenuse of a right triangle:

dAB = √[(XA – XB)² + (YA – YB)²]

where
dAB = distance between points A and B
XA = x-coordinate of point A
YA = y-coordinate of point A
XB = x-coordinate of point B
YB = y-coordinate of point B
Rectilinear distance measures the distance between two points with a series of 90° turns, as along city blocks. Essentially, this distance is the sum of the two dashed lines representing the base and side of the triangle in the figure. The distance travelled in the x-direction is the absolute value of the difference in x-coordinates; adding this result to the absolute value of the difference in the y-coordinates gives

dAB = |XA – XB| + |YA – YB|
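A minimal Python sketch of the two distance measures follows; the grid coordinates below are hypothetical placeholders for points A and B.

import math

def euclidean(a, b):
    # Straight-line (shortest possible path) distance between grid points a and b.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def rectilinear(a, b):
    # City-block distance: travel along the grid with 90-degree turns only.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

point_a = (2.5, 4.5)   # e.g., supplier location on the map grid (assumed)
point_b = (5.5, 2.0)   # e.g., candidate warehouse location (assumed)

print(f"Euclidean distance:   {euclidean(point_a, point_b):.2f}")
print(f"Rectilinear distance: {rectilinear(point_a, point_b):.2f}")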
Suppose that a firm planning a new location wants to select a site that minimizes the distances that
loads, particularly the larger ones, must travel to and from the site. Depending on the industry, a load
may be shipments from suppliers, between plants, or to customers, or it may be customers or
employees travelling to or from the facility. The firm seeks to minimize its load distance, generally by
choosing a location so that large loads go short distances.
To calculate a load-distance score for any potential location, we use either of the distance measures and simply multiply the loads flowing to and from the facility by the distances travelled. These loads may be expressed as tonnes or as the number of trips per week.
ILLUSTRATION 3: The new Health-care facility is targeted to serve seven census tracts in Delhi.
The table given below shows the coordinates for the center of each census tract, along with the
projected populations, measured in thousands. Customers will travel from the seven census tract
centers to the new facility when they need health-care. Two locations being considered for the new
facility are at (5.5, 4.5) and (7, 2), which are the centers of census tracts C and F. Details of seven
census tract centers, co-ordinate distances along with the population for each center are given below.
If we use the population as the loads and use rectilinear distance, which location is better in terms of
its total load distance score?
Solution: Calculate the load-distance score for each candidate location: using the coordinates and populations from the above table, multiply each tract's population by its rectilinear distance to the candidate location, and sum over all tracts.
Summing the scores for all tracts gives a total load-distance score of 239 when the facility is located at (5.5, 4.5), versus a load-distance score of 168 at location (7, 2). Therefore, the location in census tract F is the better location in terms of its total load-distance score.
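The comparison can be sketched in a few lines of Python. The census-tract table for Illustration 3 is not reproduced above, so the coordinates and populations below are a reconstruction chosen to be consistent with the figures quoted in the text (scores of 239 and 168 here, and the sums 453.5, 205.5 and 68 used for the centre of gravity below); verify them against the original table before relying on them.

tracts = {            # tract: (x, y, population in thousands) -- reconstructed data
    "A": (2.5, 4.5, 2),
    "B": (2.5, 2.5, 5),
    "C": (5.5, 4.5, 10),
    "D": (5.0, 2.0, 7),
    "E": (8.0, 5.0, 10),
    "F": (7.0, 2.0, 20),
    "G": (9.0, 2.5, 14),
}

def load_distance_score(site):
    # Sum over tracts of population x rectilinear distance to the candidate site.
    sx, sy = site
    return sum(pop * (abs(x - sx) + abs(y - sy)) for x, y, pop in tracts.values())

for site in [(5.5, 4.5), (7.0, 2.0)]:
    print(f"Candidate {site}: load-distance score = {load_distance_score(site):.0f}")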
Centre of Gravity Method
The centre of gravity method is based primarily on cost considerations. It can be used to assist managers in balancing cost and service objectives. The centre of gravity method takes into account the locations of plants and markets, the volume of goods moved, and transportation costs in arriving at the best location for a single intermediate warehouse.
The center of gravity is defined to be the location that minimizes the weighted distance between the warehouse and its supply and distribution points, where the distance is weighted by the number of tonnes supplied or consumed. The first step in this procedure is to place the locations on a coordinate system. The origin of the coordinate system and the scale used are arbitrary, as long as the relative distances are correctly represented; this can easily be done by placing a grid over an ordinary map. The center of gravity is determined by the formulae

Cx = Σ li xi / Σ li and Cy = Σ li yi / Σ li

where xi and yi are the coordinates of location i and li is the load (e.g., tonnes or population) supplied or consumed at location i.
ILLUSTRATION 4: The new Health-care facility is targeted to serve seven census tracts in Delhi.
The table given below shows the coordinates for the center of each census tract, along with the
projected populations, measured in thousands. Customers will travel from the seven census tract
centers to the new facility when they need healthcare.
Two locations being considered for the new facility are at (5.5, 4.5) and (7, 2), which are the centers
of census tracts C and F. Details of seven census tract centers, coordinate distances along with the
population for each center are given below. Find the target area’s center of gravity for the Health-care
medical facility.
Cx = 453.5/68 = 6.67
Cy = 205.5/68 = 3.02
The center of gravity is (6.67, 3.02). Using the center of gravity as a starting point, managers can now search in its vicinity for the optimal location.
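As a rough sketch, the centre-of-gravity calculation for Illustration 4 can be coded as follows, reusing the reconstructed tract data from the previous sketch (again, these coordinates and populations are only consistent with the quoted sums of 453.5, 205.5 and 68, not taken from the original table).

tracts = {             # tract: (x, y, population in thousands) -- reconstructed data
    "A": (2.5, 4.5, 2), "B": (2.5, 2.5, 5), "C": (5.5, 4.5, 10),
    "D": (5.0, 2.0, 7), "E": (8.0, 5.0, 10), "F": (7.0, 2.0, 20),
    "G": (9.0, 2.5, 14),
}

total_load = sum(pop for _, _, pop in tracts.values())
cx = sum(x * pop for x, _, pop in tracts.values()) / total_load   # Σ li xi / Σ li
cy = sum(y * pop for _, y, pop in tracts.values()) / total_load   # Σ li yi / Σ li

print(f"Centre of gravity: ({cx:.2f}, {cy:.2f})")   # expected (6.67, 3.02)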
Break-Even Analysis
Break-even analysis implies that at some point in the operations total revenue equals total cost. Break-even analysis is concerned with finding the point at which revenues and costs are exactly equal; this point is called the break-even point. Fig. 2.3 portrays the break-even chart.
The break-even point is the volume of output at which neither a profit is made nor a loss is incurred. The break-even point (BEP) in units can be calculated using the relation

BEP = Fixed cost / (Selling price per unit – Variable cost per unit) = F / (S – V)
Plotting the break-even chart for each location allows an economic comparison of the locations and helps identify the range of production volume over which each location is preferable.
ILLUSTRATION 5: The ABC company has a demand of 130,000 units of a new product. Three potential locations X, Y and Z, with the cost structures shown below, are available. Select the location to be chosen and identify the volume ranges for which each location is suited.
Solution: The total cost at each location is its fixed cost plus (variable cost per unit × volume). Equating the total costs of locations X and Y gives the first break-even volume:
10X + 150,000 = 8X + 350,000
2X = 200,000
X = 100,000 units
Equating the total costs of locations Y and Z gives the second break-even volume:
8X + 350,000 = 6X + 950,000
2X = 600,000
X = 300,000 units
Therefore, at a volume of 130,000 units, location Y is the appropriate choice. From the graph (Fig. 2.4) we can interpret that location X is suitable up to 100,000 units, location Y is suitable between 100,000 and 300,000 units, and location Z is suitable if the demand is more than 300,000 units.
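A brief Python sketch of this locational break-even comparison follows. The fixed and variable costs below are inferred from the equations in the solution rather than quoted directly in the text, so treat them as assumptions to be checked against the original cost table.

locations = {
    "X": {"fixed": 150_000, "variable": 10},   # inferred cost structure
    "Y": {"fixed": 350_000, "variable": 8},
    "Z": {"fixed": 950_000, "variable": 6},
}

def total_cost(name, volume):
    c = locations[name]
    return c["fixed"] + c["variable"] * volume

def best_location(volume):
    # Location with the lowest total cost at the given production volume.
    return min(locations, key=lambda name: total_cost(name, volume))

for volume in (50_000, 130_000, 500_000):
    print(f"At {volume:,} units the cheapest location is {best_location(volume)}")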
An ideal location is one which results in lowest production cost and least distribution cost per unit.
These costs are influenced by a number of factors as discussed earlier. The various costs which
decide locational economy are those of land, building, equipment, labour, material, etc. Other factors
like community attitude, community facilities and housing facilities will also influence the selection
of the best location. Economic analysis is carried out to decide which location is the best.
The following illustration will clarify the method of evaluating the best location.
ILLUSTRATION 6: From the following data select the most advantageous location for setting a plant
for making transistor radios.
Chapter 10
Facility Layout
10.1 Introduction
Decisions about layout are made only periodically, but since they have long-term consequences, they must be made with careful planning. The layout design affects the cost of producing goods and delivering services for many years into the future. The design of layouts begins with a statement of the goals of the facility, and layouts are designed to meet these goals. After initial designs are developed, improved designs are sought. This can be a tedious and cumbersome task because the number of possible designs is so large; for this reason, quantitative and computer-based models are often used. Layout planning is deciding on the best physical arrangement of all resources that consume space within a facility. Proper layout planning is highly important for the efficient running of a business; otherwise there can be much wasted time and energy, as well as confusion.
Plant layout is defined as the most effective physical arrangement of machines, processing
equipment, and service departments to have the best co-ordination and efficiency of man, machine
and material in a plant. It refers to the physical arrangement of production facilities. It is the
configuration of departments, work centers and equipment in the conversion process. It is a floor plan
of the physical facilities, which are used in production.
1. Principle of integration: A good layout is one that integrates men, materials, machines and
supporting services and others in order to get the optimum utilization of resources and maximum
effectiveness.
2. Principle of minimum distance: This principle is concerned with the minimum travel (or movement) of man and materials. The facilities should be arranged such that the total distance travelled by men and materials is at a minimum, and as far as possible straight-line movement should be preferred.
3. Principle of cubic space utilization: The good layout is one that utilizes both horizontal and
vertical space. It is not only enough if only the floor space is utilized optimally but the third
dimension, i.e., the height is also to be utilized effectively.
4. Principle of flow: A good layout is one that makes the materials move in a forward direction towards the completion stage, i.e., there should not be any backtracking.
5. Principle of maximum flexibility: The good layout is one that can be altered without much cost
and time.
6. Principle of safety, security and satisfaction: A good layout is one that gives due consideration
to workers safety and satisfaction and safeguards the plant and machinery against fire, theft, etc.
7. Principle of minimum handling: A good layout is one that reduces the material handling to the
minimum.
Layouts are affected by types of industry, production systems, types of products, volume of
production, and types of manufacturing processes used to get the final products. They are elaborated
below.
1. Types of Industries
Synthetic process based industry: In this, two or more materials are mixed to get a product, e.g.
cement is obtained from the combination of limestone and clay.
Analytic process based industry: This is the opposite of the synthetic process. Here, the final products are obtained by breaking the material into several parts; for example, petroleum products are obtained from the fractional distillation (a breaking process) of crude oil.
Conditioning process based industry: Here, the form of raw material is changed into the desired
products, e.g. jute products in the jute industry, or the milk products in the dairy farm.
Extractive process based industry: By applying heat, desired product is extracted from the raw
material, e.g. Aluminum from bauxite, and steel from iron ores.
2. Type of Production System
Job Shop Production
This is characterized by made-to-order, low-volume, labour-intensive products; by a large product mix; by general-purpose equipment; by interrupted product flow; and by frequent schedule changes. The system must be flexible, which requires general-purpose machines and highly skilled workers. Examples: space vehicles, aircraft, special tools and equipment, prototypes of future products.
Batch Production
This is characterized by medium-sized lots of the same type of item or product.
Example: industrial equipment, furniture, house-hold appliances, machine shop, casting, plastic
molding, press work-shop, etc.
3. Type of Product
Whether the product is heavy or light, large or small, liquid or solid, etc.
4. Volume of Production
Whether the production is in small quantity, or in lots or batches, or in huge quantity (mass
production).
Layouts can be classified into the following five categories:
1. Process layout
2. Product layout
3. Combination layout
4. Fixed position layout
5. Group layout
Process Layout
Process layout is recommended for batch production. All machines performing similar types of operations are grouped at one location, e.g., all lathes, milling machines, etc., are clustered in like groups in the shop.
Thus, in a process layout the facilities are grouped together according to their functions.
A typical process layout is shown in Fig. 2.5. The flow paths of material through the facilities from
one functional area to another vary from product to product. Usually the paths are long and there will
be possibility of backtracking.
Process layout is normally used when the production volume is not sufficient to justify a product
layout. Typically, job shops employ process layouts due to the variety of products manufactured and
their low production volumes.
Advantages
1. In process layout machines are better utilized and fewer machines are required.
3. Lower investment on account of the comparatively smaller number of machines and the lower cost of general-purpose machines.
5. A high degree of flexibility with regard to the allocation of work to machines and workers.
6. The diversity of tasks and variety of jobs makes the work challenging and interesting.
7. Supervisors will become highly knowledgeable about the functions under their department.
Limitations
1. Backtracking and long movements may occur in the handling of materials thus, reducing material
handling efficiency.
3. Process time is prolonged, which reduces inventory turnover and increases in-process inventory.
5. Throughput time (the time gap between entering and leaving the process) is longer.
Product Layout
In this type of layout, machines and auxiliary services are located according to the processing
sequence of the product. If the volume of production of one or more products is large, the facilities
can be arranged to achieve efficient flow of materials and lower cost per unit. Special purpose
machines are used which perform the required function quickly and reliably.
The product layout is selected when the volume of production of a product is high such that a separate
production line to manufacture it can be justified. In a strict product layout, machines are not shared
by different products. Therefore, the production volume must be sufficient to achieve satisfactory
utilization of the equipment. A typical product layout is shown in Fig. below.
Advantages
7. Reduced material handling cost due to mechanized handling systems and straight flow.
Limitations
1. A breakdown of one machine in a product line may cause stoppages of machines in the
downstream of the line.
Combination Layout
A combination of process and product layouts combines the advantages of both types of layout. A combination layout is possible where an item is being made in different types and sizes. Here the machinery is arranged in a process layout, but the process groupings are then arranged in a sequence to manufacture various types and sizes of products. It is to be noted that the sequence of operations remains the same for the variety of products and sizes. The figure below shows a combination type of layout for manufacturing different sized gears.
Fig. 10.5.3.1 Combination layout for making different types and sizes of gears
Fixed Position Layout
This is also called the project type of layout. In this type of layout, the material or major components remain in a fixed location, and tools, machinery, men and other materials are brought to this location. This type of layout is suitable when one or a few pieces of identical heavy products are to be manufactured, or when the assembly consists of a large number of heavy parts whose cost of transportation is very high.
Advantages
2. The workers identify themselves with a product in which they take interest and pride in doing the
job.
Group Layout (Cellular Layout)
There is now a trend to bring an element of flexibility into the manufacturing system with regard to variation in batch sizes and sequences of operations. Grouping equipment to perform a sequence of operations on a family of similar components or products has therefore become all the more important. Group technology (GT) is the analysis and comparison of items in order to group them into families with similar characteristics. GT can be used to develop a hybrid between a pure process layout and a pure flow-line (product) layout. This technique is very useful for companies that produce a variety of parts in small batches, enabling them to take advantage of the economies of a flow-line layout.
The application of group technology involves two basic steps. The first step is to determine the component families or groups. The second step is to arrange the plant's equipment used to process a particular family of components; this represents small plants within the plant. Group technology reduces production planning time for jobs and reduces set-up time.
Thus a group layout is a combination of the product layout and the process layout, and it combines the advantages of both systems. If there are m machines and n components, the machines and components are grouped into machine-component cells (groups) such that all the components assigned to a cell are almost completely processed within that cell itself; the objective is to minimize inter-cell movements. The basic aim of a group technology layout is to identify families of components that require similar processing; the machines are then grouped into cells, and each cell is capable of satisfying all the requirements of the component family assigned to it.
The layout design process usually considers a single objective. In a process layout, the objective is to minimize the total cost of materials handling; because of the nature of the layout, the cost of equipment will be at a minimum. In a product layout, the cost of materials handling will be at the absolute minimum, but the cost of equipment will not be at a minimum if the equipment is not fully utilized. In a group technology layout, the objective is to minimize the sum of the cost of transportation and the cost of equipment, so it is called a multi-objective layout. A typical group layout (or cellular layout) is shown in Fig. below.
This type of layout may not be feasible for all situations. If the product mix is completely dissimilar,
then we may not have meaningful cell formation.
In product layout, equipment or departments are dedicated to a particular product line, duplicate
equipment is employed to avoid backtracking, and a straight-line flow of material movement is
achievable. Adopting a product layout makes sense when the batch size of a given product or part is
large relative to the number of different products or parts produced.
Assembly lines are a special case of product layout. In a general sense, the term assembly line refers
to progressive assembly linked by some material-handling device. The usual assumption is that some
form of pacing is present and the allowable processing time is equivalent for all workstations. Within
this broad definition, there are important differences among line types. A few of these are material
handling devices (belt or roller conveyor, overhead crane); line configuration (U-shape, straight,
branching); pacing (mechanical, human); product mix (one product or multiple products);
workstation characteristics (workers may sit, stand, walk with the line, or ride the line); and length of
the line (few or many workers). The range of products partially or completely assembled on lines
includes toys, appliances, autos, clothing and a wide variety of electronic components. In fact,
virtually any product that has multiple parts and is produced in large volume uses assembly lines to
some degree.
Fig. 10.6.1 Traditional assembly line
In this example, parts move along a conveyor at a rate of one part per minute to three groups of
workstations. The first operation requires 3 minutes per unit; the second operation requires 1 minute
per unit; and the third requires 2 minutes per unit. The first workstation consists of three operators;
the second, one operator; and the third, two operators. An operator removes a part from the conveyor
and performs some assembly task at his or her workstation. The completed part is returned to the
conveyor and transported to the next operation. The number of operators at each workstation was
chosen so that the line is balanced. Since three operators work simultaneously at the first workstation,
on the average one part will be completed eachminute.
This is also true for the other two stations. Since parts arrive at a rate of one per minute, parts are also completed at this rate.
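A minimal sketch of the staffing logic in this example: with parts arriving every minute (the pacing rate), each station needs enough operators working in parallel so that its average output matches that rate. The figures below are the arrival rate and operation times from the example.

import math

takt_time_min = 1.0                       # one part arrives per minute
station_task_times_min = [3.0, 1.0, 2.0]  # minutes of work per unit at each station

for i, task_time in enumerate(station_task_times_min, start=1):
    operators = math.ceil(task_time / takt_time_min)    # operators working in parallel
    avg_output_per_min = operators / task_time           # parts completed per minute
    print(f"Station {i}: {operators} operator(s), "
          f"average output = {avg_output_per_min:.2f} part(s)/min")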
Assembly-line systems work well when there is low variance in the times required to perform the individual subassemblies. If the tasks are somewhat complex, thus resulting in a higher assembly-
time variance, operators down the line may not be able to keep up with the flow of parts from the
preceding workstation or may experience excessive idle time. An alternative to a conveyor-paced
assembly-line is a sequence of workstations linked by gravity conveyors, which act as buffers
between successive operations.
Line Balancing
Assembly-line balancing often has implications for layout. This would occur when, for balance
purposes, workstation size or the number used would have to be physically modified.
The most common assembly-line is a moving conveyor that passes a series of workstations in a
uniform time interval called the workstation cycle time (which is also the time between successive
units coming off the end of the line). At each workstation, work is performed on a product either by
adding parts or by completing assembly operations. The work performed at each station is made up of
many bits of work, termed tasks, elements, or work units. Such tasks are described by motion-time analysis. Generally, tasks are groupings of work that cannot be subdivided on the assembly line without paying a penalty in extra motions.
The total work to be performed at a workstation is equal to the sum of the tasks assigned to that
workstation. The line-balancing problem is one of assigning all tasks to a series of workstations so
that each workstation has no more than can be done in the workstation cycle time, and so that the
unassigned (idle) time across all workstations is minimized.
The problem is complicated by the relationships among tasks imposed by product design and process
technologies. This is called the precedence relationship, which specifies the order in which tasks must
be performed in the assembly process.
1. Specify the sequential relationships among tasks using a precedence diagram.
2. Determine the required workstation cycle time (C):
C = Production time per day / Required output per day (in units)
3. Determine the theoretical minimum number of workstations (Nt) required to satisfy the workstation cycle time constraint using the formula
Nt = Sum of task times (T) / Cycle time (C), rounded up to the next whole number.
4. Select a primary rule by which tasks are to be assigned to workstations, and a secondary rule to
break ties.
5. Assign tasks, one at a time, to the first workstation until the sum of the task times is equal to the
workstation cycle time, or no other tasks are feasible because of time or sequence restrictions.
Repeat the process for workstation 2, workstation 3, and so on until all tasks are assigned.
Illustration 7: The MS 800 car is to be assembled on a conveyor belt. Five hundred cars are required
per day. Production time per day is 420 minutes, and the assembly steps and times for the car are
given below. Find the balance that minimizes the number of workstations, subject to cycle time and
precedence constraints.
2. Determine the workstation cycle time. Here we have to convert production time to seconds because our task times are in seconds:
C = Production time per day / Output per day = (60 sec × 420 min) / 500 cars = 25,200 / 500 = 50.4 seconds per unit
3. Determine the theoretical minimum number of workstations required (the actual number may be greater):
Nt = T / C, where T is the sum of the task times given in the table, rounded up to the next whole number.
4. Select assignment rules. Our secondary rule, to be invoked where ties exist under our primary rule, is (b) to prioritize tasks in order of longest task time. Note that D should be assigned before B, and E before C, because of this tie-breaking rule.
5. Make task assignments to form workstation 1, workstation 2, and so forth until all tasks are
assigned. It is important to meet precedence and cycle time requirements as the assignments are
made.
7. Evaluate the solution. Efficiency can be computed as T / (Na × C), where Na is the actual number of workstations used; an efficiency of 77 per cent indicates an imbalance, or idle time, of 23 per cent (1.0 – 0.77) across the entire line.
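The assignment procedure can be sketched as a simple greedy heuristic in Python. The task times and precedence relations below are hypothetical placeholders (the task table for Illustration 7 is not reproduced here), the cycle time of 50.4 seconds comes from the solution above, and the single assignment rule used here is longest-task-time first, a simplification of the primary/secondary rules described in the text.

cycle_time = 50.4                       # seconds per unit, from the solution above
tasks = {                               # task: (time in seconds, immediate predecessors) -- assumed data
    "A": (45, []),
    "B": (11, ["A"]),
    "C": (9,  ["B"]),
    "D": (50, []),
    "E": (15, ["D"]),
    "F": (12, ["C", "E"]),
}

unassigned = set(tasks)
stations = []                           # each entry: (list of task names, time used)

while unassigned:
    station, used = [], 0.0
    while True:
        # Feasible tasks: all predecessors already assigned and the task still
        # fits within the remaining cycle time of this station.
        candidates = [t for t in unassigned
                      if all(p not in unassigned for p in tasks[t][1])
                      and tasks[t][0] <= cycle_time - used]
        if not candidates:
            break
        chosen = max(candidates, key=lambda t: tasks[t][0])   # longest task time first
        station.append(chosen)
        used += tasks[chosen][0]
        unassigned.remove(chosen)
    if not station:
        raise ValueError("A task time exceeds the cycle time")
    stations.append((station, used))

total_task_time = sum(time for time, _ in tasks.values())
efficiency = total_task_time / (len(stations) * cycle_time)
for number, (names, used) in enumerate(stations, start=1):
    print(f"Workstation {number}: {names} ({used:.1f} s of {cycle_time} s used)")
print(f"Efficiency = {efficiency:.0%}")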
In addition to balancing a line for a given cycle time, managers must also consider four other options:
pacing, behavioral factors, number of models produced, and cycle times.
Pacing is the movement of product from one station to the next after the cycle time has elapsed.
Paced lines have no buffer inventory. Unpaced lines require inventory storage areas to be placed
between stations.
Behavioral Factors
The most controversial aspect of product layout is behavioral response. Studies have shown that
paced production and high specialization lower job satisfaction. One study has shown that
productivity increased on unpaced lines. Many companies are exploring job enlargement and rotation
to increase job variety and reduce excessive specialization.
Number of Models Produced
A mixed-model line produces several items belonging to the same family. A single-model line
produces one model with no variations. Mixed model production enables a plant to achieve both
high-volume production and product variety. However, it complicates scheduling and increases the
need for good communication about the specific parts to be produced at each station.
Cycle Times
A line’s cycle time depends on the desired output rate (or sometimes on the maximum number of
workstations allowed). In turn, the maximum line efficiency varies considerably with the cycle time
selected. Thus, exploring a range of cycle times makes sense. A manager might go with a particularly
efficient solution even if it does not match the output rate. The manager can compensate for the
mismatch by varying the number of hours the line operates through overtime, extending shifts, or
adding shifts. Multiple lines might even be the answer.
The analysis involved in the design of production lines and assembly lines relates primarily to timing,
coordination, and balance among individual stages in the process.
For process layouts, the relative arrangement of departments and machines is the critical factor
because of the large amount of transportation and handling involved.
Process layout design determines the best relative locations of functional work centers. Work centers
that interact frequently, with movement of material or people, should be located close together,
whereas those that have little interaction can be spatially separated.
Process layout design typically proceeds through the following steps:
1. List and describe each functional work center.
2. Obtain a drawing and description of the facility being designed.
3. Identify and estimate the amount of material and personnel flow among work centers.
4. Use structured analytical methods to obtain a good general block layout.
5. Evaluate and modify the layout, incorporating details such as machine orientation, storage area location, and equipment access.
The first step in the layout process is to identify and describe each work center. The description should include the primary function of the work center (drilling, new accounts, or cashier, for example); its major components, including equipment and number of personnel; and the space required. The description should also include any special access needs (such as access to running water or an elevator) or restrictions (it must be in a clean area or away from heat).
For a new facility, the spatial configuration of the work centers and the size and shape of the facility
are determined simultaneously. Determining the locations of special structures and fixtures such as
elevators, loading docks, and bathrooms becomes part of the layout process.
However, in many cases the facility and its characteristics are a given. In these situations, it is
necessary to obtain a drawing of the facility being designed, including shape and dimensions,
locations of fixed structures, and restrictions on activities, such as weight limits on certain parts of a
floor or foundation.
To minimize transport times and material-handling costs, we would like to place close together those work centers that have the greatest flow of materials and people between them. To estimate the flows between work centers, it is helpful to begin by drawing a relationship diagram (relationship flow diagram) as shown in the figure above.
For manufacturing systems, material flows and transporting costs can be estimated reasonably well
using historical routings for products or through work sampling techniques applied to workers or
jobs. The flow of people, especially in a service system such as a business office or a university
administration building, may be difficult to estimate precisely, although work sampling can be used
to obtain rough estimates.
The amounts and/or costs of flows among work centers are usually presented using a flow matrix, a
flow-cost matrix, or a proximity chart.
1. Flow Matrix
A flow matrix is a matrix of the estimated amounts of flow between each pair of work centers. The flow may be materials (expressed as the number of loads transported) or people who move between centers. Each work center corresponds to one row and one column, and the element fij designates the amount of flow from work center (row) i to work center (column) j. Normally, the direction of flow between work centers is not important, only the total amount, so fij and fji can be combined and the flows represented using only the upper-right half of the matrix.
2. Flow-cost Matrix
A basic assumption of facility layout is that the cost of moving materials or people between work
centers is a function of distance travelled. Although more complicated cost functions can be
accommodated, often we assume that the per unit cost of material and personnel flows between work
centers is proportional to the distance between the centers. So for each type of flow between each pair
of departments, i and j, we estimate the cost per unit per unit distance, cij.
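To illustrate how the flow and flow-cost matrices are used together, here is a hedged Python sketch that evaluates one candidate block layout by summing flow × unit cost × distance over all department pairs; the departments, flows, unit costs and coordinates are hypothetical.

from itertools import combinations

flows = {("A", "B"): 30, ("A", "C"): 10, ("B", "C"): 25}          # loads per week (fij), assumed
unit_costs = {("A", "B"): 1.0, ("A", "C"): 1.0, ("B", "C"): 2.0}  # cost per load per metre (cij), assumed
locations = {"A": (0, 0), "B": (20, 0), "C": (20, 15)}            # centre of each work center (metres), assumed

def rectilinear(p, q):
    # Distance between work-center centres, measured with 90-degree turns.
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

total_cost = 0.0
for i, j in combinations(locations, 2):
    pair = (i, j) if (i, j) in flows else (j, i)       # flows are stored undirected
    f = flows.get(pair, 0)
    c = unit_costs.get(pair, 0)
    d = rectilinear(locations[i], locations[j])
    total_cost += f * c * d                            # fij x cij x dij

print(f"Total weekly materials-handling cost: {total_cost:.2f}")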
3. Proximity Chart
Proximity charts (relationship charts) are distinguished from flow and flow-cost matrices by the fact
that they describe qualitatively the desirability or need for work centers to be close together, rather
than providing quantitative measures of flow and cost. These charts are used when it is difficult to
measure or estimate precise amounts or costs of flow among work centers.
This is common when the primary flows involve people and do not have a direct cost but rather an
indirect cost, such as when employees in a corporate headquarters move among departments (payroll,
printing, information systems) to carry out their work.
A major factor for service providers is the impact of location and layout on sales and customer satisfaction. Customers usually consider how convenient a service facility is, particularly if the process requires considerable customer contact. Hence, service facility layouts should provide easy entrance to the facility from the freeways. Well-organized packing areas, easily accessible facilities, well-designed walkways and parking areas are some of the requirements of a service facility layout.
A service facility layout is designed based on the degree of customer contact and the service needed by the customer. These service layouts follow conventional layouts as required. For example, for a car service station a product layout is adopted, where the activities for servicing a car follow a sequence of operations irrespective of the type of car. A hospital is the best example of the adoption of a process layout; here the service required by a customer follows an independent path. The layouts of a car service station and a hospital are shown in Figs. 1 and 2.
In addition to the layout itself, the following physical facilities and working conditions need to be considered:
1. Factory building
2. Lighting
3. Climatic conditions
4. Ventilation
I. Factory Building
The factory building is one of the most important considerations for every industrial enterprise. A modern factory building is required to provide protection for men, machines, materials, products and even the company's secrets. It has to serve as part of the production facilities and as a factor in maximizing economy and efficiency in plant operations. It should offer a pleasant and comfortable working environment and project the management's image and prestige. The factory building is like the skin and bones of a living body for an organization; it is for these reasons that the factory building acquires great importance.
Two aspects of the factory building need attention: (A) the design of the building and (B) the type of building.
A. Design of the Building
The following factors influence the design of a factory building:
1. Flexibility: Flexibility is one of the important considerations because a building is otherwise likely to become obsolete; a flexible building provides greater operating efficiency even when processes and technology change. Flexibility is necessary because it is not always feasible and economical to build a new plant every time a new firm is organized or the layout is changed. With minor alterations, the building should be able to accommodate different types of operations.
2. Product and equipment: The type of product that is to be manufactured determines column spacing, type of floor, ceiling, heating and air-conditioning. A product of a temporary nature may call for a less expensive building than would a product of a more permanent nature. Similarly, a heavy product demands a far different building than a product which is light in weight.
3. Expansibility: Growth and expansion are natural to any manufacturing enterprises. They are the
indicators of the prosperity of a business. The following factors should be borne in mind if the future
expansion of the concern is to be provided for:
(i) The area of the land which is to be acquired should be large enough to provide for the future
expansion needs of the firm and accommodate current needs.
(ii) The design of the building should be in a rectangular shape. Rectangular shapes facilitate
expansion on any side.
(iii) If vertical expansion is expected, strong foundations, supporters and columns must be provided.
(iv) If horizontal expansion is expected, the side walls must be made non-load-bearing to provide for
easy removal.
4. Employee facilities and service area: Employee facilities must find a proper place in the building
design because they profoundly affect the morale, comfort and productivity. The building plan
should include facilities for lunch rooms, cafeteria, water coolers, parking area and the like. The
provision of some of these facilities is a legal requirement. Others make good working conditions
possible. And a good working condition is good business.
Service areas, such as the tool room, the supervisor’s office, the maintenance room, receiving and
dispatching stations, the stock room and facilities for scrap disposal, should also be included in the
building design.
B. Types of Buildings
The decision on choosing a suitable type for a particular firm depends on the manufacturing
process and the area of land and the cost of construction.
1. Single-Storey Buildings
Most of the industrial manufacturing buildings now designed and constructed are single-storeyed, particularly where land is available at reasonable rates. Single-storey buildings offer several operating advantages. A single-storey construction is preferable when materials handling is difficult because the product is big or heavy, when natural lighting is desired, when heavy floor loads are required, and when frequent changes in layout are anticipated.
Advantages
2. The maintenance cost resulting from the vibration of machinery is reduced considerably because the machinery is housed on the ground.
4. The cost of transportation of materials is reduced because of the absence of materials handling
equipment between floors.
5. All the equipment is on the same level, making for an easier and more effective layout
supervision and control.
7. The danger of fire hazards is reduced because of the lateral spread of the building.
Limitations
3. High cost of transportation for moving men and materials to the factory which is generally
located far from the city.
2. Multi-Storey Buildings
Schools, colleges, shopping complexes, and residences, and for service industries like Software,
etc. multi-storey structures are generally popular, particularly in cities. Multi-storey buildings are
useful in manufacture of light products, when the acquisition of land becomes difficult and
expensive and when the floor load is less.
Advantages
When constructed for industrial use, multi-storey buildings offer the following advantages:
1. Maximum operating floor space (per sq. ft. of land). This is best suited in areas where land is
very costly.
3. Reduced cost of materials handling because of the use of gravity for the flow of materials.
Limitations
1. Materials handling becomes very complicated. A lot of time is wasted in moving them between
floors.
3. Floor load-bearing capacity is limited, unless special construction is used, which is very
expensive.
4. Natural lighting is poor in the centers of the shop, particularly when the width of the building is
somewhat great.
II. Lighting
It is estimated that 80 per cent of the information required in doing job is perceived visually. Good
visibility of the equipment, the product and the data involved in the work process is an essential
factor in accelerating production, reducing the number of defective products, cutting down waste and
preventing visual fatigue and headaches among the workers. It may also be added that both inadequate visibility and glare frequently cause accidents.
In principle, lighting should be adapted to the type of work. However, the level of illumination (measured in lux) should be increased not only in relation to the degree of precision or miniaturization of the work but also in relation to the worker's age. The accumulation of dust and the wear of the light sources cut down the level of illumination by 10–50 per cent of the original level; this gradual drop in the level should therefore be compensated for when designing the lighting system.
Excessive contrasts in lighting levels between the worker's task and the general surroundings should also be avoided. The use of natural light should be encouraged; this can be achieved by installing windows that open. However, natural light varies with the time of day, the season, the distance of workstations from the windows and the presence or absence of blinds. For this reason it is essential to have artificial lighting that will enable people to maintain proper vision and will ensure that the lighting intensity ratios between the task, the surrounding objects and the general environment are maintained.
Control of Lighting
In order to make the best use of lighting in the work place, the following points should be taken into
account:
1. For uniform light distribution, install an independent switch for the row of lighting fixtures closest
to the windows. This allows the lights to be switched on and off depending on whether or not natural
light is sufficient.
3. Use localized lighting in order to achieve the desired level for a particular fine job.
4. Clean light fixtures regularly and follow a maintenance schedule so as to prevent flickering
5. Avoid direct eye contact with the light sources. This is usually achieved by positioning them properly. The use of diffusers is also quite effective.
III. Climatic Conditions
Control of the climatic conditions at the workplace is of paramount importance to the workers' health and comfort and to the maintenance of high productivity. With excess heat or cold, workers may feel very uncomfortable and their efficiency drops; in addition, this can lead to accidents.
The human body functions in such a way as to keep the central nervous system and the internal organs at a constant temperature. It maintains the necessary thermal balance by continuous heat exchange with the environment. It is essential to avoid excessive heat or cold, and wherever possible to keep the climatic conditions optimal so that the body can maintain a thermal balance.
Hot working environments are found almost everywhere. Work premises in tropical countries may, on account of general climatic conditions, be naturally hot. When sources of heat such as furnaces, kilns or hot processes are present, or when the physical workload is heavy, the human body may also have to deal with excess heat. It should be noted that in such hot working environments sweating is almost
the only way in which the body can lose heat. As the sweat evaporates, the body cools. There is a
relationship between the amount and speed of evaporation and a feeling of comfort. The more intense
the evaporation, the quicker the body will cool and feel refreshed. Evaporation increases with
adequate ventilation.
Working in cold environments was once restricted to non-tropical or highly elevated regions. Now as
a result of modern refrigeration, various groups of workers, even in tropical countries, are exposed to
a cold environment.
Exposure to cold for short periods of time can produce serious effects, especially when workers are
exposed to temperatures below 10°C. The loss of body heat is uncomfortable and quickly affects work
efficiency. Workers in cold climates and refrigerated premises should be well protected against the
cold by wearing suitable clothes, including footwear, gloves and, most importantly, a hat.
There are many ways of controlling the thermal environment. It is relatively easy to assess the effects
of thermal conditions, especially when excessive heat or cold is an obvious problem. To solve the
problem, however, consistent efforts using a variety of available measures are usually necessary. This
is because the problem is linked with the general climate, which greatly affects the workplace
climate, production technology, which is often the source of heat or cold and varying conditions of
the work premises as well as work methods and schedules. Personal factors such as clothing,
nutrition, personal habits, and age and individual differences in response to the given thermal
conditions also need to be taken into account in the attempt to attain the thermal comfort of workers.
In controlling the thermal environment, one or more of the following principles may be applied:
1. Regulating workroom temperature by preventing outside heat or cold from entering (improved
design of the roof, insulation material or installing an air-conditioned workroom);
2. Separation of heat sources from the working area, insulation of hot surfaces and pipes, or placement
of barriers between the heat sources and the workers;
3. Control of humidity with a view to keeping it at low levels, for example by preventing the escape
of steam from pipes and equipment;
4. Provision of adequate personal protective clothing and equipment for workers exposed to excessive
radiant heat or excessive cold (heat-protective clothing with a high insulation value may not be
recommended for jobs with long exposure to moderate or heavy work as it prevents evaporative
heat loss);
5. Reduction of exposure time, for example, by mechanization, remote control or alternating work
schedules;
6. Insertion of rest pauses between work periods, with comfortable, if possible air-conditioned,
resting facilities;
7. Ensuring a supply of cold drinking-water for workers in a hot environment and of hot drinks for
those exposed to a cold environment.
IV. Ventilation
Ventilation is the dynamic parameter that complements the concept of air space. For a given number
of workers, the smaller the work premises, the greater the ventilation that is needed.
Ventilation differs from air circulation. Ventilation replaces contaminated air with fresh air, whereas
air circulation merely moves the air without renewing it. Where the air temperature and humidity
are high, merely circulating the air is not only ineffective but also increases heat absorption.
Ventilation disperses the heat generated by machines and people at work. Adequate ventilation should
be looked upon as an important factor in maintaining the worker’s health and productivity.
Work-related welfare facilities offered at or through the workplace can be important factors in
maintaining workers' well-being. Some facilities are very basic, but often ignored, such as drinking-water
and toilets. Others may seem less necessary, but usually have an importance to workers far greater
than their cost to the enterprise.
1. Drinking Water
Safe, cool drinking water is essential for all types of work, especially in a hot environment.
Without it fatigue increases rapidly and productivity falls. Adequate drinking water should be
provided and maintained at convenient points, and clearly marked as “Safe drinking water”.
Where possible it should be kept in suitable vessels, renewed at least daily, and all practical steps
taken to preserve the water and the vessels from contamination.
2. Sanitary Facilities
Hygienic sanitary facilities should exist in all workplaces. They are particularly important where
chemicals or other dangerous substances are used. Sufficient toilet facilities, with separate facilities
for men and women workers, should be installed and conveniently located. Changing rooms and
cloakrooms should be provided. Washing facilities, such as washbasins with soap and towels, or
showers, should be placed either within changing-rooms or close by.
3. First-Aid and Medical Care
Facilities for rendering first-aid and medical care at the workplace in case of accidents or unforeseen
sickness are directly related to the health and safety of the workers. First-aid boxes should be clearly
marked and conveniently located. They should contain only first-aid requisites of a prescribed
standard and should be in the charge of a qualified person. Apart from first-aid boxes, it is also
desirable to have a stretcher and suitable means to transport injured persons to a center where medical
care can be provided.
4. Rest Facilities
Rest facilities can include seats, rest-rooms, waiting rooms and shelters. They help workers to recover
from fatigue and to get away from a noisy, polluted or isolated workstation. A sufficient number of
suitable chairs or benches with backrests should be provided and maintained, including seats for
occasional rest of workers who are obliged to work standing up. Rest-rooms enable workers to
recover during meal and rest breaks.
5. Feeding Facilities
It is now well recognized that the health and work capacity of workers depend on adequate nutrition,
so facilities for obtaining light refreshments are needed. A full meal at the workplace is necessary
when the workers live some distance away and when the hours of work are so organized that the meal
breaks are short.
6. Child-Care Facilities
Many employers find that working mothers are especially loyal and effective workers, but they often
face the special problems of caring for children. It is for this reason that child-care facilities,
including crèches and day-care centers, should be provided. These should be in secure, airy, clean and
well-lit premises. Children should be looked after properly by qualified staff and offered food, drink,
education and play at very low cost.
7. Recreational Facilities
Recreational facilities offer workers the opportunity to spend their leisure time in activities likely to
increase physical and mental well-being. They may also help to improve social relations within the
enterprise. Such facilities can include halls for recreation and for indoor and outdoor sports, reading-
rooms and libraries, clubs for hobbies, picnics and cinemas. Special educational and vocational
training courses can also be organized.
Chapter 11
Work-System Design
11.1 Introduction
Productivity has now become an everyday watchword. It is crucial to the welfare of an industrial firm
as well as to the economic progress of a country. High productivity means doing the work in the
shortest possible time with the least expenditure on inputs, without sacrificing quality and with
minimum wastage of resources.
Work-study forms the basis for work system design. The purpose of work design is to identify the
most effective means of achieving necessary functions. Work-study aims at improving the
existing and proposed ways of doing work and establishing standard times for work performance.
Work-study encompasses two techniques, i.e., method study and work measurement.
“Method study is the systematic recording and critical examination of existing and proposed ways of
doing work, as a means of developing and applying easier and more effective methods and reducing
costs.”
“Work measurement is the application of techniques designed to establish the time for a qualified
worker to carry out a specified job at a defined level of performance.”
There is a close link between method study and work measurement. Method study is concerned with
the reduction of the work content and establishing the one best way of doing the job whereas work
measurement is concerned with investigation and reduction of any ineffective time associated with
the job and establishing time standards for an operation carried out as per the standard method.
Job design is defined as the process through which specific work tasks are allocated to individuals
and groups. As shown in the figure below, a manager's efforts in job design should address job content
and job context; that is, job design encompasses the specification of task attributes (job content) and
the creation of a supportive work setting (job context).
The strategies are job simplification, job enlargement and rotation, and job enrichment. Each strategy
varies in the degree of specialization involved in the division of labor. The contingency orientation of
modern management theory recognizes that there are situations in which highly specialized jobs are
the best and others in which less specialization is appropriate. Managers must learn to deploy each of
the strategies to proper advantage.
Fig. 12.2.1 A continuum of job design strategies: variations in job scope and job depth (from high task
specialization to low task specialization)
Job simplification:
It involves standardizing work procedures and employing people in clearly defined and highly
specialized tasks. Simplified jobs are very narrow in job scope, the number and combination of
different tasks a person performs.
The most extreme form of job simplification is, of course, complete automation: the total
mechanization of a job. While our concern is with the forms of job simplification that still involve the
human element, we will deal with automation and its growing contemporary significance at other
points in the book.
Job simplification is often done with the expectation of increased productivity through lower skill
requirements for workers, easier and quicker training, and less difficult supervision. On the other
hand, it can sometimes reduce productivity because of the cost of the related absenteeism, turnover
and poor performance due to boredom and dissatisfaction. Although jobs narrow in scope appeal to
some people, disadvantages emerge when they prove inconsistent with what people really desire from
their work.
Job enlargement and job rotation are strategies of job design that increase the number and variety of
tasks performed by a worker. They both expand job scope. This is assumed to offset some of the
disadvantages of job simplification and to increase job performance and satisfaction.
Job Rotation
Job rotation increases task variety by periodically shifting workers among jobs involving different
sets of task assignments. While no one job is changed in design, the workers gain variety in their tasks
by switching jobs on a regular basis. Job rotation can be done on almost any time schedule, such as an
hourly, daily or weekly basis. Job enlargement increases task variety by combining into one job two or
more tasks that were previously assigned to separate workers.
Job Enlargement
This involves the horizontal integration of tasks to expand the range of tasks involved in a particular
job. If successfully implemented this can increase task identity, task significance and skill variety
through involving the worker in the whole work task either individually or within the context of a
group. Job Rotation is a common form of job enlargement and involves a worker changing job roles
with another worker on a periodic basis. If successfully implemented this can help increase task
identity, skill variety and autonomy through involvement in a wider range of work task with
discretion about when these mix of tasks can be undertaken. However this method does not actually
improve the design of the jobs and it can mean that people gravitate to the jobs that suit them and are
not interested in initiating rotation with colleagues. At worst it can mean rotation between a number of
boring jobs with no acquisition of new skills.
Job Enrichment
Job enrichment involves the vertical integration of tasks and the integration of responsibility and
decision making. If successfully implemented this can increase all five of the desirable job
characteristics by involving the worker in a wider range of tasks and providing responsibility for the
successful execution of these tasks. This technique does require feedback so that the success of the
work can be judged. The managerial and staff responsibilities potentially given to an employee
through enrichment can be seen as a form of empowerment. This should in turn lead to improved
productivity and product quality.
Fig.: horizontal loading of a job (pulling earlier and later task elements into the job) increases job
scope; vertical loading increases job depth.
There are a number of factors which account for the fact that job enlargement and job enrichment are
not more widely implemented. Firstly the scope for using different forms of work organization will
be dependent to a large extent on the type of operation in which the work is organized.
Job shop manufacturing will require skilled workers who will be involved in a variety of tasks and
will have some discretion in how they undertake these tasks. Sales personnel may also have a high
level of discretion in how they undertake their job duties.
The amount of variety in a batch manufacturing environment will to a large extent depend on the
length of the production runs used. Firms producing large batches of a single item will obviously
have less scope for job enrichment than firms producing in small batches on a make-to-order basis.
One method for providing job enlargement is to use a cellular manufacturing system, which can
permit a worker to undertake a range of tasks on a part. When combined with responsibility for cell
performance this can lead to job enrichment.
Jobs in mass production industries may be more difficult to enlarge. Car plants must work at a certain
rate in order to meet production targets and on a moving line it is only viable for each worker to
spend a few minutes on a task before the next worker on the line must take over. A way of
overcoming this problem is to use teams. Here tasks are exchanged between team members and
performance measurements are supplied for the team as a whole. This provides workers with greater
variety and feedback, but also some autonomy and participation in the decisions of the team.
Secondly, financial factors may be a constraint on wider use. These may include the performance of
individuals who actually prefer simple jobs, the higher wage rates paid for the higher skills of
employees (which increase average wage costs) and the capital costs of introducing the approaches.
The problem is that many of the benefits associated with the technique, such as an increase in
creativity, may be difficult to measure financially.
Finally, there are the political aspects of job design changes, which often have little effect on
organizational structures and the role of management. Although job enrichment may affect
supervisory levels of management, for example by their replacement with a team leader, the power
structures in which technology is used to justify decisions for personal objectives remain intact.
Method Study
Dividing and analyzing a job is called method study. Method study takes a systematic approach to
reducing wasted time and effort, and can be summarized as a six-step procedure:
1. Select
Tasks most suitable will probably be repetitive, require extensive labour input and be critical to
overall performance.
2. Record
This involves observation and documentation of the current method of performing the selected tasks.
Flow process charts are often used to represent a sequence of events graphically. They are intended to
highlight unnecessary material movements and unnecessary delay periods.
3. Examine
This involves examination of the current method, looking for ways in which tasks can be eliminated,
combined, rearranged and simplified. This can be achieved by looking at the flow process chart for
example and re-designing the sequence of tasks necessary to perform the activity.
4. Develop
Developing the best method and obtaining approval for this method. This means choosing the best
alternative considered taking into account the constraints of the system such as the performance of the
firm’s equipment. The new method will require adequate documentation in order that procedures can
be followed. Specifications may include tooling, operator skill level and working conditions.
5. Install
Implement the new method. Changes such as installation of new equipment and operator training will
need to be undertaken.
6. Maintain
New methods may not be followed due to inadequate training or support. On the other hand people
may find ways to gradually improve the method over time. Learning curves can be used to analyze
these effects.
Motion Study
Motion study is the study of the individual human motions that are used in a job task. The purpose of
motion study is to try to ensure that the job does not include any unnecessary motion or movement by
the worker and to select the sequence of motions that ensure that the job is being carried out in the
most efficient manner possible. For even more detail, videotapes can be used to study individual work
motions in slow motion and to analyze them to find improvements.
The principles are generally categorized according to the efficient use of the human body, efficient
arrangement of the workplace and the efficient use of equipment and machinery. These principles can
be summarized into general guidelines as follows:
Work should be rhythmic, symmetrical and simplified. The full capabilities of the human body
should be employed. Energy should be conserved by letting machines perform tasks when possible.
Tools, materials and controls should have a defined place and be located to minimize the motions
needed to get to them. The workplace should be comfortable and healthy.
Equipment and mechanized tools enhance worker abilities. Controls and foot-operated devices that
can relieve the hand/arms of work should be maximized. Equipment should be constructed and
arranged to fit worker use.
Motion study is seen as one of the fundamental aspects of scientific management and indeed it was
effective in the design of repetitive, simplified jobs with the task specialization which was a feature of
the mass production system. The use of motion study has declined as there has been a movement
towards greater job responsibility and a wider range of tasks within a job. However the technique is
still a useful analysis tool and, particularly in the service industries, can help improve process
performance.
Work Measurement
The second element of work-study is work measurement, which determines the length of time it will
take to undertake a particular task. This is important not only to determine pay rates but also to ensure
that each stage in a production-line system is of equal duration (i.e. ‘balanced’), thus ensuring
maximum output. Usually the method study and work measurement activities are undertaken together
to develop time as well as method standards. Setting time standards in a structured manner permits
the use of benchmarks against which to measure a range of variables such as cost of the product and
share of work between team members. However the work measurement technique has been criticized
for being misused by management in determining worker compensation. The time needed to perform
each work element can be determined by the use of historical data, work sampling or most usually
time study.
Time Study
The purpose of time study is to arrive, through the use of statistical techniques, at a standard time for
performing one cycle of a repetitive job. This is arrived at by observing a task a number of times. The
standard time refers to the time allowed for the job under specific circumstances, taking into account
allowances for rest and relaxation. The basic steps in a time study are indicated below:
It is essential that the best method of undertaking the job is determined using method study before a
time study is undertaken. If a better method for the job is found then the time study analysis will need
to be repeated.
The job should be broken down into a number of easily measurable tasks. This will permit a more
accurate calculation of standard time as varying proficiencies at different parts of the whole job can
be taken into account.
This has traditionally been undertaken with a stopwatch, or electronic timer, by observation of the
task. Each time element is recorded on an observation sheet. A Video camera can be used for
observation, which permits study away from the workplace, and in slow motion which permits a
higher degree of accuracy of measurement.
As the time study is being conducted a rating of the worker’s performance is also taken in order to
achieve a true time rating for the task. Rating factors are usually between 80% and 120% of normal.
This is an important but subjective element in the procedure and is best done if the observer is
familiar with the job itself.
Once a sufficient sample of job cycles has been taken, an average is taken of the observed
times, called the cycle time. The sample size can be determined statistically, but is often around five to
fifteen due to cost restrictions.
Adjust the cycle time for the efficiency and speed of the worker who was observed. The normal time
is calculated by multiplying the cycle time by the performance rating factor.
The standard time is computed by adjusting the normal time by an allowance factor to take account of
unavoidable delays such as machine breakdown and rest periods. The standard time is calculated as
Standard Time (ST) = Normal Time (NT) × allowance factor.
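To make the arithmetic concrete, the short Python sketch below works through a hypothetical time study; the observed times, the 90% performance rating and the 15% allowance are illustrative values, not figures from the text.

    # Illustrative time-study calculation; observed times, rating and allowance are assumed values.
    observed_times = [2.1, 2.0, 2.2, 1.9, 2.0, 2.1]   # minutes per cycle from the stopwatch study

    cycle_time = sum(observed_times) / len(observed_times)   # average observed cycle time
    performance_rating = 0.90                                # worker judged to work at 90% of normal pace
    allowance_factor = 1.15                                  # 15% allowance for rest and unavoidable delays

    normal_time = cycle_time * performance_rating            # NT = cycle time x rating
    standard_time = normal_time * allowance_factor           # ST = NT x allowance factor

    print(f"cycle time {cycle_time:.2f} min, normal time {normal_time:.2f} min, "
          f"standard time {standard_time:.2f} min")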
One problem with time studies is that workers will not always co-operate with their use, especially if
they know the results will be used to set wage rates. Combined with the costs of undertaking a time
study, this means a company may use historical data in the form of time files to construct a new
standard job time from previous job elements. This has the disadvantage, however, that the reliability
and applicability of old data are questionable.
Another method for calculating standard times without a time study is to use a predetermined motion
time system (PMTS), which provides generic times for standard micro-motions such as reach, move
and release that are common to many jobs. The standard time for the job is then constructed by
breaking down the job into micro-motions that can each be assigned a time from the motion time
database. The standard time for the job is the sum of these micro-motion times.
The advantages of this approach are that standard times can be developed for jobs before they are
introduced to the workplace without causing disruption and needing worker compliance. Also
performance ratings are factored in to the motion times and so the subjective part of the study is
eliminated. The timings should also be much more consistent than historical data, for instance.
Disadvantages include the fact that these times ignore the context of the job in which they are
undertaken i.e. the timings are provided for the micro motion in isolation and not part of a range of
movement. The sample is from a broad range of workers in different industries with different skill
levels, which may lead to an unrepresentative time. Also the timings are only available for simple
repetitious work which is becoming less common in industry.
Work Sampling
Work sampling is useful for analyzing the increasing proportion of non-repetitive tasks that are
performed in most jobs. It is a method for determining the proportion of time a worker or machine
spends on various activities and as such can be very useful in job redesign and estimating levels of
worker output. The basic steps in work sampling are indicated below:
All possible activities must be categorized for a particular job. e.g. “worker idle” and “worker busy”
states could be used to define all possible activities.
The accuracy of the proportion of time the worker is in a particular state is determined by the
observation sample size.
Assuming the sample is approximately normally distributed, the sample size can be estimated using
the following formula:
n = (z/e)² × p(1 − p)
where
z = the number of standard deviations from the mean for the desired level of confidence
e = the allowable degree of error in the estimated proportion (e.g. for a 2% degree of error, e = 0.02)
p = the estimated proportion of time spent on the activity
The degree of confidence would normally be 95% (giving a z value of 1.96) or 99% (giving a z value
of 2.58).
There must be sufficient time for a random sample of the number of observations given by the
formula above to be collected. A random number generator can be used to generate the time between
observations in order to achieve a random sample.
Collect the sample and calculate the proportion (p) by dividing the number of observations for a
particular activity by the total number of observations.
It may be that the actual proportion for an activity is different from the proportion used to calculate
the sample size. Therefore, as sampling progresses, it is useful to re-compute the sample size based on
the proportions actually observed.
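A small Python sketch of the sample-size calculation above; the 25% estimated idle proportion is an assumed figure used only to illustrate the formula.

    import math

    def work_sampling_n(p, z=1.96, e=0.02):
        """Observations required: n = (z/e)^2 * p * (1 - p)."""
        return math.ceil((z / e) ** 2 * p * (1 - p))

    # e.g. a worker is believed to be idle about 25% of the time; we want a 2% degree
    # of error at 95% confidence (z = 1.96).
    n = work_sampling_n(p=0.25)
    print(f"required number of random observations: {n}")   # about 1,801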
Learning Curves
Organizations have often used learning curves to predict the improvement in productivity that can
occur as experience is gained of a process. Thus learning curves can give an organization a method of
measuring continuous improvement activities. If a firm can estimate the rate at which an operation
time will decrease then it can predict the impact on cost and increase in effective capacity over time.
The learning curve is based on the concept that each time output doubles, the time per unit falls to a
fixed percentage (the learning rate) of its previous value. Thus if the learning curve is at a rate of 85%,
the second unit takes 85% of the time of the first unit, the fourth unit takes 85% of the time of the
second unit, the eighth unit takes 85% of the time of the fourth, and so on. Mathematically the learning
curve is represented by the function
y = ax^(−b)
where
y = the time to produce the xth unit
a = the time to produce the first unit
x = the unit number
b = −ln(learning rate)/ln 2 (for an 85% learning curve, b = −ln 0.85/ln 2 ≈ 0.234)
Learning curves are usually applied to individual operators, but the concept can also be applied in a
more aggregate sense, termed an experience or improvement curve, and applied to such areas as
manufacturing system performance or cost estimating. Industrial sectors can also be shown to have
different rates of learning. It should be noted that improvements along a learning curve do not just
happen and the theory is most applicable to new product or process development where scope for
improvement is greatest. In addition step changes can occur which can alter the rate of learning, such
as organizational change, changes in technology or quality improvement programs. To ensure
learning occurs the organization must invest in factors such as research and development, advanced
technology, people and continuous improvement efforts.
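The learning-curve function above can be evaluated directly; in the sketch below the 10-hour first-unit time is an assumed figure, while the 85% rate matches the example in the text.

    import math

    def unit_time(first_unit_time, unit_number, learning_rate):
        """Time for the xth unit on a learning curve: y = a * x**(-b), with b = -ln(rate)/ln(2)."""
        b = -math.log(learning_rate) / math.log(2)
        return first_unit_time * unit_number ** (-b)

    a = 10.0   # hours for the first unit (assumed)
    for x in (1, 2, 4, 8):
        print(f"unit {x}: {unit_time(a, x, 0.85):.2f} hours")   # 10.00, 8.50, 7.23, 6.14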
Chapter 12
Inventory management
Introduction
Inventory generally refers to the materials in stock. It is also called the idle resource of an enterprise.
Inventories represent those items which are either stocked for sale or they are in the process of
manufacturing or they are in the form of materials, which are yet to be utilized. The interval between
receiving the purchased parts and transforming them into final products varies from industries to
industries depending upon the cycle time of manufacture. It is, therefore, necessary to hold
inventories of various kinds to act as a buffer between supply and demand for efficient operation of
the system. Thus, an effective control on inventory is a must for smooth and efficient running of the
production cycle with the least interruptions. Demand can be classified into two categories: dependent
and independent.
A dependent demand item has a demand which is relatively predictable because it is dependent on
other factors. Thus a dependent demand item can be classified as having a demand that can be
calculated as the quantity of the item needed to produce a scheduled quantity of an assembly that uses
that item.
Independent demand is when demand is not directly related to the demand for any other inventory
item. Usually this demand comes from customers outside the company and so is not as predictable as
dependent demand. Because of the unknown future requirements of customers, forecasting is used to
predict the level of demand. A safety stock is then calculated to cover expected forecast error.
Independent demand items can be finished goods or spare parts used for after sales service.
Generally inventory is classified as either raw materials, work-in-progress (WIP) or finished goods.
The proportion between these inventory types will vary but it is estimated that generally 30% are raw
materials, 40% are work in progress and 30% finished goods. The location of inventory can be used
to define the inventory type and its characteristics. There are various definitions of inventory types
including the following:
- Buffer/Safety
This is used to compensate for the uncertainties inherent in the timing or rate of supply and demand
between two operational stages.
- Cycle
If it is required to produce multiple products from one operation in batches, there is a need to produce
enough to keep a supply while the other batches are being produced.
- Anticipation
This includes producing to stock to anticipate an increase in demand due to seasonal factors. Also
speculative policies such as buying in bulk to take advantage of price discounts may also increase
inventory levels.
- Pipeline/Movement
This is the inventory needed to compensate for the lack of stock while material is being transported
between stages. e.g. the time taken in distribution from the warehouse to a retail outlet.
The main concern of inventory management is the trade-off between the cost of not having an item in
stock against the cost of holding and ordering the inventory. A stock-out can either be to an internal
customer in which case a loss of production output may occur, or to an external customer when a
drop in customer service level will result. In order to achieve a balance between inventory availability
and cost, the following inventory management aspects must be addressed: volume (how much to
order) and timing (when to order).
The Economic Order Quantity (EOQ) calculates the inventory order volume which minimizes the
sum of the annual costs of holding inventory and the annual costs of ordering inventory. The model
makes a number of assumptions, including a constant and known rate of demand, a constant lead time,
fixed ordering and holding costs, and delivery of the whole order quantity in a single batch.
These assumptions have led to criticisms of the use of EOQ in practice. The assumption of one
delivery per order, and then the use of that stock over time, increases inventory levels and goes against
a JIT approach. Also annual demand will not exist for products with a life-cycle of less than a year.
However the EOQ approach still has a role in inventory management in the right circumstances and if
its limitations are recognized.
Using the EOQ each order is assumed to be of Q units and is withdrawn at a constant rate over time
until the quantity in stock is just sufficient to satisfy the demand during the order lead time (the time
between placing an order and receiving the delivery). At this time an order for Q units is placed with
the supplier. Assuming that the usage rate and lead time are constant, the order will arrive when the
stock level is at zero, thus eliminating excess stock or stock-outs.
The order quantity must be set at a level which is not too small, leading to many orders and thus high
order costs and not too large leading to high average levels of inventory and thus high holding costs.
The annual holding cost is the average number of items in stock multiplied by the cost to hold an item
for a year. If the amount in stock decreases at a constant rate from Q to 0 then the average in stock is
Q/2.
Thus if CH is the average annual holding cost per unit, the total annual holding cost is:
Total annual holding cost = (Q/2) × CH
The annual ordering cost is a function of the number of orders per year and the ordering cost per
order. If D is the annual demand, then the number of orders per year is given by D/Q. Thus if CO is the
ordering cost per order, then the total annual ordering cost is:
Total annual ordering cost = (D/Q) × CO
Thus the total annual inventory cost (TC) is the sum of the total annual holding cost and the total
annual ordering cost:
TC = (Q/2) × CH + (D/Q) × CO
where Q = order quantity, D = annual demand, CH = annual holding cost per unit and CO = ordering
cost per order.
The minimum total cost point occurs where the holding cost is equal to the ordering cost, and solving
for Q gives:
EOQ = √(2 × D × CO / CH)
Example: A computer company has an annual demand of 10,000 circuit boards, which have an annual
holding cost (CH) of $6 per unit and an ordering cost (CO) of $75 per order. The company wants to
determine the EOQ, the total annual inventory cost (TC) and the reorder point (R) if the purchasing
lead time is 5 days and there are 250 working days per year.
EOQ: Q = √(2 × D × CO / CH) = √(2 × 10,000 × $75 / $6) = 500 units
Reorder point: R = daily demand × lead time = (10,000 / 250 days) × 5 days = 200 units
Total annual inventory cost: TC = (Q/2) × CH + (D/Q) × CO = (500/2) × $6 + (10,000/500) × $75
= $1,500 + $1,500 = $3,000
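The same calculation can be reproduced in a few lines of Python, using the figures of the example above (the 250 working days per year come from the reorder-point calculation):

    import math

    D, CO, CH = 10_000, 75.0, 6.0        # annual demand, ordering cost per order, holding cost per unit
    working_days, lead_time_days = 250, 5

    Q = math.sqrt(2 * D * CO / CH)                  # EOQ = 500 units
    R = (D / working_days) * lead_time_days         # reorder point = daily demand x lead time = 200 units
    TC = (Q / 2) * CH + (D / Q) * CO                # total annual holding + ordering cost = $3,000

    print(f"EOQ = {Q:.0f} units, reorder point = {R:.0f} units, total annual cost = ${TC:,.0f}")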
The EOQ model tells us how much to order, but not when to order. The Reorder point model
identifies the time to order when the stock level drops to a predetermined amount. This amount will
usually include a quantity of stock to cover for the delay between order and delivery (the delivery
lead time) and an element of stock to reduce the risk of running out of stock when levels are low (the
safety stock).
The previous economic order quantity model provides a batch size that is then depleted and
replenished in a continuous cycle within the organization. Thus the EOQ in effect provides a batch
size which the organization can work to. However this assumes that demand rates and delivery times
are fixed, so that the stock can be replenished at the exact time stocks are exhausted. Realistically,
though, both the demand rate for the product and the delivery lead time will vary and thus the risk of a
stock-out is high. The cost of not having an item in stock when the customer requests it can obviously
be costly both in terms of the potential loss of sales and the loss of customer goodwill leading to
further loss of business.
Safety stock is used in order to prevent a stock-out occurring. It provides an extra level of inventory
above that needed to meet predicted demand, to cope with variations in demand over a time period.
The level of safety stock used, if any, will vary for each inventory cycle, but an average stock level
above that needed to meet demand will be calculated.
To calculate the safety stock level, a number of factors should be taken into account, including the
variability of demand, the variability of the delivery lead time and the desired service level.
It is important to note that there is no stock-out risk between the maximum inventory level and the
reorder level. The risk occurs due to variability in the rate of demand and due to variability in the
delivery lead time between the reorder point and zero stock level.
The reorder level can of course be estimated by a rule of thumb, such as when stocks are at twice the
expected level of demand during the delivery lead time. However, to take account of the probability of
a stock-out, the cost of inventory and the cost of a stock-out, the idea of a service level is used. The
service level is a measure of how sure the organization is that it can supply inventory from stock.
This can be expressed as the probability that the inventory on hand during the lead time is sufficient
to meet expected demand (e.g. a service level of 90% means that there is a 0.90 probability that
demand will be met during the lead time period, and the probability that a stock-out will occur is
10%). The service level set is dependent on a number of factors such as the stockholding costs for the
extra safety stock and the loss of sales if demand cannot be met.
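The text does not give a safety-stock formula, but a common textbook approach (offered here only as a sketch under that assumption) sets the reorder point at the expected demand during the lead time plus z standard deviations of lead-time demand, where z corresponds to the chosen service level:

    import math

    def reorder_point(mean_daily_demand, std_daily_demand, lead_time_days, z):
        """Reorder point = expected lead-time demand + safety stock.
        Assumes daily demand is independent and roughly normal; this is a common
        textbook model, not a formula taken from the text above."""
        expected = mean_daily_demand * lead_time_days
        safety_stock = z * std_daily_demand * math.sqrt(lead_time_days)
        return expected + safety_stock, safety_stock

    # Illustrative figures: 40 units/day on average, standard deviation 8 units/day,
    # 5-day lead time, roughly 95% service level (z = 1.65).
    rop, ss = reorder_point(40, 8, 5, 1.65)
    print(f"safety stock = {ss:.0f} units, reorder point = {rop:.0f} units")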
Normally a mix of fixed-order-interval and fixed-order-quantity inventory systems is used within an
organization. When there are many inventory items involved, this raises the issue of deciding which
particular inventory system should be used for a particular item. The ABC classification system sorts
inventory items into groups depending on the amount of annual expenditure they incur. This will
depend on the estimated number of items used annually multiplied by the unit cost. To instigate
an ABC system, a table is produced listing the items in expenditure order (with the largest expenditure
at the top), and showing the percentage of total expenditure and the cumulative percentage of the total
expenditure for each item.
It is important to recognize that overall expenditure may not be the only appropriate basis on which to
classify items.
Other factors include the importance of a component part on the overall product, the variability in
delivery time, the loss of value through deterioration and the disruption caused to the production
process if a stock-out occurs.
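The table-building step can be sketched in Python as follows; the item data and the 80%/95% cumulative cut-offs for the A, B and C classes are assumptions chosen purely for illustration.

    # ABC classification sketch: rank items by annual expenditure (usage x unit cost)
    # and assign classes by cumulative share of total spend. Data and cut-offs are assumed.
    items = {                      # item: (estimated annual usage, unit cost)
        "P1": (1000, 50.0), "P2": (200, 120.0), "P3": (5000, 2.0),
        "P4": (50, 300.0),  "P5": (4000, 0.5),  "P6": (100, 25.0),
    }

    spend = {name: usage * cost for name, (usage, cost) in items.items()}
    total = sum(spend.values())

    cumulative = 0.0
    for name, value in sorted(spend.items(), key=lambda kv: kv[1], reverse=True):
        cumulative += value
        share = cumulative / total
        cls = "A" if share <= 0.80 else ("B" if share <= 0.95 else "C")
        print(f"{name}: expenditure {value:>9,.0f}  cumulative {share:6.1%}  class {cls}")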
Chapter 13
Aggregate Planning
13.1 Introduction
Aggregate planning is an intermediate term planning decision. It is the process of planning the
quantity and timing of output over the intermediate time horizon (3 months to one year). Within this
range, the physical facilities are assumed to be fixed for the planning period. Therefore,
fluctuations in demand must be met by varying labour and inventory schedules. Aggregate planning
seeks the best combination to minimize costs.
The variables of the production system are labour, materials and capital. More labour effort is
required to generate higher volume of output. Hence, the employment and use of overtime (OT) are
the two relevant variables. Materials help to regulate output. The alternatives available to the
company are inventories, back ordering or subcontracting of items.
These controllable variables constitute pure strategies by which fluctuations in demand and
uncertainties in production activities can be accommodated by using the following steps:
1. Vary the size of the workforce: Output is controlled by hiring or laying off workers in proportion to
changes in demand.
2. Vary the hours worked: Maintain a stable workforce, but permit idle time when there is slack and
permit overtime (OT) when demand peaks.
3. Vary inventory levels: Demand fluctuations can be met by holding larger amounts of inventory.
4. Subcontract: Upward shifts in demand can be met at a constant production rate by using
subcontractors to provide extra capacity.
Example: a shoe company produces two models of dance shoes. Over the past 3 years 72,000 pairs of
Model M have been produced using 21,600 direct labor hours and 5760 machine hours, and 108,000
pairs of Model W using 43,200 hours of labor and 12,960 hours of machine time.
Step 1: Determine the Planning Factors for Each Product
Planning Factors (hours per pair)
              Direct Labor    Machine Time
Model M       0.30            0.08
Model W       0.40            0.12

Step 2: Obtain the Quarterly Master Production Schedule
Quarterly Master Production Schedule (MPS) (pairs)
              Q1        Q2        Q3        Q4        Totals
Model M       6000      5500      9500      6500      27500
Model W       10000     12000     7500      10100     39600
Step 3: Calculate the Capacity Needs for Each Resource for Each Time Period
Direct Labor Hours Required
              Q1        Q2        Q3        Q4        Totals
Model M       1800      1650      2850      1950      8250
Model W       4000      4800      3000      4040      15840
Totals        5800      6450      5850      5990      24090

Machine Time (Hours) Required
              Q1        Q2        Q3        Q4        Totals
Model M       480       440       760       520       2200
Model W       1200      1440      900       1212      4752
Totals        1680      1880      1660      1732      6952
Step 4: Calculate Individual Work-Center Capacity Needs Based on Historical Percentage Allocation
Work Center Historical Breakdown
              Direct Labor    Machine Time
Center 101    60%             60%
Center 102    40%             40%

Direct Labor Hours Required by Work Center
              Q1        Q2        Q3        Q4        Totals
Center 101    3480      3870      3510      3594      14454
Center 102    2320      2580      2340      2396      9636
Totals        5800      6450      5850      5990      24090

Machine Time (Hours) Required by Work Center
              Q1        Q2        Q3        Q4        Totals
Center 101    1008      1128      996       1039.2    4171.2
Center 102    672       752       664       692.8     2780.8
Totals        1680      1880      1660      1732      6952
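The rough-cut capacity calculation above is easy to reproduce in code. The sketch below multiplies the MPS quantities by the direct-labor planning factors and then splits the totals using the historical work-center percentages; the figures are those of the shoe example.

    # Rough-cut capacity planning sketch using the shoe-company figures above.
    planning_factors = {"Model M": 0.30, "Model W": 0.40}     # direct labor hours per pair
    mps = {                                                   # pairs per quarter Q1..Q4
        "Model M": [6000, 5500, 9500, 6500],
        "Model W": [10000, 12000, 7500, 10100],
    }
    center_split = {"Center 101": 0.60, "Center 102": 0.40}   # historical allocation

    labor_by_quarter = [0.0, 0.0, 0.0, 0.0]
    for model, quantities in mps.items():
        for q, qty in enumerate(quantities):
            labor_by_quarter[q] += qty * planning_factors[model]

    print("direct labor hours per quarter:", labor_by_quarter)   # matches the table: 5800, 6450, 5850, 5990
    for center, share in center_split.items():
        print(center, [round(share * hours, 1) for hours in labor_by_quarter])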
Master scheduling follows aggregate planning. It expresses the overall plans in terms of specific end
items or models that can be assigned priorities. It is useful to plan for the material and capacity
requirements.
The time interval used in master scheduling depends upon the type, volume, and component lead
times of the products being produced. Normally weekly time intervals are used. The time horizon
covered by the master schedule also depends upon product characteristics and lead times. Some
master schedules cover a period as short as a few weeks, and for some products more than a year.
Functions of MPS
Master Production Schedule (MPS) gives a formal detail of the production plan and converts this plan
into specific material and capacity requirements. The requirements with respect to labour, material
and equipment are then assessed.
1. To translate aggregate plans into specific end items: The aggregate plan determines the level of
operations that tentatively balances the market demands with the material, labour and equipment
capabilities of the company. A master schedule translates this plan into a specific number of end items
to be produced in a specific time period.
2. Evaluate alternative schedules: The master schedule is prepared by trial and error. Many computer
simulation models are available to evaluate the alternative schedules.
3. Generate material requirements: It forms the basic input for material requirements planning (MRP).
4. Generate capacity requirements: Capacity requirements are directly derived from MPS. Master
scheduling is thus a prerequisite for capacity planning.
5. Facilitate information processing: By controlling the load on the plant, the master schedule
determines when deliveries should be made. It coordinates with other management information
systems such as marketing, finance and personnel.
6. Effective utilization of capacity: By specifying end-item requirements, the schedule establishes the
load and utilization requirements for machines and equipment.
Week              BI    1    2    3    4    5    6    7    8    9    10   11   12
Forecast                50   50   50   50   75   75   75   75   50   50   50   50
MPS (lots of 125)

The MPS row shows when replenishment shipments (in this example, lots of 125 units) need to arrive
to avoid a stock-out (a negative projected available balance).
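The projected-available logic behind the MPS row can be sketched as follows; because the source table is only partly legible, the beginning inventory of 100 units is an assumption, while the 125-unit replenishment lot and the forecast pattern come from the example.

    # MPS sketch: schedule a replenishment lot whenever the projected available
    # balance would otherwise go negative. Beginning inventory is assumed.
    forecast = [50, 50, 50, 50, 75, 75, 75, 75, 50, 50, 50, 50]
    beginning_inventory = 100
    lot_size = 125

    available = beginning_inventory
    for week, demand in enumerate(forecast, start=1):
        receipt = 0
        while available + receipt - demand < 0:      # would stock out: schedule another lot
            receipt += lot_size
        available += receipt - demand                # projected available at end of week
        print(f"week {week:2d}: MPS receipt {receipt:3d}, projected available {available:3d}")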
MRP refers to the basic calculations used to determine components required from end item
requirements. It also refers to a broader information system that uses the dependence relationship to
plan and control manufacturing operations. “Materials Requirement Planning (MRP) is a technique
for determining the quantity and timing for the acquisition of dependent demand items needed to
satisfy master production schedule requirements.”
1. Inventory reduction: MRP determines how many components are required when they are
required in order to meet the master schedule. It helps to procure the materials/ components as and
when needed and thus avoid excessive buildup of inventory.
2. Reduction in manufacturing and delivery lead times: MRP identifies material and component
quantities, the timings when they are needed, availabilities, and the procurement actions required to
meet delivery deadlines. MRP helps to avoid delays in production and prioritizes production activities
by putting due dates on customer job orders.
3. Realistic delivery commitments: By using MRP, production can give marketing timely
information about likely delivery times to prospective customers.
4. Increased efficiency: MRP provides close coordination among various work centers and hence
helps to achieve an uninterrupted flow of materials through the production line. This increases the
efficiency of the production system.
The inputs to the MRP system are: (1) A master production schedule, (2) An inventory status file and
(3) Bill of materials (BOM).
Using these three information sources, the MRP processing logic (computer programme) provides
three kinds of information (output) for each product component: order release requirements, order
rescheduling and planned orders.
The MPS is a series of time-phased quantities for each item that a company produces, indicating how
many are to be produced and when. The MPS is initially developed from firm customer orders or from
forecasts of demand before the MRP system begins to operate. The MRP system takes whatever the
master schedule demands and translates MPS end items into specific component requirements. Many
systems make a simulated trial run to determine whether the proposed master schedule can be satisfied.
Every inventory item being planned must have an inventory status file which gives complete and up
to date information on the on-hand quantities, gross requirements, scheduled receipts and planned
order releases for an item. It also includes planning information such as lot sizes, lead times, safety
stock levels and scrap allowances.
The BOM identifies how each end product is manufactured, specifying all subcomponent items, their
sequence of buildup, their quantity in each finished unit and the work centers performing the buildup
sequence. This information is obtained from product design documents, workflow analysis and other
standard manufacturing information.
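As an illustration of the MRP processing logic, the sketch below explodes a hypothetical single-level bill of materials into gross component requirements and nets them against on-hand inventory; the product structure, quantities and stock figures are invented for the example.

    # Hypothetical single-level BOM explosion and gross-to-net calculation.
    bom = {"Chair": {"Seat": 1, "Leg": 4, "Back": 1, "Screw": 12}}   # components per end item
    on_hand = {"Seat": 20, "Leg": 100, "Back": 0, "Screw": 500}      # inventory status (assumed)

    def mrp_requirements(end_item, mps_quantity):
        """Gross requirement = BOM quantity x MPS quantity; net = gross - on-hand stock."""
        plan = {}
        for component, per_unit in bom[end_item].items():
            gross = per_unit * mps_quantity
            net = max(0, gross - on_hand.get(component, 0))
            plan[component] = (gross, net)
        return plan

    for component, (gross, net) in mrp_requirements("Chair", 50).items():
        print(f"{component}: gross requirement {gross}, planned order {net}")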
Chapter 14
Enterprise Resource Planning (ERP)
14.1 Introduction
Starting in the late 1980s and the beginning of the 1990s new software systems known in the industry
as enterprise resource planning (ERP) systems have surfaced in the market targeting mainly large
complex business organizations. These complex, expensive, powerful, proprietary systems are off-
the-shelf solutions requiring consultants to tailor and implement them based on the company’s
requirements.
Enterprise resource planning systems or enterprise systems are software systems for business
management, encompassing modules supporting functional areas such as planning, manufacturing,
sales, marketing, distribution, accounting, financial, human resource management, project
management, inventory management, service and maintenance, transportation and e-business. The
architecture of the software facilitates transparent integration of modules, providing flow of
information between all functions within the enterprise in a consistently visible manner.
The evolution of ERP systems closely followed the spectacular developments in the field of computer
hardware and software systems. During the 1960s most organizations designed, developed and
implemented centralized computing systems, mostly automating their inventory control systems using
inventory control packages (IC).
These were legacy systems based on programming languages such as COBOL, ALGOL and
FORTRAN. Material requirements planning (MRP) systems were developed in the 1970s which
involved mainly planning the product or parts requirements according to the master production
schedule. Following this route new software systems called manufacturing resources planning (MRP
II) were introduced in the 1980s with an emphasis on optimizing manufacturing processes by
synchronizing the materials with production requirements. MRP II included areas such as shop floor
and distribution management, project management, finance, human resource and engineering. ERP
systems first appeared in the late 1980s and the beginning of the 1990s with the power of enterprise-
wide inter-functional coordination and integration. Based on the technological foundations of MRP
and MRP II, ERP systems integrate business processes including manufacturing, distribution,
accounting, financial, human resource management, project management, inventory management,
service and maintenance, and transportation, providing accessibility, visibility and consistency across
the enterprise.
During the 1990s ERP vendors added more modules and functions as “add-ons” to the core modules
giving birth to the “extended ERPs.” These ERP extensions include advanced planning and
scheduling (APS), e-business solutions such as customer relationship management (CRM) and supply
chain management (SCM).
ERP vendors, mostly experienced from the MRP and financial software services fields, realized the
limitations of the old legacy information systems used in large enterprises of the 1970s and 1980s.
Some of these old systems were developed in-house while others were developed by different
vendors using several different database management systems, languages and packages, creating
islands of non-compatible solutions unfit for seamless data flow between them. It was difficult to
increase the capacity of such systems or the users were unable to upgrade them with the
organization’s business changes, strategic goals and new information technologies.
Different ERP vendors provide ERP systems with some degree of specialty but the core modules
are almost the same for all of them. Some of the core ERP modules found in the successful ERP
systems are the following:
Accounting management
Financial management
Manufacturing management
Production management
Transportation management
Sales & distribution management
The proliferation of the Internet has had a tremendous impact on every aspect of the IT sector,
including ERP systems, which are becoming more and more “Internet-enabled” (Lawton, 2000). This
environment of accessing systems resources from anywhere at any time has helped ERP vendors extend
their legacy ERP systems to integrate with newer external business modules such as supply chain
management, customer relationship management, sales force automation (SFA), advanced planning
and scheduling (APS), business intelligence (BI), and e-business capabilities.
In fact ERP is becoming the e-business backbone for organizations doing online business transactions
over the Internet. Internet-based solutions are destined to improve customer satisfaction, increase
marketing and sales opportunities, expand distribution channels, and provide more cost-effective
billing and payment methods. The extension to SCM and CRM enables effective tri-party business
relationships between the organization, suppliers and the customers. A supply chain management module has
sub-modules for procurement of materials, transformation of the materials into products and
distribution of products to customers. “Successful supply chain management allows an enterprise to
anticipate demand and deliver the right product to the right place at the right time at the lowest
possible cost to satisfy its customers. Dramatic savings can be achieved in inventory reduction,
transportation costs and reduced spoilage by matching supply with actual demand” (IBM, 2001).
With the deployment of a CRM, organizations are able to gather knowledge about their customers,
opening opportunities to assess customer needs, values and costs throughout the business life cycle for
better understanding and investment decisions. The sub-modules found in typical CRM packages are
marketing, sales, customer service and support systems using Internet and other access facilities with
the intention of increasing customer loyalty through improved customer satisfaction.
E-commerce is the conduct of business transactions among organizations with the support of
networked information and communication technologies, especially utilizing Internet applications
such as the Web and e- mail, effectively reaching global customers.
The legacy ERP systems designed to integrate enterprise functions within the four walls of the
enterprise have introduced software solutions with a Web-interface essentially extending to Internet-
enabled CRM, SCM and other Internet-business models. Examples of such extended ERPs are
available from most of the ERP vendors. Thus SAP’s Internet-enabled integrated ERP system called
mySAP.COM (SAP, 2001) is a suite of ERP, CRM and other products that can be linked together
using Internet portals.
Commonly cited limitations and concerns of ERP systems include the following:
Time-consuming
Expensive
Conformity of the modules
Vendor dependence
Features and complexity
Scalability and global outreach
Extended ERP capability
Benefits of ERP systems include the following:
Eliminates most of the business problems like material shortages, productivity enhancements,
customer service, cash management, inventory problems, quality problems, prompt delivery,
etc.
Addresses current requirements of the company and provides the opportunity of continually
improving and refining business processes.
Provides business intelligence tools like Decision Support Systems (DSS), Executive
Information Systems (EIS), Reporting, Data Mining and Early Warning Systems (Robots).
14.7 ERP Implementation Lifecycle
Implementation Approach
Stage 5 – Sandbox
Ensure that complete data migration from the old software system to the new one begins early in the
implementation process
Constantly evaluate risks, constraints & assumptions
Develop training plan for all users
Develop rollout plan
Key questions that a business should ask are: How do we ensure that the project team and the
end users are in sync? How do we ensure that our people are accepting change?
Roll out training plan for all users in a phased manner
Conduct user group conferences & prototype sessions to demonstrate the system’s capabilities
Solicit feedback from end users and ensure that all concerns & questions are addressed
Encourage end users to network with peers at other institutions undergoing similar
implementation initiatives
Ensure that implementation information is continuously communicated to the user community
Pilot rollout / evaluation
Complete live rollout - rollout support
Stage 8 - IT Infrastructure
Stage 9 – Operations
Key questions that a business should ask are: How will we recover from a major outage?
Execute onsite maintenance with partners
Implement a Disaster Recovery Plan
o Review Business Impact & Associated Risk
o Offsite backups
o Provide disaster recovery training to key personnel
Inventory Reduction
Personnel reduction
Productivity improvement
Order management improvement
Technology cost reduction
Procurement cost reduction
Cash management improvement
Revenue/profit improvement
Transportation/ logistics cost reduction
Maintenance reduction
On time delivery improvement
Standardization
Flexibility
Globalization
Business performance
Supply/ demand chain
Information/visibility
Economic Performance of firm (Internal coordination cost)
Monitoring cost
Bonding cost
Residual cost
Information processing cost
Communication cost
Documentation cost
Opportunity cost due to poor information
Chapter 15
Scheduling
Introduction
Scheduling can be defined as “prescribing of when and where each operation necessary to
manufacture the product is to be performed.”
It is also defined as the “establishing of times at which to begin and complete each event or operation
comprising a procedure”. The principal aim of scheduling is to plan the sequence of work so that
production can be systematically arranged towards completing all products by their due dates.
Scheduling determines the programme for the operations. Scheduling may be defined as ‘the fixation
of time and date for each operation’ as well as it determines the sequence of operations to be
followed.
It is the time phase of loading. It is assignment of job to a facility specifying the particular sequence
of the work and the time of actual performance. Examples of scheduling include: railway time-table,
the examination schedule, and the timetable for teaching various subjects. Scheduling should be done
at a relatively low level of the organization.
Scheduling activities are highly dependent on the type of production system and the output volume
delivered by the system. Scheduling activities differ between high-volume, intermediate-volume and
low-volume (job shop) systems.
High-volume systems make use of specialized equipment that routes work on a continuous basis through the same
fixed path of operations, generally at a rapid rate. The problems of order release, dispatching, and
monitoring are less complex than in low-volume, make-to-order systems. However, material flows
must be well coordinated, inventories carefully controlled, and extra care taken to avoid equipment
breakdowns, material shortages, etc. to avoid production-line downtime.
Intermediate-volume systems utilize a mixture of equipment and similar processes to produce an intermittent flow of similar
products on the same facilities. The sequencing of jobs and production-run lengths are of significant
concern to schedulers, as they attempt to balance the costs of changeover time against those of
inventory accumulations.
Low-volume (job shop) systems use general-purpose equipment that must route orders individually
through a unique combination of work centers. The variable work-flow paths and processing times generate queues,
work-in-process inventories, and capacity utilization concerns that can require more day-to-day
attention than in the high- or intermediate-volume systems.
1. The principle of optimum task size: Scheduling tends to achieve maximum efficiency when the
task sizes are small and all tasks are of the same order of magnitude.
2. The principle of the optimum production plan: The planning should be such that it imposes an
equal load on all plants.
3. Overlapping of operations.
Scheduling strategies vary widely among firms and range from ‘no scheduling’ to very sophisticated
approaches. These strategies are grouped into four classes:
1. Detailed scheduling: Detailed scheduling for the specific jobs that arrive from customers is
impracticable in an actual manufacturing situation. Changes in orders, equipment breakdowns, and
unforeseen events cause deviations from the plans.
2. Cumulative scheduling: Cumulative scheduling of total work load is useful especially for long
range planning of capacity needs. This may load the current period excessively and under load
future periods. It has some means to control the jobs.
3. Cumulative detailed: Cumulative detailed combination is both feasible and practical approach,if
master schedule has fixed and flexible portions.For continuous systems, detailed schedules
(production rates) can often be firmed as the master schedule is implemented.For job shop
operations, schedules may be planned based on the estimated labor and equipment (standard hour)
requirements per week at key work centers. When detailed scheduling is desirable, capacity is
sometimes allocated to specific jobs as late as a week, or a few days, before the actual work is to
be performed.
4. Priority decision rules: Priority decision rules are scheduling guides that are used independently
and in conjunction with one of the above strategies, i.e., first come first serve. These are useful in
reducing Work-In-Process (WIP) inventory.
Order Release
Order release converts a need from a planned-order status to a real order in the shop or with a vendor
by assigning it either a shop order or purchase order number. Well-designed scheduling and control
systems release work at a reasonable rate that keeps unnecessary backlogs from the production floor.
Releasing all available jobs as soon as they are received from customers is a common cause of
increased manufacturing lead times and excess work in process (WIP).
Figure 15.4.1 illustrates how the order release function creates a scheduled receipt. As the shop day
and current calendar day coincide, the planned order release takes place. The order quantity is deleted
(from the MRP planned-order release row), and a shop order for that amount is added to the dispatch
list, along with a start and due date priority. The order quantity is then reentered (on the MRP form)
as a scheduled receipt on the listed due date.
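To make the order-release step concrete, here is a minimal Python sketch of the conversion from a planned order to a scheduled receipt. The record layout, field names, dates, and quantities are hypothetical illustrations rather than part of the module; only the sequence of steps (delete the planned-order release, add a shop order with start and due dates to the dispatch list, re-enter the quantity as a scheduled receipt) follows the description above.

# Minimal sketch of the order-release step described above.
# All field names and numbers are hypothetical.
mrp_record = {
    "planned_order_releases": {105: 120},   # {shop date: quantity}
    "scheduled_receipts": {},
}
dispatch_list = []
lead_time = 5                                # shop days (assumed)

def release_orders(current_date):
    # When the planned-order release date equals the current shop date,
    # the quantity is removed from the planned-order-release row ...
    qty = mrp_record["planned_order_releases"].pop(current_date, None)
    if qty is None:
        return                               # nothing due for release today
    due_date = current_date + lead_time
    # ... a shop order joins the dispatch list with start/due date priority ...
    dispatch_list.append({"qty": qty, "start": current_date, "due": due_date})
    # ... and the quantity reappears as a scheduled receipt on the due date.
    mrp_record["scheduled_receipts"][due_date] = qty

release_orders(105)
print(dispatch_list)
print(mrp_record)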
1. Forward scheduling is commonly used in job shops where customers place their orders on a “needed as soon as possible” basis. Forward scheduling determines the start and finish times of the next-priority job by assigning it the earliest available time slot and, from that time, determines when the job will be finished in that work center. Since the job and its components start as early as possible, they will typically be completed before they are due at the subsequent work centers in the routing. The
forward method therefore generates in-process inventory that waits until it is needed at subsequent work centers, and hence higher inventory costs. Forward scheduling is simple to use and gets jobs done in shorter lead times, compared with backward scheduling.
2. Backward scheduling is often used in assembly-type industries that commit in advance to specific delivery dates. Backward scheduling determines the start and finish times for waiting jobs by assigning them to the latest available time slot that will enable each job to be completed just when it is due, but not before. By assigning jobs as late as possible, backward scheduling minimizes inventories, since a job is not completed until it must go directly to the next work center on its routing.
Example 1: A job is due to be delivered at the end of the 12th week. It requires a lead time of 2 weeks for material acquisition, 1 week of run time for operation-1, 2 weeks for operation-2, and 1 week for final assembly. Allow 1 week of transit time prior to each operation. Illustrate the completion schedule under (a) forward, and (b) backward scheduling methods.
Solution: The solution is shown in the figure below.
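A minimal Python sketch of Example 1 is given below. It assumes that the 1-week transit time precedes operation-1, operation-2 and the final assembly (but not material acquisition), that weeks are numbered from 1, and that the due date is the end of week 12; under those assumptions the forward schedule finishes at the end of week 9, while the backward schedule starts material acquisition in week 4.

# Sketch of forward vs. backward scheduling for Example 1 (assumptions above).
steps = [                       # (step name, duration in weeks)
    ("Material acquisition", 2),
    ("Transit", 1),
    ("Operation-1", 1),
    ("Transit", 1),
    ("Operation-2", 2),
    ("Transit", 1),
    ("Final assembly", 1),
]
due_week = 12

def forward_schedule(steps, start_week=1):
    """Schedule each step as early as possible, starting at start_week."""
    schedule, week = [], start_week
    for name, dur in steps:
        schedule.append((name, week, week + dur - 1))   # (start week, finish week)
        week += dur
    return schedule

def backward_schedule(steps, due_week):
    """Schedule each step as late as possible so the last step ends at due_week."""
    schedule, week = [], due_week
    for name, dur in reversed(steps):
        schedule.append((name, week - dur + 1, week))
        week -= dur
    return list(reversed(schedule))

for label, sched in (("Forward", forward_schedule(steps)),
                     ("Backward", backward_schedule(steps, due_week))):
    print(label)
    for name, start, finish in sched:
        print(f"  {name:<20} weeks {start}-{finish}")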
The scheduling methodology depends upon the type of industry, organization, and product, and the level of sophistication required. The commonly used methods are:
1. Charts and boards, 2. Priority decision rules, and 3. Mathematical programming methods.
Gantt charts and associated scheduling boards have been extensively used as scheduling devices in the past, although many of the charts are now drawn by computer. Gantt charts are extremely easy to understand and can quickly reveal the current or planned situation to all concerned. They are used in several forms, namely,
(b) Load charts, which show the work assigned to a group of workers or machines; and
(c) Record charts, which are used to record the actual operating times and delays of workers and machines.
Priority decision rules are simplified guidelines for determining the sequence in which jobs will be
done. In some firms these rules take the place of priority planning systems such as MRP systems.
Following are some of the priority rules followed.
Scheduling is a complex resource allocation problem. Firms possess capacity, labour skills, and materials, and they seek to allocate their use so as to maximize a profit or service objective, or perhaps to meet demand while minimizing costs.
The following are some of the models used in scheduling and production control.
(a) Linear programming model: Here all the constraints and the objective function are formulated as linear equations and the problem is then solved for optimality. The simplex method, the transportation method, and the assignment method are the major methods used here.
(b) PERT/CPM network model: PERT/CPM network is the network showing the sequence of
operations for a project and the precedence relation between the activities to be completed.
Sequencing activities are closely identified with detailed scheduling, as they specify the order in which jobs are to be processed at the various work centers. Dispatching is concerned with starting the processes. It gives the necessary authority to start a particular work, which has already been planned under ‘routing’ and ‘scheduling’. For starting the work, the essential orders and instructions are given. Therefore, dispatching may be defined as the ‘release of orders and instructions for starting of production for an item in accordance with the route sheet and schedule charts’.
Implementing the schedule in a manner that retains any order priorities assigned at the
planning phase.
Moving the required materials from stores to the machines, and from operation to operation.
Authorizing people to take work in hand as per schedule
Distributing machine loading and schedule charts, route sheet, and other instructions and
forms.
Issuing inspection orders, stating the type of inspection at various stages.
Ordering tool-section to issue tools, jigs and fixtures.
8.15.1 Dispatching or Priority Decision Rules
Job shops generally have many jobs waiting to be processed. The principal method of job dispatching
is by means of priority rules, which are simplified guidelines (heuristics) to determine the sequence in
which jobs will be processed. The use of priority rule dispatching is an attempt to formalize the
decisions of the experienced human dispatcher. Most of the simple priority rules that have been
suggested are listed in Table 5.6. Some of the rules used for job assignment are: first come, first served (FCFS), earliest due date (EDD), longest processing time (LPT), and preferred customer order (PCO). These rules can be classified as static or dynamic.
Static rules do not incorporate an updating feature. They have priority indices that stay constant as
jobs travel through the plant, whereas dynamic rules change with time and queue characteristics.
Table below shows standard dispatching rules.
LWKR     Least Work Remaining         Select a job with the least total processing time remaining for its unfinished operations.
MOPNR    Most Operations Remaining    Select a job with the most operations remaining in its processing sequence.
MWKR     Most Work Remaining          Select a job with the most total processing time remaining.
RANDOM   Random                       Select a job at random.
WINQ     Work in Next Queue           Select a job whose subsequent machine currently has the shortest queue.
LTWK and EDD (assuming due dates are fixed) are static rules. LWKR is dynamic, since the
remaining processing time decreases as the job progresses through the shop, i.e. through time. Slack
based rules are also dynamic.
Rules can also be classified as myopic or global. Myopic rules look only at the individual machine,
whereas global rules look at the entire shop. SPT is myopic whereas WINQ is global.
1. Job slack (S): This is the amount of contingency or free time, over and above the expected processing time, available before the job must be completed at a predetermined date (to), i.e. S = to – t1 – Σaj, where t1 = present date (e.g. day or week number, with t1 < to) and Σaj = sum of the remaining processing times. Where delays (fj) are associated with each operation, the slack becomes S = to – t1 – Σ(aj + fj).
2. Job slack per operation, i.e. S/N, where N = number of remaining operations. Therefore, where S is the same for two or more jobs, the job having the most remaining operations is processed first.
3. Job slack ratio, or the ratio of the remaining slack time to the total remaining time, i.e. S/(to – t1). (A short computational sketch of these slack rules follows this list of rules.)
In all the above cases, where the priority index is negative the job cannot be completed by the requisite date. The rule will therefore be to process first those jobs having negative indices.
4. Shortest imminent operation (SIO), i.e. process first the job whose imminent operation has the shortest processing time.
6. Scheduled start date. This is perhaps the most frequently used rule. The date at which operations must be started in order that a job will meet a required completion date is calculated, usually by employing reverse scheduling from the completion date, e.g. Xi = to – Σ(ai + fi), i.e. the required completion date less the sum of the remaining processing and delay times. Usually some other rule is also used, e.g. first come, first served, to decide priorities between jobs having equal Xi values.
7. Earliest due date, i.e. process first the job that is due first.
8. Subsequent processing times. Process first the job that has the longest remaining processing time, i.e. Σai or, in modified form, Σ(ai + fi).
9. Value. To reduce work in progress inventory cost, process first the job which has the highest value.
10. Minimum total float. This rule is the one usually adopted when scheduling by network
techniques.
11. Subsequent operation. Look ahead to see where the job will go after this operation has been completed and process first the job which goes to a ‘critical’ queue, that is, a facility having a small queue of available work, thus minimizing the possibility of facility idle time.
Rules 12 and 13 are random since, unlike the others, neither one depends directly on job
characteristics such as length of operation or value.
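As noted above, here is a short Python sketch of slack rules 1–3 (job slack, slack per operation, slack ratio). The job data (due dates, remaining processing times, and remaining operation counts) are hypothetical; only the formulas follow the text.

# Sketch of priority rules 1-3 using hypothetical job data.
t1 = 20                                     # present date (week number)

jobs = {
    # job: (due date t_o, remaining processing times a_j, remaining operations N)
    "J1": (30, [3, 2, 1], 3),
    "J2": (28, [4, 2], 2),
    "J3": (26, [2, 2, 2, 1], 4),
}

for job, (t_o, a, n) in jobs.items():
    slack = t_o - t1 - sum(a)               # rule 1: job slack S
    slack_per_op = slack / n                # rule 2: slack per operation S/N
    slack_ratio = slack / (t_o - t1)        # rule 3: slack ratio S/(t_o - t1)
    print(f"{job}: S={slack}, S/N={slack_per_op:.2f}, S/(t_o - t1)={slack_ratio:.2f}")

# A negative S means the job cannot meet its due date and, per the text,
# such jobs should be processed first.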
1. Local rules depend solely on data relating to jobs in the queue at any particular facility.
2. General rules depend on data relating to jobs in the queue at any particular facility and/or data for
jobs in queues at other facilities.
Local rules, because of the smaller amount of information used, are easier and cheaper to calculate than general (sometimes called global) rules. All of the above rules, with the exception of rule 11, are local rules.
• Static rules are those in which the priority index for a job does not change with the passage of time while the job is waiting in any one queue.
• Dynamic rules are those in which the priority index is a function of the present time.
Rules 4, 5, 6, 7, 8, 9, 10, 11, 12 and 13 are all static, whereas the remainder are dynamic.
Perhaps the most effective rule according to present research is the SIO rule and, more particularly, the various extensions of this rule. Massive simulation studies have shown that, of all ‘local’ rules, those based on the SIO rule are perhaps the most effective, certainly when considered against criteria such as minimizing the number of jobs in the system, the mean of the ‘completion distribution’, and the throughput time. The SIO rule appears to be particularly effective in reducing throughput time, the ‘truncated SIO’ and the ‘two-class SIO’ rules being perhaps the most effective derivatives, having the additional advantage of reducing throughput time variance and lateness.
The ‘first come, first served’ priority rule has been shown to be particularly beneficial in reducing
average lateness, whereas the ‘scheduled start date and total float’ rule has been proved effective
where jobs are of the network type.
Example 1: Suppose the current time is 10. Machine B has just finished a job, and it is time to select its next job. Table 15.7.1.1 provides information on the four jobs available. For each of the dispatching rules discussed in Table 5.6, determine the corresponding sequence.
SPT: Looking at machine B, we find that jobs (1, 2, 3, 4) have processing times of (5, 3, 2, 4).
Placing jobs in increasing order of processing time results in the job sequence {3, 2, 4, 1}. So load job
3 on machine B.
EDD: Jobs (1, 2, 3, 4) have due dates (30, 20, 10, 25) respectively. Arranging in increasing order of
the due dates, we have the job sequence (3, 2, 4, 1), which means job 3 should be loaded next on
machine B.
FCFS: Jobs arrived at machine B at times (10, 5, 9, 8). Placing the earliest arrivals first, we obtain the job sequence (2, 4, 3, 1).
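The three sequences above can be reproduced with a short Python sketch. The job data are those quoted in the example (processing times, due dates, and arrival times at machine B); the sorting is a straightforward rendering of the SPT, EDD and FCFS rules.

# Dispatching rules of Example 1 with the job data quoted in the text.
jobs = {
    1: {"proc": 5, "due": 30, "arrival": 10},
    2: {"proc": 3, "due": 20, "arrival": 5},
    3: {"proc": 2, "due": 10, "arrival": 9},
    4: {"proc": 4, "due": 25, "arrival": 8},
}

spt  = sorted(jobs, key=lambda j: jobs[j]["proc"])      # shortest processing time
edd  = sorted(jobs, key=lambda j: jobs[j]["due"])       # earliest due date
fcfs = sorted(jobs, key=lambda j: jobs[j]["arrival"])   # first come, first served

print("SPT :", spt)    # [3, 2, 4, 1]
print("EDD :", edd)    # [3, 2, 4, 1]
print("FCFS:", fcfs)   # [2, 4, 3, 1]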
Example. (i) Sequence the jobs given in Table above by the following priority rules:
(a) FCFS (First Come First Served), (b) EDD (Earliest Due Date), (c) LS (Least Slack), (d) SPT
(Shortest Processing Time), and (e) LPT (Longest Processing Time).
(ii) Compare the effectiveness of FCFS and SPT rules in terms of (a) Average completion time, (b)
Average job lateness, (c) Average no. of jobs at work center
Table 15.7.1.1a
Solution: The solution is shown in the tables below.
Tables 15.7.1.1b
Work orders: are issued to departments to commence the desired lot of products.
Time card: is given to each operator; on it the time taken by each operation and other necessary particulars are recorded.
Inspection tickets: are sent to the inspection department, which shows the quality of the work
required, and stages at which inspection is to be carried out. Afterwards these are returned with the
inspection report and quantity rejected.
Move tickets: are used for authorizing the movement of the material from store to shops, and from
operation to operation.
Tool and equipment tickets: authorize the tool department to issue new tools, jigs, fixtures and other equipment to the shops.
Routing may be defined as the selection of the path which each part of the product will follow while being transformed from raw material to finished product. The path of the product also gives the sequence of operations to be adopted while it is being manufactured. Sequencing procedures seek to determine the best order for processing a set of jobs through a set of facilities.
In other words, routing means the determination of the most advantageous path to be followed from department to department and machine to machine till the raw material gets its final shape, which involves the following steps:
(e) A proper classification about the personnel required and the machine for doing the work.
For effective production control in a well-managed industry with standard conditions, routing plays an important role, i.e., it helps obtain the best results from the available plant capacity. Thus routing provides the basis for scheduling, dispatching and follow-up.
Two types of problems can be identified. First, the static case, in which all jobs to be processed are known and available, and in which no additional jobs arrive in the queue during the exercise. Second, the dynamic case, which allows for the continuous arrival of jobs in the queue. Associated with these two cases are certain objectives. In the static case the problem is merely to order a given queue of jobs through a given number of facilities, each job passing through the facilities in the required order and spending the necessary amount of time at each. The objective in such a case is usually to minimize the total time required to process all jobs: the throughput time. In the dynamic case the objective might be to minimize facility idle time, to minimize work in progress, or to achieve the required completion or delivery dates for each job. Sequencing procedures are relevant primarily for static cases.
Several simple techniques have been developed for solving simple sequencing problems, for example
the sequencing of jobs through two facilities, where each job must visit each facility in the same
order. Fairly complex mathematical procedures are available to deal with more realistic problems, but
in all cases either a static case is assumed or some other simplifying assumptions are made. Routing or sequencing depends on the nature and type of industry, as discussed below:
A. Continuous Industry
In this type of industry, once the route is decided at the beginning, generally no further control over the route is needed. The raw material enters the plant and moves through the different processes automatically till it gets its final shape, e.g. a soft-drink bottling plant, a brewery, or a food-processing unit.
B. Assembly Industry
Such industries need various components to be assembled at a particular time. It is therefore necessary that no component fails to reach the proper place at the proper time in the required quantity; otherwise the production line will be held up, resulting in wastage of time and production delay (e.g. assembly of bikes, scooters, cars, radios, typewriters, watches, etc.). If all batches visit the same sequence of workstations, the system is called a flow shop.
In these industries much attention is paid to routing. A work-flow sheet for every component is prepared which gives full information about the processes, the machines, and the sequence in which parts will reach a particular place at a particular time. This type of routing needs good technical knowledge, so the staff of the production control department must be qualified and experienced.
C. Job-shop Industry
This is also called the sequencing and scheduling situation with many products. The general job shop problem is to schedule production times for N jobs on M machines. At time 0, we have a set of N jobs. For each job we know the sequence of machines required by the job and the processing time on each of those machines. Due dates may also be known. The objective may be to minimize the makespan for completion of all jobs, to minimize the number of tardy jobs or the average tardiness, to minimize the average flow time, or to achieve some weighted combination of these criteria.
This problem is very complex and difficult to solve. On each of the M machines there are N! possible job orderings, making a total of (N!)^M possible solutions. For just 10 jobs on 5 machines there are over 6 × 10^32 choices. Optimization techniques such as dynamic programming and branch and bound have been attempted for scheduling in random or job shop environments. However, we will try to look at some other options with some examples.
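The size of the solution space quoted above is easy to verify with a two-line check:

# Quick check of the (N!)^M figure: 10 jobs on 5 machines.
import math

n_jobs, n_machines = 10, 5
solutions = math.factorial(n_jobs) ** n_machines
print(f"{float(solutions):.2e}")   # about 6.3e+32 possible job orderings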
Since such industries always handle different types of products, after receiving a manufacturing order the planning department has to prepare the detailed drawings and plans each time. These will indicate the proper sequence of routes for the job. In a job shop, each part type has its own route. These individual routes may be carefully planned by an experienced process planner.
While converting raw material into the required goods, different operations are to be performed, and the selection of a particular path of operations for each piece is termed ‘routing’. This selection of a particular path, i.e. the sequence of operations, must be the best and cheapest so as to give the lowest cost for the final product. The various routing techniques are:
1. Route card: This card always accompanies the job throughout all operations. It indicates the material used during manufacturing and its progress from one operation to another. In addition, the details of scrap and good work produced are also recorded.
(b) Instructions regarding routing of every part with identification number of machines and work place of operation. This sheet is made for manufacturing as well as for maintenance.
3. Route sheet: It deals with a specific production order and is generally made from operation sheets. One sheet is required for each part or component of the order. These include the following:
(f) Rate at which the job must be completed, determined from the operation sheet.
4. Move order: Though this document is needed for production control, it is never used for the routing system. A move order is prepared for each operation as per the operation sheet. On it the quantities passed forward, scrapped and to be rectified are recorded. It is returned to the planning office when the operation is completed.
Loading means the assignment of a job to a facility, viz. a machine, men, a department, etc. Assigning a subject to a teacher is loading. Loading should be done at a higher level. Frequently, when attempting to decide how orders are to be scheduled onto the available facilities, one is faced with various alternative solutions. For example, many different facilities may be capable of performing the operations required on one customer order or item. Operations management must then decide which jobs are to be scheduled onto which facilities in order to achieve some objective, such as minimum cost or minimum throughput time.
One simple, rapid, but approximate method of facility/job assignment is best described by means of
an example shown in Figure 15.8.2.1.
Figure 15.8.2.1
Example 15.8.2.1. A company must complete five orders during a particular period. Each order consists of several identical products, and each can be made on one of several facilities. Table 15.8.2.1 below gives the operation time for each product on each of the available facilities. The available capacity of these facilities for the period in question is: A = 100 hours, B = 80 hours, and C = 150 hours.
Table 15.8.2.1
The index number for a facility is a measure of the cost disadvantage of using that facility for processing an order, and is obtained by expressing the extra time required on that facility as a fraction of the time required on the best (lowest-time) facility:
Index = (time on the facility – time on the best facility) / time on the best facility.
For order 1 on facility C, for example, IC = (2.5 – 2.5)/2.5 = 0.
• The best facility for order 1 is C (IC = 0); the processing time for that order (75 hours) is less than
the available capacity. We can therefore schedule the processing of this order on this facility.
• Facility A is the best facility for order 2, but also the best for order 3. Both cannot be accommodated
because of limitations on available capacity, so we must consider the possibility of allocating one of
the orders to another facility. The next best facility for order 2 is facility B (IB = 0.67) and for order
3 the next best facility is also facility B (IB = 1). Because the cost disadvantage on B is less for
order 2, allocate order 2 to B and 3 to A as shown in the table.
• The best facility for order 4 is B but there is now insufficient capacity available on this facility. The
alternatives now are to reallocate order 2 to another facility or to allocate order 4 elsewhere. In the
circumstances it is better to allocate order 4 to facility C.
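A minimal Python sketch of the index-number calculation is shown below. Because Table 15.8.2.1 is not reproduced here, the order/facility times are hypothetical (chosen so that order 1 on facility C gives the IC = 0 of the worked example); the capacity check is then applied as described in the bullets above.

# Sketch of the index-number loading method; times are hypothetical.
times = {                # total processing hours for each order on each facility
    "order1": {"A": 4.0, "B": 3.0, "C": 2.5},
    "order2": {"A": 3.0, "B": 5.0, "C": 6.0},
    "order3": {"A": 2.0, "B": 4.0, "C": 5.0},
}

def index_numbers(row):
    """Cost-disadvantage index for each facility: (t - t_best) / t_best."""
    best = min(row.values())
    return {facility: (t - best) / best for facility, t in row.items()}

for order, row in times.items():
    print(order, {f: round(i, 2) for f, i in index_numbers(row).items()})

# Each order is then allocated to its lowest-index facility whose remaining
# capacity (A = 100 h, B = 80 h, C = 150 h in the example) can still absorb
# the load; otherwise it moves to the facility with the next-lowest index,
# exactly as in the worked example above.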
Example 15.8.2.2 Suppose there are three machines and six jobs that need operations on them. All machines are capable of doing the operations, but the operation time varies from machine to machine. Assign the jobs so that (i) the overall operation time is minimum, and (ii) the capacity of each machine is not exceeded.
Table 15.8.2.2
Example 15.8.2.3. A piece of mining equipment requires the manufacturing times shown in Table
15.8.2.3. Each of the activities must be done sequentially, except that steel fabrication can begin 2
weeks after purchasing begins, and the hydraulics and electrical activities can be done concurrently.
Construct a Gantt chart for this job.
Table 15.8.2.3
Solution: The Gantt Chart is drawn in Table 15.8.2.4. The chart shows the different activities along
with their respective durations.
Chapter 16
Project Management
10.16 Introduction
A project is an interrelated set of activities with a definite starting and ending point, which results in a
unique outcome for a specific allocation of resources. The complexity of the project will increase
with the size and number of activities within the project. Extensive planning and co-ordination
activities are required for larger projects to ensure that the project aims are met. Examples of projects
include installing an IT system, building a bridge or introducing a new service or product to the
market.
Event: An event is a specific instant of time which marks the start or the end of an activity. An event consumes neither time nor resources. It is denoted by a circle or a node, and the event number is written within the circle. Examples of events: start of an examination, end of a game, start of a meeting, end of a meeting, etc.
Activity: A project consists of different types of tasks or jobs to be performed. These jobs or tasks are called activities. An activity may be a process, a material-handling operation, or a material procurement cycle. Examples of activities: laying the foundation of a building, the process of writing an examination, arranging for bank loans, etc. An activity is shown by an arrow and it begins and ends with an event. Unlike an event, an activity consumes time and resources. An activity is denoted by a, b, c, etc., which is marked below the arrow, and the estimated time to accomplish the activity is written above the arrow.
Dummy activity: When two activities start at the same instant of time (like activities b and c in Figure 16.2.1), the head events are joined by a dotted arrow, known as a dummy activity. A dummy activity does not consume time. It may be critical or non-critical. It becomes a critical activity when its earliest start time (EST) is the same as its latest finishing time (LFT).
Fig. 16.2.1 An example of a network diagram.
Critical activities: An activity is called critical if its earliest start time plus the time taken by it is equal to its latest finishing time. In a network diagram, critical activities are those which, if they consume more than their estimated time, will delay the project. A critical activity in a network diagram is denoted by a thick arrow to distinguish it from a non-critical activity.
Critical path: Critical path (CP) is formed by critical activities. A CP is the longest path and
consumes the maximum time. A CP has zero float. A dummy activity joining two critical activities is
also a critical activity. Any amount of delay on CP will delay the entire project by the same amount.
So, the CP requires close monitoring and control.
Subprojects: Projects are frequently divided into more manageable smaller projects which are called
subprojects. Subprojects are often contracted out to an external enterprise or to another functional unit in the performing organization. Examples of subprojects include:
Work Breakdown Structure (WBS): WBS represents a systematic and logical breakdown of the
project into its component parts. It is constructed by dividing the project into major parts, with each
of these being further divided into sub-parts. This is continued till a breakdown is done in terms of
manageable units of work for which responsibility can be defined.
WBS is a deliverable oriented grouping of project elements which organizes and defines the total
scope of the project. Each descending level represents an increasingly detailed definition of a project
component which may be products or services. WBS helps in:
• Effective planning by dividing the work into manageable elements which can be planned, budgeted,
and controlled.
• Assignment of responsibility for work elements to project personnel and outside agencies.
The Organization Breakdown Structure (OBS) represents formally how the project personnel and outside agencies are going to work for the project. To assign responsibility for the tasks mentioned, the WBS has to be integrated with the project organization structure, or the OBS.
This step involves evaluating the expected cost of the resources needed to execute the project and comparing these with the expected benefits. At the start of the project a plan of the resources required to undertake the project activities is constructed. If there is a limit on the amount of resources available, then the project completion date may have to be set to ensure these resources are not overloaded. This is a resource-constrained approach. Alternatively, the need to complete the project by a specific date may take precedence. In this case an alternative source of resources may have to be found, for example using sub-contractors, to ensure timely project completion. This is called a time-constrained approach.
Once a plan has been constructed it is necessary to calculate estimates for the time and resources
required to undertake each activity in the project. Statistical methods should be used when the project
is large (and therefore complex) or novel.
This allows the project team to replace a single estimate of duration with a range within which they
are confident the real duration will lie. This is particularly useful for the early stage of the project
when uncertainty is greatest. The accuracy of the estimates can also be improved as their use changes
from project evaluation purposes to approval and day-to-day project control. The PERT approach
allows optimistic, pessimistic and most likely times to be specified for each task from which a
probabilistic estimate of project completion time can be computed.
10.18.2 Plan
This stage estimates the amount and timing of the resources needed to achieve the project objectives. The
project management method uses a systems approach to dealing with a complex task in that the
components of the project are broken down repeatedly into smaller tasks until a manageable chunk is
defined. Each task is given its own cost, time and quality objectives. It is then essential that
responsibility is assigned to achieving these objectives for each particular task. This procedure should
produce a work breakdown structure (WBS) which shows the hierarchical relationship between the
project tasks.
10.18.3 Control
This stage involves monitoring the progress of the project as it executes over time. This is
important so that any deviations from the plan can be addressed before it is too near the project
completion date to take corrective action. The point at which the project progress is assessed is
termed a Milestone.
The type of project structure required will be dependent on the size of the team undertaking the
project. Projects with up to six team members can simply report directly to a project leader at
appropriate intervals during project execution. For larger projects requiring up to 20 team members it
is usual to implement an additional tier of management in the form of team leaders. The team leader
could be responsible for either a phase of the development or a type of work. For any structure it is
important that the project leader ensures consistency across development phases or development areas
as appropriate. For projects with more than 20 members it is likely that additional management layers
will be needed in order to ensure that no one person is involved with too much supervision.
The two main methods of reporting the progress of a project are by written reports and verbally at
meetings of the project team. It is important that a formal statement of progress is made in written
form, preferably in a standard report format, to ensure that everyone is aware of the current project
situation. This is particularly important when changes to specifications are made during the project. In
order to facilitate two-way communication between team members and team management, regular
meetings should be arranged by the project manager. These meetings can increase the commitment of
team members by allowing discussion of points of interest and dissemination of information on how
each team’s effort is contributing to the overall progression of the project.
CPM stands for Critical Path Method. It has mostly been used in deterministic situations like
construction projects. For the most part, houses, bridges, and skyscrapers use standard materials
whose properties are well known. They employ more or less standard components and stable
technology.
Changes occur mainly in the design, size, shape, and arrangement of the different components, rather than in design concepts. CPM takes just one time estimate into account and deals with deterministic situations. It is activity oriented and can be used for both large and small projects. It is widely recognized and is one of the most versatile and potent management planning techniques. CPM is used for planning and controlling the most logical and economic sequence of operations for accomplishing a project.
CPM Technique
PERT is Program Evaluation and Review Technique. This is mostly used in non-deterministic or
probabilistic or stochastic situations such as: space research, R & D projects. These projects (going to
Mars, Moon, etc) are relatively new; their technology is rapidly changing, and their products are
nonstandard. There is some standard hardware in ICBMs (Inter-continental Ballistic Missiles) and
lunar rockets, but much of their design and construction requires new types of materials and technology, and the projects are contracted, planned, and scheduled before all technological problems have been solved.
PERT is commonly used to conduct the initial review of a project. It is a very useful device for planning time and resources.
PERT is used for activities whose timings cannot be estimated with enough certainty. It can be employed where a project cannot be easily defined in terms of the time or resources required.
However, events can be readily defined, which means it is known that, first, part A will be manufactured, and only then can subassembly S be built, and so on.
PERT offers a lot of advantages for non-repetitive type of projects, R & D, prototype
production, space research, defense projects, etc.
Because of the uncertainty of activity timings, PERT fits into a probabilistic model.
Probability concept helps in estimating activity timings.
PERT is mainly concerned with events and is thus seen as an event oriented system.
PERT Techniques
Expected time, earliest starting time, and latest finishing times are marked on the network
diagram.
Slack is calculated.
Critical path(s) are identified and marked on the network diagram.
Length of critical path or total project duration is found out.
Lastly, the probability that the project will finish by the due date is calculated.
To take care of uncertainty, PERT takes three time estimates into account: optimistic, most likely, and pessimistic times. PERT time estimates follow a beta distribution.
Optimistic time (to): This is the shortest time taken by an activity if everything goes exceptionally well.
Most likely time (tm): It is the time in which the activity is normally expected to be completed under normal contingencies.
Pessimistic time (tp): It is the maximum time that would be required to complete the activity if bad luck were encountered at every turn. This does not include catastrophes like earthquakes, floods, fires, etc.
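The three estimates are usually combined into an expected time and a variance using the standard beta-based PERT formulas, te = (to + 4tm + tp)/6 and variance = ((tp – to)/6)^2; these formulas are standard PERT practice rather than something written out explicitly above. A minimal sketch, with a hypothetical activity:

# Standard PERT expected time and variance from the three time estimates.
def pert_estimate(t_o, t_m, t_p):
    expected = (t_o + 4 * t_m + t_p) / 6          # t_e = (t_o + 4*t_m + t_p) / 6
    variance = ((t_p - t_o) / 6) ** 2             # sigma^2 = ((t_p - t_o) / 6)^2
    return expected, variance

# Hypothetical activity: optimistic 4, most likely 6, pessimistic 14 weeks.
t_e, var = pert_estimate(4, 6, 14)
print(t_e, var)   # 7.0 and about 2.78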
PERT                                           CPM
Activity-on-arrow network construction         Activity-on-node network construction
Multiple time estimates                        Single estimate of activity time
Probabilistic activity times                   Deterministic activity times
For non-repetitive jobs where the time and     For jobs of a repetitive nature where the
cost estimates tend to be quite uncertain      activity time estimates can be predicted
                                               with certainty
In order to undertake network analysis it is necessary to break down the project into a number of
identifiable activities or tasks. This enables individuals to be assigned responsibility to particular
tasks which have a well-defined start and finish time. Financial and resource planning can also be
conducted at the task level and coordinated by the project manager who must ensure that each task
manager is working to the overall project objectives and not maximizing the performance of a particular task at the expense of the whole project.
Activities consume time and/or resources. The first stage in planning a project is to break down the
project into a number of identifiable activities with a start and end. Performance objectives of time,
cost and quality can be associated with each activity. The project is broken down into these tasks
using a work breakdown structure. This is a hierarchical tree structure which shows the relationship
between the tasks as they are further sub-divided at each level.
The next stage is to retrieve information concerning the duration of the tasks involved in the project. This can be collated from a number of sources, such as documentation, observation, interviewing, etc. Obviously the accuracy of the project plan will depend on the accuracy of these estimates. There is a trade-off between the cost of collecting information on task durations and the cost of an inaccurate project plan.
It is necessary to identify any relationships between the tasks in the project. For instance, a particular task may not be able to begin until another task has finished; the task waiting to begin is thus dependent on the former task. Other tasks may not have a dependent relationship and can therefore occur simultaneously.
Critical path diagrams are used extensively to show the activities undertaken during a project and the dependencies between these activities. Thus it is easy to see, for example, that activity C can only take place when activities A and B have completed. Once a network diagram has been constructed it is possible to follow a sequence of activities, called a path, through the network from start to end.
The length of time it takes to follow the path is the sum of the durations of all activities on that path. The path with the longest duration gives the project completion time. This is called the critical path because any change in the duration of any activity on this path will cause the whole project duration to become shorter or longer. Activities not on the critical path will have a certain amount of slack time, within which the activity can be delayed or its duration lengthened without affecting the overall project duration. The amount of slack is given by the difference between the duration of the path the activity is on and the critical path duration. By definition, all activities on the critical path have zero slack. It is important to note that there must be at least one critical path for each network, and there may be several.
There are two methods of constructing critical path diagrams: Activity on Arrow (AOA), where the arrows represent the activities, and Activity on Node (AON), where the nodes represent the activities. The issues involved in deciding which one to utilize will be discussed later. The following description of critical path analysis will use the AON method.
For the activity-on-node notation each activity task is represented by a node with the following
format. Thus a completed network will consist of a number of nodes connected by lines, one for each
task, between a start and end node.
From the duration of each task and the dependency relationships between the tasks it is possible to estimate the earliest start and finish times for each task as follows. You move left to right along the network, forward through time.
If there is more than one task immediately before the current task, take the latest of their earliest finish times to calculate the earliest start time for the current task.
It is now possible to estimate the latest start and finish times for each task as follows. You move right to left along the network, backward through time.
1. Assume the last task's latest finish time is equal to its earliest finish time (unless a project end time is given).
If there is more than one task immediately after the current task, take the earliest of their latest start times to calculate the latest finish time for the current task.
The slack or float value is the difference between the earliest start and latest start (or earliest finish
and latest finish) times for each task. To calculate the slack time
1. Slack = Latest Start - Earliest Start OR Slack = Latest Finish - Earliest Finish
Any tasks with a slack time of 0 must obviously be undertaken on schedule at the earliest start time.
The critical path is the pathway connecting all the nodes with a zero slack time. There must be at least
one critical path through the network, but there can be more than one. The significance of the critical
path is that if any node on the path finishes later than the earliest finish time, the overall network time
will increase by the same amount, putting the project behind schedule. Thus any planning and control
activities should focus on ensuring tasks on the critical path remain within schedule.
Activity   Description                                    Immediate Predecessor   Duration (weeks)
A Develop product specifications None 4
B Design manufacturing process A 6
C Source & purchase materials A 3
D Source & purchase tooling & equipment B 6
E Receive & install tooling & equipment D 14
F Receive materials C 5
G Pilot production run E&F 2
H Evaluate product design G 2
I Evaluate process performance G 3
J Write documentation report H&I 4
K Transition to manufacturing J 2
Solution
Step 2: Add Deterministic Time Estimates and Show the Connected Paths
The longest path (ABDEGIJK) limits the project’s duration (project cannot finish in less time
than its longest path)
ABDEGIJK is the project’s critical path
All activities on the critical path have zero slack
Slack defines how long non-critical activities can be delayed without delaying the project
Slack = the activity’s late finish minus its early finish (or its late start minus its early start)
Earliest Start (ES) = the earliest finish of the immediately preceding activity
Earliest Finish (EF) = is the ES plus the activity time
Latest Start (LS) and Latest Finish (LF) = the latest an activity can start (LS) or finish (LF)
without delaying the project completion
ES, EF Network
LS, LF Network
Calculating Slack
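The forward and backward passes described above can be reproduced for the activity table of this example with a short Python sketch. The activity data are taken directly from the table; the code prints ES, EF, LS, LF and slack for each activity and recovers the critical path ABDEGIJK and a 41-week project duration.

# Forward/backward pass and slack for the activity table above.
activities = {          # activity: (duration in weeks, immediate predecessors)
    "A": (4, []),
    "B": (6, ["A"]),
    "C": (3, ["A"]),
    "D": (6, ["B"]),
    "E": (14, ["D"]),
    "F": (5, ["C"]),
    "G": (2, ["E", "F"]),
    "H": (2, ["G"]),
    "I": (3, ["G"]),
    "J": (4, ["H", "I"]),
    "K": (2, ["J"]),
}

# Forward pass: ES = latest EF of all predecessors, EF = ES + duration.
# (The table is listed in precedence order, so one pass suffices.)
es, ef = {}, {}
for act, (dur, preds) in activities.items():
    es[act] = max((ef[p] for p in preds), default=0)
    ef[act] = es[act] + dur

project_duration = max(ef.values())                  # 41 weeks

# Backward pass: LF = earliest LS of all successors, LS = LF - duration.
successors = {a: [b for b, (_, p) in activities.items() if a in p] for a in activities}
ls, lf = {}, {}
for act in reversed(list(activities)):
    dur = activities[act][0]
    lf[act] = min((ls[s] for s in successors[act]), default=project_duration)
    ls[act] = lf[act] - dur

for act in activities:
    slack = ls[act] - es[act]
    flag = " <- critical" if slack == 0 else ""
    print(f"{act}: ES={es[act]:2} EF={ef[act]:2} LS={ls[act]:2} LF={lf[act]:2} "
          f"slack={slack}{flag}")
print("Project duration:", project_duration, "weeks")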
Although network diagrams are ideal for showing the relationship between project tasks, they do not
provide a clear view of which tasks are being undertaken over time and particularly how many tasks
may be undertaken in parallel at any one time. The Gantt chart provides an overview for the Project
Manager to allow them to monitor project progress against planned progress and so provides a
valuable information source for project control.
- Draw a grid with the tasks along the vertical axis and the time-scale (up to the project duration) along the horizontal axis.
- Draw a horizontal bar across from the task identifier along the left of the chart, starting at the earliest start time and ending at the earliest finish time.
- Indicate the slack amount by drawing a line from the earliest finish time to the latest finish
time.
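A minimal text-only sketch of those three steps is shown below. The task data (earliest starts, durations and slack, in weeks) are hypothetical, loosely based on the first few activities of the example above.

# Text-based Gantt chart: '#' marks scheduled work, '-' marks slack.
tasks = {            # task: (earliest start, duration, slack), all in weeks
    "A": (0, 4, 0),
    "B": (4, 6, 0),
    "C": (4, 3, 18),
    "D": (10, 6, 0),
}

horizon = max(start + dur + slack for start, dur, slack in tasks.values())
for name, (start, dur, slack) in tasks.items():
    bar = " " * start + "#" * dur + "-" * slack
    print(f"{name} |{bar:<{horizon}}|")
print("   " + "".join(str((w + 1) % 10) for w in range(horizon)))   # week ruler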
The use of additional resources to reduce the project completion time is termed crashing the network. This involves reducing overall indirect project costs by increasing direct costs on a particular task. One of the most obvious ways of decreasing a task's duration is to allocate additional labour to the task, either as an additional team member or through overtime working. To enable a decision to be made on the potential benefits of crashing a task, the following information is required:
The cost per unit time of crashing the task to its crash duration.
A task is chosen for crashing by observing which task can be reduced by the required amount of time for the lowest cost. As stated before, the overall project completion time is the sum of the task durations on the critical path.
Thus it is always necessary to crash a task which is on the critical path. As the durations of tasks on the critical path are reduced, however, other paths in the network may also become critical. If this happens, the crashing process has to be applied to all the paths which are critical at any one time.
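A minimal Python sketch of one crashing decision is shown below: among critical-path tasks that can still be shortened, choose the one with the lowest crash cost per week. The crash data and the critical path used here are hypothetical.

# One crashing step: pick the cheapest critical-path task that can still be shortened.
crash_data = {   # task: (normal weeks, crash weeks, crash cost per week saved)
    "A": (4, 3, 800),
    "B": (6, 4, 500),
    "D": (6, 5, 1200),
    "E": (14, 10, 700),
}
critical_path = ["A", "B", "D", "E"]           # taken from the network analysis

candidates = [
    (cost, task)
    for task in critical_path
    for normal, crash, cost in [crash_data[task]]
    if normal > crash                           # task can still be shortened
]
cost, task = min(candidates)
print(f"Crash task {task} first at {cost} per week saved")

# After each crash step the network should be re-analysed, since other paths
# may become critical and then also have to be crashed, as noted above.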