Practical Project Management for Engineers and Technicians
Revision 6
Website: www.idc-online.com
E-mail: [email protected]
IDC Technologies Pty Ltd
PO Box 1093, West Perth, Western Australia 6872
Offices in Australia, New Zealand, Singapore, United Kingdom, Ireland, Malaysia, Poland,
United States of America, Canada, South Africa and India
Copyright © IDC Technologies 2012. All rights reserved.
First published 2002
ISBN: 978-1-921716-54-6
All rights to this publication, associated software and workshop are reserved. No part of this
publication may be reproduced, stored in a retrieval system or transmitted in any form or by
any means electronic, mechanical, photocopying, recording or otherwise without the prior
written permission of the publisher. All enquiries should be made to the publisher at the
address above.
Disclaimer
Whilst all reasonable care has been taken to ensure that the descriptions, opinions,
programs, listings, software and diagrams are accurate and workable, IDC Technologies do
not accept any legal responsibility or liability to any person, organization or other entity for
any direct loss, consequential loss or damage, however caused, that may be suffered as a
result of the use of this publication or the associated workshop and software.
In case of any uncertainty, we recommend that you contact IDC Technologies for
clarification or assistance.
Trademarks
All logos and trademarks belong to, and are copyrighted by, their respective companies.
Acknowledgements
IDC Technologies expresses its sincere thanks to all those engineers and technicians on our
training workshops who freely made available their expertise in preparing this manual.
Contents
Chapter 1 - Fundamentals 1
1.1 Definitions 1
1.2 Project Management 2
1.3 Project Life Cycle 4
1.4 Project organizations 6
1.5 Project success 11
1.6 Project planning 13
4.8 Monitoring and review 60
8.10 Penalties and bonuses 112
Appendix A – Budgets, Variance Analysis, Cost Reporting and Value Management 155
Fundamentals
Learning objectives
The objective of this chapter is to:
1.1 Definitions
1.1.1 Project
Performance of work by organisations may be said to involve either operations or projects,
although there may be some overlap.
Operations and projects share a number of characteristics in that they are:
Table 1.1
Operational vs. project activities
Controlled Yes Yes
The primary objectives of a project are commonly defined by reference to function, time, and
cost. In every case there is risk attached to the achievement of the specified project
objectives.
1.1.2 Program
A program is a grouping of individual, but inter-dependent, projects that are managed in an
integrated manner to achieve benefits that would not arise if each project were managed on
its own.
1.2 Project management
1.2.1 Elements
Successful project management requires that planning and control for each project are properly integrated.
Planning for the project will include the setting of functional objectives, cost budgets and schedules, and the definition of all other delivery strategies. Successful planning requires the proper identification of the desired outputs and outcomes.
Control means putting in place effective and timely monitoring, which allows deviations from
the plan to be identified at an early stage. As a result they can be accommodated without
prejudicing project objectives, and corrective action can be initiated as required.
A project organisation appropriate to the task must be set up, and the duties and
responsibilities of the individuals and groups within the organisation must be clearly defined
and documented. The lack of clear definition of structure and responsibilities leads to
problems with authority, communication, co-ordination and management.
The project management procedures put in place for the project must ensure that monitoring is focused on the key factors, that the results obtained by monitoring are timely as well as accurate, and that effective control systems are established and properly applied by the project team. Project management involves five basic processes:
Initiating: Undertaking the necessary actions to commence the project or project phase
Planning: Identifying objectives and devising effective means to achieve them
Executing: Co-ordinating the required resources to implement the plan
Controlling: Monitoring of the project and taking corrective action where necessary
Closing: Formalising the acceptance of the project or phase deliverables (the
‘handover’), and terminating the project in a controlled manner
Within each of these processes there are a number of sub-processes involved, all linked via
their inputs and outputs. Each sub-process involves the application of skills and techniques
to convert inputs to outputs. An example of this is the preparation of a project network
diagram (output) by the application of the precedence method (technique) to the identified
project activities (input).
Table 1.2
Project management body of knowledge
(a) PMI PMBOK

PROJECT INTEGRATION MANAGEMENT
Project Plan Development
Project Plan Execution
Overall Change Control

PROJECT SCOPE MANAGEMENT
Initiation
Scope Planning
Scope Definition
Scope Verification
Scope Change Control

PROJECT TIME MANAGEMENT
Activity Definition
Activity Sequencing
Activity Duration Estimating
Schedule Development
Schedule Control

PROJECT COST MANAGEMENT
Resource Planning
Cost Estimating
Cost Budgeting
Cost Control

PROJECT QUALITY MANAGEMENT
Quality Planning
Quality Assurance
Quality Control

PROJECT HUMAN RESOURCE MANAGEMENT
Organisational Planning
Staff Acquisition
Team Development

PROJECT COMMUNICATIONS MANAGEMENT
Communications Planning
Information Distribution
Performance Reporting
Administrative Closure

PROJECT RISK MANAGEMENT
Risk Identification
Risk Quantification
Risk Response Development
Risk Response Control

PROJECT PROCUREMENT MANAGEMENT
Procurement Planning
Solicitation Planning
Solicitation
Source Selection
Contract Administration
Contract Close-out

(b) APM BOK

PROJECT MANAGEMENT
Systems Management
Programme Management
Project Management
Project Lifecycle
Project Environment
Project Strategy
Project Appraisal
Project Success/Fail Criteria
Integration
Systems & Procedures
Close-Out
Post Project Appraisal

ORGANISATION and PEOPLE
Organisation Design
Control and Co-operation
Communication
Leadership
Delegation
Team Building
Conflict Management
Negotiation
Management Development

TECHNIQUES and PROCEDURES
Work Definitions
Planning
Scheduling
Estimating
Cost Control
Performance Measurement
Risk Management
Value Management
Change Control
Mobilisation

GENERAL MANAGEMENT
Operational/Technical Management
Marketing and Sales
Finance
Information Technology
Law
Procurement
Quality
Safety
Industrial
1.3 Project life cycle
1.3.1 Lifecycle elements
Projects proceed through a sequence of phases from concept to completion. Collectively,
the separate phases comprise the project ‘life cycle’.
There are only a limited number of generic lifecycles, though the breakdown of the phases within each can be at differing levels of detail. The generic types are usually considered to include capital works, pharmaceutical, petrochemical, defence procurement, research and development, and software development. Consequently, the initial starting point for managing the project is to define the type, and to select an appropriate life cycle model as the planning framework.
Figures 1.1 and 1.2 illustrate generic project life cycles for two project types.
Figure 1.1
Project life cycle: capital works project
Figure 1.2
Project life cycle: defence acquisition project
1.4 Project organizations
1.4.1 General
Where projects are set up within existing organisations, the structure and culture of the
parent organisation has great influence on the project, and will be a deciding factor in
whether or not there is a successful outcome. Where the project team is outside the
sponsoring or client organisation, that organisation may exert significant influence on the
project.
The organisation of the project team also directly influences the probability of achieving a
successful outcome. The benefits and disadvantages of the various options for project team
organization need to be appreciated.
In a functional organization the staff are grouped by specialist discipline. A project may extend across more than a division, but within a division the scope of the project is considered only as it exists within the boundary of that division. Project issues and conflicts are resolved by the functional heads.
In a project management organization the staff are grouped by project, and each group
headed by a project manager who operates with a high level of authority and independence.
Where departments co-exist with the project groups, these generally provide support
services to the project groups.
Matrix organisations may lie anywhere between the above. A matrix approach applies a
project overlay to a functional structure. Characteristics of matrix organisations may be
summarised as follows:
Weak matrix organizations are those closely aligned to a functional organization, but
with projects set up across the functional boundaries under the auspices of a project co-
ordinator. The project co-ordinator does not have the authority that would be vested in a
project manager
A strong matrix organization would typically have a formal project group as one of the
divisions. Project managers from within this group (often with the necessary support
staff) manage projects where specialist input is provided from the various functional
groups. The project managers have considerable authority, and the functional managers
are more concerned with the technical standards achieved within their division than with
the overall project execution
In a balanced matrix the project management is exercised by personnel within functional
divisions who have been given the appropriate authority necessary to manage specific
projects effectively
The different organizational structures, and the corresponding project organization options,
are identified in Figure 1.3. In many cases an organization may involve a mix of these
structures at different levels within the hierarchy. For example, a functional organization will
commonly set up a specific project team with a properly authorized project manager to
handle a critical project.
The influence of the organisation structure on various project parameters is illustrated in
Figure 1.3.
While a matrix approach may be seen as providing an inadequate compromise, in reality it is
often the only realistic option to improve the performance of a functional organization. It does
work, but there are some trade-offs. One factor critical to the effectiveness of the matrix
structure is the authority vested in the person responsible for delivery of the project. A key
predictor of project performance is the title of this person, i.e. whether he/she is identified as a 'project manager' or as something else.
Briefly, the benefits and disadvantages of the matrix approach include the following (see Table 1.3; see also Table 1.4):
Table 1.3
Matrix benefits and disadvantages
Benefits:
All projects can access a strong technical base
Good focus on project objectives
Disadvantages:
Dual reporting structures cause conflict
Competition between projects for resources
Figure 1.3
Project structures within organizations
Table 1.4
Influences of organization
Organization type: Functional | Weak matrix | Balanced matrix | Strong matrix | Project
% Personnel assigned 100% on project: Minimal | 0-25% | 15-60% | 50-95% | 85-100%
Project manager role: Part time | Part time | Full time | Full time | Full time
The project organization typically includes the following parties:
The principal or project sponsor. This is the beneficial owner of the project
The Project Control Group (PCG). In some cases this will be the principal, but when the principal is a large company it is necessary to identify and make accountable certain nominated individuals. The function of this group is to exercise approvals required by the project manager from time to time, to control the funding to the project manager, and to maintain an overview of the project through the reporting process
The project manager. In a ‘perfect world’ the responsibilities, roles and authority of this
person would be defined and documented
A project control officer or group, if this function is not undertaken by the project manager. This group of people is responsible for the acquisition and analysis of data relating to time, cost and quality, and for comparing actual figures with the planned figures
The rest of the project team, which will vary in composition according to the project type,
as well as specific project variables
The project organization may be vertical or horizontal in nature, depending on the span of
control chosen by the project manager. That choice will be a balance between available time
and the desired level of involvement. Typical project structures for a capital works project are
illustrated in Figure 1.4. These illustrate the difference between horizontal, intermediate, and
vertical organisation structures.
In general the horizontal structure is the best option, because the communication channels
between those who execute the project work and the project manager are not subject to
distortion. For instance, in the vertical organisation there is a far higher probability of the
project manager receiving and acting upon inaccurate information. Such inaccuracies may
arise unavoidably, by oversight, carelessly, or deliberately. The impacts can be severe.
Reducing that opportunity, by shortening communication channels and removing the
intermediate filters, improves the likelihood of achieving the desired project outcome.
On large projects the desire to maintain a horizontal structure can be largely achieved by increasing the size of the 'project manager'. This is typically done by augmenting the project manager with support staff who have direct management responsibilities for a portion of the project. Their interests are aligned with those of the project manager, and a higher reliability of information may be expected.
Figure 1.4
Project team organization
1.5 Project success
1.5.1 General
Many projects qualify as successes, but we all have experience, anecdotal or otherwise, of
projects that have gone severely wrong. Project failures exist within all industries. Even
today, with the level of awareness for project management processes as well as advanced
tools, there are spectacular failures. These occur even on very large projects where it is
assumed that the investment in management is high. The consequences of failure can be
significant to the sponsoring organisations as well as project personnel.
A 1992 study of some 90 data processing projects, completed in the previous 12 years,
provides a common profile of experience. The study identified the primary factors affecting
the project outcomes as set out in the Table 1.5. These are listed by frequency and severity
(ranked in descending order of impact) in respect of their negative impact on project
success. This analysis provides an instructive basis for any organisation operating, or setting
up, a project management methodology. Note that most of these issues are project
management issues.
Table 1.5
Project problem issues
(O’Connor & Reinsborough, Int’l Journal of Project Management Vol 10 May 1992)
Issue | Frequency | Severity ranking
Planning/monitoring | 71% | 1
Staffing | 58% | 2
Communications | 42% | 5
Technical | 36% | 7
Management | 32% | 8
Operations | 24% | 11
Organization | 24% | 10
It is a vital step, yet one commonly omitted, to define the project success criteria before
commencing planning and delivery. In other words, define what needs to be achieved if the
project implementation is to be considered a success. The project stakeholders must identify
and rank the project success criteria. The ‘client’s’ preferences are obviously paramount in
this process, and will consider performance right through the life of the product under
development as well as the factors present only during the project.
The objectives of cost, quality, and time are frequently identified as the definitive parameters of successful projects. They are a very useful measure in many capital works projects, where they can be defined in advance, adopted as performance indicators during project implementation, used as a basis for evaluating trade-off decisions, and applied with relative simplicity.
However, this approach to measuring project success is necessarily only a partial assessment in almost every situation. Projects completed within the targets for such constraints may be successful from the perspective of the project implementation team, but not necessarily from alternative viewpoints such as those of the sponsors or users. In some instances projects that are not completed within some of the time/cost objectives may still be considered a success. Common project success criteria include safety, loss of service, reputation, and relationships.
The process of defining ranked success criteria provides surprising insights in many
instances, and enhances project planning. During project implementation the project
success criteria provide a meaningful basis for establishing project performance indicators to
be incorporated within project progress reports. They are also helpful in making trade-offs,
should that become necessary.
Table 1.6
Critical success factors
The project manager and team | Project management expertise, authority, systems, personality, resources
In practice, this is a particularly important and useful framework within which critical success
factors can be identified. Where necessary, these can be managed proactively in order to
maximise the probability of project success.
A survey was conducted amongst members of the PMI, seeking to correlate project success
criteria (specified as time, cost, quality, client satisfaction, and other) against the above
factors. Projects included in the survey covered construction, information services, utilities,
environmental and manufacturing. The study concluded that the critical project success
factors primarily arose from the factors related to the project management and project team.
For each industry the project manager’s performance and the technical skills of the project
team were found to be critical to project outcomes. This confirms the conclusions from the
1992 study noted earlier.
It is important to identify, within this framework, the specific critical success factors which
may impact on the project. It is then the responsibility of the project team to develop strategies to address these factors, either in the planning or in the implementation phase.
Project control procedures
Note: here lies an inconvenience of terminology. The PQP (Project Quality Plan) is much more than a plan for incorporating quality into the project. There is a component within the PQP that deals exclusively with quality issues per se.
This is a list of the assignment tasks and responsibilities. All tasks and activities previously
defined become the responsibility of specified parties. The WBS and OBS may be extended
to define a ‘task assignment matrix’.
Project schedule
The preliminary master schedule for the project identifies the target milestones for the
project, and the relative phasing of the components.
Project budget
In some cases a project budget is established during the feasibility study, without the benefit
of adequate detail of the concepts evaluated. At that stage a maximum cost may have been
established, because expenditure above that figure would cause the project not to be
economically viable. Where such a constraint exists, and if the feasibility study has not
reliably established the cost of the project, it will be necessary to further develop the design
before committing to the project.
Miscellaneous plans
Additional plans may need to be documented here. These include, inter alia, consultations
and risk management. Alternatively, a strategy for these can be defined in the section
dealing with project controls.
Documentation
All of the above project elements must be documented in the Project Plan. Figure 1.5 shows
the inter-relationships of all these entities.
Figure 1.5
The project plan
Monitoring and reporting should include project performance indicators derived from the
Project Success Criteria. Planning should take into account the critical success factors, i.e. it
should address any potential difficulties that may arise from them.
Control procedures need to be established and documented for the management of the
following parameters:
Administration
Procedures for the administration of the project should be defined. These should include
issues such as:
Filing
Document management
Correspondence controls
Administrative requirements of the principal
Scope
The definition of scope change control systems, covering:
Budget and commitment approvals for design, procurement and construction functions
The issue and control of delegated financial authority, to the project manager controlling
consultants and contractors, as well as to consultants controlling contractors
Variation control for changes arising during project implementation
Value engineering
Cost monitoring, reporting and control systems and procedures
Time
The definition of strategies and procedures for scheduling, monitoring and reporting, likely to
include:
Programming methods and strategies for master and detail programmes, i.e. definition of
programming techniques as well as the frequency of review and updating
Progress monitoring and reporting systems and procedures.
Risk
The definition of objectives and procedures for putting in place effective risk management.
Note that there may be a Risk Management activity schedule in the PQP.
Communications
This specifies all requirements for communications within the project and to the
client/sponsor, and is likely to include:
Developing the WBS is fundamental to effective planning and control for the simple reason
that the derived work packages are the primary logical blocks for developing the project time
lines, cost plans and allocation of responsibilities.
Many people either miss out this key step in the project management process, or undertake
the step informally without appreciating how important it is.
Definition and terminology
PMI PMBOK 1996 provides the following definition for a WBS:
A deliverable oriented grouping of project elements which organizes and defines the total
scope of the project: work not in the WBS is outside the scope of the project. Each
descending level represents an increasingly detailed description of the project elements.
The WBS is created by decomposition of the project, i.e. dividing the project into logical
components, and subdividing those until the level of detail necessary to support the
subsequent management processes (planning and controlling) is achieved.
Terminology varies in respect of defining project elements. The use of the terms ‘project’,
‘phase’, ‘stage’, ‘work package’, ‘activity’, ‘element’, ‘task’, ‘sub-task’, ‘cost account’, and
‘deliverable’ is common:
PMBOK terminology provides the following definitions:
Can any package be broken down further into sensible components?
Is each package the responsibility of only one organizational group?
Does every package represent a reasonable quantity of work?
Does any package constitute more than, say, 5% (or 10%) of the project?
Does any package constitute less than, say, 1% (or 2.5%) of the project?
Does every package provide the basis for effective cost estimating and scheduling?
The following example shows the WBS of a project with geographical location at the second
level (see Figure 1.6).
Figure 1.6
WBS for restaurants project
Alternatively, the various functions (design, build, etc) can be placed at the second level (see
Figure 1.7).
Figure 1.7
Alternative WBS for restaurants project
A third alternative shows a subsystem orientation (see Figure 1.8).
Figure 1.8
Alternative WBS for restaurants project
A fourth alternative shows a logistics orientation as follows (see Figure 1.9):
Figure 1.9
Alternative WBS for restaurants project
The WBS could also be drawn to show a timing orientation (see Figure 1.10).
Figure 1.10
Alternative WBS for restaurants project
Note that ‘Design’ and ‘Execution’ in the WBS above are NOT work packages, they are just
headings. ‘Start up’, however, is a work package since it is at the lowest level in its branch.
The WBS could be broken down even further but the risk here is that the lowest-level
packages could be too small. If ‘advertising’, for example, could be accomplished in 100
hours it might be a good idea to stop at that level. It could then be broken up into activities
and tasks (and even sub-tasks); the duration and resource requirements would then be
aggregated at the ‘advertising’ level, but not individually shown on the WBS.
It is, of course, not necessary to use a sophisticated WBS package; a spreadsheet will work
just fine as the following example shows (see Figure 1.11).
Figure 1.11
WBS using spreadsheet
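As a minimal illustration of the roll-up described above (the work packages, tasks and hour figures are hypothetical), a few lines of Python can aggregate task-level estimates to the work-package level in the same way a spreadsheet would:

# A hypothetical fragment of the restaurants WBS, with task-level hour
# estimates that are rolled up to the work-package level.
wbs = {
    "Start up": {
        "Advertising": {"Prepare copy": 40, "Book media": 20, "Launch campaign": 40},
        "Staff recruitment": {"Advertise positions": 16, "Interviews": 60, "Induction": 24},
    }
}

def rollup(node):
    """Return total estimated hours for a WBS branch (leaf values are hours)."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

for package, tasks in wbs["Start up"].items():
    print(package, rollup(tasks), "hours")
print("Start up total:", rollup(wbs["Start up"]), "hours")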
Time management
Learning objective
The objective of this chapter is to provide a comprehensive introduction to the key elements
of effective time management for projects. Time management of a project consists of:
Planning the project activities to a time scale (i.e. the project schedule)
Monitoring performance of the implementation phase
Comparing achieved performance with the project schedule
Taking corrective action to ensure planned objectives are most likely to be met
The level of project planning that we propose requires a significant input of time and energy
at the start of the project, but considerably reduces the content and cost of management
effort during the project implementation phase. The preparation of the project schedule is
only the first, albeit very important, step.
Time management requires the monitoring and control functions to be carried out effectively
so that the project schedule can be adhered to, or so that any variance from the plan does
not prejudice project objectives.
The planning, monitoring and controlling cycle should be in process continuously until the
project is completed. The project schedule should be prepared with some knowledge of the
monitoring system to be employed. The prerequisite for setting up the monitoring system is
the identification of the key factors to be controlled. It may be, for example, the achievement
of specific milestones or particular resource items. The project manager will have to
establish the boundaries within which these factors need to be constrained. Performance
monitoring must focus on outputs, not inputs; i.e. results not effort.
The ‘Critical Path’ method (also known as the Activity on Arrow or AoA method)
The Precedence method (also known as the Activity on Node or AoN method)
The Critical Path method may be the only one many people are familiar with since it is
intuitively attractive. The Precedence method appears, at least from a superficial look at the
comparable diagrams, to be more complex.
However, the Precedence method is far more flexible. The Precedence method has the
advantage of requiring no dummy activities to establish the correct logic for a project. Once
the superficial complexity is overcome, you will find it to be the more powerful tool. Most
computer-based project scheduling software packages use precedence logic and a proper
understanding of the method enables the software to be used to best effect.
Precedence network analyses are normally presented graphically, either as the network
diagram itself, or as a time-scaled bar chart known as a Gantt chart. Critical path networks
can be presented as time-scaled arrow diagrams, or as Gantt charts. All computer-based
project scheduling software packages use Gantt presentations in addition to the network
diagrams.
Project analysis by either method involves the same four steps:
Defining the activities. For the initial project plan this may involve the breakdown of work
packages used as the basic elements for the other components of the PQP
Preparation of the logic sequence to determine the relationships between the activities
Applying activity (time and resource) data for each activity
Analysis of the network
The following is a brief introduction to these techniques. It will, however, be sufficient to allow
you to fully apply both methods to analyze any situation. There is a vast amount of literature
available on the subject that could be consulted for additional guidance.
For an initial analysis the number of activities should be kept to the minimum required to
be useful. This allows for the framework to be developed and checked for consistency
before too much effort has been spent. If found necessary, the activities can be broken
down further at a later stage if that is appropriate
For a definitive plan it is useful to include more detail so that the schedule in the PQP
can be adopted as the baseline for schedule monitoring
Who else will be using the schedule, and for what purpose?
Is it an appropriate master plan, allowing elements to be defined in more detail as
implementation continues?
It is important to note that the word 'operation' or 'activity' is used in its widest sense. It will not only include actual physical events derived from the work packages; anything that may exercise a restraint on the completion of the project should be included as an activity. This will include actions such as obtaining finance, obtaining approval and placing orders, as well as passages of time with no actual activity, e.g. a delivery period.
Figure 2.1
Network example
Figure 2.2 shows that Activity E must await the completion of both Activities C and D before
it can commence.
Figure 2.2
Network example
Figure 2.3 shows Activities G and H as concurrent activities that can start simultaneously
once Activity F has been completed.
Figure 2.3
Network example
When developing the arrow diagram, three questions are asked of each activity in order to
ensure its logical sequence:
Figure 2.4
Numbering of activities
Dummies
Each activity should have a unique identification. If two concurrent activities both start and
finish at the same nodes they will be identified by the same (i, j) numbers, as shown in
Figure 2.5 where both Activities M and N are (15, 30).
Figure 2.5
Concurrent activities
In order to keep the identification of activities unique, a dummy activity is introduced as
shown in Figure 2.6. Activity M is still (15, 30) but Activity N is now (15, 25). Activity (25, 30)
is a dummy. Dummy activities are always represented by a dotted arrow.
Figure 2.6
Dummy activities
Another important use for dummies is to keep the sequence logic correct in a group of
arrows where not all preceding and following activities are interdependent. Suppose we have
a situation where starting Activity D depends on the completion of both Activities A and B,
and starting Activity C depends only on completion of Activity A.
The logic shown in Figure 2.7 is incorrect. It introduces a non-existent restraint, namely that
Activity C cannot start until Activity B is complete.
Figure 2.7
Incorrect logic
The correct logic requires the introduction of a dummy activity. Refer to Figure 2.8.
Figure 2.8
Correct logic
Overlapping activities
Unlike the conventional bar chart, no overlapping of activities is permitted in the arrow
diagram. If overlapping exists between activities, then these activities must be broken down
further to provide sequential activities that may subsequently be analyzed.
Figure 2.9 shows two sequential activities, indicating that Activity 2 starts after all of Activity
1 is complete.
Figure 2.9
Non-overlapping activities
For a small job, this is probably the case. For a large job, however, the two activities may
overlap to some extent. This is shown by breaking both activities down into two activities, as
shown in Figure 2.10.
Figure 2.10
Overlapping activities
Figure 2.11
Activity data
Durations may be determined from calculations, experience, and advice. Estimates should be made on the basis of normal, reasonable circumstances, according to judgment. For a given quantity of physical work, the duration will depend on the resources allocated.
For physical activities the duration will depend on the quantity of work and the resource to be
applied, the efficiency of the resource, location etc. For outside activities an allowance must
be made for adverse weather.
Adding resources
Resources must be included where they are likely to be a limitation either within the project
itself, or where the project competes with others for resources from a pool. See paragraph
6.0.
The critical path
The purpose of analyzing the network is to determine the critical path, and thus the total
project duration. Once the network has been drawn there will, generally, be more than one
path between the start and finish. The project duration for each path is calculated very
simply. By adding the durations for all the activities that make up the path, various total
durations will be determined. The longest of these is the time required for completion of the
project. The path associated with it is, by definition, the critical path.
In many cases the critical path is obvious, or can be located by considering only a few paths,
and this should be determined as a first step. If the total project duration is too long, review
the planning (for example by reviewing the assumed sequencing, constraints, overlap
opportunities, resources, etc) to reduce the critical path before carrying out the detailed
calculations for the whole schedule.
Earliest start time and earliest finish time
The Earliest Start Time (EST) of any activity means the earliest possible start time of the
activity as determined by the analysis. The EST of any activity is the Earliest Event Time
(EET) of the preceding node, i.e.:
ESTij = EETi
The Earliest Finish Time of an activity is simply the sum of its earliest start time plus its
duration, i.e.:
EFTij = ESTij plus duration
But note that:
EFTij = EETj
The EST of an activity is equal to the EFT of the activity directly preceding it – if there is only
a single precedent activity. If an activity is preceded by more than one activity, its EST is
then the latest of the EFTs of all preceding activities. The logic of this should be clear: an
activity can only start when all preceding activities have been completed. The latest of these
to finish must govern the start of the subject activity.
ESTs are calculated by a forward pass, working from the first to the last activities along all
paths. This analysis determines the EFT for the last node, and this is the minimum time for
completing all activities included in the network.
Latest finish time and latest start time
The Latest Finish Time (LFT) of any activity means the latest possible time it must be
finished if the completion time of the whole project is not to be delayed. The LFT for an
activity is the Latest Event Time (LET) of the succeeding node, i.e.:
LFTij = LETj
The Latest Start Time of an activity is simply the sum of its latest finish time less its duration,
i.e.:
LSTij = LFTij minus duration
But also note that:
LSTij = LETi
The LFT for the final activity is taken to be the same as its EFT. The latest times for all other
activities are computed by making a backwards pass from the final activity. The Latest Start
Time (LST) for any activity is obtained by subtracting its duration from its LFT. For each
activity, the LFT must be equal to the LET of the succeeding node. When an activity is,
however, followed by more than one activity, its LFT is equal to the earliest of the LSTs of all
following activities.
The results of the analysis are recorded directly on to the network. The information displayed
is the EET and LET for each node, as shown in Figure 2.12.
Figure 2.12
Results of analysis
Float
Along the critical path none of the activities will have any float; i.e. the EST for each activity
will equal the LST. If any one of those activities is delayed, the completion of the whole
project will be delayed.
In most projects there will be activities for which EST precedes LST, i.e. there is some float.
There are distinct categories of float, of which the following two are the most relevant.
Total float is the difference between the EFT and LFT of any activity. It is a measure of
the time leeway available for that activity. It gives the time by which an activity’s finish
time can be delayed beyond its earliest finish time without affecting the completion time
of the project as a whole. However, using part or the entire total float of an activity will
generally impact on the float available for other activities.
Total Float = LFT-EFT = LFT - EST - duration
The free float of an activity is the difference between its EFT and the earliest of the ESTs of all its directly following activities. The significance of free float is that it gives the time by which the finish of an activity can exceed its earliest finish time without affecting any subsequent activity.
2.3 The Precedence method
2.3.1 General
The Precedence method (also known as the Activity on Node or AoN method) includes the
same four steps as the Critical Path method. There are two fundamental differences
between the Precedence and Critical Path methods of network analysis.
For precedence analysis the data relating to each activity is contained on the node
The arrows connecting the activities can show a variety of logical relationships between
activities
The ability to overlap activities more easily using the Precedence method is a considerable
advantage. This method gives the same results as the Critical Path method with respect to
determining the critical path for the project, and the amount of float available for non-critical
activities (those activities not on the critical path). It is often easier to use for people with no
previous programming experience. The work breakdown is performed as per the Critical
Path method.
Figure 2.13
Time data
Dependencies
The logical links between activities are known as dependencies. These are generally one of
the following three types as shown in Figure 2.14, i.e.
Finish-to-start
Start-to-start
Finish-to-finish
It is also possible, but rare, to have Start-to-finish dependencies.
The dependency may have a time component (shown in the following figure as 'n'). This is known as 'lag'. It may also be just a logical constraint.
Figure 2.14
Start-to-finish dependencies
Note that in all cases the logical relationships between A or B and other activities may well
mean that other constraints control the actual start or finish of Activity B, not the dependency
between A and B.
Activity constraints
The timing of activities may be constrained by factors unrelated to logical relationships
between the activities. By default, analysis assumes that all activities start As Soon as
Possible (ASAP type). However, the start or end date for each activity can be controlled by
defining the constraint on it. If the activity constraint conflicts with the logical relationships
between activities, the activity constraint overrides the logical relationship.
Activity constraints may be of the following types:
Each activity is labeled in the middle, and the duration is shown at the top, in the centre
field.
The dummy activities ‘start’ and ‘finish’ have zero duration
Now the forward pass:
The ‘start’ activity begins and ends at time zero (EST = EFT =0)
Activity A can begin straight away (EST =0) and ends at time 4 (EFT =4)
Activity B can also begin straight away (EST = 0) and ends at time 3 (EFT = 3)
C must wait for A. The earliest it can therefore start is at time 4, ending at 6 (EST = 4, EFT = 6)
D must wait for both A and B to finish and can therefore not start before 4, ending at 8
(EST = 4, EFT = 8)
The project is only finished when D is completed, at time 8
The ‘finish’ dummy task has zero duration, and therefore starts and finishes at 8 (EST =
EFT =8)
Next follows the backward pass:
The ‘finish’ task has zero duration, therefore its LST = LFT = 8
Completion of C can now be delayed until time 8 (LFT = 8), so with a duration of 2 its
LST is now 6
There is no slack in D, so its latest times are the same as its earliest times
Since the earliest D can start is 4, B need not finish before then
A must finish by 4, otherwise it will delay D
The floats are now calculated as LST minus EST, or LFT minus EFT, i.e. the bottom number minus the top number on either the left or the right side of each block. The critical path interconnects all those blocks with zero slack. Remember that it is possible to have more than one critical path, and that the critical path may change once the project is under way (see Figure 2.15).
Figure 2.15
AoN analysis for given example
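For readers who prefer to check the arithmetic, the forward and backward passes for this small example can be reproduced with a short Python sketch (a minimal illustration of the method, not a general scheduling engine; activity names and durations are taken from the example above):

# Activities: duration and list of predecessors (the example used above).
# The dict is ordered so that every predecessor appears before its successors.
acts = {
    "start": (0, []),
    "A": (4, ["start"]),
    "B": (3, ["start"]),
    "C": (2, ["A"]),
    "D": (4, ["A", "B"]),
    "finish": (0, ["C", "D"]),
}

# Forward pass: EST = latest EFT of all predecessors; EFT = EST + duration.
est, eft = {}, {}
for name, (dur, preds) in acts.items():
    est[name] = max((eft[p] for p in preds), default=0)
    eft[name] = est[name] + dur

# Backward pass: LFT = earliest LST of all successors; LST = LFT - duration.
lst, lft = {}, {}
for name in reversed(list(acts)):
    dur = acts[name][0]
    succs = [s for s, (_, preds) in acts.items() if name in preds]
    lft[name] = min((lst[s] for s in succs), default=eft["finish"])
    lst[name] = lft[name] - dur

for name in acts:
    total_float = lst[name] - est[name]   # = LFT - EFT
    print(name, est[name], eft[name], lst[name], lft[name], "float:", total_float)
# Critical path: the zero-float chain start-A-D-finish, project duration 8.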
Figure 2.16
Gantt chart
The following is the result of the AoN analysis, also referred to as a PERT chart (see Figure
2.17).
Figure 2.17
PERT chart
Figure 2.18
Resource analysis
Figure 2.19
Two scheduling scenarios
Assume that in both scenarios Activity A is completed one month late, at which time the
problem is identified for the first time. Under Scenario I the effective rate of improvement
required to complete within the original project duration is 125% (i.e. 5 periods of work
outstanding with 4 periods available to complete). By comparison, in Scenario II the effective
rate of improvement required to complete within the original project duration is 150% (i.e. 3
periods of work outstanding with 2 periods available to complete).
The project manager has a much greater likelihood of achieving the project time objective in
the first scenario.
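The required rate of improvement quoted above is simply the work outstanding divided by the time remaining, which can be checked with a couple of lines of Python:

# Required rate of improvement = work outstanding / time remaining,
# using the two scenarios described above.
def required_rate(periods_outstanding, periods_available):
    return periods_outstanding / periods_available

print(f"Scenario I:  {required_rate(5, 4):.0%}")   # 125%
print(f"Scenario II: {required_rate(3, 2):.0%}")   # 150%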
2.6.3 Monitoring
The project manager must be aware at all times of the actual deviance of the project from
the plan. Monitoring and reporting on progress must be regular and accurate. There is no
justification for the project manager not to know the precise status of the project.
The objective of the progress monitoring system is to:
Identify areas of the project where performance is below expectations
Provide information on such deviations from the project plan in sufficient time for
corrective actions to be usefully applied
Most of the available project scheduling computer software programs allow for progress to
be ‘posted’, that is, for the current status of programmed activities to be updated on the
computer schedule. The network is then re-analyzed to take into account the new data, and
new completion dates for the remaining activities are computed. Generally the updated
schedule can be compared against an earlier baseline schedule and progress variance, both
historical and future, tabulated. This baseline facility is of extreme benefit when monitoring
performance.
A typical graphic progress report can provide the information shown in Figure 2.20 for each
activity in the schedule.
Figure 2.20
Progress reporting
The effectiveness of the reporting is a crucial element of the project control function. A reporting format should be standardized for each project, and all progress reporting should be required to conform to the specified format. To provide effective control the reporting must be:
Timely
Accurate
Easily interpreted
A formal report should include the following information:
Actions required/suggested/recommended to obtain desired changes, and their
cost/resource implications
In addition to the text, a graphic progress analysis clearly indicating actual versus
planned progress should be included.
Progress monitoring and reporting must be a regular activity; at least monthly on projects
over six months, but probably fortnightly or weekly for projects of shorter duration.
Remember that the aim of monitoring progress includes the ability to take action that will
allow any time lost to be regained. This may not be achievable if monitoring occurs at less
frequent intervals than, say, 10 percent of the project duration.
Slippage is commonplace and should be a major concern of project managers. It occurs one
day at a time and project managers need to be ever vigilant to keep it from accumulating to
an unacceptable level. Slippage can be caused by complacency or lack of interest, lack of
credibility, incorrect or missing information, lack of understanding, incompetence, and
conditions beyond one’s control, such as too much work to do. Project managers need to be
on the alert to detect the existence of any of these factors in order to take appropriate action
before they result in slippage.
Project slippage is not inevitable. In fact, there is much project managers can do to limit it.
The tools of progress control are the bar charts or critical path networks described above.
The project manager should take the following steps:
In that case, ease of learning and use becomes the primary criterion. That means, for most
people, a Windows-based tool with sufficient single project or multi-project capability.
In the cases where the scheduling program is required for more complex project
management functions and where the cost of learning to use the software is not an issue,
the basis for the selection will relate to the capabilities of the program required to suit the
demands of the particular situation.
The following elements will be relevant to an informed evaluation of the best tool for a
particular application.
Essential requirements
These must match the specific requirements for the required application in order to be
considered for selection:
Multi-project capability
Number of activities per project
Number of resources per activity
Resource input capabilities
Resource calendar options
Capability to input resource and overhead costs in the form required
Baseline capabilities for presenting update comparisons
Presentation graphics capability and flexibility
Flexibility to tailor cross-tab reports, particularly financial reports
Other considerations
Other issues that will be relevant to the selection include:
Ease of learning
Training available
Hardware requirements
Product support
Cost management
Learning objectives
Effective cost management is a key element of successful project management. The history
of project management is punctuated by projects that have been financial disasters. These
include many high profile projects, where it is reasonable to assume that considerable
planning and effort went into the cost management functions. Examples include the Sydney
Opera House, the Concorde and the Channel Tunnel.
Many small projects do not have more than superficial attention paid to the implementation
of effective cost management systems, but in general the cost overruns on small projects
are very high in percentage terms.
The objective of this section is to review the critical aspects of the cost management process
that must be properly addressed to ensure effective financial control.
Cost management includes the processes required to ensure that the project/contract is
completed within the approved budget. These processes comprise:
Cost estimating
Budgeting
Financial control
Change control
Cost monitoring
Value management
Preliminary Assessment of Cost (PAC): -25% to +25%
Firm Estimate of Cost (FEC): -5% to +10%
Cost indices
Where it is appropriate to allow for escalation, the cost index upon which the estimate has
been prepared must be identified. If the estimate includes provision for escalation, the index
calculated to apply at completion must also be stated. The dollar value of the escalation is
most accurately determined from a forecast cash flow that determines the value of
expenditure, and thus the related escalation, on a quarter-by-quarter basis.
Escalation can only be estimated on the basis of judgment. However, the particular judgment
may only be that of the person doing the estimate, or a projection provided by a recognized
and competent source.
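As a minimal sketch of the quarter-by-quarter approach described above (all cash flow figures and the escalation rate are hypothetical):

# Hypothetical forecast cash flow (cost in base-date dollars, by quarter)
# and an assumed escalation rate of 1.5% per quarter.
cash_flow = [200_000, 450_000, 600_000, 250_000]
rate_per_quarter = 0.015

escalated = [cost * (1 + rate_per_quarter) ** (q + 1)
             for q, cost in enumerate(cash_flow)]
escalation = sum(escalated) - sum(cash_flow)

print(f"Base estimate:   {sum(cash_flow):,.0f}")
print(f"Escalation:      {escalation:,.0f}")
print(f"Escalated total: {sum(escalated):,.0f}")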
Contingency
Contingency is an estimate of cost provided against expected, but currently undefined, costs
associated with the particular risks of the component being estimated. Contingency levels
may be assessed in any of the following ways:
Judgment based on the cost impact of particular risks, and probabilities of each
particular risk arising
Historical experience of similar activities
Organizational policies for risk management
Probabilistic modeling, using appropriate software
Figures for N are published for specific types of equipment, and are valid over a specific
range of capacity, such as liters per minute, for a specific type of equipment such as a
centrifugal pump. N is typically in the order of 0.6 for elements of a process plant, and this
relationship is known as the ‘six tenths rule’.
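The capacity-exponent relationship referred to here is commonly written cost2 = cost1 x (capacity2 / capacity1)^N. A minimal sketch, with hypothetical pump costs and capacities:

# Six-tenths rule: cost2 = cost1 * (capacity2 / capacity1) ** N, with N ~ 0.6.
# Hypothetical figures: a 500 L/min pump known to cost $20,000.
known_cost, known_capacity = 20_000.0, 500.0   # $ and L/min
required_capacity = 1_200.0                    # L/min
N = 0.6

estimated_cost = known_cost * (required_capacity / known_capacity) ** N
print(f"Estimated cost: ${estimated_cost:,.0f}")   # roughly $33,800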
3.5 Budgeting
Purpose
Budgeting involves allocating the cost estimates against the project/contract schedule. The
budgeting process should establish a cost baseline that provides:
accommodated without the need to repeatedly return to the Principal’s board for approvals of
revised budgets.
It is important to understand the distinction between contingency (provision within the
component estimate for expected but undefined cost changes within the project scope) and
management reserve (provision within the budget for possible future project scope changes).
Time base for budget
The time base for the budget, i.e. the ‘spreading’ of expenditures over the budget period, is
defined by allocating the costs for each component against the programmed occurrence of
the activity.
This is achieved directly by the use of appropriate project management software. It is also
very common to use spreadsheets to generate the budget cash flow. Depending on the type
of project, expenditures can be projected in weekly or monthly increments.
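A minimal sketch of this 'spreading' of the budget over time (the components, costs and months are hypothetical, and costs are spread evenly here; a real budget would be phased against the programme):

# Hypothetical budget components, each with a cost and the months over which
# the work is programmed; the cost is spread evenly across those months.
components = [
    ("Design",        90_000, range(1, 4)),    # months 1-3
    ("Procurement",  240_000, range(3, 7)),    # months 3-6
    ("Construction", 450_000, range(5, 13)),   # months 5-12
]

monthly = {}
for name, cost, months in components:
    share = cost / len(months)
    for m in months:
        monthly[m] = monthly.get(m, 0) + share

cumulative = 0
for m in sorted(monthly):
    cumulative += monthly[m]
    print(f"Month {m:2d}: {monthly[m]:9,.0f}  cumulative {cumulative:10,.0f}")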
Documentation of budget procedures
Procedures defining the basis for budget preparation should be standardized and
documented in the same manner as estimating procedures.
The principal maintains full control over the commencement of the expenditure, and the
extent of his liability at all stages of the project
Where more than one individual has responsibility for implementation of the project, their
specific levels of financial authority are clearly defined
Where outside consultants are used to manage contractors, they may be liable to the
principal if they commit funds in excess of the delegated level of financial authority
Project procedures should therefore define the procedures applying to the allocation,
commitment, and revision of financial authority.
Authorization of payments
Payment of every invoice must be subject to formal approval by the individual responsible for the particular budget element. Non-compliance with this obvious requirement is common, and often causes problems relating to payments in excess of the moneys properly owed.
Effective change control is a vital element of project cost control. This process is often
referred to as scope control.
Following approval of the project budget, there will be unavoidable changes to the project
arising from discretionary and non-discretionary sources. For example:
Figure 3.1
Project change notice
3.8.1 Reporting requirements
The basic objective of the financial report must be to provide an accurate status report of
forecast financial cost versus approved budget. To meet this objective, the financial report
needs to include the following information:
Initial budget
Approved budget variances to date
Current budget
Current forecast final cost
Further useful information includes:
A breakdown of the difference between the original estimate and the current forecast
final cost by scope, estimate and escalation variance
Trends in the forecast final cost.
Depending on the value to the project team, the following information could also be included:
Commitments to date
Accrued expenditure to date. Accrued expenditure arises when contract payments are
subject to retentions
Expenditure to date versus planned expenditure to date. This will be of value if cost and
schedule variance are identified using earned value analysis
Forecast expenditure
FFCn = FFC0
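A minimal sketch of the core arithmetic behind such a report (all figures are hypothetical, and the field names are assumptions based on the list above; FFC is read here as the forecast final cost):

# Hypothetical figures for a simple cost status report.
initial_budget      = 1_000_000
approved_variances  =    45_000      # approved budget variances to date
current_budget      = initial_budget + approved_variances

commitments_to_date  = 620_000
forecast_to_complete = 480_000
forecast_final_cost  = commitments_to_date + forecast_to_complete

variance = forecast_final_cost - current_budget
print(f"Current budget:      {current_budget:,.0f}")
print(f"Forecast final cost: {forecast_final_cost:,.0f}")
print(f"Variance (overrun):  {variance:,.0f}")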
Risk management
Learning objective
The objective of this chapter is to set out a basis for risk management that will provide
sufficient understanding of the process for implementing effective risk management for a
specific project.
All projects have associated risks. The extent to which risk exists for a particular project component determines how sensitive successful project outcomes are to that component.
Effective project management requires that, if project outcomes are risk sensitive, relevant
risks are properly managed.
The procedures described in this chapter conform to definitions and processes defined in
document AS/NZS ISO 31000 : 2009 Risk management – Principles and guidelines.
A detailed review of the analytical techniques necessary to undertake comprehensive
quantitative analysis is outside the scope of this chapter. See ISO/IEC 31010, Risk
management — Risk assessment techniques
Figure 4.1
Probability-impact matrix
4.2.1 Definitions
The following definitions will be used throughout this chapter (see Table 4.1).
Table 4.1
Risk-related definitions
4.2.2 Elements
The main elements of the risk management process are:
Communication and consultation
Establishing the context
Risk identification
Risk analysis
Risk evaluation
Risk treatment
Monitoring and review
The cost of applying risk management to a project varies as a function of the scope of the project and the depth to which the process is applied. Costs can be very low (say, a few hours) at one end of the scale, ranging up to several percent of the total management costs if the process is applied in depth and as an ongoing activity throughout all project phases.
4.2.5 Documentation
In order to maintain a record to facilitate ongoing reviews as well as an adequate audit trail,
all components of the risk management process must be adequately documented.
Sample documentation, based on that recommended within AS/NZS 4360:2004, is
appended.
4.3 Establishing the context
4.3.1 General
This step defines the external and internal parameters which need to be taken into account when managing risk, and sets the scope and risk criteria for the risk management policy.
The outputs from this step are:
Definition of the elements within the project to define a structure for the identification and
analysis of risks; and
Definition of risk assessment criteria directly related to the policies, objectives and
interests of stakeholders
This process reviews the external, internal and project contexts.
This analysis reviews the external environment in which the organization seeks to achieve its objectives. The purpose is to identify factors that enhance or impair the ability of the
organisation to manage risks. This includes the financial, operational, competitive, political,
legal, social, and cultural aspects of the operating environment.
4.4 Risk identification
4.4.1 General
The organization needs to identify sources of risk, areas of impact, events (including changes in circumstances), their causes and their potential consequences. The purpose
of this step is to identify all the risks, including those not under the control of the
organisation, which may impact on the framework defined above.
A systematic process is essential because a risk not identified during this step is removed
from further consideration.
4.4.2 Procedure
The first step is to identify all the events that could affect all elements of the framework.
The second step is to consider possible causes and scenarios for each event.
The process of risk identification can be complex, and a planned approach is necessary to
ensure that all sources of risk are identified. This process may involve:
Identifying the key personnel associated with the project, i.e. those whose understanding
of the project environment and the project processes enables them to properly
appreciate the sources of risk
Undertaking structured interviews with these personnel. Checklists should be used to
ensure comprehensive coverage of all project elements. The objective is to determine, from each person, the concerns, constraints and perceived risks within their area of expertise
Organizing brainstorming sessions
Engaging the services of specialist risk analysts
Reviewing past experiences in this regard
4.5 Risk analysis
4.5.1 General
The objectives of risk analysis are to comprehend the nature of risk and to determine the
level of risk. This involves:
Figure 4.2
Risk interview in progress
(Courtesy RiskTrak)
Once the project-related risks have been identified, their chance of occurring and the related
severity of such an occurrence have to be ascertained, together with the method and costs
of addressing the issue. This is done via a conventional probability/consequence matrix as shown in Figure 4.3.
Figure 4.3
Risk appraisal
(Courtesy RiskTrak)
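A minimal sketch of the probability/consequence scoring described above (the 1-5 scales, banding thresholds and example risks are hypothetical; an organization would define its own matrix):

# A simple probability/consequence matrix (hypothetical 1-5 scales): the risk
# rating is the product of the two scores, banded into Low/Medium/High.
def rating(probability, consequence):
    score = probability * consequence
    if score >= 15:
        return score, "High"
    if score >= 6:
        return score, "Medium"
    return score, "Low"

risks = [
    ("Late delivery of long-lead equipment", 4, 4),
    ("Key designer unavailable",             2, 3),
    ("Minor scope growth in fit-out",        3, 2),
]

for description, p, c in risks:
    score, band = rating(p, c)
    print(f"{description:40s} P={p} C={c} score={score:2d} {band}")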
Once this analysis has been completed, the critical variables are apparent. This provides the opportunity to develop a risk management strategy that targets the most critical risks.
Probabilistic analysis
Probabilistic analysis identifies the frequency distribution for a desired project outcome, e.g. total project cost, internal rate of return or total project duration.
The most common form of this analysis uses sampling techniques, normally referred to as
Monte Carlo Simulation. This can only be practically undertaken using an appropriate
software application package.
A mathematical model of the project is developed, incorporating all relevant variables. A
probability distribution is then defined for each variable, and the project model is analyzed
taking into account all risks in combination. This analysis is repeated a number of times,
typically 100 to 1000 passes, and at each pass the value for each variable is randomly
sampled from its assigned probability distribution. The results of all the passes together
provide a frequency distribution of the project outcome. This establishes a mean outcome,
and the range of outcomes possible.
Probabilistic analysis can be performed on cost as well as project schedules. One of the
better-known software packages in this regard is @RISK, although there are various
alternatives on the market, some stand-alone and others as add-ons for scheduling
packages such as MS Project and Primavera. An example of an inexpensive software package
for Monte Carlo analysis on project costs is Project Risk Analysis. The following figures show
the statistical behavior of project costs for a given project (see Figure 4.4).
Figure 4.4
Project Risk Analysis: Cost distribution (Courtesy Katmar Software)
If the above bell curve distribution is integrated from left to right, it yields a so-called S-curve
that indicates the probability that the cost will be less than a given value (see Figure 4.5).
Figure 4.5
S-curve (Courtesy Katmar software)
From the S-curve (specific values on the X-axis are available via the ‘Statistics’ function) it
can be seen that, although a mean cost of $4978 is predicted, there is only a 50% chance that
the actual cost will be at or below that figure. To be 99% confident of covering the cost,
provision has to be made for a cost of up to $5395, i.e. a contingency of $417 or 8.79% is required.
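The contingency calculation can be illustrated in the same way. The sketch below generates a hypothetical set of simulated outcomes, reads the 50% and 99% confidence costs off the cumulative distribution, and reports the contingency over the mean; the figures produced are illustrative only and are not those of the Katmar example above.

```python
import random

random.seed(1)  # repeatable illustration
# Hypothetical simulated total-cost outcomes (in dollars), sorted for the S-curve.
outcomes = sorted(random.triangular(70_000, 118_000, 82_000) for _ in range(1000))
mean_cost = sum(outcomes) / len(outcomes)

def cost_at_confidence(samples, p):
    """Point on the S-curve: the cost that p% of the simulated outcomes fall at or below."""
    return samples[min(len(samples) - 1, int(p / 100 * len(samples)))]

p50 = cost_at_confidence(outcomes, 50)
p99 = cost_at_confidence(outcomes, 99)
contingency = p99 - mean_cost
print(f"Mean ${mean_cost:,.0f}, 50% confidence ${p50:,.0f}, 99% confidence ${p99:,.0f}")
print(f"Contingency for 99% confidence: ${contingency:,.0f} "
      f"({contingency / mean_cost:.1%} of the mean)")
```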
An alternative to Monte Carlo Simulation is the Controlled Interval and Memory Method. For
less complex analyses this technique offers greater precision for less computational effort.
Decision trees
This method has been in use for a considerable time and provides for decision making
based on a relatively crude risk assessment. Decision trees display the set of alternative
values for each decision and chance variable as branches coming out of each node. Figure
4.6 shows the decision tree for the R&D and commercialization of a new product.
Figure 4.6
Decision tree (Courtesy Analytica)
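The underlying calculation is a roll-back of expected values from the leaves of the tree to the root. The following sketch does this for a simplified ‘fund R&D, then launch’ decision using purely hypothetical probabilities, costs and payoffs; it is not a reconstruction of the figures behind Figure 4.6.

```python
# Hypothetical figures for a 'fund R&D then launch' decision tree.
RD_COST = 2.0           # $M spent on R&D
LAUNCH_COST = 5.0       # $M spent to commercialize
P_RD_SUCCESS = 0.6      # chance the R&D works
P_MARKET_SUCCESS = 0.5  # chance the launched product sells
MARKET_VALUE = 20.0     # $M revenue if the product succeeds

# Roll back from the leaves: value of launching, given R&D has succeeded.
ev_launch = P_MARKET_SUCCESS * MARKET_VALUE - LAUNCH_COST   # 0.5*20 - 5 = 5.0
launch_value = max(ev_launch, 0.0)  # launch only if it beats doing nothing

# Value of funding R&D in the first place.
ev_fund = P_RD_SUCCESS * launch_value - RD_COST             # 0.6*5 - 2 = 1.0
decision = "fund R&D" if ev_fund > 0 else "do not fund R&D"

print(f"EV of launching after successful R&D: ${ev_launch:.1f}M")
print(f"EV of funding R&D: ${ev_fund:.1f}M -> {decision}")
```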
Influence diagrams
This is a relatively new technique, used as an interface with computer based risk models to
facilitate development of complex risk models (see Figure 4.7).
Figure 4.7
Influence diagram (Courtesy Analytica)
Decisions, shown as rectangles with sharp corners (i.e. ‘Fund R&D’ and ‘Launch Product’),
are variables that the decision maker has the power to control. Chance variables, shown as
oval shapes (‘Success of R&D’ and ‘Market success’), are uncertain and cannot be
controlled directly. Objective variables, shown as hexagons (‘Market value’), are quantitative
criteria that need to be maximized (or minimized). General variables (not shown here)
appear as rectangles with rounded corners, and are deterministic functions of the quantities
they depend on.
Arrows denote influence. If ‘Market success’ influences ‘Market value’ it means that knowing
the extent of the ‘Market success’ would directly affect the beliefs or expectations about the
‘Market value’. An influence expresses knowledge about relevance and does not necessarily
imply a causal relation, or a flow of material, data, or money.
Influence diagrams show the dependencies among the variables more clearly than decision
trees would. Although decision trees show more details of possible paths or scenarios as
sequences of branches from left to right, all variables have to be shown as discrete
alternatives, even if they are actually continuous. In addition, the number of nodes in a
decision tree increases exponentially with the number of decision and chance variables and,
as a result, Figure 4.6 would need in excess of a hundred nodes to display the decision tree
for Figure 4.7, even if we assume only three branches for each of the two decisions and two
chance variables.
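As a quick check of that claim: with three branches for each of the four variables, a fully expanded tree contains one root node plus 3, 9, 27 and 81 nodes at successive levels, i.e. 1 + 3 + 9 + 27 + 81 = 121 nodes in total, comfortably in excess of one hundred.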
4.6.1 General
Risk evaluation is used to assist in making decisions based on the outcomes of risk analysis
as to which risks need treatment and the priority for such treatment. This involves comparing
the levels of risks determined from the analysis process against the acceptance criteria
previously established.
The output from the risk evaluation is a prioritised list of risks requiring further action.
4.7.1 General
Risk treatment involves identifying the range of options available for modifying those risks
identified as requiring action in the previous stage, evaluating those options in respect of
each risk, and developing and implementing risk treatment plans.
Note that some risk response activities may have been undertaken during the qualitative
analysis step, if the urgency of developing a response to specific risks warranted it.
Risk avoidance means not proceeding with the activity or situation giving rise to the risk
Risk taking means taking or increasing the risk to pursue an opportunity
Risk removal eliminates either the likelihood or the consequences of the risk
Risk reduction reduces either the likelihood (e.g. by training programmes, QA
programmes, preventative maintenance, etc) or the consequences (e.g. by contingency
planning, design features, isolation of activity, etc)
Risk transfer transfers all or part of the risk to another party. Mechanisms include the
use of contracts, insurance, and organisational structures (e.g. a joint venture)
Risk retention can be adopted by informed decision.
There are two additional classifications for risk treatment responses,
viz. immediate or contingency.
An immediate response is one where the project plan is amended in order for the
identified risk to be avoided, or its impact minimized
A contingency response is one where provision is made within the project plan for a
contingent course of action that will only be initiated if the risk occurs
4.8.1 General
Monitoring and review of all elements of the risk management programme is essential.
The specific risks themselves, as well as the effectiveness of the control measures, need to
be monitored. Few risks remain static and factors impacting on the likelihood or
consequences may change.
Changing circumstances may alter earlier priorities, and factors impacting on the cost/benefit
of management strategies may vary. If, following treatment for a specific risk, there is still a
residual risk, then the management of that risk needs to be investigated.
5
Quality management
Learning objective
The purpose of this chapter is to:
Set out some fundamental concepts and definitions relating to quality and quality
management
Define the components of a quality system
Review the application of ISO 9000:2005 quality guidelines to the development and
certification of quality systems
Define the components of effective quality assurance in a project environment
5.1.1 General
Historically, quality was achieved entirely by reliance on quality control procedures in
isolation; that is, by inspections of the product during manufacture, and/or on completion.
Defects that were uncovered required remedial work, often expensive in terms of time and cost,
and not all defects were found before in-service failures occurred. With the increase in complexity of
systems and manufacturing processes, the limitations of the historical approach have
necessitated a different philosophy. Real evidence of quality in all processes and activities
involved in the generation of the output (design, manufacturing, installation and
commissioning) must now be demonstrated. Incorporating quality assurance into each of
these processes using a systematic approach promotes the reliable achievement of quality
objectives.
Quality management is a key issue for many businesses in today’s environment, and many
clients rank demonstration of quality management competencies highly when evaluating
potential suppliers. The benefits to the purchasers include better quality of services and
products, cost benefits due to improved efficiencies within the suppliers’ processes, and
lower whole-of-life costs. Suppliers derive competitive advantages from the lower costs as
well as from attaining certification to recognized standards.
Achievement of quality is a management function for which responsibility cannot be
delegated. If it is to be implemented successfully, the quality program must be seen to have
the commitment of senior management.
Quality
Quality refers to the totality of features and characteristics of a product or service that bear
on its ability to satisfy stated or implied needs (ISO 8402). In a contractual situation quality
requirements are generally specified. In other situations the needs that are implied rather
than expressed must be identified. Needs are normally defined as criteria to be achieved
with respect to defined characteristics. In some situations the needs may change over time.
Quality assurance
Quality Assurance refers to all those planned and systematic actions necessary to provide
adequate confidence that a product or service will satisfy all requirements laid down by a
given quality policy (ISO 8402).
Quality Assurance is not an add-on to a process. Instead, its success depends on
commitment to the philosophy of total integration of quality planning and implementation
throughout all component activities.
Quality audit
A Quality Audit is a systematic independent examination to determine whether quality
activities and related results comply with planned arrangements, whether these
arrangements are implemented effectively, and whether they are suitable to achieve the
objective (ISO 8402).
Quality Control
Quality Control consists of the operational techniques and activities used to fulfill
requirements for quality (ISO 8402).
Quality management
Quality Management is that aspect of the overall management function that determines and
implements quality policy (ISO 8402).
Quality plan
A Quality Plan is a document setting out the specific quality practices, resources and
sequences of activities relevant to a particular product, service, contract or project (ISO
8402).
Quality planning
Quality Planning, in general, identifies which quality standards are relevant to the project and
determines how to satisfy them.
Quality system
A Quality System is the organizational structure, responsibilities, procedures, processes and
resources for implementing quality management (ISO 8402).
Total Quality Management (TQM)
TQM is an approach for continuously improving the quality of goods and services delivered
through the participation of all levels and functions of the organization, in order that fitness
for purpose is achieved in the most efficient and cost-effective manner. TQM systems
provide company-wide operating structures, documented in effective and integrated
technical and managerial procedures, which guide and co-ordinate company activities
across administration, marketing, technical, sales, etc.
Customer focus
The ultimate purpose of the quality system is to ensure complete customer satisfaction with
the goods or services provided. Focus on the requirements of the customer at every stage in
the process is fundamental to ensuring satisfaction. The customer-supplier relationship is
therefore of great significance; every quality system should involve the customer, either
directly or indirectly. Customer feedback provides the best inputs from which to define
required improvements. The focus on the customer has the following elements:
Quality management will only be effectively implemented where the chief executive is
committed to its success. That commitment is demonstrated by:
Figure 5.1
Quality systems – creating the customers/supplier chain
Figure 5.2
External and internal chains
There needs to be a positive interaction between the internal and external processes at all
times; i.e. the customer should be involved throughout.
Figure 5.3
A component-based TQM model
Figure 5.4
An action-based TQM model
The elements of TQM may be summarized in either a component-based model (Figure 5.3) or an
action-based model (Figure 5.4). A component-based TQM model has the following attributes:
The fundamental platform for Quality Management, in any organization, is the Quality
Assurance program, also known as the Quality System. This comprises a documented set of
policies, organizational structures, systems and procedures serving to implement the quality
policy of the organization.
The work of Stebbing in developing the framework for systematic quality assurance has
become the basis of the modern approach to quality system development. His reference
text Quality Assurance – the route to efficiency and competitiveness, [Third Edition] 1993,
pub. Ellis Horwood Ltd, is recommended reading.
The Quality System consists of the following elements:
Quality policies
This includes the Policy Statement as well as the Quality Objectives of the organization, and
the organizational structure and responsibilities.
System outlines
For each significant area of activity of the organization, the various systems or processes
that will be subject to quality procedures are defined. The System Outlines include details of
which procedures are to be applied in particular circumstances, and associated
responsibilities for the procedure application.
5.2.1 Procedures
Detailed procedures are developed for each system element identified in the systems
outlines. These procedures should be appropriately detailed, and will generally include
reference to quality documents (standard checklists, record sheets, etc) to be completed as
part of the process in undertaking the defined procedure.
This hierarchy of a typical Quality System documentation is represented in Figure 5.5.
Figure 5.5
Quality assurance program hierarchy
The Quality Manual is a document of intent, not detail. The detail is reserved for the
procedures.
5.3.1 General
The International Organization for Standardization (ISO) has developed quality system
guidelines that have gained international recognition as the benchmark for quality assurance
systems.
The objectives of these standards are firstly to provide purchasers of products as well as
services with independent evidence that the supplier has in place systems to ensure that the
desired level of quality will be attained. In addition, they provide suppliers with defined
guidelines that provide the basis for implementing a certified quality assurance system.
It may be argued that adherence to these standards will automatically lead to a quality
output. This is, unfortunately, not so. While certification of quality system compliance with the
relevant ISO standard does mean that an organization is properly applying documented
quality systems, it does not mean that those quality systems are necessarily appropriate.
ISO 9001- For use where quality assurance is to comply with specified requirements
during several stages, which may include design/development, production, installation
and servicing
ISO 9002- For use where quality assurance is to comply with specified requirements
during production and installation
ISO 9003- For use where quality assurance is to comply with specified requirements
solely at final inspection and test
Guidance as to the selection of the appropriate standard to apply is set out in ISO
9000:2005.
Table 5.1
ISO 9000:2005 quality system elements defined
This involves the definition of the processes required to provide the project deliverables
defined in the project scope statement.
Standards
This involves the definition of specific standards applying to the project. The outputs from the
quality planning process are:
Project quality procedures. These are defined process controls for each process within
the project.
Project inspection & test plans (ITPs). These define the required programme for applying
quality controls.
Acceptance/reject/rework decisions
Test documentation
Adjustments to the process
These are issued to design consultants and define the requirements for quality systems and
procedures to be applied by them. These will typically cover:
Demonstrate the power of applying Earned Value analysis to measuring and predicting
project performance
Develop skill in the application of the Performance Measurement System
The basic approach is to periodically measure progress and cost in comparable units against
a baseline. The units of measurement are usually dollars, although man-hours are also
sometimes used.
6.1.1 General
A significant benefit of the EVM approach is that most of the data can be presented
graphically, in one macroscopic view of the project progress. Because there is a common
basis for these measurements, all individual measurements can be summarized (rolled up)
to any level, up to the entire project. Each work item is given a weight factor equal to its
planned dollar value or Budget At Completion (BAC) divided by the total of the BACs in any
measured grouping.
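As a simple sketch of this roll-up (the work items and BAC figures below are hypothetical), the group earned value and the rolled-up percent complete can be computed as follows:

```python
# Hypothetical work items with Budget At Completion (BAC) and percent complete.
work_items = [
    {"name": "civil works", "bac": 50_000, "pct_complete": 0.60},
    {"name": "electrical",  "bac": 30_000, "pct_complete": 0.20},
    {"name": "software",    "bac": 20_000, "pct_complete": 0.50},
]

total_bac = sum(item["bac"] for item in work_items)

# Each item's weight is its BAC divided by the total BAC of the grouping, so the
# rolled-up percent complete is simply total earned value over total budget.
earned_value = sum(item["bac"] * item["pct_complete"] for item in work_items)
rolled_up = sum(item["bac"] / total_bac * item["pct_complete"] for item in work_items)

print(f"Group earned value (BCWP): ${earned_value:,.0f}")   # $46,000
print(f"Rolled-up percent complete: {rolled_up:.0%}")        # 46%
```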
Because all measurements are based on their dollar value, they can all be plotted on the
same graph, with TIME on the x-axis and DOLLARS on the y-axis. Refer to Figure 6.1 for an
illustration of the application of EVM.
Figure 6.1(a) depicts a traditional cost report comparing actual expenditure to budget. This
does not assist the project manager in determining the status of the project. On the surface,
expenditure is close to the budget and there would appear to be little cause for
concern.
Figure 6.1(b) analyses the same data to provide information on the cost and schedule
variances. In this case both variances are negative: the project is behind schedule, and the
cost of the work completed is higher than budgeted. Thus the true position is that the project
is poised for significant cost overruns as well as time delays.
Figure 6.1a
Typical expenditure: budget
Figure 6.1b
EVM – cost and schedule variance
Table 6.1
EVM terminology defined
                   Totals   Period 1   Period 2   Period 3   Period 4
Connections (No)      100          0         30         40         30
Man-hours (Mh)         80          0         24         32         24
Progress evaluation
Assume that it is now the end of period 2, and you are conducting measurements of
progress and cost. Your progress measurement indicates that you have completed 20
connections so far (versus the 30 planned).
Because you have to connect a total of 100 widgets, you are now 20% complete with this
work item. Therefore you have an earned value (BCWP) of $400 (20% x $2,000).
You have incurred 20 man-hours of labor against this cost code to the end of period 2. Thus
you can calculate the costs to date (ACWP) of $500. (20 x $25 = $500)
EVM analysis
Applying the formulae:
CV = BCWP - ACWP. In this example it is $400 - $500 = - $100. The negative value
means performance is worse than planned.
SV = BCWP - BCWS. In this example, it is $400 - $600 = - $200. The negative value
means performance is worse than planned. Note that although this is a schedule variance,
really indicating how far you are ahead or behind in terms of your scheduling for this project,
it is expressed as a dollar value and hence it can be shown on the same graph as the other
variables.
Several formulas can be used to compute the EAC. The answers can be considerably
different even though based on the same data, and the answers are equally legitimate.
In this example we might assume that the future is likely to be directly proportionate to the
past. Thus, since it cost $500 to do $400 worth of work, it will cost BAC/CPI = $2,000/0.8 to
complete all the work, that is:
EAC = $2,500.
However, it may equally be valid to assume that future work will be completed at the original
planned rate and cost. Some software packages make this assumption. On that basis:
ETC = 80 x $20 = $1,600, and therefore
EAC = ACWP + ETC = $500 + $1600 = $2,100
The reality is probably somewhere between the two. In the latter case, where the
outstanding work is expected to proceed at the original planned rate, ETC is often expressed
as FTC (Forecast To Complete) and EAC is then expressed as FAC (Forecast At
Completion). This could be confusing, but the important thing is to understand the concepts
behind this and not to become entangled in acronyms.
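The whole worked example can be reproduced in a few lines of Python. The figures below are those used above, and both of the EAC assumptions discussed are shown:

```python
# Figures from the widget-connection example above.
BAC  = 2_000.0   # budget at completion: 100 connections at $20 each
BCWS = 600.0     # planned value at end of period 2 (30 connections x $20)
BCWP = 400.0     # earned value: 20 connections actually completed x $20
ACWP = 500.0     # actual cost: 20 man-hours x $25

cv  = BCWP - ACWP   # cost variance: -100 (over cost)
sv  = BCWP - BCWS   # schedule variance: -200 (behind schedule)
cpi = BCWP / ACWP   # cost performance index: 0.8

# EAC assuming future performance mirrors past performance.
eac_trend = BAC / cpi                 # 2000 / 0.8 = 2500

# EAC assuming the remaining work proceeds at the originally planned rate.
etc_plan = (100 - 20) * 20.0          # 80 connections x $20 = 1600
eac_plan = ACWP + etc_plan            # 500 + 1600 = 2100

print(f"CV = {cv:+.0f}, SV = {sv:+.0f}, CPI = {cpi:.2f}")
print(f"EAC (past trend continues): ${eac_trend:,.0f}")
print(f"EAC (planned rate resumes): ${eac_plan:,.0f}")
```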
7
Review accepted theories relating to the key elements of management and leadership
Identify the personal attributes that influence the effectiveness of the project manager
Review organizational cultures, and the implications for project management
Identify key issues relating to the development of effective project teams
Review the factors that influence the authority and power of the project manager
Identify responsibilities of the project manager that are the essential elements of the
project management function
Discuss the alternative strategies for appointing project managers
Intelligence. They have an above-average intelligence without being geniuses, as well as
the ability to identify common themes, solve complex problems and comprehend the
work of team members
Proactive. This refers to the ability to identify the need for action, and the strength of
character to initiate the required action
Self-assurance. They have sufficient confidence in themselves to believe that what they
are doing is appropriate
Helicopter mind. They have the ability to move within different levels, i.e. to understand
the detail but without losing the global view
Style theories
Certain types of behavior are more effective for a leader than others:
Figure 7.1
Situational leadership model (Hersey & Blanchard)
In this model the follower is placed in one of four categories of ‘readiness’, viz. D1
through D4. The leader should ideally respond with the appropriate leader behavior, viz. S1
through S4. In the following discussions, ‘unable’ means technically incapable of performing the
task at hand, and ‘unwilling’ means being hesitant or feeling insecure (e.g. because of a lack
of relevant experience), or simply having an attitude (motivation) problem.
The behavior of the leader is shown on two axes. The horizontal axis shows the extent of the
leader’s task behavior (i.e. directive behavior or degree of guidance to the follower). Towards
the left the degree of the leader’s intervention diminishes, while it increases towards the
right. The vertical axis represents the degree of supportive behavior (i.e. relationship
behavior). It decreases towards the bottom and increases towards the top. The leader-
follower relationship can now be summarized as follows:
leader is lower on directive behavior than in S1 and S2, and relatively high on supportive
behavior
D4-S4: The follower is now able and willing. The leader can turn over decisions and
responsibilities for the task (i.e. delegate the task) to the follower. In this case the leader
is low on both directive and supportive behavior as there is no need for either. There are,
however, two very important points to consider.
o Delegating a task to a person who is not able and willing to perform the required task
is a surefire recipe for disaster
o One can delegate responsibility but not accountability. The leader delegating the
task still has to keep a watchful eye over it, as ‘the buck still stops’ with the leader if
things go wrong
7.2.1 General
The culture of the organization within which the project is implemented, and the culture
developed within the project team, have significant impacts on the success of the project
management effort. Project managers need to be aware of these influences, how to react to
them, and how to modify them.
The following discussion sets out a definition of culture within these environments, identifies
the characteristics of organizational cultures, recommends strategies to be adopted by the
project manager to operate effectively within those different cultures, and examines the
cultural issues arising within the project team. Characteristics of an effective project team are
listed.
7.2.3 Organizational cultures
A useful model of organization cultures has been developed by Cameron (Cultural
Congruence, Strength and Type: Relationships and Effectiveness; Working Paper No 401,
Univ of Michigan Graduate School of Bus, Ann Arbor, MI, 1984). This model is based on
the Jungian framework of psychological functions which states that decision making occurs
within any individual on a continuum between thinking (rational, logic based) and feeling
(based on whether the particular approach is pleasing or not pleasing), and that information
gathering occurs on a continuum between sensing (based on measured or observed values
and data) and intuiting (based on feeling). Jung postulated that individuals tend to combine
an information gathering preference (sensing or intuiting) with a decision making preference
(thinking or feeling) to understand and act on information. The Cameron model is
represented in Table 7.1.
Table 7.1
Model of organizational culture types
The key to Table 7.1 is as follows: [1] Leadership style, [2] Basis for bonding within the
organization, [3] Strategic concerns and [4] Values emphasized.
The characteristics of each type of organization are:
Hierarchical. The hierarchical culture is very bureaucratic. Individuals look internally for
information, and make rationally based decisions. Typically, they build complex
administrative systems to regulate their operations. Mining companies often have this
type of culture
Clan. The clan culture emphasizes flexibility in the decision making style, and an
internally orientated approach to collecting information on which the decisions are
based. An example is Intel
Market. The market culture is orientated towards the external environment, and typically
adopts highly analytical decision making processes. An example is the
U.S. consumer commodities company Procter & Gamble
Adhocracy. Adhocracy cultures are flexible in their decision making processes, are
oriented towards the external environment, and are innovative and entrepreneurial.
An example is the 3M Company
When interacting with a new organization, or component departments, it is important that
project managers identify the type of culture operating. This is necessary in order for them to
develop an ‘influence strategy’ that will maximize their ability to obtain the required
cooperation and performance. Elmes & Wilemon (Organization Culture and Project Leader
Effectiveness, Vol XIX, Project Management Journal, 1988) postulate the following
strategies as guidelines.
Hierarchical. Adopt the bureaucratic approach – i.e. work ‘within appropriate channels’,
and adopt precise detailed communications. To gather the required support, look for
‘trades’. It may be useful to appeal to the sense of tradition within many long-lived
bureaucracies. Innovation can create obstacles. Conflict resolution is typically
avoidance, including postponement. The project manager will assist conflict resolution
within this environment by providing a ‘face saving approach’, whereby issues and
underlying feelings are expressed without the dispute becoming public. This requires
sensitivity and a degree of private communications
Clan. Focus on consensus building. Create a critical mass of support by involving many
key players. Listening, expressing concerns and communicating trust are important if
people are to make a commitment. Handle conflicts by collaborative methods
Market. This culture is highly competitive, and the project manager may need to enable
others to see benefits to themselves from project participation. It is useful to attract the
support of ‘stars’ within the organization. Competition is a primary means of resolving
conflict
Adhocracy. Adhocracies have weak authority structures, and the project manager needs
to be flexible and creative to gain support. Provide participants with freedom and
intellectual challenge, and expose them to new problem-solving techniques. Maintain focus on
the task, but do so with sensitivity. Conflict should be solved by collaborative methods
Clarifying roles, responsibilities and levels of authority for all team members
Setting up effective channels of communication with all team members
Setting realistic performance expectations for each team member
Delegating effectively (instead of over-supervising)
Publicly rewarding good performance
Acknowledging legitimate concerns and conflicts within the team
Team building progresses through a series of defined phases to ultimately reach a high level
of sustained performance. In the first stage, team members develop an understanding of the
project objectives, their roles and responsibilities, and of each other. In the second stage,
team members form a definitive view of their importance to the team and the project. During
this stage conflicts between team members commonly arise. In the third stage the conflicts
have been resolved and the members of the team are able to focus on execution of the
project. In the final phase the team members evaluate what has been learned from
completion of the task. At each stage the project manager can communicate values that
move the team to the next stage.
The stages in team development are commonly referred to as the processes of ‘forming’,
‘storming’, ‘norming’ and ‘performing’.
newcomer to sort out things for herself (S4 style with a D1 follower) or trying to prescribe to
an individual to whom a task had been delegated (S1 style with a D4 follower) can lead to
friction and loss of motivation within the team.
There are, however, other issues that affect individual performance, sometimes within the
ambit of the project manager’s control, and sometimes not. Herzberg has studied the topic
and arrived at a list of so-called motivators and hygiene factors. A motivator is a cause for
satisfaction, while a hygiene factor is a cause of dissatisfaction. An interesting observation
from the results of his studies is that the lack of a motivator does not automatically make it a
hygiene factor (de-motivator) and vice versa.
From the following figure it is evident that the major motivators are (in descending order of
importance):
Achievement
Recognition
Work itself
Responsibility
Advancement
Personal growth
The absence of these factors, though, does not create a commensurate level of
dissatisfaction.
On the other hand, the major hygiene factors are:
Figure 7.2
Herzberg’s motivators and hygiene factors
Organizational. This refers to the authority vested in the individual by the corporate
structure and policies, the individual’s position within the organization, and the formal
delegation of authority from superiors
Project. This is authority arising from the terms of reference for the specific project, and
the manner in which the project organization has been structured and the control
mechanisms defined
de Facto. This is authority arising from the personal and political skills of the individual,
and the influence those skills have on allowing the individual to obtain a significant role
in the communication process and decision paths within the project.
The level of authority required by the project manager is variable, and dependent upon the
level of accountability to which the project manager is held. Generally, project managers
should have more authority than their responsibilities dictate.
Common causes of problems within projects that arise from authority issues include:
7.3.2 Power of the project manager
Power comes from the credibility, expertise and decision making skills of the project
manager. Power is granted to the project manager by the project team as a consequence of
their respect for performance in those areas.
Preparing and being responsible for the project plan. Preparation of the project plan is
the first opportunity for the project manager to show leadership. It is a very critical step,
being the key to all parties involved having a common understanding of objectives,
constraints, responsibilities and interactions
Defining, negotiating and securing resource commitments. Project managers should
define, negotiate and secure commitments for the personnel, equipment, facilities and
services needed for their projects
Managing and coordinating interfaces. Projects are typically broken into sub-projects
and activities that can be assigned to individuals or groups for accomplishment.
Whenever the main project is broken down this way, project managers must manage
and co-ordinate the interfaces that are formed by subdividing the work
Monitoring and reporting progress and problems. Project managers are responsible for
reporting progress and problems on the project to the client, and for organising and
presenting reports and reviews as necessary. The project manager is the ‘focal point’ for
the team, and should be seen as such
Advising on difficulties. Occasionally a project manager finds that the only way to
overcome problem situations is to get help from outside the project. Suffering the
difficulty in silence will not be rewarded. The project manager’s management and client
must be alerted so that other resources can be brought to bear, constraints can be
relaxed, or project objectives can be adjusted. It is not sufficient just to mention the
difficulty in passing. The discussion must be an overt act and should be confirmed in
writing. If the management or customer agrees to take specific action to resolve the
difficulty, that should be in writing as well
Maintaining standards and conforming to established policies and practices. Project
managers must set and maintain the standards that will govern the project staff
members, or ensure conformity with established practices
Developing personnel. It is the project manager’s responsibility to ensure that the staff
involved on a specific project has the range of skills necessary to accomplish the task.
Where this requires development of existing skills, the project manager must identify the
need and arrange for the necessary training to be provided.
7.6.1 General
Given the consequences of the appointment, selection of the project manager requires careful
consideration, and must not be just an arbitrary action.
The primary issue in making a specific selection is what criteria should be applied to
evaluate the suitability of a potential project manager. Technical project management skills
and knowledge base requirements are covered elsewhere in this course. The preceding
discussion defined specific skills and competencies required of the project manager as an
effective leader.
There is a view that the definitions of a good project manager as outlined in this chapter are
of limited application, given the reality that the choice of an individual must be made from a
very limited group, amongst which there will not be one whose profile closely matches the
desired ‘super person’. This view is not acceptable. While that reality may generally apply, it
is still highly relevant for the potential employer to be well-informed as to the essential and
desirable characteristics required of an appointee.
There are a number of approaches that can be adopted to procure a project manager. These
include:
In-house selection
Recruitment
External consultant
Each approach has specific issues, discussed below.
The appointee’s familiarity with the organization, its culture, procedures and personnel
The relative ease with which the appointment can be organized
The ability to make the position only part-time if dictated by the workload of the project
management function
Whether or not this option is practical depends upon a number of factors:
The capacity of the personnel within the organization to accept the additional workload
created by the promotion of the incumbent
The skills and experience of the potential incumbent to undertake the management of
the particular project
It must be recognized that only a minority of people have the potential to be effective project
managers. However, those individuals will only make good project managers if they have the
opportunities to learn and develop the skills and techniques required.
7.6.3 Recruitment
Recruitment is an appropriate option where there is sufficient ongoing work to justify the
additional increase in staffing. It has the benefit of allowing the necessary skills to be
acquired at a lower cost than employing consultants, and widens the range of potential
appointees to be considered for the position. It can also provide the opportunity to transfer
skills within the organization, although this may be better achieved through formal training
using external consultants or trainers.
Adequacy of standard control procedures to be applied
Provided it is relevant and practical, the engagement of external consultants should provide
for a transfer of knowledge to the client organization.
The technocrat. These individuals are so concerned and involved with the detail of the
technology that they don’t have the time, ability, or desire to manage the project
effectively. They are unable to delegate the performance of the tasks, and commonly
lack the interpersonal skills necessary to build an effective team
The bureaucrat. This refers to an individual who is more concerned with the
administration of the project than actually achieving the desired results. Ensuring that the
way in which the work is done conforms to the prescribed procedures is seen as more
important than the quality or effectiveness of the effort
The salesman. This is the individual who accomplishes little, but directs his efforts to
presenting the project as being successful
Contract law differs from country to country, even amongst Commonwealth countries.
Readers are therefore encouraged to familiarize themselves with the latest developments in
contract law in their own countries and also in those of their foreign trading partners.
Despite some differences, there is also a remarkable degree of similarity between the
contract laws in different countries, partly due to the efforts of UNIDROIT, the International
Institute for the Unification of Private Law. UNIDROIT is an independent intergovernmental
organization with its seat in the Villa Aldobrandini in Rome, and has 61 member ‘States’ including
Australia, Canada, China, South Africa, the UK, and the USA. It was set up in 1926 as an
auxiliary organ of the League of Nations, and following the demise of the League it was re-
established in 1940 on the basis of a multilateral agreement, the UNIDROIT Statute. Its
purpose is to study needs and methods for modernizing and harmonizing private and, in
particular, commercial law as between States and groups of States.
It comes from a higher court in the same hierarchy
There must be an essential statement in the legal decisions (ratio decidendi). The ratio
decidendi (the reasoning vital to the decision) is the statement by the judge of the facts
he regards as ‘material’, the legal principles which he is applying to these facts and why.
The judge may at the same time discuss the law relating to this type of case generally,
or perhaps discuss one or two hypothetical situations. These are known as obiter
dicta (other comments) and while they may have persuasive force in future cases, they
are not binding.
Material facts must be the same. A material fact is one where it can be reasonably
argued that the presence or absence of the fact affects the result.
Persuasive precedent
Precedents in the following cases exercise a varying degree of persuasion.
Contract of record
Confined to the legal system, for example, to jail if fine not paid.
8.2.4 Agreement
The courts must be satisfied that the parties had reached a firm agreement and that they
were not still negotiating. Agreement will usually be shown by the unconditional acceptance
of an offer.
Offer
The offer must fully define the intended agreement. It can be accepted at any time unless:
hammer if it is a reserve auction. If it is a no reserve auction, it is held to be an offer to sell to
the highest bidder, and the highest bid must be accepted.
An estimate can be an offer. Revocation of the offer must be communicated to the offeree
(Byrne v van Tienhoven 1880) and communication need not be by the offeror (Dickinson v
Dodds 1876).
An offer may be conditional, and terms as such may be implied. For example, an offer to buy
a car is implicitly conditional upon the car remaining in the same condition as when the offer
is made.
In general, an offer can be revoked at any time before it is accepted. Note the position
regarding tenders as discussed elsewhere in this chapter.
Acceptance
There must be some positive act of acceptance and mere silence will never be enough.
Refer to Felthouse v Bindley 1863. Acceptance must be communicated to the offeror.
A contract is completed when the acceptance is posted. Note, however, that the courts
accept that letters can be lost in the post.
A counter offer and a mere enquiry must be distinguished from each other. A counter offer
destroys the original offer. A mere enquiry does not destroy the offer. Note that an
acceptance relying upon a change in terms would be a counter offer. In this regard, refer
to Hyde v Wrench 1840 and Stevenson v McLean 1880.
Acceptance should generally be in the same form as the offer, that is, a letter in response to
a letter.
An offer must be accepted in the terms of the offer if such terms exist and if they are very
precisely defined.
Conditional acceptance is not acceptance. For example, “... subject to a formal contract”, “...
subject to solicitor’s approval”. Note, however, that “subject to finance” is held to be binding
acceptance.
A provisional agreement pending a later contract is held to be binding.
8.2.5 Consideration
English law will only recognize a bargain, not a mere promise. A contract, therefore, must be
a two-sided affair, each side providing or promising to provide some consideration in
exchange for what the other is to provide. Consideration is defined as an act or forbearance
of value, or the promise of such, for which the same is bought from another. Note that not all
promises are deemed to constitute consideration.
Valid consideration includes:
A promise to perform an existing obligation to the promisee (see D & C Builders Ltd v
Rees 1966)
A promise made to a third party
If, although the agreement is not yet complete, the parties have made provision to
render it complete without any further negotiation between themselves (for example, by
reference to an independent arbitrator)
If the parties have agreed criteria according to which the price can be calculated, or have
had previous dealings similar to the present transaction, the courts can use these
matters to ascertain the terms of the contract.
If only a fairly minor term is meaningless, it may simply be ignored, and the rest of the
contract treated as binding.
Establishing the terms of the contract
The terms of the contract are generally established by an analysis of the contract
documents. Some terms may be implied. These simple statements can, in fact, give rise to
complex issues.
Contract documents
The law is that the contract will consist only of those papers etc. forming the contract
documents. It is therefore essential to include in the bound documents not only the
conditions of contract, specifications, drawings and letters of offer and acceptance, but all
correspondence between the parties that has a bearing on the bargain agreed to. All
included documents must be listed. It is recommended that a conformed set of documents
be prepared. All agreed changes to the tender documents should be incorporated directly
into the contract conditions, specifications, etc. rather than relying upon an original document
with an accompanying wad of correspondence.
Interpretation of contract documents
The court will always presume the parties have intended what they have, in fact, said. The
words of the contract will be construed as they are written. A judge has said that “one must
consider the meaning of the words used, not what one may guess to be the intention of the
parties”. The law requires the parties to make their own bargain, and will not construct a
contract from terms which are uncertain. In trying to ascertain the intention of the parties, the
contract must be considered in its entirety. Where more than one meaning of a specific
section is possible, the reliable interpretation is that which is consistent with the other
sections of the contract. Despite any other interpretation the words or grammar may be able
to support, if the intention is clear from the context of the document it is pointless to pursue
an alternative construction.
In a construction contract comprising general and special conditions, specification,
schedule and drawings, it is important to define the priority of the documents within the
special conditions. This avoids the situation where the Contractor claims to be entitled to rely
upon the wording in the schedule only when preparing his/her tender.
The Contra Proferentem rule
This rule means that where a term of the contract is ambiguous, it will be interpreted more
strongly against the party who prepared it. Application of this rule is subject to the general
principle that a term will not be interpreted against the professed intention of the parties. It
will only be applied when other methods of interpretation fail.
Implied terms
Where a term is not stated expressly, but is one which the court considers the parties must
have intended to include in order to give the contract business efficacy, the term may be
implied. For a term to be implied the following conditions must be satisfied:
8.2.7 Legality
Two classes of contracts that are not enforceable at law because of their content may be
distinguished [Reference: Cheshire & Fifoot’s Law of Contract, 6th Ed]. Examples of the first
class, illegal contracts, include:
8.2.8 Capacity
A contract will not be enforceable where one of the parties does not have the legal capacity
to enter into it. This may arise as follows.
Minors
Capacity of minors to enter into contracts is limited, and subject to specific laws of the
relevant country.
Corporations
The doctrine of ultra vires means that corporations, including trading and non-trading, can
only exercise the powers provided for within their Articles of Association. In theory all else
is ultra vires and contracts entered into in this manner are void. This doctrine is, however,
subject to significant exceptions.
In theory contracts made by a corporation must be under seal to be enforceable. There are
exceptions which provide for such things as routine minor contracts (e.g. for power etc).
Persons of unsound mind and drunkards
A registered mental patient may not enter into a contract. Contracts may be made on his/her
behalf by the courts or receivers appointed for this purpose.
If a person makes a contract while temporarily insane, or drunk, the contract is avoidable if
he can prove that he was so insane or drunk at the time as to be incapable of understanding
what he did, and the other party knew this. The contract will be binding unless it is avoided
within a reasonable time of regaining sanity or sobriety.
8.3.1 General
Consideration of the appropriate strategy should include the following factors:
Tendering strategies
Pricing strategies
Timing strategies
Contract types
Delivery strategies
For the sake of clarity the following discussion treats each option as an independent
component. However, the options may be interrelated in some instances such that not all
can necessarily be combined: for example, a fast track contract cannot normally be let on a
lump sum basis. The options can be subdivided: there are four recognized design build
scenarios, and a similar number of cost plus contracts. Thus there are a very large number
of different alternatives that may be potentially available when contemplating construction
procurement strategies.
8.3.2 Tendering
Tenders may be invited by way of the following mechanisms:
Public advertisement
From parties who pre-qualify following public advertisement to do so
By direct invitation to selected Contractors (including the option of only one)
Public advertisements generally attract all-comers, and while demonstrably ‘open’, create
inefficiencies for both the party inviting the tenders, and Contractors.
If ‘public invitation’ is required, the preferred approach is to invite Registrations of Interest
publicly, and then limit the tenderers to those clearly meeting pre-qualification criteria. In
specialist areas, or where a high standard of performance is sought, tenders should be
invited only from parties well-qualified by previous track record. In that case a direct
invitation to tenderers known to meet the criteria is expedient.
The basis of evaluation of tenders has a significant impact on project outcomes. Tender
evaluation criteria cover a range of issues, e.g. experience, track record, management
systems and price; clearly the relative weighting given to each criterion should be given
specific consideration in every instance.
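One common way of applying such weightings, though by no means the only one, is a simple weighted-score comparison. The criteria, weights and scores in the sketch below are purely illustrative:

```python
# Illustrative criteria weights (summing to 1.0) and tender scores out of 10.
weights = {"experience": 0.20, "track record": 0.20,
           "management systems": 0.20, "price": 0.40}

tenders = {
    "Contractor A": {"experience": 8, "track record": 7, "management systems": 6, "price": 9},
    "Contractor B": {"experience": 9, "track record": 9, "management systems": 8, "price": 6},
}

# Weighted score = sum of (criterion weight x tender's score for that criterion).
for name, scores in tenders.items():
    weighted = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: weighted score {weighted:.1f} / 10")
```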
8.3.3 Pricing
Pricing may be established either by competition or by negotiation. It is generally possible to
get satisfactory tenders by the process of negotiation providing the type of work is well
known to the party calling tenders, but competition is clearly the preferred option to arrive at
the best tender.
8.3.4 Timing
For supply and services contracts, the contract for the supply or services may be:
Management (either management contracting or construction management), defined as
follows:
o Management Contracting. The Principal enters into separate contracts with a
designer and a management Contractor, who enters into sub-contracts with works
Contractors
o Construction Management. The Principal enters into separate contracts with a
designer, a construction manager, and works Contractors
An option for all types of contract, i.e. services, supply, or construction, is partnering. There
are two approaches.
Single contract partnering. The partnering agreement is in addition to, but does not
modify, the contractual agreement. This provides an enhanced communication
framework with the objectives of improving relationships in general, and imposing a
dispute resolution process. That process requires matters in dispute either to be settled at a
particular level within a defined time frame, or to be automatically escalated to the next level
of management. It is yet to be demonstrated that the single contract
partnering model will resolve substantial disputes with major commercial implications.
Strategic partnering. It is not uncommon in the USA and UK for client and supplier
organizations to form strategic partnering alliances. In that case the parties make a long
term commitment to work together over many contracts. The contractual provisions
become less important as parties seek to avoid disputes in order to maintain the longer
term commercial benefits. It is claimed to provide significant benefits to both parties.
The particular delivery strategy adopted will reflect the particular project, the relevant
expertise of the Principal and its advisors, and the abilities and prior performance of
participating Contractors/vendors. There are many and varied opinions on the relative merits
of each approach, often reflecting the interests of the particular party, but some of the
innovative approaches are considered by experienced practitioners to have definite merit –
e.g. design build by novation.
8.4 Tendering
In Ben Bruinsma v Chatham [1995], tenders were called for a civil works contract. After
calling tenders it was decided to delete some of the works. This resulted in the tenderer
whose price was initially the lowest no longer remaining so, and the contract was awarded to
another. It was held that the Principal was bound to select the lowest price based on the
documents as originally tendered, or to recall tenders for the varied scope of work.
In Megatech Contracting Ltd v Regional Municipality of Ottawa-Carleton [1989], the
instructions to tenderers required that the names of proposed sub-Contractors be supplied.
The lowest tender, which was accepted, did not include these details. This tender was
significantly lower than the second lowest. The second lowest tenderer sued on the basis the
Council had not complied with the instructions to tenderers. The court rejected this
argument, relying upon further provisions of the instructions; that “the Corporation reserves
the right to reject any or all tenders or to accept any tender should it be deemed in the
interest of the Corporation to do so” and that tenders “which contain irregularities of any kind
may be rejected as informal”. Note that a Principal is, in general, under no obligation to
consider only conforming tenders.
In Chinook Aggregates Ltd v District of Abbotsford [1990], the instructions to tenderers
included the provision that “the lowest or any tender will not necessarily be accepted”. The
Council had a policy, not disclosed to the tenderers, of preferring local Contractors provided
the price penalty was less than ten percent. A local Contractor was awarded the contract,
and the lowest tenderer sued. It was held the Council was in breach of an obligation to treat
all tenderers fairly.
8.5.1 General
Contracts can be rendered void or voidable by:
Mistake
Misrepresentation
Duress and undue influence
8.5.2 Mistake
Mistake as to facts, but not to general legal propositions, will prevent a contract being
enforceable where the apparent agreement in fact lacks true consent between the parties.
The general provisions relating to the effect of mistakes are set out below. Note, however,
that in a number of Commonwealth countries specific Acts have been passed to address this
issue, and the remedies available.
The general principle is that a mistake of fact may prevent the formation of a contract,
provided the mistake has an element of mutuality, that is, the contract was entered into under
the influence of:
A unilateral mistake, i.e. one party is influenced by a material mistake that is known, or ought
to be known, to the other party
An identical bilateral mistake (‘common’ mistake), i.e. the parties are influenced by the
same mistake
A mutual bilateral mistake (‘mutual’ mistake), i.e. the parties are influenced by different
mistakes about the same matter of fact or law
A mistake on the part of one party only as to motive or purpose, or with respect to the real
meaning of the provisions of the contract, is not sufficient grounds for the contract to be
voided.
The mistake must exist at the time the contract is made. If third parties have acquired rights
or possessions subsequently and lawfully, i.e. prior to the contract being found to be
avoidable, those rights and possessions will generally be retained.
8.5.3 Misrepresentation
A representation is a statement of existing or past fact which does not form part of the
contract. A misrepresentation is grounds for rendering a contract avoidable only if it was
intended to, and did, induce a party to enter into the contract. The injured party must have
been aware of the representation and taken it into account. It need not have been the only
inducement.
Note that opinion must be distinguished from fact. If the opinion is not honestly held on the
facts, then it becomes a misrepresentation. There are the following classes of
misrepresentations.
The general remedies available to an innocent party where there has been a
misrepresentation by the other party include rescission (cancellation of the contract), or
damages in the case of negligent or fraudulent misrepresentations.
8.6.1 General
The contract is discharged when the obligation ceases to be binding on the promisor, who is
then under no obligation to perform. This arises from:
Performance
Agreement
Passage of time
Frustration
Repudiation
Determination
Operation of law
8.6.2 Performance
When all obligations under the contract have been fully performed by each party, all
obligations are at an end; the contract is said to be terminated by performance. Note the
following:
For entire contracts, complete performance is required before any obligation to pay
For divisible contracts, payment is required for partial performance
If performance was prevented by one party, then the innocent party may recover
under quantum meruit
Generally complete performance of the obligations created under a contract is required to
bring the contract to an end. However, in order to allow the Principal to have timely
possession of the works, it is general practice in the construction industry to provide for
Practical Completion, that is, the point at which the works are substantially complete apart from minor defects. A Certificate of Practical Completion must be issued for each separable portion. Certification should be conditional on the receipt of all maintenance documentation, as-built information and the like.
In most installation contracts there is provision for a defects liability period to follow the date
of Practical Completion. Where not provided for in the General Conditions of Contract, it is
recommended that the Special Conditions should provide for the original defect liability
period to recommence from the date of repair where significant faults are identified, in
respect of the repaired elements only.
8.6.3 Agreement
Discharge by agreement requires new consideration, or for the agreement to be made under
seal. In an executory contract where there has been no, or incomplete, performance, the
mutual release of the parties is consideration, and is called bilateral discharge.
Where the contract has been wholly executed on one side but not the other, consideration is
required for the agreement, unless under seal, and is called unilateral discharge. These
situations are known as ‘accord and satisfaction’.
Where discharge by agreement takes the form of a new contract, this is called novation.
8.6.4 Time
In common law, time is held to be of the essence, even if it is not so stated, in the absence
of contrary provisions. In equity, it is not necessarily of the essence if that view does not
result in an injustice.
If time is of the essence, failure by one party to complete their obligations within the time
specified allows the innocent party the right to rescind the contract. If time is not of the
essence, the continuing failure of a defaulting party to complete within a reasonable time
may be evidence of repudiation, and give the innocent party a basis to rescind the contract.
8.6.5 Frustration
A contract may be discharged under the doctrine of frustration if a later event renders its
performance impossible or sterile. This event must arise externally and must make the intent
of the contracting parties unobtainable, and does not include mere inconvenience or
hardship. Reliance cannot be placed upon a self-induced frustration.
Note that the court may imply a continuing condition as a term of the contract (e.g. the health
of a party), or the non-occurrence of some event.
The contract is discharged as to the future; both parties must still fulfill obligations that have already fallen due.
The law is now governed by the Frustrated Contracts Act 1944. This Act makes advance payments, less expenses incurred before the time of discharge, recoverable.
8.6.6 Repudiation
Repudiation occurs when one party intimates by word or conduct that he does not intend to
honor his obligations under the contract. Under the Commonwealth legal system, unless
modified by specific laws, the common law position is that repudiation arises when one party
is in breach of a ‘fundamental term’ of the contract, i.e. such a breach is by itself evidence of
an intention not to be bound. Such terms are called 'conditions', and are differentiated from other terms known as 'warranties', breach of which gives rise only to a right to damages as a remedy.
Where a party repudiates, the contract does not necessarily come to an end. The defaulting
party is in effect making an offer to the other to discharge the contract. In these
circumstances the innocent party has the option of refusing or accepting the offer.
If the innocent party makes it clear that he refuses to discharge the contract, the contract
stays in force with all future obligations intact. He is under no duty to mitigate his losses, but
must not aggravate the damages. This principle is, however, subject to the limitation that it
will not apply where the innocent party has no substantial interest in completing performance
rather than claiming damages.
If the innocent party elects to treat the contract as discharged he must make this decision
known to the defaulting party, and he may not then retract it. The defaulting party remains
liable in damages for the breach that led to the default, and any earlier breaches, but is
excused from all future obligations.
8.6.7 Determination
In general either party may be justified in determining the contract where the other party has
repudiated the contract, i.e. demonstrated a clear intention not to perform the obligations
arising under the contract.
Other grounds that justify the Principal determining the contract may be set out in the
contract conditions. In these situations, this action should be considered with extreme
reluctance, and never initiated without considered legal advice.
The law is particularly severe on Principals who determine a contract. If the basis for the
determination is not absolutely in conformance with the contract provisions, or the
procedures not followed absolutely without fault, the Principal is most likely to find
himself/herself in a serious position that will translate into significant costs. It is necessary
that, despite specific clauses in the contract to the contrary, the Contractor be given formal
and specific warnings, and to the full extent practical be given the opportunity to rectify the
default.
Extension of time clauses benefit the Contractor to some extent, but their primary purpose is to ensure that the Principal is able to
retain his/her right to recover liquidated damages. In the absence of such clauses there
could be sufficient justification to make time at large, thereby removing the Principal’s right to
do so. When time is put at large the only obligation on the Contractor is to complete in a
reasonable time. In the absence of flagrant non-performance the Principal could have
difficulty in successfully recovering damages for a significant delay.
If the Principal is in breach of the contract as a consequence of which the Contractor is
delayed, the completion date is set at large, in the absence of express provisions in the
contract providing for the Principal to extend time for such a breach. Therefore contracts
should provide for the Principal to extend time for completion for such things as failure by
him/her to give possession of site by the due date, and delays in providing information,
materials and services. If the Principal is to be adequately protected, the wording must
specifically define the particular breach that arises. The law will construe extension of time
clauses very strictly against the Principal, because they are in effect penal provisions. So a
generalized provision, for example “to extend time in the event of special circumstances
arising which are beyond the control of the Contractor”, will not be allowed to protect against
a breach on the part of the Principal.
Two questions arise in assessing a Contractor's claim for an extension of time:
Has the Contractor been delayed by the particular circumstances, thereby becoming entitled to an extension of time?
What is a 'fair' time extension?
To answer the first question it is necessary to determine whether or not the actual
circumstances affected an operation on the Contractor’s critical path. This introduces the
question of programs. It is not sufficient for the Contractor to show that delay arose on the
critical path of some plan of work set out in a program - it must affect his/her actual critical
path. The Contractor must be able to prove the progress he/she would have actually made
had he/she not been delayed, not the progress he/she said he/she would make, or intended
to make. A program is not conclusive evidence of the progress he/she would actually have
made. To assist their case, some Contractors fail to supply updated programs, or supply
programs that are over-optimistic. It is therefore necessary to exercise vigilance in
requesting program updates, and to review the adequacy of the assumptions regarding
resource levels and productivity.
To determine a ‘fair’ entitlement it is necessary to determine whether, in the circumstance, a
delay could reasonably be minimized or avoided by either rescheduling affected operations,
or by introducing additional resources. Although the introduction of additional resources is
not a normal expectation, there will be some circumstances where that could reasonably be
expected.
The question of concurrent causes of delay is a fertile area of debate. A useful direction is
provided by Abrahamson, Engineering Law and the ICE Contracts, 4th Edition, at page 139:
“The situation where there are concurrent delays, only one of which is outside the
Contractor’s control, is most difficult. The case may arise where the Contractor, due to his
own deficiencies, is late in reaching a position to start some programmed activity, but in fact
could not have started the activity earlier even if he had been ready because of delay by the
engineer with some necessary drawings. It is suggested that in this sort of situation the net
point is that the Contractor has not in fact been held up by “delay” outside his control, and it
is immaterial that if his progress had been different he would have been so held up. The late
drawings are not an actual “cause of delay” within this clause. The Contractor therefore is
entitled to an extension of time only so far as the drawings are withheld past the date on
which he in fact became ready for them.”
“Alternatively, the Contractor may say that if the employer’s concurrent delay had not
occurred, he would have been able, for example, to increase his resources or bring pressure
on a recalcitrant sub-Contractor so as to overcome the delay for which he is being held
responsible. It does not seem that the mere existence of that abstract possibility is sufficient. Unless the Contractor raises the issue at the time and gives evidence of readiness and
ability in fact, a later argument that his delay would have been eliminated or shortened but
for the employer’s concurrent delay is unlikely to be believed by the engineer or an arbitrator
on a claim to extension.”
Commonly, clauses within the standard Conditions of Contract refer only to extensions of
time. There is no provision for the Engineer to reduce the contract period, for instance if
some element of the work is omitted from the contract. This also means that the Engineer
cannot simply aggregate variations that include both additional work and deletion of work, to
conclude that no entitlement to a time extension arises. If any one or some of the variations give
rise to an extension of time, that extension must be granted.
Smellie: Building Contracts & Practice 2nd Ed, page 219: “Furthermore, and depending
on the construction of the contract, it may be that the extension of time must be made at
the time the extra works are ordered, and if made later will be ineffective. And further,
the person given power under the contract to extend the time will probably have no
power to fix, as an extension of time, a date which has already passed.”
Page 222 “Extensions of time may be granted even after the works have been
completed if the cause of the delay operates until the completion. But if the cause of the
delay has ceased to operate before completion, a purported extension made after
completion is invalid.”
Justice Roper in Fernbrook Trading Co Ltd v Taggart [1979] “In my opinion no one rule
of construction to cover all circumstances can be postulated and the best that can be
said on the present state of the authorities is that whether the completion date is set at
large by a delay in granting an extension must depend upon the particular circumstances
pertaining. I think it must be implicit in the normal extension clause that the Contractor is
to be informed of his new completion date as soon as is reasonably practicable. If the
sole cause is the ordering of extra work then in the normal course the extension should
be given at the time of the ordering so that the Contractor has a target for which to aim.
Where the cause of the delay lies beyond the employer, and particularly where its
duration is uncertain then the extension order may be delayed, although even there it
would be a reasonable inference to draw from the ordinary extension clause that the
extension should be given a reasonable time after the factors which will govern the
exercise of the engineer’s discretion have been established. Where there are multiple
causes of the delay there may be no alternative but to leave the final decision until just
before the issue of the final certificate.”
Another consequence of a failure to extend the time for completion at or close to the time the
cause of the delay occurs, is to run a serious risk that the Contractor will succeed with a
claim for constructive acceleration.
8.7.5 Acceleration
In the absence of special conditions allowing it, the Supervisor has no power to request an
acceleration. This provision should be included in the special conditions on significant
contracts. Such clauses should allow for an acceleration to be ordered as a variation to the
contract with the agreement of the supervisor and the Contractor. It is recommended that the
basis of payment be agreed prior to the acceleration commencing in every case.
8.8.1 Damages
Damages for breach of contract are governed by two considerations, namely the remoteness
and the measure of damages.
Remoteness of damages
The test was established in Hadley v Baxendale (1854). The damages must be those that either arise naturally from the breach, in the usual course of things, or may reasonably be supposed to have been in the contemplation of both parties, at the time they made the contract, as the probable result of the breach.
8.8.2 Liquidated damages
The contracting parties may agree as a term of the contract the amount to be paid in the
event of a specific breach. This sum is either a liquidated damage or a penalty. Refer to
Section 8.9. Liquidated damages must be a genuine pre-estimate of loss caused by the
nominated breach. If the breach arises, the innocent party is entitled to recover the liquidated damages without having to prove the actual loss, even where the actual loss suffered is significantly lower than the loss calculated. It is common to prescribe an upper limit on the
liquidated damages, either as a nominated total sum, or as a percentage of the contract
sum.
Where there are separable portions with differing completion dates, liquidated damages will
normally be applied to each stage. It is important to ensure that the cumulative damages that
arise in the event of concurrent delays do not exceed the estimate of maximum loss.
Take care where a pro forma schedule is used for the Special Conditions. If the space for
the nominated figure for liquidated damages is left blank it is taken to mean that there is no
provision for liquidated damages. If however the space contains “-” or “nil” it will be
interpreted as meaning zero (specifically $0.00) damages apply in the event of late
completion.
8.10.1 Penalties
A penalty is a sum in the nature of a threat to secure performance. Courts will not enforce
penalties.
Where the amount stated in the contract as liquidated damages is found to be a punitive
sum, i.e. a sum not being a genuine pre-estimate of the probable damages or losses to be
suffered by the Principal, but merely a sum fixed to ‘terrorize’ the other party to perform, the
Principal will be unable to recover the stated amount as of right under the liquidated
damages provisions.
Courts will compensate a party attempting to enforce a penalty only to the extent of the actual damages suffered. Where the actual loss exceeds the stated penalty, the Principal may instead sue for breach of contract and recover the full damages; this option is not available where liquidated damages apply.
8.10.2 Bonuses
Building contracts may provide for a bonus to be paid, for example for early completion, or
for completion below a defined price.
In this case, if there is a breach of the contract by the Principal that prevents the Contractor
from achieving such completion, recovery of the bonus may be allowed as damages for the
breach. So, again, the contract will need to be very carefully drafted.
The effect of variations ordered on the Contractor's right to the bonus needs to be addressed. An extension of time provision will not necessarily extend the date by which the bonus must be earned.
Exercises
9.1 Work breakdown structures
9.1.1 Exercise 1
Objective: The objective of this exercise is to develop an understanding of the structure and
purpose of Work Breakdown Structures.
Working in groups of two, develop a WBS for a project of your choice. Choose a project with which you have some experience, or to which you can at least relate. Define your project in such a way that it is not too complex, due to time limitations. The following examples might help:
9.2.1 Exercise 2
Run the spreadsheet ‘activity on node.xls’. You may have to enable macros on your
computer. This exercise is self-explanatory and will guide you through the process of
calculating start/finish times, calculating floats, and identifying the critical path(s).
9.2.2 Exercise 3
Using a project planning software package such as Plan Bee, create the PERT and Gantt
charts for the following project. Also identify the critical path. Add START and FINISH
activities and remember to save your project as you proceed.
The reason for using a simpler scheduling software package such as PlanBee, instead of
(for example) MS Project or Primavera, is that this is an exercise to teach people the basic
concepts of scheduling, and not the intricacies of ‘driving’ a specific piece of software. Due to
the limited time available we need to use software that is relatively easy to master.
(Columns: task, duration, predecessors)
Planning:
A 16 start
B 20 start
C 15 A
Installation:
D 18 B
E 10 B
Commissioning:
F 3 C, D, & E
9.2.3 Exercise 4
For the preceding project, calculate the resource loading and the total cost, using the
following information. Engineers are costed at $800 per day.
(Columns: task, resources, fixed cost)
Planning:
A 1 Engineer $1000
B 1 Engineer $1000
C 2 Engineers $5000
Installation:
D 1 Engineer $2000
E 1 Engineer $3000
Commissioning:
F 1 Engineer $500
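As a cross-check on the Plan Bee cost summary, the following is a minimal Python sketch of the arithmetic this exercise implies, assuming the Exercise 3 durations are in days and that labor cost equals duration × number of engineers × the $800 daily rate:

```python
# Hypothetical sketch of the Exercise 4 cost roll-up (not the Plan Bee output itself).
# Assumes task durations are in days and labor = duration * engineers * daily rate.

DAILY_RATE = 800  # $ per engineer per day

# task: (duration_days, engineers, fixed_cost)
tasks = {
    "A": (16, 1, 1000),
    "B": (20, 1, 1000),
    "C": (15, 2, 5000),
    "D": (18, 1, 2000),
    "E": (10, 1, 3000),
    "F": (3, 1, 500),
}

labor = sum(d * n * DAILY_RATE for d, n, _ in tasks.values())
fixed = sum(f for _, _, f in tasks.values())
print(f"Labor cost: ${labor:,}")   # 97 engineer-days -> $77,600
print(f"Fixed cost: ${fixed:,}")   # $12,500
print(f"Total cost: ${labor + fixed:,}")
```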
9.2.4 Exercise 5
Repeat exercises 2 and 3, but apply them to your case study project, for which you have
developed the WBS in Exercise 1. Hopefully you have limited the number of work packages
as suggested! You will have to specify your own cost structure.
9.3.1 Exercise 6
Objective: The objective of this exercise is to develop expertise in the analysis and reporting
of project costs. Use the pro-forma cost report sheet attached, or the Excel spreadsheet
‘Cost Mgnt I’ supplied.
Date 01/02/08
Maintenance of the gas extraction system on a geothermal power generation plant is
proposed. The initial assessment indicates the following components require
repair/replacement, with the associated indicated cost estimates:
Item 1: $2,000
Item 2: $1,000
Item 3: $500
It is anticipated the project will require a turbine shut down of 72 hours. Assume that the
turbine is running at 30 MW, and the opportunity cost of generation is $40 per MW hour. The
contingency for shutdown costs (i.e. loss of revenue or ‘cost of unsold energy’) is estimated
at $11,100. Assume that this amount is allocated in equal portions to Items 1, 2 and 3.
Develop a realistic FFC for this project, including for the cost of lost generation.
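The lost-generation component can be sketched directly from the figures given above (the $11,100 contingency and its equal split across Items 1, 2 and 3 are taken as given, not derived):

```python
# Sketch of the shutdown (lost generation) cost for the 01/02/08 estimate.
# Figures are those stated in the exercise; the FFC layout itself is not shown here.

shutdown_hours = 72          # planned turbine outage
output_mw = 30               # turbine output
opportunity_cost = 40        # $ per MWh of unsold energy

lost_generation = shutdown_hours * output_mw * opportunity_cost
print(f"Cost of unsold energy: ${lost_generation:,}")   # $86,400

# Contingency for shutdown costs, allocated equally to Items 1, 2 and 3 as stated.
contingency = 11_100
per_item = contingency / 3
print(f"Contingency per item:  ${per_item:,.0f}")        # $3,700
```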
Date 28/02/08
A detailed design for the project has been undertaken. It is found that a very useful
additional modification is to increase the size of the condenser, at an estimated cost of
$10,000, comprising labor $1,000 and components $9,000. This is expected to require a
turbine shut down of an extra 12 hours.
Provide a current FFC for the project, with a variance analysis.
Date 31/03/08
A contract was called to do all of the previously defined work. The two tenders considered
were:
Date 04/04/08
The contractor commenced work the previous day and immediately found that the existing
fittings for attaching the nozzles were different to those shown in the drawings. It was
decided to undertake appropriate extra work. The cost of this work was assessed at $2,000,
and it extended the shutdown period by 4 hours. Work on the nozzles has been completed
with no further problems. No work has yet been undertaken on the other two components.
Provide a current FFC for the project, with a variance analysis.
on 1 January 2006, with an implementation period of 12 months. The S curve factor is 0.6,
i.e. the weighted date for expenditure of the estimated cost is 60% through the duration.
Using the attached calculation tables, complete the FFC as at 1 Jan 2004, assuming the
escalation data below, and a contingency sum of $75,000.
Exercise 8: As at 30 June 2004 the cost estimate is revised following further design
definition. Including a new feature (required by the client to meet changed market conditions)
with an estimated cost of $175,000, the revised cost is $1,300,000. (Cost index as at Jun
04). It is now anticipated that implementation start will be delayed 6 months to 1 July 2006.
The forecast escalation noted above applies.
Using the same pro-forma or spreadsheet, complete the FFC as at 30 June 2004, assuming
a revised contingency sum of $125,000.
Exercise 9: At 30 June 2006 a contract is let for the work package. The value of the awarded
contract is $1,800,000. The estimated duration is now 18 months, with an S curve factor of
60% as before. The contract is on a fixed lump sum basis. The current cost index is 1250,
and escalation over the following 18 months is assumed to increase the index to 1350 at the
end of that time.
Using the same pro-forma or spreadsheet, complete the FFC as at 30 June 2006, assuming
a revised contingency sum of $100,000 at the contract cost index.
Case Study: Cost Management with Escalation
9.4 Integrated time and cost
9.4.1 Exercise 10
The objective of the following exercise is to develop expertise in the PMS technique of
project performance review, which will provide the basis for applying the PMS features on
project management software.
Use the planning sheet attached, or the Excel spreadsheet 'integr time cost template.xls' supplied.
Activity Three 200 hours resource D (25 hrs/week) $100/hr
Overhead 1 $1,000/day
Overhead 2 $50/Mh
9.4.2 Exercise 11
After 40 working days, project progress and cost are reviewed. Activity 1 was completed in 25 days. Activity 2 started on time and is now 50% complete. Fixed cost was $25,000. Activity 3 is about to commence.
Calculate:
BCWS
BCWP
ACWP
Cost Variance
Schedule Variance
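Because the inputs must be read off your own planning sheet, the sketch below only illustrates the relationships between the quantities to be calculated, using deliberately hypothetical figures:

```python
# Illustrative earned value relationships (hypothetical numbers, not the exercise answer).
# BCWS: budgeted cost of work scheduled, BCWP: budgeted cost of work performed (earned value),
# ACWP: actual cost of work performed.

bcws = 120_000   # value of work that should have been done by day 40 (hypothetical)
bcwp = 100_000   # value of work actually done, priced at budget rates (hypothetical)
acwp = 110_000   # what that work actually cost (hypothetical)

cost_variance = bcwp - acwp        # negative => over budget
schedule_variance = bcwp - bcws    # negative => behind schedule

print(f"Cost variance:     {cost_variance:+,}")
print(f"Schedule variance: {schedule_variance:+,}")
```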
9.5 Quality management
9.5.1 Exercise 12
With reference to the project for which you have developed a WBS, analyze the components of Quality Assurance for special processes, as required for a quality system complying with ISO 9000.
Identify the specific processes to be subject to quality assurance. Develop an Inspection and Test Plan for each process.
There is no specific answer to this case study. The required procedures will be process
specific. An Inspection and Test Plan format is shown on the next page.
9.6 Risk analysis
9.6.1 Exercise 13
For your own project (exercises 3 and 4), identify all the risks facing your project and define
a plan of action for each. Use a matrix as on page 2 of the Risk Management chapter. It is
important to identify risks to the PROJECT and not necessarily occupational safety (e.g.
‘Work Safe’) issues.
9.6.2 Exercise 14
Using the ProjRisk software, calculate the required contingency (in % and $) for the FIXED
costs in exercises 3 and 4. According to historical data you can assume a triangular cost
distribution of minus 10% to plus 15% for each element in the WBS. Also work on an 85%
certainty factor as per the S-curve for your project.
9.7.1 Exercise 15
Joe and George are good friends. Joe is a Builder and George is in Underwear. George
wants a house built so after a few drinks Joe says, “You get the materials and I will build it
for you”. George accepts Joe’s offer, they shake hands and seal the offer with another round
of drinks. When George buys the materials he cannot get Joe to build because Joe is too
busy on another contract.
a) What is the legal position?
b) What advice do you give George?
c) If George sued what would be the likely result?
9.7.2 Exercise 16
George has plans prepared for a house. Joe offers to build the house for $50,000. George agrees to have Joe build it if he can do it for $45,000. With no further communication Joe builds, and invoices the full $50,000, saying he had never agreed to any subsequent figure.
a) What is the legal position?
b) What advice do you give George?
9.7.3 Exercise 17
Party A advertises a car for sale for $10,000. Party B has a look at the car and says, “Looks
OK but your asking price is too high for me, I’m offering you $8,000.” Party A says, “I’ll be in
touch if I don’t get a better price.”
The next day Party A rings Party B and says, “Your price was the best, I’ll take it”. Party B
says, “Sorry, I bought another car this morning.”
What is the legal position?
9.7.4 Exercise 18
XYZ Co, who manufactures industrial pump sets, wrote to Mogul Ltd, an oil company,
offering to construct an item of plant for $100,000. The offer was made on a form containing
XYZ’s standard terms of business. One of the terms contained in the document was that the
initially agreed contract price might be varied according to the cost and availability of
materials.
Mogul replied in a letter dated April 29 containing their standard terms of business, stating
that they wished to order the plant. These terms did not include a price variation clause but
contained a statement that the order was not valid unless confirmed by return of post. XYZ
duly confirmed by a letter dated May 1. This letter was delayed in the post as it bore the
wrong address, and did not arrive until May 14. Meanwhile on May 12 Mogul posted a letter
to XYZ canceling the order. That letter arrived on May 13.
What is the legal position?
9.7.5 Exercise 19
Further to Exercise 18: XYZ ignored the letter of May 12 and pressed on with the construction
of the plant. It was completed at a price of $125,000. Mogul refused to take delivery.
What is the legal position?
9.7.6 Exercise 20
Millicent owns a factory manufacturing clothing. In January, the heating system of the factory
broke down and she was forced to lay off the workforce. Millicent engaged Fixit Ltd to repair
the system. They agreed to complete the necessary work within one week.
Owing to supply problems, the work was not completed within the week and Fixit offered to
install a temporary system which would enable half day working at the factory. Millicent
rejected this offer. In the event, the repair work took two months and as a result Millicent lost
a highly remunerative contract to supply knitwear to the armed forces. Millicent is now
claiming a total of $80,000 by way of lost profits.
Advise Fixit Ltd as to their liability in damages.
9.7.7 Exercise 21
Further, consider the position above if the contract between Millicent and Fixit referred to
above had contained the following provision:
“If the repair work is not completed within one week, Fixit shall pay Millicent, by way of
agreed damages, the sum of $10,000 plus $12,500 for every week during which the work is
unfinished.”
9.8.1 Exercise 22
The objective of this exercise is to integrate the separate project planning and control
techniques discussed over the last two days and produce a Project Quality Plan in outline
form.
Work in the same groups and on the same project as for the WBS case study. Identify all the
components of the project quality plan.
Develop in outline form (i.e. headings only) the components of the control procedures.
10
Solutions
10.1 Work breakdown structures
10.1.1 Exercise 1
Everyone’s answer will be different so this is just an example.
We will develop several work breakdown structures for the implementation of a new
restaurant chain. This can be done with pen and paper, but we will use WBS Chart Pro, a
work breakdown structure development tool. Although this package allows the user to enter
a fair amount of information such as start dates, finish dates and interdependencies, we will
only use it for its graphical capabilities at this point in time. The information entered here can
be uploaded to MS Project if needed.
Break the project down to a point where the tasks can be administered and, if necessary,
allocated to a subcontractor. DO NOT CONFUSE LOW-LEVEL ACTIVITIES WITH WORK
PACKAGES ON THE WBS!!!! For example, in a building project, the foundation work could
be a task on the WBS. However, if you start breaking up this task into its various activities,
some of which can be performed by one person in an hour (e.g. knocking in the pegs to
indicate the height of the concrete), then the WBS becomes ridiculously complicated.
Start the program by clicking on the desktop icon.
The following will appear.
Double-click on the ‘WBSchart1’ box and edit it to read ‘Restaurants project’. Alternatively,
just type ‘Restaurants project’…the name of whichever box is highlighted (red border) will be
updated.
The first level is now complete.
Now click on the ‘V’ arrow (see below)
Click on the ‘Restaurants project’ rectangle again and add ‘Restaurant 2’. Continue until the
WBS is complete.
There are several ways to do a WBS for the same project. Ultimately the lowest rectangles
in any branch of the ‘tree’ must represent manageable work packages.
The following example shows the WBS of a project with geographical location at the second
level.
Alternatively, the various functions (design, build, etc) can be placed at the second level.
Note that using the conventional inverted tree structure could often lead to a very wide
drawing.
The third alternative shows a subsystem orientation.
A fourth alternative shows a logistics orientation as follows:
The WBS could also be drawn to show a timing orientation.
Note that ‘Design’ and ‘Execution’ are NOT work packages, they are just headings. ‘Start
up’, however, is a work package since it is at the lowest level in its branch. The WBS could
be broken down even further but the risk here is that the lowest-level packages could be too
small. If ‘advertising’, for example, could be accomplished in 100 hours it might be a good
idea to stop at that level. It could then be broken up into activities and tasks (and even sub-
tasks); the duration and resource requirements would then be aggregated at the ‘advertising’
level, but not individually shown on the WBS.
It is, of course, not necessary to use a sophisticated WBS package; Excel will work just fine
as the following example shows. The work packages (except ‘Start up’) are shown at level 3.
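Purely as an illustration of the same idea, a WBS can also be captured as a simple nested structure; the element names below are indicative only, loosely following the restaurant example:

```python
# Indicative only: a WBS as a nested dictionary, loosely following the restaurant example.
# Keys are WBS elements; empty dicts are the leaf-level, manageable work packages.
wbs = {
    "Restaurants project": {
        "Restaurant 1": {
            "Design": {"Architecture": {}, "Fit-out design": {}},
            "Execution": {"Construction": {}, "Advertising": {}},
            "Start up": {},
        },
        "Restaurant 2": {
            "Design": {"Architecture": {}, "Fit-out design": {}},
            "Execution": {"Construction": {}, "Advertising": {}},
            "Start up": {},
        },
    }
}

def print_wbs(node, level=0):
    """Print the WBS as an indented outline, marking leaf work packages."""
    for name, children in node.items():
        tag = "  (work package)" if not children else ""
        print("  " * level + "- " + name + tag)
        print_wbs(children, level + 1)

print_wbs(wbs)
```

Only the leaves of the tree are work packages; everything above them is just a heading, exactly as with 'Design' and 'Execution' in the timing-oriented example above.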
10.2 Time management
10.2.1 Exercise 2
Run 'activity on node.xls'. Macros should be enabled, otherwise this demo will not run. (Use Tools->Macro->Security and adjust the settings if necessary.)
Click on ‘Start new analysis’. Note the ‘Start’ and ‘End’ nodes with zero duration.
First, the forward pass. Proceeding from left to right, click on each node and provide the
‘Earliest Start’ and ‘Earliest Finish’ times in the dialogue box. The program will alert you to
any mistakes.
The start and finish times of the 'start' node are obviously 0 and 0. For all nodes with only one
predecessor, the ‘Earliest Start’ time of that node equals the ‘Earliest Finish’ time of the
previous node. For nodes with more than one predecessor, the ‘Earliest Start’ time obviously
equals the biggest of the preceding ‘Earliest Finish’ times. The program will alert you to any
mistakes.
Note that the tasks may seem to overlap, as each node starts on the same day that its predecessor finishes. This is, however, not the case. View each day as a 24-hour period starting, say, at 08h00 and finishing at 08h00 the next morning. A 1-day task could therefore start at 08h00 on Monday, day 7 of the project, and finish at 08h00 on Tuesday, day 8 of the project. Its successor could start at 08h00 on Tuesday, day 8.
Now the reverse pass. Start with the ‘End’ node and enter the same value as ‘Earliest Start’
and ‘Earliest Finish’ (15 in this case) for both ‘Latest Start’ and ‘Latest Finish’. For nodes with
only one successor, the ‘Latest Finish’ equals the ‘Latest Start’ of its successor. In the case
of multiple successors, take the smallest (earliest) value. The result looks like this:
The next step is to fill in the float, by subtracting either 'Earliest Finish' from 'Latest Finish' or 'Earliest Start' from 'Latest Start', i.e. top left from bottom left or top right from bottom right. Do this for all the nodes.
Finally, place the tip of the index finger (cursor) on each line that lies on the critical path
(zero float) and click on the left mouse button.
The whole exercise can now be repeated. The duration of the nodes as well as their
dependencies will be different for each pass.
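For readers who prefer to see the same logic in code, here is a minimal Python sketch of the forward pass, backward pass and float calculation, applied to the Exercise 3 network rather than the spreadsheet's own example:

```python
# Minimal critical path method (CPM) sketch: forward pass, backward pass, float.
# Network taken from Exercise 3; durations and dependencies as listed there.

durations = {"A": 16, "B": 20, "C": 15, "D": 18, "E": 10, "F": 3}
predecessors = {"A": [], "B": [], "C": ["A"], "D": ["B"], "E": ["B"], "F": ["C", "D", "E"]}

# Forward pass: earliest start (ES) / earliest finish (EF)
es, ef = {}, {}
for task in ["A", "B", "C", "D", "E", "F"]:          # already in precedence order
    es[task] = max((ef[p] for p in predecessors[task]), default=0)
    ef[task] = es[task] + durations[task]

project_duration = max(ef.values())

# Backward pass: latest finish (LF) / latest start (LS)
successors = {t: [s for s, ps in predecessors.items() if t in ps] for t in durations}
lf, ls = {}, {}
for task in ["F", "E", "D", "C", "B", "A"]:          # reverse precedence order
    lf[task] = min((ls[s] for s in successors[task]), default=project_duration)
    ls[task] = lf[task] - durations[task]

for task in durations:
    total_float = ls[task] - es[task]
    marker = "  <- critical" if total_float == 0 else ""
    print(f"{task}: ES={es[task]:2} EF={ef[task]:2} LS={ls[task]:2} LF={lf[task]:2} "
          f"float={total_float}{marker}")

print(f"Project duration: {project_duration}")        # 41 (critical path B -> D -> F)
```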
10.2.2 Exercise 3
For this exercise we will use Plan Bee. The question might be asked why we do not use a more 'industry standard' software package such as MS Project. The answer is very simple… the exercise is not about learning the operation of any software package… it is about mastering CONCEPTS. We therefore need software that is easy to master within minutes (we do not have a lot of time!), even if it cannot handle large multimillion-dollar projects.
Run Plan Bee by clicking on the desktop icon.
The following will appear.
Type in the project details and select a starting date. Click OK. Edit the first task so it reads
‘START’.
Now type in the names of all the tasks, headings, etc. Normally you will only see around ten tasks; click on the 'show more tasks' button to obtain the following display. Click 'show task options' to return. Note that at this point all entries are tasks by default. We will change that later.
Enter the correct duration for tasks A to F.
The next step is to enter the precedence relationships. Highlight each task, and then click
‘add precedence’.
In the example shown here, the precedence for A is START. Do not forget to enter the
precedence for FINISH, which is F.
If you have not done this yet, select the three entries that are simply headers, not tasks (viz. PLANNING, INSTALLATION and COMMISSIONING), and change them to 'header level 1' by means of the radio buttons. Notice that they are now simply headings with no duration.
Now click on the Gantt chart icon.
Then do the same for the PERT chart. You will need to ‘auto align Pert nodes’ and also
select ‘critical this color’ to show the critical path.
10.2.3 Exercise 4
Now we have to allocate some resources. Click on the resource button.
Now click ‘Add a new resource’ and add ‘Engineer’ with the appropriate daily rate and the
number of people available. This particular screenshot shows 3 engineers but this number
eventually had to be increased to 4.
Now select task A and click ‘Add Engineer to A’. Do this for all the tasks and remember to
allocate 2 engineers to task C.
Then check the required resources for each day. Note that we have just enough resources
for some of the days. Had we been limited to, say, 3 engineers, we would have had to delay
the start date of some tasks.
Finally, click on the ‘Admin details’ button.
Once again select tasks A-F and type in the fixed cost for each task, together with cost
codes and notes as applicable.
When done, click ‘file->preview/print report’ to look at a cost summary for the project.
10.2.4 Exercise 5
The answer will depend on your specific example.
** Assumes original allowance was divided evenly across the three elements.
10.4.1 Exercise 10
Exercise 11
10.5 Quality management
10.5.1 Exercise 12
The answer will depend on your specific example.
10.6.1 Exercise 13
The answer will depend on your specific example.
10.6.2 Exercise 14
Click Start->Run->Risk Analysis->ProjRisk
Enter the likely cost for ‘A’, as well as the maximum and minimum values, either as Dollar
amounts or as percentages. Also select the type of distribution. Note that in real life the type
of distribution will have to be derived from historical data.
Work package ‘A’ will now appear as follows.
Enter the details for 'B' through 'F' as well.
When done, click on the Analyze button. The software will perform a Monte Carlo simulation with 1,000 iterations (i.e. 'rolling the dice' 1,000 times). The statistical distribution of the expected costs will be shown. Notice that it approximates a normal distribution with a mean of $12,700 and a standard deviation of $331. It also appears highly unlikely that the cost will exceed $13,750 or be less than $11,750.
Now click ‘swap graphs’ to show the cumulative probability or ‘S’ curve. This shows the
probability of the cost being less than the indicated cost.
For example, the probability of the cost being less than $13,000 is 80%. To determine the
amount for 85% certainty you will need to interpolate on the graph, or select the ‘statistics’
display.
Click on the ‘Cumulative Probabilities’ tab and find the value for 85%, which is $13,049.
Since the original estimate was $1,000 + $1,000 + $5,000 + $2,000 + $3,000 + $500 = $12,500, the contingency allocation for 85% certainty needs to be $13,049 - $12,500 = $549.
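The same analysis can be reproduced outside ProjRisk. The sketch below uses numpy and assumes the task costs are independent and triangularly distributed between minus 10% and plus 15% of their likely values, as specified in Exercise 14; the exact figures will differ slightly from run to run:

```python
# Monte Carlo contingency sketch for the Exercise 14 fixed costs (assumption: costs
# for tasks A-F are independent and triangularly distributed between -10% and +15%
# of the likely value). Results will differ slightly from the ProjRisk run shown above.
import numpy as np

likely = np.array([1000, 1000, 5000, 2000, 3000, 500], dtype=float)  # tasks A-F
low, high = 0.90 * likely, 1.15 * likely

rng = np.random.default_rng(1)
iterations = 10_000
samples = rng.triangular(low, likely, high, size=(iterations, likely.size))
totals = samples.sum(axis=1)

p85 = np.percentile(totals, 85)        # cost with 85% certainty of not being exceeded
base = likely.sum()                    # $12,500 original estimate
print(f"Mean total:        ${totals.mean():,.0f}")
print(f"85th percentile:   ${p85:,.0f}")
print(f"Contingency (85%): ${p85 - base:,.0f}")
```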
10.7.1 Exercise 15
Does a contract exist? Necessary factors present include offer, acceptance, consideration,
legality and capacity. A necessary factor that is absent is definite terms, i.e. “what was the
bargain”?
10.7.2 Exercise 16
What is the contract? The offer (i.e. $50,000) was extinguished by the counter offer. Joe’s
commencement of work is constructive acceptance of the counter offer.
10.7.3 Exercise 17
The issue is whether or not the counter offer of $8,000 remains open for acceptance until the
following day. If it does, it has not been revoked.
What is reasonable in these circumstances?
10.7.4 Exercise 18
Is there a binding contract between XYZ and Mogul? Mogul’s response to XYZ is a counter
offer because of the change in terms. Has the counter offer been accepted? If the postal rule
of acceptance applies here, XYZ’s confirmation of May 1 is effective. Two issues may affect
this rule here:
i) It could be argued that “confirm by return of post” required the communication of
acceptance within a short time to be effective.
ii) Incorrect address, if the fault of XYZ, may be sufficient to overturn the rule. If the fault lay
with Mogul supplying the wrong address, this is not the case.
10.7.5 Exercise 19
If the contract is binding due to a valid postal acceptance, Mogul’s letter of May 12 could be
regarded as a repudiatory breach of contract. XYZ are not bound to accept the repudiation
and in the circumstances are under no duty to mitigate. XYZ may be entitled to complete
performance and claim the contractual sum due, which would presumably be $100,000 as
the price variation clause is excluded from the contract.
This depends upon the test of “substantial interest” in proceeding with the work, rather than
claiming damages.
10.7.6 Exercise 20
Fixit are in breach of contract, thus liable for damages. The test for remoteness of damages
laid down in Hadley v Baxendale applies, i.e. was the loss of profit from losing the armed forces contract within Fixit's knowledge?
There is a duty on Millicent to mitigate her losses. An important issue will be whether or not it
was reasonable for her to reject Fixit’s offer to install a temporary system.
10.7.7 Exercise 21
The fact that the provision is referred to as ‘agreed damages’ will not prevent the Court
finding it to be a penalty if that is its true nature.
In this case damages claimed are $80,000. Applying the formula for 8 weeks’ delay gives
$110,000 under this provision. Likely to be considered a penalty, and thus non-recoverable.
The $10,000 by itself appears to be a penalty. It arises if Fixit is half a day late or 20 days
late, and therefore does not appear to be an assessment of losses in the normal course of
the business.
PROJECT CHARTER
MANAGEMENT AUTHORISATION
DELEGATED AUTHORITIES
PROJECT PLAN
SCOPE DEFINITION
WORK BREAKDOWN STRUCTURE
ORGANISATION STRUCTURE
RESPONSIBILITY MATRIX (optional - depending on complexity)
SCHEDULE
BUDGET
PROJECT CONTROLS
SYSTEM OUTLINES
PROJECT ADMINISTRATION
Filing systems
Document management
Correspondence management
Meetings
REPORTING
Specific Client requirements: e.g., specifically address Project Success Criteria by
defining relevant, measurable, project performance indicators.
RISK MANAGEMENT STRATEGIES
DESIGN MANAGEMENT
Quality Policies
Standards
Tendering strategies
Value Management strategies
PROCUREMENT MANAGEMENT (SUPPLY & INSTALLATION)
Procurement strategies
Quality policies
Standards
Tendering strategies
PROCEDURES
(This example is for a typical capital works project)
PROJECT ADMINISTRATION
Document management
Correspondence management
Meetings
REPORTING
Procedures & formats
SCOPE MANAGEMENT
Briefing
Change control
QUALITY MANAGEMENT
Design ITPs
Design quality verification
Construction ITPs
Construction quality verification
TIME MANAGEMENT
Schedule preparation
Schedule revisions
Monitoring & reporting
COST MANAGEMENT
Estimating
Budgeting
Financial authority
Monitoring & reporting
RISK MANAGEMENT
Risk management processes
CONTRACT MANAGEMENT
Tender & contract documentation
Tendering & tender evaluation
Variation procedures
Contract administration
Appendix A
A1.2 Budget
A budget is a quantitative expression of a plan of action relating to the forthcoming budget
period. It represents a written operational plan of management for the budgeted period. It is
always expressed in terms of money and quantity. It is the policy to be followed during the
specified period for attainment of specified organizational objectives.
In CIMA terminology, a budget is defined as follows: "A plan expressed in money. It is prepared and approved prior to the budget period and may show income, expenditure, and the capital to be employed. May be drawn up showing incremental effects on former budgeted or actual figures, or be compiled by zero-based budgeting."
A sales budget may be analyzed under the following headings: Product, Territory, Types of customers, Salespersons, Period (month, quarter or week).
Illustration -1:
AARK Associates manufactures three products, X, Y and Z, and sells them through three divisions: North, South and West. Sales budgets for the current year, based on the estimates of the Division Managers (Sales), were:
Sales prices are $12, $8 and $10 in all areas. It was found by the market research team that
Product X finds favor with customers, but is under-priced. It is expected that if the price is
increased by $1.00, its sales will not be affected. The price of product Y is to be reduced by
$1.00 as it is overpriced. Product Z is properly priced, but extensive advertisement is
required to increase its sales.
On the basis of the above information, the Divisional managers estimate that the increase in
sales over the current budget will be
It is also expected that there will be a further rise in sales thanks to extensive advertising and
the figures could be
We have to prepare a sales budget along with budgeted and actual sales for the current
year.
AARK Associates
Presented By –
Checked By –
Submitted on –
A1.5.2 Production budget
This shows the quantities to be produced to achieve the sales targets and to keep sufficient inventory. Budgeted production is equal to projected sales plus closing inventory of finished goods minus opening stock of finished goods. It is a forecast of production for the budgeted period and is prepared in physical units. It is necessary to coordinate the
Production Budget with the Sales Budget to avoid imbalance in production. It is an important
budget and forms the basis for preparation of material, labor and factory overhead budgets.
Illustration -2:
The SPG & Co. plans to sell 108,000 units of a certain product line in the first quarter,
120,000 units in the second quarter, 132,000 units in the third quarter, 156,000 units in the
fourth quarter and 138,000 units in the first quarter of the following year. At the beginning of
the first quarter of the current year, there are 18,000 units in stock. At the end of each
quarter the company plans to have an inventory equal to one-sixth of the sales for the next quarter. We have to calculate the number of units that must be manufactured in each quarter of the current year.
Solution:
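The quarterly production figures follow mechanically from the data above; a minimal Python sketch of the calculation:

```python
# Production budget sketch for SPG & Co.: production = sales + closing stock - opening stock,
# where closing stock is one-sixth of the next quarter's sales.

sales = [108_000, 120_000, 132_000, 156_000, 138_000]  # Q1-Q4 this year, then Q1 next year
opening_stock = 18_000                                  # at the start of Q1

for quarter in range(4):
    closing_stock = sales[quarter + 1] // 6
    production = sales[quarter] + closing_stock - opening_stock
    print(f"Q{quarter + 1}: produce {production:,} units")
    opening_stock = closing_stock   # this quarter's closing stock opens the next quarter
```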
Material usage in quantities
Material purchases in quantity and value, including total value
Data available is tabulated below
Solution:
1. Sales Quantity and Value Budget
3. Material Usage Budget (Quantities)
Depreciation 7.4
Material 21.7
Labor 20.4
The fixed expenses remain constant for all levels of production; semi-variable expenses
remain constant between 45% and 60% of capacity, increase by 10% between 65–80% and
by 20% between 80–100% capacity.
We have to prepare a flexible budget for the year and forecast the profit at 60%, 75%, 90%
and 100% of capacity.
Solution:
Flexible Budget for the period
A1.5.10 Cash budget
It is a forecast of cash flows, i.e. receipts and payments, during the budget period. The cash budget may be prepared by one of the following methods:
a. Receipt and payment method – Under this method, all receipts and payments which are
expected during the period should be considered. Accruals and adjustments are
excluded while preparing the cash budget by receipt and payment method. All
anticipated cash receipts are added to the opening balance of cash. The expected cash payments are deducted from this to arrive at the closing balance of cash for the month (a sketch of this calculation follows the list below).
b. Adjusted profit and loss account method – This method is based on the assumption that
profit is equivalent to cash. The adjustments made to arrive at the profit will be added
back, if these adjustments do not involve cash outflow. For example, depreciation on
fixed assets, accrued expenses, etc., will be added back to profit to arrive at the cash
balance available at the closing date. It is used for making long-term cash forecast.
c. Balance sheet method – In this method, a forecast balance sheet is prepared considering
changes in all items (except cash) of balance sheet like fixed assets, plant and
machinery, furniture and fixtures, debtors, share capital, debentures and creditors, etc.
The two sides of balance sheet are balanced and the balancing figure represents
closing balance of cash.
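For the receipt and payment method, the mechanics reduce to a rolling cash balance; the monthly figures in the sketch below are purely hypothetical:

```python
# Receipt and payment method sketch: closing cash = opening cash + receipts - payments.
# Monthly figures below are hypothetical, purely to illustrate the rolling balance.

opening_cash = 50_000
months = [
    ("Jan", 120_000, 110_000),   # (month, expected receipts, expected payments)
    ("Feb", 130_000, 145_000),
    ("Mar", 140_000, 125_000),
]

balance = opening_cash
for month, receipts, payments in months:
    balance = balance + receipts - payments
    print(f"{month}: closing cash balance ${balance:,}")
```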
budget, costs are classified and summarized by types of expenses as well as by
departments. The advantages of a Master Budget are as follows,
Figure A1.1
System concepts relating to traditional and zero-based budgeting
A1.6.1 Advantages
It is a healthy process that promotes self-searching among managers, and is used with the object of finding the most useful alternatives for the available resources of a company. The main advantages are as follows:
All proposals, old and new, compete equally for scarce resources
It drives managers to find cost-effective ways to improve operations
It requires less paperwork than traditional budgeting, because the proposal goes from the bottom all the way to the top, avoiding successive appraisals at various levels of management
It detects deliberately inflated budget requests
It identifies the complete impact of spending money on a particular project
Steps involved in the introduction of zero-base budgeting
Decision units
An organization is divided into decision units. Managers of decision units justify their respective budget proposals. Any basis may be adopted for dividing the organization into decision units: products, markets, customer groups, geographical territories, capital projects. The division of the organization into decision units should be logically linked with organizational objectives.
Identification of ‘decision packages’
Each manager should break down his decision unit into smaller decision packages. A
decision package has been defined as a document that distinctly identifies a function,
operation or an activity. A decision package will be evolved with reference to particular
circumstances and should have the following elements:
Identification of data
Economic benefits
Alternative course of action
Intangible benefits
Ranking decision packages
By ranking decision packages, a company is able to weed out a lot of marginal efforts.
Scarce resources of an organization should be directed at the most promising lines only. The
ranking process is used to establish a rank priority of decision packages within the
organization. During the ranking process managers and their staff will analyze each of the
several decision package alternatives. The analysis allows the manager to select the one
alternative that has the greatest potential for achieving the objective(s) of the decision
package. Ranking is a way of evaluating all decision packages in relation to each other.
Since there are any number of ways to rank decision packages, managers will no doubt
employ different methods. The main point is that the ranking of decision packages is an
important process of Zero-Base Budgeting. A decision package is ranked keeping the
following points in view.
Figure A1.2
Format illustrating the idea of program budgeting
First, resources over the period are identified for the different elements, and then the cost of these elements is aggregated to arrive at the cost of a program. For appraising public projects, program budgeting uses micro-economics. The flow chart in Figure A1.3 illustrates the difference between the traditional budget format and the program budget format.
Figure A1.3
Difference between traditional and program budget format
Program budgeting includes:
Identification of program elements
Allocating resources to programs
Utilizing forecast studies analysis
Quantification of data in considerable detail is necessary because the budgeting consequences of approved programs are anticipated, and actual performance is compared with the budgeted performance of programs. In non-profit organizations it is necessary to identify output indicators because output cannot be measured in terms of money; e.g., for an on-the-job training program, the output indicator may be the number of workers trained, and for a health program, the output indicator may be a lower incidence of disease. Program budgeting does not eliminate the need for a traditional budget, and is very useful for multi-year forecasts. It adds a new dimension to planning, analysis and budgeting. Traditional budgeting emphasizes the methods and means used, whereas program budgeting emphasizes the purposes and objectives of the program.
Performance budgeting presents:
The purposes and objectives for which funds are requested
The costs of the activities proposed for achieving these objectives
Quantitative data measuring the accomplishments
The work performed under each activity
In program budgeting, the principal emphasis is on programs, the identification of their elements and the determination of the cost of each program. In performance budgeting, the stress is on the activities approved by the company and the determination of the cost of each activity.
Performance budgeting requires that budgetary decisions be made by emphasizing
output categories such as goals, purposes, objectives and products or services, instead
of salaries, materials and facilities as in the case of traditional budgeting.
Performance budgeting focuses on future impacts of current major decisions whereas
traditional budgeting is retrospective, i.e., measuring what was done with current means
in estimating the next budget year.
Figure A1.4
Five stages of performance budgeting
The five stages of performance budgeting are shown in Figure A1.4 where the clockwise
arrows represent the activity flow of the system, while the anticlockwise broken arrows
represent the feedback process. The tangential arrows represent the interface with the
outside world.
It provides operating managers with a challenge in implementing the budget
Operating managers feel a sense of responsibility for implementing the budget when they have been a party to the decisions
Operating staff regard the budget as their own goal, and feel a sense of achievement on its completion
Cost center
Responsibility in a cost center is restricted to cost alone. A cost center is a segment of the firm that provides tangible or intangible services to other departments. In manufacturing environments, all production centers and service centers are treated as separate cost centers. CIMA defines a 'cost center' as a production or service location, function, activity or item of equipment whose costs may be attributed to cost units.
Revenue center
The revenue center is the smallest segment of activity or an area of responsibility for which
only revenues are accumulated. A revenue center is a part of that organization whose
manager has the primary responsibility of generating sales revenues but has no control over
the investment in assets or the cost of manufacturing a product. CIMA defines revenue
center as a center devoted to raising revenue with no responsibility for production.
Profit center
CIMA defines a profit center as a part of the business accountable for both costs and revenues. It may be called a Business Center, Business Unit (BU), or Strategic Business Unit (SBU). A profit
center is a segment of activity for which both revenues and costs are accumulated.
Generally, most responsibility centers are viewed as profit centers, taking the difference
between revenues and expenses.
Investment center
CIMA defines an investment center as a profit center whose performance is measured by its return on capital employed. An investment center is a segment of activity held responsible for both profits and investment. For planning purposes, the budget estimate is a measure of the rate of return on investment; for control purposes, performance evaluation is guided by a return-on-investment variance.
Contribution center
A contribution center is an area of responsibility for which both revenues and variable costs are accumulated. CIMA defines a contribution center as a profit center whose expenditure is reported on a marginal or direct cost basis. The main objective of a contribution center manager is to maximize the center's contribution.
Illustration -5:
In a cotton textile mill, the Spinning Superintendent, Weaving Superintendent and Processing Superintendent report to the Mill Manager, who, along with the Chief Engineer, reports to the Director (Technical). The Sales Manager, along with the Publicity Manager, reports to the Director (Marketing), who, along with the Director (Technical), reports to the Managing Director.
The following monetary transactions ($) have been extracted from the books for a particular
period.
A = Adverse; F = Favorable
We have to prepare responsibility accounting reports for the Managing Director, Director
(Marketing), Director (Technical) and Mill Manager.
Solution:
Responsibility Accounting Reports
A. Spinning Superintendent
B. Weaving Superintendent
Labor 600,000 620,000 20,000(A)
C. Processing Superintendent
A. Sales Manager
Expenditure – Traveling 40,000 42,000 2,000(A)
B. Publicity Manager
C. Director Marketing
Case Study – A
I. Product X is produced from two materials: C and D. Data in respect of these materials is
as follows:
During January there is to be an intensive sales campaign and to meet the expected
demand, the production director requires the stocks of materials and product X to be at
maximum level at 31st December.
Data in respect of product X are as follows:
A) From the above data you are required to prepare for the month of December:
Production budget
Purchase budget
B) Calculate the optimal re-order quantities in respect of material C based on data given
above and that:
Maximum level = Re-order level + Re-order quantity – minimum consumption during the period required to obtain delivery.
Re-order level = maximum usage × maximum re-order period.
Minimum level = minimum usage × minimum lead time.
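Applying the formulas as quoted above, with clearly hypothetical usage, lead-time and re-order quantity figures standing in for the material C data:

```python
# Stock control levels for a material, using the formulas quoted above.
# All input figures are hypothetical; substitute the material C data from the question.

max_usage_per_week = 600       # units (hypothetical)
min_usage_per_week = 200       # units (hypothetical)
max_reorder_period = 6         # weeks (hypothetical)
min_reorder_period = 4         # weeks (hypothetical)
reorder_quantity = 3_000       # units (hypothetical re-order quantity)

reorder_level = max_usage_per_week * max_reorder_period
minimum_level = min_usage_per_week * min_reorder_period
maximum_level = (reorder_level + reorder_quantity
                 - min_usage_per_week * min_reorder_period)

print(f"Re-order level: {reorder_level:,} units")    # 3,600
print(f"Minimum level:  {minimum_level:,} units")    # 800
print(f"Maximum level:  {maximum_level:,} units")    # 5,800
```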
II. Shown below is an extract from next year's budget for a company manufacturing two products using only one grade of direct labor.
The stock of finished goods at the beginning of the first quarter is expected to be 3,000 units
of product M and 1,000 units of product N. Stocks of work in progress are not carried.
Inspection is the final operation for product M and it is budgeted that 20% of production will
be scrapped. Product N is not inspected and no rejects occur.
The company employs 210 direct operators working a basic 40-hour week for 12 weeks in
each quarter and the maximum overtime permitted is 12 hours per week for each operator.
The standard direct labor hour content of product M is 5 hours per unit and for product N, 3
hours per unit. The budgeted productivity (efficiency) ratio for the direct operatives is 90%. It
is assumed both M and N are profitable.
Calculate the budgeted direct labor hours required in each quarter of next year and to which
the direct labor available can meet these budgeted requirements. Also suggest alternative
action to minimize the shortfall or surplus of labor hours to achieve each quarter’s sales
budget.
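The calculation logic for part II can be sketched as follows. The scrap rate, efficiency ratio, workforce size, basic week and overtime limit are taken from the case description; the sales and stock figures are hypothetical, since the budget extract itself is not reproduced here.

# Quarterly direct labor hour budget (sketch).
# Scrap, efficiency and capacity parameters are from the case text;
# the sales and stock figures below are hypothetical.

OPERATORS = 210
BASIC_HOURS_PER_QUARTER = 40 * 12        # 40-hour week, 12 weeks
MAX_OVERTIME_PER_QUARTER = 12 * 12       # maximum 12 overtime hours per week
STD_HOURS = {"M": 5, "N": 3}             # standard labor hours per unit
EFFICIENCY = 0.90                        # budgeted productivity ratio
SCRAP = {"M": 0.20, "N": 0.0}            # 20% of M production scrapped at inspection

def production_units(sales, opening_stock, closing_stock, scrap_rate):
    # Good units required, grossed up for inspection losses
    good_units = sales + closing_stock - opening_stock
    return good_units / (1.0 - scrap_rate)

def hours_required(units_by_product):
    std_hours = sum(units * STD_HOURS[p] for p, units in units_by_product.items())
    return std_hours / EFFICIENCY        # actual hours needed at 90% efficiency

# Hypothetical first quarter
units = {
    "M": production_units(sales=9_000, opening_stock=3_000, closing_stock=2_500,
                          scrap_rate=SCRAP["M"]),
    "N": production_units(sales=6_000, opening_stock=1_000, closing_stock=1_200,
                          scrap_rate=SCRAP["N"]),
}
required = hours_required(units)
basic = OPERATORS * BASIC_HOURS_PER_QUARTER
maximum = OPERATORS * (BASIC_HOURS_PER_QUARTER + MAX_OVERTIME_PER_QUARTER)
print(f"Hours required         : {required:,.0f}")
print(f"Basic capacity         : {basic:,.0f}")
print(f"Capacity with overtime : {maximum:,.0f}")
print("Shortfall" if required > maximum else "Within capacity")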
A2 Variance Analysis
A2.1 Introduction
The comparison of actual performance with standard performance reveals the variances. A variance represents a deviation of the actual results from the standard results. There can be cost variances, profit variances, sales value variances, and operational and planning variances.
Variance analysis is an exercise that tries to isolate the causes of variances in order to report to management those situations which can be corrected and controlled by timely action. Variance analysis is used for decision making; for example, an unfavorable price variance for raw materials may lead to a search for an alternative supplier and/or an increase in the price of the product. It is also used for incentives and control: in accordance with the principle of controllability, variances isolate those factors that a manager can control.
Variance analysis should be a continuous process for the following reasons:
Labor rates, salary levels etc., change due to union negotiations, policy decisions or
changes in composition of the work force
Selling price changes
In a multi-product company, product mix changes and different lines have different
margins; the overall profit position will change
Improvement in systems can bring about reduction in costs
Change in level of efforts of operators, supervisors etc. can affect existing cost levels
Investment in new capital equipment and scrapping of old
equipment/processes/methods can affect the operating cost levels
The prices of bought-out material may vary
Changes in product design may change cost-inputs
Changes in organizational structure may affect cost levels
The amount of idle time may change due to holdups, strikes, lockouts and power failure.
Figure A2.1
Variances
Favorable variance = the actual amount < the standard amount. Favorable variances are
credits; they reduce production costs.
Adverse variance = the actual amount > the standard. Adverse/Unfavorable variances are
debits; they increase production costs. This works for each individual cost variance and
when a total variance is computed.
A favorable variance does not necessarily mean it is desirable, nor does an unfavorable
variance mean it is not desirable.
It is for management to analyze all variances to determine their causes.
A2.3.1 Direct material cost variance
Direct material cost variance and its sub-divisions are illustrated in Figure A2.2
Figure A2.2
Direct material cost variance
Model steps
Four model steps are used for calculating the material cost variances:
M1 – Actual cost of material used = Actual quantity of material used × Actual rate.
M2 – Standard cost of material used = Actual quantity of material used × Standard rate.
M3 – Standard cost of material used if it had been used in the standard proportion.
M4 – Standard material cost of output = Standard quantity of material required for the specified output × Standard rate.
Material cost variance
It is the difference between actual cost of material used and standard cost of material
specified for output achieved. Material cost variance arises due to variation in price and
usage of materials. Difference between M1 and M4 will be the material cost variance.
Material price variance
Material price variance is the difference between the actual price paid and the standard price specified for the material. It represents the difference between the standard cost of the actual quantity purchased and the actual cost of those materials. The difference between M1 and M2 is the material price variance. The material price variance is generally regarded as uncontrollable and provides management with information for planning and decision-making purposes. It helps management decide whether to increase the product price, use substitute materials or find other (offsetting) sources of cost reduction.
Material usage or volume variance
It indicates whether or not material was properly utilized and is also referred to as quantity
variance. Material usage or volume variance is the difference between actual quantity used
and standard quantity specified for output. A debit balance of material usage (variance)
indicates that material used was in excess of standard requirements and credit balance
indicates saving in the use of material. Difference between M2 and M4 will be material usage
or volume variance.
Material usage variance consists of material mix variance and material yield variance. The
favorable material usage variance is not always advantageous to the company as it may be
related to an unfavorable labor efficiency variance, e.g. labor may have conserved material
by operating more carefully at a lower output rate.
Material mix variance
It is the difference between the actual composition of the mix and the standard composition
of mixing the different types of materials. Short supply of a particular material is often the
common reason for material mix variance and it is the difference between M2 and M3.
Material yield variance
In certain cases, it is observed that output will be a particular percentage of total input of
material e.g., 80% of the total input of material will be the expected output. If the actual yield
obtained is different from that of the standard yield specified, there will be yield variance.
Difference between M3 and M4 will be the material yield variance.
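A minimal sketch of the M1–M4 model steps for a two-material mix is given below. All quantities, rates and output figures are hypothetical and are used only to show how the price, mix, yield, usage and cost variances interlock.

# Material cost variances via the M1-M4 model steps (hypothetical figures).

std_mix   = {"A": 40, "B": 60}        # standard mix, kg per 100 kg of input
std_rate  = {"A": 20.0, "B": 30.0}    # standard price, $ per kg
std_input_per_unit = 100              # kg of input per unit of output

actual_qty  = {"A": 150, "B": 250}    # kg actually used
actual_rate = {"A": 22.0, "B": 28.0}  # price actually paid, $ per kg
output_units = 3.5                    # units actually produced

total_actual_qty = sum(actual_qty.values())

M1 = sum(actual_qty[m] * actual_rate[m] for m in actual_qty)   # actual qty @ actual rate
M2 = sum(actual_qty[m] * std_rate[m] for m in actual_qty)      # actual qty @ std rate
M3 = sum(total_actual_qty * std_mix[m] / 100 * std_rate[m]     # actual input in std mix
         for m in std_mix)
M4 = output_units * sum(std_input_per_unit * std_mix[m] / 100 * std_rate[m]
                        for m in std_mix)                      # std cost of output

def tag(v):                      # adverse if actual exceeds standard
    return f"{abs(v):,.0f} ({'A' if v > 0 else 'F'})"

print("Price variance (M1 - M2):", tag(M1 - M2))
print("Mix variance   (M2 - M3):", tag(M2 - M3))
print("Yield variance (M3 - M4):", tag(M3 - M4))
print("Usage variance (M2 - M4):", tag(M2 - M4))
print("Cost variance  (M1 - M4):", tag(M1 - M4))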
Illustration –6:
The standard cost of a certain chemical mixture is as under:
Material A – 180 kgs × $20 = $3,600
Material B – 220 kgs × $30 = $6,600
M2 = $10,200
M3 – Standard cost of Material, if it had been in standard proportions.
Standard mix in kgs
Material A = (Weight in actual mix × standard rate of material A per kg × standard mix in kg)/
weight of standard mix = 400 kgs × $20 × 40 kgs/100 kgs = $3,200
Material B = (Weight in actual mix × standard rate of material B per kg × standard mix in kg)/ weight of standard mix = 400 kgs × $30 × 60 kgs/100 kgs = $7,200
M3 = $3,200 + $7,200 = $10,400
M4 – Standard cost of output.
Figure A2.3
Direct wage variance
Model steps for direct wage variances are as follows:
L1 – actual payment made to workers for actual hours worked = Actual hours worked ×
Actual hourly wage rate
L2 – estimated payment if the workers had been paid at standard rate = Actual hours
worked × Standard hourly wage rate
L3 – estimated payment if workers had been used according to the proportions of the standard group, and payment had been made at standard rates. For example, in actual working, Grade-I and Grade-II workers might have been used in the ratio 75:25 instead of the standard ratio of 50:50. Hence this step involves determining the wage payment keeping in view the following points (see Figure A2.3).
L4 – Standard cost of labor hours utilized = Actual hours utilized × Standard hourly rate.
Note: Hours lost due to strikes, power failures etc. should be deducted from the available hours.
L5 – Standard labor cost of output achieved = Standard labor cost per unit × Actual
production.
Direct wage variance represents the difference between actual wages paid and the standard wages specified for the production achieved, and is expressed by the difference between (L1) and (L5).
Wage rate variance
It is due to the difference between the actual wage rate paid and the standard wage rate specified. It represents the difference between the actual payment to the workers for actual hours worked (L1) and the estimated payment if the workers had been paid at standard rates (L2).
Labor gang (mix) variance
It arises when the actual composition of the labor gang differs from the standard composition of that group. The difference between (L2) and (L3) gives rise to the direct wage group (gang) variance.
Labor idle time variance
The difference between labor hours applied and labor hours utilized is the labor idle time
variance difference between (L3) and (L4). The idle time variance is to be calculated when
there is a difference between labor hours applied and labor hours utilized.
Labor yield variance
It is the difference between the actual output of the workers and the standard output of the workers. It is also termed the labor efficiency variance. It can be calculated in two situations:
1. When there is an idle time variance
Yield variance = standard cost of labor hours utilized (L4) – standard labor cost of output achieved (L5).
2. When there is no idle time variance
In that case (L4) equals (L3), and the yield variance is calculated from the difference between (L3) and (L5).
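The L1–L5 model steps can be sketched in the same way. The gang composition, rates, hours and output below are hypothetical; the sketch simply shows how the rate, gang, idle time and yield variances reconcile to the total direct wage variance.

# Direct wage variances via the L1-L5 model steps (hypothetical figures).

std_gang = {"skilled": 6, "semi-skilled": 3, "unskilled": 3}           # standard composition
std_rate = {"skilled": 0.60, "semi-skilled": 0.40, "unskilled": 0.30}  # $ per hour
std_week_hours = 40
std_output_per_week = 800                                              # units

actual_gang = {"skilled": 8, "semi-skilled": 2, "unskilled": 2}
actual_rate = {"skilled": 0.65, "semi-skilled": 0.40, "unskilled": 0.30}
hours_paid, hours_worked = 40, 37                                      # 3 hours lost (idle time)
actual_output = 780                                                    # units

total_workers = sum(actual_gang.values())
std_total     = sum(std_gang.values())

L1 = sum(actual_gang[g] * hours_paid * actual_rate[g] for g in actual_gang)
L2 = sum(actual_gang[g] * hours_paid * std_rate[g] for g in actual_gang)
L3 = sum(std_gang[g] / std_total * total_workers * hours_paid * std_rate[g]
         for g in std_gang)                                            # standard proportions
L4 = sum(std_gang[g] / std_total * total_workers * hours_worked * std_rate[g]
         for g in std_gang)                                            # hours actually utilized
std_cost_per_unit = sum(std_gang[g] * std_week_hours * std_rate[g]
                        for g in std_gang) / std_output_per_week
L5 = std_cost_per_unit * actual_output

def tag(v):
    return f"{abs(v):,.2f} ({'A' if v > 0 else 'F'})"

print("Wage rate variance   (L1 - L2):", tag(L1 - L2))
print("Gang (mix) variance  (L2 - L3):", tag(L2 - L3))
print("Idle time variance   (L3 - L4):", tag(L3 - L4))
print("Yield variance       (L4 - L5):", tag(L4 - L5))
print("Direct wage variance (L1 - L5):", tag(L1 - L5))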
Illustration – 7:
The standard composition and standard rates of a group of workers are as follows:
According to the given specifications, a week consists of 40 hours and the standard output for a week is 1,000 units. In a particular week a group of 13 skilled, 4 semi-skilled and 3 unskilled workers worked, and they were paid as follows:
The production line supervisor’s report recorded that two hours were lost due to breakdown
and actual production was 960 units in that week.
We have to find out –
Solution:
L1 – Actual payment made to workers for actual hours worked
13 skilled 40 hrs $0.600 $312
4 semi-skilled 40 hrs $0.425 $68
3 unskilled 40 hrs $0.325 $39
Total: $419
L2 – Estimated payment if the workers had been paid at standard rates
13 skilled 40 hrs $0.625 $325
4 semi-skilled 40 hrs $0.400 $64
3 unskilled 40 hrs $0.350 $42
Total: $431
L3 – Payment involved if workers had been used according to the proportions of the standard group, and payment had been made at standard rates
10 skilled 40 hrs $0.625 $250
5 semi-skilled 40 hrs $0.400 $80
5 unskilled 40 hrs $0.350 $70
Total: $400
L4 – Standard cost of labor hours utilized (38 hours actually worked, 2 hours having been lost to the breakdown)
10 skilled 38 hrs $0.625 $237.50
5 semi-skilled 38 hrs $0.400 $76.00
5 unskilled 38 hrs $0.350 $66.50
Total: $380.00
Variances
Figure A2.4
Variable overhead variance
Model steps are shown below:
VO1 – Actual overhead incurred.
VO2 – Actual hours worked at standard variable overhead rate
= Standard variable overhead rate per hour × Actual hours worked.
VO3 – Standard variable overhead for the production
= Standard variable overhead per unit × Actual production.
Variable overhead variance
It is the difference between actual overhead incurred during the period and standard variable
overhead for production i.e. VO1 – VO3. Variable overhead variance can also be determined
by taking the aggregate of variable overhead expenditure and variable overhead efficiency
variance.
Variable overhead expenditure variance
It is the difference between actual variable overhead and standard variable overhead
appropriate to the level of activity attempted. Variable overhead expenditure variance is the
difference between VO1 and VO2.
Variable overhead efficiency variance
Variable overhead efficiency variance is the difference between actual hours worked at
standard variable overhead rate (VO2) and standard variable overhead for the
production (VO3).
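A minimal sketch of the VO1–VO3 steps, using hypothetical figures:

# Variable overhead variances via the VO1-VO3 steps (hypothetical figures).

actual_variable_overhead = 12_600.0    # $ incurred
actual_hours_worked      = 6_000.0
std_rate_per_hour        = 2.0         # standard variable overhead rate, $ per hour
std_hours_per_unit       = 5.0
actual_production        = 1_150.0     # units

VO1 = actual_variable_overhead
VO2 = actual_hours_worked * std_rate_per_hour
VO3 = actual_production * std_hours_per_unit * std_rate_per_hour

def tag(v):
    return f"{abs(v):,.0f} ({'A' if v > 0 else 'F'})"

print("Expenditure variance (VO1 - VO2):", tag(VO1 - VO2))
print("Efficiency variance  (VO2 - VO3):", tag(VO2 - VO3))
print("Variable OH variance (VO1 - VO3):", tag(VO1 - VO3))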
Illustration – 8:
Following information is obtained from a pre-cast concrete slab manufacturing company for
the year 1999.
From the above data we have to find out the following variances.
Variances
Figure A2.5
Fixed overhead variance
Fixed overhead variance arises when a company uses the absorption standard costing
system. In this system, a standard rate is ascertained for fixed overheads by dividing the
total fixed overhead by an appropriate base e.g. machine hours, units, labor etc. Fixed
overheads incurred differ from standard allowance for fixed overheads or standard fixed
overheads for production for various reasons. These give rise to different kinds of fixed
overhead variances (see Figure A2.5).
FO1 – Actual fixed overhead incurred.
FO2 – Budgeted fixed overhead for the period or std. fixed overhead allowance. It represents
the amount of fixed overhead which should be spent according to budget during the period.
The amount of standard allowance for fixed overhead does not change due to change in
volume.
FO3 – Fixed overhead for the days/hours available at standard rate during the period. It is
calculated by multiplying days/hours available and standard overhead rate.
FO4 – Fixed overhead for actual hours worked at standard rate.
FO5 – Standard fixed overhead for production. It can be calculated in two ways:
Unit method – multiply the actual production by the standard fixed overhead rate per unit.
Hour method – multiply the actual production expressed in standard hours by the standard fixed overhead rate per hour.
Fixed overhead variance is the difference between actual fixed overhead incurred and
standard cost of fixed overhead absorbed and is calculated from the difference
between FO1 and FO5.
Fixed overhead volume variance arises due to the difference between budgeted fixed
overhead for the period and standard fixed overhead for actual production. This variance
indicates the degree of utilization of plant and facilities when compared to the budgeted level
of operation.
Fixed overhead volume variance consists of:
Calendar variance or idle time variance
Calendar variance is the difference between budgeted fixed overhead and fixed overhead
for days available during the period, at standard rate i.e. the difference
between FO2 and FO3.
Idle time variance is determined almost in the same way as calendar variance. The
difference between FO2 and FO3 is computed in hours as per budget and hours actually
available during the period.
Capacity variance
It is the difference between capacity utilized and planned capacity or available capacity.
Capacity variance is calculated from the difference between FO3 and FO4. An adverse capacity variance indicates underutilization and will lead to an unabsorbed balance of fixed overhead.
Efficiency variance
Efficiency variance reflects increased or reduced output arising due to the difference
between budgeted or standard efficiency and actual efficiency in utilization of fixed common
facilities. It is the barometer by which management comes to know how efficiently or
inefficiently fixed indirect facilities or services are being used and is the difference
between FO4 and FO5.
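A minimal sketch of the FO1–FO5 steps on an hourly basis, using hypothetical figures:

# Fixed overhead variances via the FO1-FO5 steps, hour basis (hypothetical figures).

budgeted_fixed_overhead = 24_000.0
budgeted_hours          = 12_000.0
std_rate_per_hour       = budgeted_fixed_overhead / budgeted_hours   # $2 per hour

actual_fixed_overhead   = 25_000.0
available_hours         = 11_600.0     # hours available after calendar losses
actual_hours_worked     = 11_900.0
std_hours_of_output     = 11_700.0     # actual output expressed in standard hours

FO1 = actual_fixed_overhead
FO2 = budgeted_fixed_overhead
FO3 = available_hours * std_rate_per_hour
FO4 = actual_hours_worked * std_rate_per_hour
FO5 = std_hours_of_output * std_rate_per_hour

def tag(v):
    return f"{abs(v):,.0f} ({'A' if v > 0 else 'F'})"

print("Expenditure variance (FO1 - FO2):", tag(FO1 - FO2))
print("Calendar variance    (FO2 - FO3):", tag(FO2 - FO3))
print("Capacity variance    (FO3 - FO4):", tag(FO3 - FO4))
print("Efficiency variance  (FO4 - FO5):", tag(FO4 - FO5))
print("Volume variance      (FO2 - FO5):", tag(FO2 - FO5))
print("Fixed OH variance    (FO1 - FO5):", tag(FO1 - FO5))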
Illustration – 9:
We have to calculate fixed overhead variances from the following cost data of Naturextracts
Ltd.
Solution
Calculation of variances
Variances
Fixed Overhead Expenditure Variance = FO1 – FO2 = $8000 (A)
Fixed Overhead calendar variance = FO2 – FO3= $16000 (F)
Fixed Overhead Capacity Variance = FO3 – FO4 = $8800 (F)
Fixed Overhead Efficiency Variance = FO4 – FO5 = $18480 (A)
Fixed Overhead Variance= FO1 – FO5 = $1680 (A)
Fixed Overhead Volume Variance = FO2 – FO5 = $6320 (F)
Analysis
The variance analysis has already been discussed. The terms Two-Variance, Three-Variance and Four-Variance methods are not separate methods; they simply indicate the extent to which variances are analyzed in a particular organization.
A2.3.5 Material Cost Variances and Two-Variance, Three-Variance and
Four-Variance Approach
Two-variance
The term two-variance indicates that the analysis is restricted to two factors, i.e. price and
quantity only. It can be graphically illustrated as shown in Figure A2.6.
Figure A2.6
Two-variance
Then:
Actual cost = PA × QA
Standard cost = PS × QS
Total cost variance = PA × QA – PS × QS
where PA and QA denote the actual price and quantity, and PS and QS the standard price and quantity.
If the two-variance approach refers to material cost variance, then only material price
variance and material quantity variance will be determined.
Three-variance
In this approach the material cost variance, i.e. PA × QA – PS × QS, is made up of the areas A + B + C in the figure, and the following variances are calculated:
a) Material price variance b) Material quantity variance c) Material mix variance.
Four-variance
The four-variance approach takes the analysis still further, and the area represented by A + B + C in the above figure is divided as follows.
We have to analyze the overhead variances and summarize those results according to the
‘Two-way’, ‘Three-way’ and ‘Four-way’ approach.
Solution:
Calculation of Fixed Overhead Variances
(Please refer to Illustration 9 for details.)
FO1 = $2,500; FO2 = $2,400; FO3 = $2,592; FO4 = $2,640; FO5 = $2,600
Variances
Fixed Overhead Expenditure Variance = FO1 – FO2 = $100 (A)
Fixed Overhead calendar variance = FO2 – FO3= $192 (F)
Fixed Overhead Capacity Variance = FO3 – FO4 = $48 (F)
Fixed Overhead Efficiency Variance = FO4 – FO5 = $40 (A)
Fixed Overhead Variance = FO1 – FO5 = $100 (F).
Fixed Overhead Volume Variance = FO2 – FO5 = $200 (F)
Variable overhead variance
Variances
Variable overhead expenditure variance = VO1 – VO2 = $ 100 (A).
Variable overhead efficiency variance = VO2 – VO3 = $ 200 (A).
Variable overhead variance = VO1 – VO3 = $ 300 (A).
The statement below shows the overhead variances under ‘Two-way’, ‘Three-way’ and
‘Four-way’ methods.
*In three-variance analysis, Fixed Overhead Capacity Variance will include Fixed Overhead
Calendar Variance also.
Some variances arise from factors that could not be planned for in advance. Extraneous factors such as inflation arise outside the organization, are uncontrollable, and may require changes to a plan at short notice.
These factors invalidate the conventional variance analysis and necessitate the use of
operating and planning variances.
Operational and planning variances are subdivisions of the material total variance that replace the traditional usage and price variances. They isolate the part of the variance caused by unforeseen circumstances (the planning variance) from the part that reflects non-standard performance (the operational variance). The same approach may also be applied to labor and overheads. The following example illustrates the approach:
Standard quantity of material specified for the output in the period: 20,000 kg
Original standard price: $2 per kg
Actual material purchased and used: 21,000 kg
Actual purchase price paid: $2.80 per kg, owing to a material shortage
At the end of the period, a price of $3 per kg was agreed to have been an efficient buying price in the period. The standard costing system shows a direct material total variance of $18,800 (A), made up of:
o Material usage variance $2,000 (A)
o Material price variance $16,800 (A)
We need to distinguish between controllable and uncontrollable effects on performance.
Solution:
Material actually purchased and used at revised standard cost (21,000 × $3) = $ 63,000
Operational Usage variance (Controllable), (b-c) = $3,000 (A)
Planning Price variance (Un-Controllable), (c-d) = $ 20,000 (A)
Operational Price variance (Controllable) = 21,000 × ($3.00 – $2.80) = $4,200 (F)
Direct material total variance = Planning variance + Operational variances
= 20,000 (A) + 3,000 (A) + 4,200 (F)
= $18,800 (A).
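The split can be reproduced as follows. The original standard price of $2 per kg is not stated explicitly above but is implied by the $2,000 (A) traditional usage variance on the 1,000 kg of excess usage.

# Planning/operational split of the direct material total variance,
# using the figures of the example above. The original standard price
# of $2/kg is implied by the $2,000 (A) traditional usage variance.

std_qty            = 20_000    # kg specified for the output achieved
actual_qty         = 21_000    # kg purchased and used
original_std_price = 2.00      # $ per kg
revised_std_price  = 3.00      # $ per kg, efficient price agreed ex post
actual_price       = 2.80      # $ per kg actually paid

planning_price    = std_qty * (revised_std_price - original_std_price)
operational_usage = (actual_qty - std_qty) * revised_std_price
operational_price = actual_qty * (actual_price - revised_std_price)
total             = planning_price + operational_usage + operational_price

def tag(v):
    return f"{abs(v):,.0f} ({'A' if v > 0 else 'F'})"

print("Planning price variance    :", tag(planning_price))     # 20,000 (A)
print("Operational usage variance :", tag(operational_usage))  # 3,000 (A)
print("Operational price variance :", tag(operational_price))  # 4,200 (F)
print("Direct material total      :", tag(total))              # 18,800 (A)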
Causes of different variances are given here:
Variances require prompt communication to management; delay may render the analysis useless. Cost control efforts are generally aided by ratio analysis techniques. Management prefers to compute a number of ratios pertaining to liquidity, profitability, capital structure etc., rather than work only with absolute figures. Variance ratios (Figure A2.7) help in comparing different periods and in highlighting abnormalities.
The following variance ratios are commonly used:-
Figure A2.7
Variance ratios
Illustration – 12
The following data is available in the books of GKW Ltd.
The period relates to 4 weeks, and there was a special one-day holiday due to a national event. We have to calculate the following ratios: Efficiency Ratio, Activity Ratio, Calendar Ratio, Standard Capacity Usage Ratio, Actual Capacity Usage Ratio and Actual Usage of Budgeted Capacity Ratio.
Solution:
Efficiency Ratio = 7,000 hrs. × 100 / 6,000 hrs. = 116.7%.
Activity ratio = 7,000 hrs. × 100 / 6,400 hrs. = 109.4%.
Calendar ratio = {(5 days × 4 weeks) – 1} × 100 / (5 days × 4 weeks) = 95.0%.
Standard Capacity Usage Ratio = 6,400 hrs. × 100 / 8,000 hrs. = 80.0%.
Actual Capacity Usage Ratio = 6,000 hrs. × 100 / 8,000 hrs. = 75.0%.
Actual Usage of Budgeted Capacity Ratio = 6000 hrs. × 100 / 6,400 = 93.75%.
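The ratios above can be reproduced as follows, using the conventional formula definitions and the figures implied by the workings (budgeted hours 6,400; actual hours worked 6,000; standard hours of actual production 7,000; maximum possible hours 8,000; 19 of 20 budgeted working days).

# Control ratios of Illustration 12, using the figures implied by the
# workings above and the conventional ratio definitions.

budgeted_hours      = 6_400
actual_hours_worked = 6_000
std_hours_produced  = 7_000          # actual production in standard hours
max_possible_hours  = 8_000
budgeted_days       = 5 * 4          # 5-day week, 4 weeks
actual_days         = budgeted_days - 1   # one-day national holiday

def pct(a, b):
    return round(100.0 * a / b, 2)

print("Efficiency ratio              :", pct(std_hours_produced, actual_hours_worked), "%")
print("Activity ratio                :", pct(std_hours_produced, budgeted_hours), "%")
print("Calendar ratio                :", pct(actual_days, budgeted_days), "%")
print("Standard capacity usage ratio :", pct(budgeted_hours, max_possible_hours), "%")
print("Actual capacity usage ratio   :", pct(actual_hours_worked, max_possible_hours), "%")
print("Actual usage of budgeted cap. :", pct(actual_hours_worked, budgeted_hours), "%")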
Case studies – B
I. PRAX Inc. manufactures a single product for which the following information is available:
The following extracts were taken from the actual results and reports for two consecutive
accounting years.
We have to:
Interpret and explain every variance of Period I, showing the underlying calculations.
Calculate the sub-variances of the fixed overhead volume variance for Period II, assuming that a suggestion is adopted that such a subdivision would aid management control.
II. A jewelry manufacturing company planned to manufacture 5,000 components in the month of September. Each component requires four standard hours to complete. For the 22 working days in the month, 120 workers were allocated to the job. The factory operates an 8-hour day. The worker allocation included a margin for absenteeism and idle time.
Actual results were analyzed and it was revealed that absenteeism had averaged 5%, an
average of 5 hours overtime had been worked by those present and 4,800 components had
been completed at an average of 3.8 standard hours each.
During October, 120 of the completed components have been scrapped because of
defective material. It is also planned to produce another 5,000 units plus the shortfall from
September. From the information given, you are required to:
Calculate the following ratios for September,
Estimate the manpower required for November production using 21 working days with
one hour overtime per man per day, working at the budgeted efficiency level and the
same percentage of absenteeism, idle time and rejects as occurred in the September
production.
Calculate the bonus for November, on a 50:50 profit sharing basis, which could be paid
as an addition to the wage rate if the September production efficiency was achieved, idle
time reduced to 5% but all other features were the same as in (c) above.
A3 Cost reporting
A3.1 Common forms of reports
Narrative reports– These are descriptive and verbal reports.
Statistical reports – These reports rely on tables, numbers, graphs, charts, etc.
Periodic reports – Reports may be issued on regular scheduled basis, e.g. daily, weekly,
monthly, quarterly and annually.
Progress reports – Interim reports between the start and completion of a project and also
called follow-up reports.
Special reports – Generally these reports are sent irregularly in response to a specific, non-routine request.
Control reports – A subordinate summarizes the activities under his jurisdiction and accounts to his superiors for the results that he has previously committed himself to achieve. The practice of 'reporting by results' is primarily for management control purposes.
Top management
Middle Management
Operating Management
A4 Value management
The inquisitive mind is never satisfied with things as they are and is always looking at ways
to make and do things better. It is considered that everything can be improved and value
analysis is an outcome of this philosophy. It has been defined as an organized creative
approach that emphasizes efficient identification of unnecessary cost, i.e., cost that provides
neither quality, nor use, nor life, nor appearance, nor a customer’s satisfaction. It was
applied as a method to improve value in existing products by Lawrence D. Miles of General Electric in 1947. Initially, value analysis was used principally to identify and eliminate unnecessary costs. However, it is equally effective in increasing performance and in addressing resources other than cost, and as it evolved the application of VA widened beyond products into services, projects and administrative procedures.
Value Management (VM) has evolved out of previous methods based on the concept of
value and functional approach. It is a style of management particularly dedicated to
motivating people, developing skills and promoting synergies and innovation, with the aim of
maximizing the overall performance of an organization.
Use value – the monetary measure of the functional properties of the product or service
which reliably accomplish a user’s needs.
Esteem value – the monetary measure of the properties of a product or service which
contribute to its desirability or salability. Commonly answers the ‘How much do I want
something?’ question.
Cost value – the monetary sum of labor, material, burden, and other elements of cost
required to produce a product or service.
Exchange Value – the monetary sum at which a product or service can be freely traded
in the marketplace.
Value objectives
The overarching goals that define project success.
What is value management (VM)?
VM is a team-based managerial approach which identifies the performance required to satisfy business and stakeholder needs, and determines the most appropriate solutions to deliver that performance. Performance includes business needs, quality, image, social benefits and revenue generation. Value management is a collection of processes or efforts by which organizations can proactively pursue one or more identified project value objectives.
Traditionally, this involves a formalized team decision-making and problem-solving process.
The Value Management Approach involves three root principles:
Figure A2.8
Concept of value
The value can be improved by influencing the performance and resources variables in a
number of ways:
Here P and R denote functional performance and resources (life-cycle cost/NPV) respectively; value is commonly expressed as the ratio P/R.
It is important to realize that Value may be improved by increasing the satisfaction of need
even if the resources used in doing so increase, provided that the satisfaction of need
increases more than the increase in use of resources.
A4.3 Purpose
VM is the systematic application of recognized techniques used by a multi-disciplined team
to: identify the function of a product or service, establish a worth for that function, generate
alternatives through the use of creative thinking, and provide the needed functions to
accomplish the original purpose of the project. This should be accomplished reliably and at
the lowest life-cycle cost without sacrificing safety, necessary quality, and environmental
attributes of the project.
Analyze Information
Brainstorm Function
Organize Function
Generate ideas for alternatives
Evaluate alternatives and develop proposals
Appraise options
Recommend solutions
Figure A2.9
The value methodology job plan
The VM Job Plan covers three major periods of activity: Pre-Study, the Value Study, and
Post-Study. All phases and steps are performed sequentially.
Pre-study
Preparation tasks involve six areas:
Information
Function Analysis
Creativity
Evaluation
Development
Presentation
Information stage
The objective of the Information Phase is to complete the value study data package started
in the Pre-Study Work and if a ‘site’ visit was not possible during Pre-Study, it should be
completed during this phase. The purpose is to establish a common understanding of the
project.
Users and stakeholders are identified in this stage. The scope statement is reviewed for any
adjustments due to additional information gathered during the Information Phase. Gathering and sharing project information, background, constraints etc. are also part of the process.
Function analysis stage
Function definition and analysis is the heart of Value Methodology. It is the primary activity
that separates Value Methodology from all other ‘improvement’ practices. The objective of
this phase is to develop the most beneficial areas for continuing study where project
functions or objectives are analyzed to improve value by considering 'What should this do?' rather than 'What is this?', focusing on functions rather than on the product.
The team performs the following steps:
Identify and define both work and sell functions of the product, project, or process under
study using active verbs and measurable nouns. This is often referred to as Random
Function Definition.
Classify the functions as basic or secondary
Expand the functions identified in step 1 (optional)
Build a function Model – Function Analysis System Technique (FAST) diagram. The
function analysis captures requirements diagrammatically into a FAST diagram which is
a logical method that identifies hierarchy.
Assign cost and/or other measurement criteria to functions
Establish worth of functions by assigning the previously established user/customer
attitudes to the functions
Compare cost to worth of functions to establish the best opportunities for improvement
Assess functions for performance/schedule considerations
Select functions for continued analysis
Refine study scope (see Figure A2.10)
Figure A2.10
FAST diagram
Creative stage
The principal objectives of the Creative Phase are to harness the multidisciplinary team's
experience and knowledge while developing a large quantity of ideas. This is a creative type
of effort, totally unconstrained by habit, tradition, negative attitudes, assumed restrictions,
and specific criteria. The teams are encouraged to think creatively to generate alternative
ideas for achieving the project functions without judgment. The quality of each idea will be
developed in the next phase, from the quantity generated in this phase.
There are two keys to successful speculation:
First, the purpose of this phase is not to conceive ways to design a product or service,
but to develop ways to perform the functions selected for study.
Secondly, creativity is a mental process in which past experiences are combined and
recombined to form new combinations. The purpose is to create new combinations
which will perform the desired function at less total cost and improved performance than
was previously attainable.
Methods of creative stage: Brainstorming, Crawford Slips.
Evaluation stage
The principal objective of the Evaluation Phase is to analytically judge and evaluate the
ideas/options generated at the Creative Stage against the performance criteria/functions, in
order to select the ideas/ options with worthwhile potential.
The process typically involves several steps:
Eliminate nonsense or ‘thought-provoker’ ideas.
Group similar ideas by category within long term and short term implications. Examples
of groupings are electrical, mechanical, structural, materials, special processes, etc.
Have one team member agree to ‘champion’ each idea during further discussions and
evaluations. If no team member volunteers, the idea or concept is dropped.
List the advantages and disadvantages of each idea.
Rank the ideas within each category according to the prioritized evaluation criteria using
such techniques as indexing, numerical evaluation, and team consensus.
If competing combinations still exist, use the decision analysis matrix to rank mutually
exclusive ideas satisfying the same function.
Select ideas for development of value improvement.
If none of the final combinations appear to satisfactorily meet the criteria, the value study
team returns to the Creative Phase.
Development stage
The objective of the Development Phase is to refine the key ideas/option, to select and
prepare the ‘best’ alternative(s) for improving value. The data package prepared by the
champion of each of the alternatives should provide as much technical, cost, and schedule
information as practical. This ensures that a designer and project sponsor(s) make an initial
assessment concerning their feasibility for implementation. The following steps are included:
Beginning with the highest ranked value alternatives, develop a benefit analysis and
implementation requirements, including estimated initial costs, life-cycle costs, and
implementation costs taking into account risk and uncertainty.
Conduct performance benefit analysis.
Compile technical data package for each proposed alternative:
o written descriptions of original design and proposed alternative(s)
o sketches of original design and proposed alternative(s)
o cost and performance data, clearly showing the differences between the original
design and proposed alternative(s)
o any technical back-up data such as information sources, calculations, and literature
o schedule impact
Prepare an implementation Plan, including proposed schedule of all implementation
activities, team assignments and management requirements.
Complete recommendations including any unique conditions to the project under study
such as emerging technology, political concerns, impact on other ongoing projects,
marketing plans, etc.
Implementation stage
The objective of the Implementation Stage is to obtain concurrence and a commitment from
the designer, project sponsor, and other management to proceed with implementation of the
recommendations. This involves an initial oral presentation followed by a complete written
report. As the last task within a value study, the VM study team presents its
recommendations to the decision making body. Through the presentation and its interactive
discussions, the team either obtains approval to proceed with implementation, or direction
for additional information needed. The written report documents the alternatives proposed
with supporting data, and confirms the implementation plan accepted by management.
Specific organization of the report is unique to each study and organization requirements.
Post study
The objective during Post-Study activities is to assure the implementation of the approved
value study change recommendations. Assignments are made either to individuals within the
VM study team, or by management to other individuals, to complete the tasks associated
with the approved implementation plan. While the VM Team Leader may track the progress
of implementation, in all cases the design professional is responsible for the implementation.
Each alternative must be independently designed and confirmed, including contractual
changes if required, before its implementation into the product, project, process or
procedure. Further, it is recommended that appropriate financial departments conduct a post
audit to verify to management the full benefits resulting from the value methodology study.
Benefits of value management include:
Better business decisions by providing decision makers with a sound basis for their choice
Improved products and services to external customers by clearly understanding, and
giving due priority to their real needs
Enhanced competitiveness by facilitating technical and organizational innovation
A common value culture, thus enhancing every member’s understanding of the
organization’s goals
Improved internal communication and common knowledge of the main success factors
for the organization
Simultaneously enhanced communication and efficiency by developing multidisciplinary
and multitask teamwork
Decisions which can be supported by the stakeholders.
Function Analysis is a common language, crossing all technologies. It allows multi-
disciplined team members to contribute equally and communicate with each other while
addressing the problem objectively without bias or preconceived conclusions. As an effective
management tool, FAST can be used in any situation that can be described functionally.
However, FAST is not a panacea; it is a tool that has limitations which must be understood if
it is to be properly and effectively used.
FAST is a system without dimensions – that is, it will display functions in a logical sequence,
prioritize them and test the dependency, but it will not tell how well a function should be
performed (specification), when (not time oriented), by whom, or for how much. However,
these dimensions can be added to the model. Which dimensions to use is dependent on the
objectives of the project. There is no ‘correct’ FAST model, but there is a ‘valid’ FAST model.
Its degree of validity is directly dependent on the talents of the participating team members,
and the scope of the related disciplines they can bring to bear on the problem. The single
most important output of the multi-disciplined team engaged in a FAST exercise is consensus.
There can be no minority report. FAST is not complete until the model has the consensus of
the participating team members and adequately reflects their inputs.
Figure A2.11
FAST components
A. Scope of the problem under study
Depicted as two vertical dotted lines, the scope lines bound the problem under study, or that
aspect of the problem with which the study team is involved.
B. Highest order function(s)
The objective or output of the basic function(s) and subject under study, is referred to as the
highest order functions, and appears outside the left scope line, and to the left of the basic
functions. Any function to the left of another on the critical path is a 'higher' order function.
C. Lowest order function
These functions to the right and outside of the right scope line represent the input side that
‘turn on’ or initiate the subject under study and are known as lowest order functions. Any
function to the right of another function on the critical path is a 'lower' order function. The
term ‘higher’ or ‘lower’ order functions should not be interpreted as relative importance, but
rather the input and output side of the process. As an example, ‘receiving objectives’ could
be the lowest order function with ‘satisfying objectives’ being the highest order function. How
to accomplish the ‘satisfy objectives’ (highest order function) is therefore the scope of the
problem under study.
D. Basic function(s)
Those function(s) to the immediate right of the left scope line representing the purpose or
mission of the subject under study.
E. Concept
All functions to the right of the basic function(s) describe the approach elected to achieve the
Basic function(s). The ‘concept’ either represents existing conditions or proposed approach.
Which approach to use (current or proposed) is determined by the task team and the nature
of the problem under study.
F. Objective or specifications
Objective or specifications are particular parameters or restrictions which must be achieved
to satisfy the highest order function in its operating environment. Although they are not
functions by themselves, they may influence the concept selected to best achieve the basic
function(s), and satisfy the user’s requirements.
G. Critical path function(s)
Any function on the How or Why logic is a critical path function. If a function along the Why
direction enters the basic function(s) it is a major critical path, otherwise it will conclude in an
independent (supporting) function and be a minor critical path. Supporting functions are
usually secondary, and exist to achieve the performance levels specified in the objectives or
specifications of the basic functions, or because a particular approach was chosen to
implement the basic function. Independent functions (above the critical path) and activities
(below the critical path) are the result of satisfying the When question.
H. Dependent functions
Starting with the first function to the right of the basic function, each successive function is
‘dependent’ on the one to its immediate left or higher order function, for its existence. That
dependency becomes more evident when the How question and direction is followed.
I. Independent (or supporting) function(s)
Functions that do not depend on another function or method selected to perform that
function. Independent functions are located above the critical path function(s), and are
considered secondary, with respect to the scope, nature and level of the problem, and its
critical path.
J. Function
This is the end or purpose that a ‘thing’ or activity is intended to perform, expressed in a
verb-noun form.
K. Activity
An activity is the method selected to perform a function. In recent years activities are not normally shown on the FAST diagram, but are instead used in the analysis to determine when to stop listing functions; i.e. if, when defining functions, the next connection is an activity, then the team has defined the functions to their lowest level.
Therefore, today’s teams place the independent functions both above and below the major
critical path. To those who are system oriented, it would appear that the FAST diagram is
constructed backwards, because in systems terms the ‘input’ is normally to the left side, and
the ‘output’ to the right. However, when a method to perform a function on the critical path is
changed, it affects all functions to the right of the changed function, or stating it in function
analysis terms, changing a function will alter the functions dependent upon it. Therefore, the
How (reading left to right) and Why (reading right to left) directions are properly oriented in
FAST logic.
Standard symbols and graphics
These symbols and graphics have become the accepted standard over the past 20 years.
The four primary directions in a FAST diagram are (see Figure A2.12):
Figure A2.12
Standard symbols and graphics
The HOW and WHY directions are always along the critical path, whether it be a major or
minor critical path. The WHEN direction indicates an independent or supporting function (up)
or activity (down). At this point, the rule of always reading from the exit direction of a function
in question should be adopted so that the three primary questions HOW,
WHY and WHEN are answered in the blocks indicated below in Figure A2.13.
Figure A2.13
Standard symbols and functional directions
HOW is (function) to be accomplished? By (B)
WHY is it necessary to (function)? So you can (A)
WHEN (function) occurs, what else happens? (C) or (D)
The answers to the three questions above are singular, but they can be multiple (AND), or
optional (OR).
ALONG THE CRITICAL PATH - AND
'AND' is represented by a split or fork in the critical path. In both examples the fork is read as 'AND'. In Example A: how do you 'Build System'? By 'Constructing Electronics' AND 'Constructing Mechanicals'.
In Example B: how do you 'Determine Compliance Deviations'? By 'Analyzing Design' AND 'Reviewing Proposals'. However, the way the split is drawn, Example A shows 'Constructing Electronics' and 'Constructing Mechanicals' as equally important, while in Example B 'Analyzing Designs' is shown as more important than 'Reviewing Proposals' (see Figure A2.14).
Figure A2.14
Along the critical path – ‘And’
Along the critical path - ‘OR’
'OR' is represented by multiple exit lines indicating a choice.
Using Example A, the answer to the question, how do you ‘convert bookings’, is by
‘Extending bookings’ OR ‘Forecast Orders’. When going in the ‘Why’ direction one path at a
time is followed. Therefore: ‘Why’ do we ‘Extend Bookings’? So that you can ‘Convert
Bookings’. Also, why do you ‘Forecast Orders’? So that you can ‘Convert Bookings to
Delivery'. The same process applies to Example B, except that, as in the AND example, 'Evaluate Design' is noted as being less important than 'Monitor Performance' (see Figure A2.15).
Figure A2.15
Along the critical path – ‘OR’
‘AND’ Along the WHEN Direction
For WHEN functions, which apply to independent functions and activities, AND is indicated by connecting boxes above and/or below the critical path functions (see Figure A2.16).
Figure A2.16
‘And’ along the when direction
The above example states ‘When you influence the customer’, you ‘Inform the customer’
AND ‘apply skills’. If it is necessary to rank or prioritize the AND functions, those closest to
the critical path function should be the most important. It would appear that the same ‘fork’
symbol should be used to express AND in the WHEN as well as the HOW-WHY direction,
giving example A this appearance (see Figure A2.17).
Figure A2.17
However, to do so would cause some graphic problems in multiple AND functions, in
addition to building minor critical paths, such as (see Figure A2.18):
Figure A2.18
‘OR’ Along the WHEN Direction
OR is indicated by ‘flags’ facing right or left (see Figure A2.19).
Figure A2.19
‘OR’ Along the WHEN direction
Although Example B is an independent function (above the critical path) and Example C is
activities (below the critical path), the WHEN, OR rules are equally applicable to both.
Locating the ‘flags’ to the left or right of the vertical line bears on how we read back into the
critical path function.
In Example C, working from below the critical path function, it reads: When you ‘request
work’, you ‘order components’ OR ‘order collating’. Since the blocks are below the critical
path, they are activities. In reading activities back, the flags face the HOW direction and the
question reads: HOW do you ‘order Components’ OR ‘order collating’? By ‘requesting work’.
Once again the graphic considerations modified the OR notations as seen on the critical
path. Since OR is expressed in this form on the critical path:
Figure A2.20
It would appear that OR in the WHEN direction should follow this convention. However, the
same problem in building minor critical paths from the support functions would occur. Also,
the ‘flag’ OR, reading back into the critical path would be more difficult to express.
Figure A2.21
Other notations and symbols
Other notations and symbols used in expressing ideas and thoughts in the FAST Model are
as follows: Indicates that the network continues, but is of no interest to the team, or does not
contribute to the solution sought.
Figure A2.22
Indicates the Function (F) is further expanded on a lower level of abstraction.
Figure A2.23
Indicates that the line X connects elsewhere on the model, matching the same number.
Figure A2.24
A horizontal chart depicting functions within a project, with the following rules:
The sequence of functions on the critical path proceeding from left to right answers the
questions ‘How is the function to its immediate left performed?’
The sequence of functions on the critical path proceeding from right to left answers the
questions ‘Why is the next function performed?’
Functions occurring at the same time or caused by functions on the critical path appear
vertically below the critical path
The basic function of the study is always farthest to the left of the diagram of all functions
within the scope of the study.
Two other functions are classified:
o Highest Order – The reason or purpose that the basic function exists. It answers the
‘why’ question of the basic function and is depicted immediately outside the study
scope to the left.
o Lowest Order – The function that is required to initiate the project and is depicted
farthest to the right, outside the study scope. For example, if the value study
concerns an electrical device, the ‘supply power’ function at the electrical connection
would be the lowest order function.
Appendix B
Standard costs serve the following purposes:
Establishing budgets
Controlling costs and motivating and measuring efficiencies
Promoting possible cost reduction
Simplifying cost procedures and expediting cost reports
Assigning costs to materials, work-in-process and finished goods inventories
Forming the basis for establishing bids and contracts and for setting selling prices
Standard Costing: According to CIMA (London), ‘Standard costing is a control technique
which compares standard costs and revenues with actual results to obtain variances which
are used to stimulate improved performance.’ Use of standard costing is not confined to
industries having repetitive processes and homogeneous products only. It can be used in
non-repetitive processes like manufacture of automobiles, turbines, boilers and heavy
electrical equipment.
Use of standard costing leads to optimum utilization of men, materials and resources
Its use provides a yardstick for comparison of actual cost performance
Only distinct deviations are reported to management. It helps in the application of the
principle of ‘management by exception’
It is useful to management in discharging functions, like planning, control, decision
making and price fixing
It creates an atmosphere of cost consciousness
It motivates workers to strive for accomplishment of defined targets
It highlights areas, where a probe promises improvement
Its introduction leads to simplification of procedures and standardization of products
It reduces the time required for preparation of reports for pricing, control or quotation
purposes
It helps determine the cost of finished goods immediately after completion
It eliminates much clerical effort in pricing, balancing and posting on stores ledger cards, and stock ledgers can be maintained in terms of quantities only
Its use may encourage action for cost reduction.
Specific advantages
Specific uses of standard costing in an organization are summarized below:
Accounting department is benefited by:
Planning and budgeting, valuation of inventories, cost control, pricing, sales and cost
estimates, developing monthly operating results.
Sales department is benefited by:
Determining and checking selling prices, preparing quotations on special products and
determining the profitability of specific product lines
Standards may be classified as follows:
Based on the time period covered:
Current standards
Basic standards
Normal standards
Based on tightness and looseness:
Ideal standards
Expected or attainable standards
Determination of standards for the various elements of cost is an exercise which requires skill, imagination and experience. The job of setting standards is done by a group represented by the Engineering, Production, Purchasing, Personnel and Cost Accounting departments. The setting of standards can be divided into two categories:
B2 Overhead allocation
B2.1 Introduction
A cost item may have a direct or indirect relationship with the cost objective, i.e., the purpose
or object for which the cost is being ascertained. Based on the direct and indirect
relationship of the cost object, the total cost is supposed to be composed of the following two
major categories: Prime and Overhead costs.
All direct costs are part of the prime cost, which is an aggregate of direct material cost and
direct wages. All indirect costs form a part of the overhead, which is an aggregate of indirect
material cost, indirect wages and costs of indirect services. Therefore overhead is a pool of
indirect costs, i.e., the costs which cannot be identified or linked or attributed or allocated to
the cost objective. CIMA defines overhead/indirect cost as expenditure on labor, materials or
services which cannot be economically identified with a specific saleable cost per unit. The
concept of overhead is shown in Figure B2.1.
Figure B2.1
The overhead concept
When traceability is the basis of cost classification, the terms direct costs and indirect costs
are used. On the other hand, if cost is classified based on elements, prime cost and
overhead are used. Overhead may include some direct costs, which are so small in amount,
that it will be inexpedient to trace them to specific units of production. Screws, bolts, glue
etc., are a few examples. These items can be directly traced to cost units, but the cost
involved may be so insignificant that it would be inexpedient to do so.
Indirect material cost
Indirect material cost is that material cost which cannot be assigned to specific units of
production; it is common to several units of production. A few examples of these are
consumable stores, lubricating oil, cotton waste and small tools for general use. Sometimes
indirect material cost includes direct material cost, which is so small or complex that direct
tracing to specific units is inexpedient; for example, glue, thread, rivets, chalk etc.
Indirect labor cost
Indirect labor cost is that portion of labor cost, which cannot be assigned to any specific units
of production; it is common to several units. Salaries of a foreman, supervisory staff and
works manager, wages for maintenance workers, idle time, and workmen compensation are
some of the examples of indirect labor cost. These, like some direct material cost, are not
assigned to the specific units of production for the sake of expediency. Employees’ social
security charges and unemployment payroll taxes are two examples that fall under this category.
Indirect services cost
A few examples of indirect services are:
Indirect material cost – Glue, thread, nails, rivets, lubricants, cotton waste etc
Indirect labor cost – salaries and wages of foreman and supervisors, inspectors,
maintenance, labor, general labor, idle time etc
Indirect services cost – Factory rent, factory insurance, depreciation, repair and
maintenance of plant and machinery, first-aid, rewards for suggestions for welfare, repair
and maintenance of transport system and apportioned administrative expenses etc.
Administration overhead
The term administration stands for formulation of policy, direction, control and management
of affairs. Administration overheads include the indirect costs incurred for the general
direction of an enterprise as a whole. It encompasses the cost of management, secretarial,
accounting and administrative services, which cannot be related to the separate production,
marketing or research and development functions.
Selling overhead
Given below are a few examples of the different items included in the different groups of selling overhead:
Indirect material cost – Printing and stationary for selling, mailing literature, catalogues,
price lists, samples, free gifts, displays and exhibition material etc
Indirect labor cost – Salaries and commission of salesmen, technical representatives and
sales managers and salaries of selling department etc
Indirect services cost- advertisement expenses, bad debts, collection charges, rents,
rates and insurance of showrooms, cash discount, after-sales service, brokerage,
expenses in making quotation etc
Distribution overhead
Distribution overheads include indirect costs relating to distribution, which is a separate
function like manufacturing, administration and selling. The term distribution stands for
activities connected with sequence of operations that start from making the packed product
available for dispatch and end with making reconditioned returned empty package available
for reuse.
Given below are a few examples of different items included in different groups of distribution
overhead:
Indirect material cost – Packing cases, oil, greases, spare parts etc. for upkeep of
delivery vehicles
Indirect labor cost – Wages of packers, van drivers, dispatch clerk etc
Indirect services cost – Carriage and freight outwards, rent, rates and insurance of
warehouses, maintenance of transport vehicles and running expenses of the same etc
Fixed overhead
Fixed overhead represents indirect cost, which remains constant in total within the current
budget period regardless of changes in volume of activity. This concept of fixed overhead
remains valid within certain output and turnover limits. Fixed overhead does not vary in total.
The incidence of fixed overhead on unit cost decreases as production increases and vice
versa (rent of building, depreciation of plant, machinery and buildings, cost of hospital and dispensary, pay and allowances of managers, secretaries and accountants, canteen expenses, legal fees, audit fees etc.). Fixed cost will be incurred even when no production activity takes
place (see Figure B2.2).
Figure B2.2
Fixed overhead
Fixed overhead per unit changes (decreasing as volume increases), while total fixed overhead remains constant.
There are three types of fixed overhead:
Long-run capacity fixed overhead – these are the expired cost of plant, machinery and
other facilities used
Operating fixed overhead – these overheads are incurred to maintain and operate the
fixed assets; heat and light, insurance and property taxes are examples of fixed
overhead of this category
Programmed fixed overhead – these are cost of special programs approved by
management. The cost of extensive advertising and cost of programs to improve the
quality of the firm’s products are examples of programmed fixed overhead
Fixed overhead is fixed within specific limit relating to time and activity. Fixed overhead is
dependent on the policy of management and the company’s activities. Policy of a particular
management may be opposed to discharging supervisors during lean periods. Accordingly,
supervision will be a fixed overhead.
Variable overhead
Variable overhead represents that part of indirect cost which varies with change in the
volume of activity. It varies in total but its incidence on unit cost remains constant.
The examples of variable overhead are indirect material cost, indirect labor cost, power and
fuel, internal transport, lubricants, tools and spares (see Figure B2.3).
Figure B2.3
Variable overhead
Total variable overhead is increasing whereas variable overhead per unit is constant.
Semi-variable overhead
It is that part of overhead which is partly fixed and partly variable. These overheads show
mixed relationship, when plotted against volume. Semi-variable overheads may remain fixed
within a certain activity level, but once that level is exceeded, they vary without having direct
relationship to volume changes. It does not fluctuate in direct proportion to volume. An
example of semi-variable overhead cost is the earning of an employee, who is paid a salary
of $500 per month (fixed) plus a bonus of $0.5 for each unit completed (variable). If he
increases his output from 1000 units to 1500 units, his earnings will increase from $1000 to
$1250. An increase of 50% in volume brought about only 25% increase in cost.
Figure B2.4 shows the behavior pattern of semi-variable overhead.
Figure B2.4
Semi-variable overhead
Semi-variable overheads present the biggest problem in cost analysis because there is no readily ascertainable relationship between cost and volume. Semi-variable overhead must be segregated into its fixed and variable elements. The following are methods of estimating the fixed and variable components.
Intelligent estimate of individual items
In this method, past overhead data relating to various activity levels are analyzed and
tabulated to show the pattern of overhead relationship with volume. Suitable adjustments are
made for anticipated changes. This approach is simple and inexpensive, but its simplicity is
its inherent weakness. This method lacks scientific basis required for decision making.
High and low method
It is also known as the range method. Here, the expense levels at the highest and lowest activity
levels are compared with one another and related to the output attained in those periods. Since the fixed
element of semi-variable overhead is expected to remain the same in both periods, it is
concluded that the change in the level of expense is due entirely to the variable element. Variable cost
per unit is calculated as follows:
Variable cost per unit = Change in expense between the two levels / Change in output between the two levels
Illustration-1:
Considering highest and lowest levels of output:
E.g., if the semi-variable overhead for October is $300 and the variability element is 60%, the variable
element will be $180 and the fixed element $120.
This method does not rest on a scientific basis, and the determination of the degree of variability may be
influenced by personal bias.
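The split implied by the high and low method can also be computed mechanically. The following is a minimal Python sketch; the expense and output figures used in the call are hypothetical and are not those of Illustration-1, whose table is not reproduced here.

```python
# High and low (range) method: split a semi-variable overhead into its
# fixed and variable elements using the highest and lowest activity levels.

def high_low_split(low_output, low_expense, high_output, high_expense):
    """Return (variable cost per unit, fixed cost) from two activity levels."""
    variable_per_unit = (high_expense - low_expense) / (high_output - low_output)
    fixed_cost = high_expense - variable_per_unit * high_output
    return variable_per_unit, fixed_cost

# Hypothetical figures: 1,000 units cost $700; 1,500 units cost $800.
v, f = high_low_split(1000, 700, 1500, 800)
print(f"Variable cost per unit = ${v:.2f}, fixed cost = ${f:.2f}")
# Variable cost per unit = $0.20, fixed cost = $500.00
```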
Scattergraph method
This is a statistical method in which a line is fitted to a series of data points by visual observation. It is
explained below:
Illustration-2:
Drawing a line of best fit from the following data
Figure B2.5
Scattergraph diagram
The scatter graph is a simple method requiring no complicated formulae (see Figure B2.5). It
shows cost behavior pattern graphically and is easily understood. It has one serious
limitation: fitting the trend line may be influenced by personal bias. The advantage of this
method is speed and simplicity rather than accuracy.
Least square method
Under this method, the ‘line of best fit’ is drawn for a number of observations with the help of a
statistical procedure. The straight-line formula y = mx + c is used, where x and y are the
variables and m and c are constants: c is the fixed element of cost, and m is the degree of
variability (variable cost per unit).
Thus for each period,
y₁ = mx₁ + c
y₂ = mx₂ + c
…
yₙ = mxₙ + c
By addition, Σy = mΣx + Nc …………………………………………...(i)
Again, multiplying both sides of the linear equation y = mx + c by x, we get
x₁y₁ = mx₁² + cx₁
x₂y₂ = mx₂² + cx₂
…
xₙyₙ = mxₙ² + cxₙ
By addition, Σxy = mΣx² + cΣx…………………………………………(ii)
From (i) and (ii) the values of the constants m and c can be obtained and the pattern of the cost line
determined accordingly. Once m and c are known, y can be estimated for any value of x.
Illustration-3:
We have to find out the regression line by least square method. What will the semi-variable
overhead be in July if production level increases to 500 units?
Solution
We know,
Σy = mΣx + Nc …………………………………………...(i)
Σxy = mΣx2 + cΣx…………………………………………(ii)
Substituting the given values in (i) and (ii):
210 = 1,800m + 6c ………………………………………… (iii)
67,000 = 580,000m + 1,800c ……………………………… (iv)
Multiplying (iii) by 300:
63,000 = 540,000m + 1,800c ……………………………… (v)
From (iv) and (v), m = 0.10
Putting the value of m in (iii), c = 5
Putting the values of the constants m and c, the equation of the line reduces to y = 0.1x + 5.
This is the regression line of best fit, where y = semi-variable overhead and
x = volume of production.
If x = 500, y = 0.1(500) + 5 = 55.
Thus, if production level is 500 in July, semi-variable overhead will be $55.
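The same result can be obtained by solving the two normal equations numerically. The sketch below uses only the summary figures quoted in the solution above (Σx = 1,800, Σy = 210, Σxy = 67,000, Σx² = 580,000, N = 6); it is a minimal illustration, not part of the original text.

```python
import numpy as np

# Normal equations of the least squares method:
#   Sigma(y)  = m*Sigma(x)  + N*c
#   Sigma(xy) = m*Sigma(x^2) + c*Sigma(x)
N, sum_x, sum_y, sum_xy, sum_x2 = 6, 1800, 210, 67000, 580000

A = np.array([[sum_x, N],
              [sum_x2, sum_x]], dtype=float)
b = np.array([sum_y, sum_xy], dtype=float)
m, c = np.linalg.solve(A, b)            # m = 0.10, c = 5.0

print(f"y = {m:.2f}x + {c:.2f}")        # regression line of best fit
print("Semi-variable overhead at 500 units:", m * 500 + c)   # 55.0
```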
Limitations of the ‘Line of best fit’
The most important limitation of this method is its assumption of a stable, ongoing
relationship between costs and volume of activity. Even when both cost and volume increase
together, it cannot be inferred that the rise in cost was necessarily caused by the increase in
volume.
Figure B2.6
Distribution of overhead
It is shown here that, under primary distribution, items of overhead are distributed over production
departments and service departments without regard to the distinction between them (see Figures B2.7(a) and (b)).
Figure B2.7(a)
Under secondary distribution, cost of service departments is distributed among production
departments.
Figure B2.7(b)
The above figure shows that absorption of overhead involves distributing the overhead of the
production departments over the units produced.
For the purpose of assignment and distribution, the following terms are relevant:
Allocation – the process of identifying whole items of overhead with specific departments is
termed allocation. For the purpose of allocation, the item of overhead cannot be identified with
specific units of production, but it can be identified with a specific department. For example,
material issued to the repair department cannot be linked with specific units of production,
but this item of overhead can be allocated directly to the maintenance service cost center.
An item of overhead cannot be allocated to a department until the following two conditions are
satisfied:
The concerned department should have caused the overhead item to be incurred
The exact amount of overhead should be known. Original records contain much of the
information needed to allocate the different items of overhead to specific
departments, as illustrated in the following table.
The basis adopted for apportionment in primary distribution should be equitable and
practicable
Charges should be made to different departments in relation to the benefit received
The method adopted for primary distribution should not be time consuming or costly
The following bases are most commonly used for apportioning items of overhead among
production and service departments for primary distribution.
Illustration-4:
The SQC & Co. is divided into four departments. A, B, C are production departments and D
is a service department. The actual costs for the period are as follows:
The following data of four departments is available
Secondary distribution
The process of redistributing the cost of service departments among production departments
is known as secondary distribution. The distinction of production departments and service
departments dominates secondary distribution.
Criteria for secondary distribution
The following bases are available for determining the apportionment of service department
costs among production departments.
Services received
Analysis of survey or survey of existing conditions
The ability to pay basis
Efficiency or Incentive Method
General use indices
Common bases for secondary distribution
The representative lists of bases, which are frequently used for apportioning cost of service
departments among production departments, are shown below;
Figure B2.8
Secondary distribution overhead
Direct redistribution method
In this method, service departments’ costs are apportioned to production departments only,
ignoring the service rendered by one service department to another. Here, the number of
secondary distributions will be equal to the number of service departments.
Illustration 5:
PJ & Co. Ltd. has three production departments and four service departments. The
expenses for these departments as per the primary distribution summary were the following:
The following information is also available in respect of the production departments:
Solution:
Step method
It is a method of secondary distribution on non-reciprocal basis. It gives cognizance to the
service rendered by one service department to another service department. In this method,
there is no two-way distribution of costs between two service departments. E.g., a portion of
power plant cost may be distributed to the tool room, because power plant provides service
to tool room. But no part of the tool room cost is distributed to the power plant, even if the
tool room renders service to it. The cost of the service department which serves the largest
number of other departments is distributed first. After this, the cost of the service department
serving the next largest number of departments is apportioned, and the process continues until
the cost of the last service department is apportioned among the production departments only.
Illustration 6:
A manufacturing company has two production departments, X and Y, and three service
departments – stores, time-keeping and maintenance. The departmental distribution
summary showed the following expenses for January 2003.
Apportion the cost of service departments to production departments keeping in view that
the company makes secondary distribution on non-reciprocal basis.
Note:-
1. Bases taken for apportionment of cost of various service departments are
The cost of Dept. X and Y will be $2,638.29 and $3,191.49 respectively.
Repeated distribution method
It is also known as the continued distribution or attrition method. Service department costs
are distributed to other service departments and to production departments on agreed
percentages. This process is repeated until the figures for the service departments
are too small to be considered for further apportionment.
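The mechanics of the repeated distribution method lend themselves to a simple loop. The following is a minimal sketch; the departments, figures and agreed percentages are hypothetical and are not those of Illustration 7.

```python
# Repeated distribution of two service departments (S1, S2) over three
# production departments (P1, P2, P3) on agreed percentages.
# All figures and percentages below are hypothetical.

costs = {"P1": 8000.0, "P2": 6000.0, "P3": 4000.0, "S1": 2000.0, "S2": 1000.0}

# Each service department's cost is redistributed on these percentages.
shares = {
    "S1": {"P1": 0.30, "P2": 0.30, "P3": 0.20, "S2": 0.20},
    "S2": {"P1": 0.40, "P2": 0.30, "P3": 0.20, "S1": 0.10},
}

while max(costs["S1"], costs["S2"]) > 0.01:      # repeat until negligible
    for svc in ("S1", "S2"):
        amount, costs[svc] = costs[svc], 0.0
        for dept, pct in shares[svc].items():
            costs[dept] += amount * pct

print({d: round(c, 2) for d, c in costs.items() if d.startswith("P")})
```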
Illustration 7:
Clive & Co. has three production departments and two service departments. Departmental
distribution summary for the month of January 2003 is given below.
Distribution of service department cost under repeated distribution method has to be
ascertained.
Solution:
Secondary Distribution Summary
January 2003
B2.4 Overhead absorption
Overhead absorption is the exercise by which overheads are properly spread over
production for the period. The selection of the correct method of overhead absorption is very
important for pricing, tenders and other managerial decisions. Overhead absorption is
accomplished by means of overhead rates.
The overhead absorbed in a period is found by multiplying the overhead rate by the total
number of units of the base for the period:
Overhead absorbed = Overhead rate × Units of the base used in the period
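In other words, once a rate has been fixed, absorption is a simple multiplication. A minimal sketch with hypothetical figures:

```python
# Overhead rate = overhead to be absorbed / units of the chosen base
# Overhead absorbed = overhead rate x units of the base actually used

budgeted_overhead = 120000.0       # hypothetical budgeted overhead for the period
budgeted_machine_hours = 24000.0   # hypothetical base (machine hours)

overhead_rate = budgeted_overhead / budgeted_machine_hours   # $5 per machine hour

actual_machine_hours = 22500.0
overhead_absorbed = overhead_rate * actual_machine_hours     # $112,500
print(overhead_rate, overhead_absorbed)
```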
The main objectives of determining overhead rates are given below.
The disadvantages of the use of the actual overhead rate are as follows:
Actual overhead rate cannot be determined until the end of the period.
Seasonal and cyclical influences cause wide fluctuations in actual overhead cost and
actual volume of activity.
Due to frequent changes in product cost, the cost comparison for different periods is
very difficult.
Pre-determined overhead rate
Advantages:
Disadvantages:
These rates can be determined only after the end of the accounting period.
Use of supplementary rates requires a lot of clerical labor and cost.
Where normal rates are used, application of a supplementary rate defeats the basic
concept of normal cost.
Normal overhead rate
This overhead rate is based on the concept that actual cost is not necessarily the best
criterion of true cost. Proper overhead will be charged to production when the overhead rate is
linked with normal capacity, i.e., the normal overhead rate is a pre-determined rate determined
with reference to normal capacity.
Disadvantages:
Overhead rates for service departments are used when secondary distribution is not done,
i.e., costs of service departments are not allocated or apportioned to production
departments.
• Separate rate for each cost center
These rates are useful in making a comparative study of cost behavior of different
cost centers.
• Separate rate for fixed and variable overhead
• Separate rates for applying the material-related part, the labor-related part and the facility-related
part of overhead cost.
Figure B2.9
Basic capacity concept
Capacity based on sales expectancy – The capacity based on sales expectancy may be fixed
either above or below normal capacity. It is always less than practical
capacity (see Figure B2.10).
Figure B2.10
Capacity based on sales expectancy
While normal capacity considers the long-term trend analysis of sales, which is based on
sales of a cycle of years, the capacity based on sales expectancy is based on sales for the
year only. It is influenced more by general economic conditions and forecast of industry than
long term sales trends. The main advantages of determining overhead rate based on sales
expectancy are:
bottlenecks in certain departments. Overhead rate includes cost of idle capacity; excess
capacity is excluded from overhead rate consideration.
Illustration 9:
Modern Electricals Ltd. manufactures motor engine parts at the rate of 2 units per hour. The
factory normally operates 6 days a week on a single eight- hour shift. During the year it is
closed on 16 working days due to holidays. Equipment is idle for 160 hours for cleaning,
oiling, etc. Normal sales demand has averaged 3,000 units a year over a five-year period. The
expected sales volume for the year 2003 was 2,800 units. Capacity actually utilized in 2003
turned out to be 1,400 units. The fixed cost is $ 110,376 per year. We have to calculate the
idle capacity costs assuming that overhead rates are based on maximum capacity, practical
capacity, normal capacity and expected actual capacity respectively.
Solution:
Statement showing idle capacity and machine hour rate for 2003
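The capacity figures and rates in the statement can be reconstructed as follows. This is a minimal sketch; it assumes a 52-week working year, which is not stated explicitly in the illustration.

```python
# Machine hour rates under different capacity concepts (Illustration 9).
# Assumes a 52-week year; 6 days/week, one 8-hour shift, 2 units per hour.
fixed_cost = 110376.0

max_hours = 52 * 6 * 8                       # 2,496 hours (maximum capacity)
practical_hours = max_hours - 16 * 8 - 160   # less holidays and idle time = 2,208
normal_hours = 3000 / 2                      # 1,500 hours (normal sales demand)
expected_hours = 2800 / 2                    # 1,400 hours (expected actual capacity)
actual_hours = 1400 / 2                      # 700 hours actually utilized

for name, hours in [("Maximum", max_hours), ("Practical", practical_hours),
                    ("Normal", normal_hours), ("Expected actual", expected_hours)]:
    rate = fixed_cost / hours
    idle_capacity_cost = (hours - actual_hours) * rate
    print(f"{name:15s} rate = ${rate:6.2f}/hr, "
          f"idle capacity cost = ${idle_capacity_cost:,.2f}")
```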
Production Unit Method – Actual or predetermined overhead rate is calculated by dividing
the overhead to be absorbed by the number of units produced or expected to be produced.
Advantage: It is very good for concerns having single product output.
Disadvantage: Not suitable where different products are manufactured. It is not time based and
therefore ignores time-related overheads.
Percentage on Direct Wages – This percentage is determined by dividing the overhead to be
absorbed by direct wages and multiplying the result by hundred.
Advantage: It is very simple and economical. Labor can be a better basis than material for
determining overhead.
Disadvantage: Ignores contribution made by other factors of production like machinery. It
ignores time-related overheads such as taxes, insurance and depreciation. It is not related to
the efficiency of a worker.
Percentage on Direct Material Cost – This percentage is computed by dividing the overhead
to be absorbed by direct material cost incurred or expected to be incurred and multiplying
the result by hundred.
Advantage: It is simple and easy to understand. This method is useful, when grades of
materials and prices of materials do not widely fluctuate and material cost forms a major part
of total cost.
Disadvantages: There exists no logical relationship between items of manufacture, overhead
and material cost. It ignores the items of overhead linked with time factor, such as rent, etc.
This percentage method is inequitable, when a part of the material passes all processes and
a part of material passes only through some processes. If prices differ widely, the products
made from materials with high prices will be charged with more than their share of overhead.
Percentage on Prime Cost – The percentage is computed by dividing the overhead to be
absorbed by prime cost incurred or expected to be incurred and multiplying the result by
hundred.
Advantages: Simple and easy to use, since all the data is available and no additional
information is required. It is useful in those cases, where there are no wide fluctuations in
processing.
Disadvantages: No logical relationship between items of overhead and prime cost. It ignores
the items of overhead linked with time factor. The use of prime cost for determining the
overhead rate further confuses overhead absorption.
Direct Labor Hour Method – When this method is used, the actual or pre-determined overhead rate
is determined by dividing the overhead to be absorbed by labor hours expended or expected
to be expended. It is a time based method of overhead absorption and is sometimes
preferred to the direct wage method.
Advantages: It is very useful in labor-intensive situations and is not affected by output-related
remuneration schemes.
Disadvantages: This method ignores other factors of production like machines and will not be
a good method of overhead absorption where machines represent the prime production
factor. It requires the collection of additional data (hours worked), translating into extra clerical
labor and cost.
Machine Hour Rate – When this method is used, actual or predetermined overhead rate is
calculated by dividing overhead to be absorbed by the number of hours for which a machine
or machines are operated or expected to operate. It is used when production is machine
based and recognized as the most reliable method of overhead absorption.
The machine hour rate is of three types:
Ordinary Machine Hour Rate – It is calculated by taking into account all the indirect
expenses which are directly attributable to the machine. Expenses fall into two categories.
Proportionate to the running time of the machine i.e., power, fuel, repairs and
maintenance and depreciation, known as machine expenses.
Not having any relation to operating time i.e. insurance, taxes, lubricants etc.
Composite Machine Hour Rate –This rate takes into account not only the expenses directly
connected with the machine as mentioned earlier, but also other expenses like supervision,
rent, lighting, heating, etc. These indirect expenses are known as standing charges. Hence
standing charges plus ordinary machine hour rate gives the composite machine hour rate.
Group Machine Hour Rate – It is determined for a cost center which comprises a specific
machine or a group of machines. This method is applicable where identical machines are
grouped as a machine center such as a turning shop, milling shop, drilling shop, welding
shop, etc. All direct expenses are allocated to the cost center and all indirect expenses are
apportioned to each group of machines on an appropriate basis.
Illustration 10:
We have to find out the machine hour rate for the month of Jan, 2003, to cover overhead
expenses given below (relating to a machine).
Per annum
It is assumed that the machine will run for 1,800 hrs per annum and that it will incur $1,125
for repair and maintenance over its life. It is also given that the machine requires 5 units of
power per hour, available at 6 cents per unit, and that its life will be 10 years.
Solution:
Sales Price Method – In this method, the actual or predetermined overhead rate is
determined by dividing the overhead to be absorbed by the sales realized or expected to
be realized. The sales price method is considered very useful for the absorption of administration
overhead and selling and distribution overhead.
Under- or over-absorption of overheads – When a pre-determined overhead rate is used, there
is usually a difference between the overhead absorbed and the overhead incurred
during the period under consideration. If the overhead absorbed is less than the overhead
incurred, the shortfall is termed under-absorbed overhead. If the overhead absorbed (or
charged) is more than the overhead incurred during the period, the excess
absorbed over incurred is termed over-absorbed overhead.
The following methods are often used for disposing of under- or over-absorbed overheads.
Direct Cost,
Works Cost,
Total Cost.
Factory overhead expenditure for the month was $162,000. Selling and distribution cost
should be assumed @ 20% of works cost. Factory overhead expenses should be allocated
to each brand on the basis of units which could have been produced in a month when single
brand production was in operation.
II. A factory has three production departments (two machine shops and one assembly) and
three service departments, one of which (engineering service department), only serves the
machine shops.
We have to;
Note:
Because of special fire risks, machine shop A is responsible for a special loading of
insurance on the building. As a result, machine shop A bears one third of the annual
building insurance premium.
The general services department is located in a building owned by the company. It is
valued at $6,000 and is charged into costs at a notional value of 8% per annum. The
cost is additional to the rent and rates shown above.
The value of issues of material to the production departments are in the same proportion
as shown above for consumable supplies.
The following data is also available:
III. Ganesh Enterprises undertakes three different jobs A, B and C. All of them require the
use of a special machine and also the use of a computer. The computer is hired and the hire
charges work out to $420,000 per annum. The expenses regarding the machine are
estimated as follows;
Rent for the quarter $ 17,500
Depreciation per annum $ 200,000
Indirect charges per annum $ 150,000
During the first month of operation, the following details were taken from the job register:
Number of hours the machine was used: A B C
i) Without the use of Computer 600 900 -
ii) With the use of the computer 400 600 1,000
Here, we have to compute the machine hour rate:-
For the firm as a whole for the month when the computer was used and when the
computer was not used.
For the individual jobs A, B and C.
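One way the machine hour rates for this exercise could be worked out is sketched below, interpreting the machine expenses as monthly proportions of the quarterly and annual figures given; treat the interpretation as an assumption rather than the book's official solution.

```python
# Machine hour rate for the first month, with and without computer use.
machine_expenses_per_month = 17500 / 3 + 200000 / 12 + 150000 / 12  # rent + depreciation + indirect = 35,000
computer_hire_per_month = 420000 / 12                               # 35,000

hours = {"A": {"without": 600, "with": 400},
         "B": {"without": 900, "with": 600},
         "C": {"without": 0,   "with": 1000}}

total_hours = sum(h["without"] + h["with"] for h in hours.values())  # 3,500
hours_with_computer = sum(h["with"] for h in hours.values())         # 2,000

machine_rate = machine_expenses_per_month / total_hours              # $10.00 per hour
computer_rate = computer_hire_per_month / hours_with_computer        # $17.50 per hour

print("Rate without computer:", machine_rate)                  # 10.0
print("Rate with computer   :", machine_rate + computer_rate)  # 27.5
for job, h in hours.items():
    cost = h["without"] * machine_rate + h["with"] * (machine_rate + computer_rate)
    print(f"Job {job}: effective rate = {cost / (h['without'] + h['with']):.2f} per hour")
```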
B3 Labor allocation
Labor cost incurred by a company is divided into two categories, direct and indirect. Labor cost
is treated as direct when:
There is a direct relationship between labor cost and the product or process or cost unit,
Labor cost may be measured in the light of this relationship,
Labor cost is sufficiently material in amount.
Indirect labor costs are those costs which are not identifiable in the production of specific
goods or services. These labor costs are commonly used for production activities. Indirect
labor cost consists of labor costs in service departments such as purchasing, engineering
and time-keeping. Labor cost of certain workers in the production departments will also
come in the category of indirect labor cost (foremen, material expediters and clerical
assistants). Direct labor cost forms a part of the prime cost and indirect labor cost becomes
a part of the overhead.
The importance of the distinction between direct and indirect labor is as follows:
Measurement of efficiency
Preparation of informative labor cost analysis, avoiding serious errors in overhead
allocation.
Controllable and non-controllable labor cost – these terms are often used in budgeting and
variance analysis to describe costs that are either controllable or non-controllable with
reference to a particular department. These terms are also used within particular time
periods to reflect whether costs are controllable in the short run or whether they will require
long-term action. When standard costing system is in vogue, it is important to guide
management activity towards controllable areas of responsibility (see Figure B3.1).
Figure B3.1
Methods of remuneration
Employees have little control over the quantity of output or there is no clear-cut
relationship between effort and output
Work delays are frequent and beyond the control of employees
Quality of work is especially important
Earnings = Hours worked × Rate per hour
= 8 × $0.625 + $0.625 × (16 – 8)
= $10
The performance of Mr. S is below standard, i.e., standard hours = 40 × 0.16 hrs = 6.40 hours.
Earnings = 8 × $0.625 = $5
Taylor System
Merrick System
Taylor differential rate system
Earnings = 80% of piece rate when below standard.
Earnings = 120% of piece rate when at or above standard.
Merrick differential system
It is not as harsh as the Taylor differential piece rate on workers of low efficiency. This
method encourages trainees and developing workers.
Both time rate system and piece work system have got some glaring deficiencies. These
deficiencies are sought to be overcome by adopting schemes which are combination of time
rate and piece work system.
In this method labor cost is high for low production. It is specially recommended for
application in heavy engineering and structural workshop, machine tool manufacturing
industries, contract costing etc.
Illustration 11:
The performance of workers A, B and C in a factory is as follows:
Standard production per day of 8 hours work is 10 units, day wage guaranteed $ 2 per hour,
and bonus rate is 20%. We have to calculate wages of A, B and C under the Gantt Task
Bonus Plan.
Solution:
Halsey plan
Earnings = Hours worked × Rate per hour + (Time saved × Rate per hour × 50/100).
Halsey-Weir plan
Earnings = Hours worked × Rate per hour + (Time saved × Rate per hour × 33.33/100).
Rowan system
Earnings = Hours worked × Rate per hour + (Rate per hour × Hours worked × Time saved/
Time allowed).
Barth sharing plan
Earnings = Rate per hour × √(Standard hours × Hours worked)
Scanlon plan
Earnings (bonus ratio) = Avg. annual salaries & wages × 100 / Avg. annual sales revenue.
Illustration 12:
Two employees, Mr. V and Mr. S, produce the same product using the same material and
their normal wage rate is also the same. Mr. V is paid bonus according to the Rowan
system and Mr. S is paid bonus according to the Halsey system. The time allowed to make the
product is 100 hrs. Mr. V takes 60 hrs while Mr. S takes 80 hrs to complete the product. The
factory overhead rate is $10 per man-hour actually worked. The factory cost for the product
for Mr. V is $7,280 and Mr.S is $7,600.
We have to:
Let x = the cost of material and y = the rate per hour. The factory costs then give:
x + 84y + 600 = 7,280 …………………………………… (I)
x + 90y + 800 = 7,600 …………………………………… (II)
From (I) and (II),
a) y = 20, i.e., Rate per hour = $ 20. (Normal wages)
b) Bonus paid to Mr. V = 24 × 20 = $480.
c) Bonus paid to Mr. S = 10 × 20 = $200.
d) The cost of material, x = 6680 – 84y = $ 5,000.
Comparative statement of the factory cost of the product made by two employees;
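The figures above can be cross-checked with a short calculation. The sketch below recreates the Rowan and Halsey bonus formulas and solves the two factory-cost equations for the wage rate and the material cost; it is a verification aid, not the book's comparative statement.

```python
# Illustration 12 check: Rowan bonus (Mr. V) and Halsey bonus (Mr. S).
time_allowed = 100
hours_v, hours_s = 60, 80
overhead_rate = 10.0            # $ per man-hour actually worked
factory_cost_v, factory_cost_s = 7280.0, 7600.0

def rowan_bonus(rate, worked, allowed):
    return rate * worked * (allowed - worked) / allowed

def halsey_bonus(rate, worked, allowed, share=0.5):
    return share * (allowed - worked) * rate

# Factory cost = material (x) + wages + bonus + overhead:
#   V: x + 60y + 24y + 600 = 7,280  ->  x + 84y = 6,680   ...(I)
#   S: x + 80y + 10y + 800 = 7,600  ->  x + 90y = 6,800   ...(II)
y = ((factory_cost_s - hours_s * overhead_rate) -
     (factory_cost_v - hours_v * overhead_rate)) / 6       # (II) - (I): 6y = 120
x = (factory_cost_v - hours_v * overhead_rate) - 84 * y

print("Rate per hour:", y)                                     # 20.0
print("Bonus V (Rowan):", rowan_bonus(y, hours_v, time_allowed))    # 480.0
print("Bonus S (Halsey):", halsey_bonus(y, hours_s, time_allowed))  # 200.0
print("Material cost:", x)                                     # 5000.0
```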
B3.3.1 Objectives
Analyzing the labor times into direct and indirect labor by departments, jobs, work orders
and processes
Charging direct labor consisting of piece work, overtime and incentive bonus as
production cost
Treatment of indirect labor cost as overhead expenses.
Production causes:
– No power
– Machine breakdown
– Waiting for work
Administrative causes
Economic causes:
– Seasonal
– Cyclical
Industrial reasons
Charge to factory overhead,
Debit to the P & L account.
Overtime
Causes of overtime – scheduling more production and rush orders.
Overtime can be treated as follows:
In addition an overtime premium is paid based on ‘time and one half’ with a basic week of 36
hours. The total minutes earned from the differential scheme are paid for at the time rate of $
2.50 per hour. If piecework earnings are less than hours worked at e time rate then time
earnings are paid. The following data relates to week 10:
We have to calculate the gross earnings for week 10 for each employee and also state why
management and employees consider this remuneration method acceptable.
B4 Costing principles
Costing is the technique, consisting of principles and rules, which governs the procedure of
ascertaining the cost of products and services. Because it emphasizes principles and rules
rather than a particular bookkeeping arrangement, costing can also be carried out within a
system of integrated accounts.
The following are the techniques of costing used in industries for ascertaining the cost of
products and services.
Historical costing – Ascertaining costs after they have been incurred. This costing is
based on recorded data and the costs arrived at are verifiable by past events.
Standard costing – CIMA defines it as ‘a control technique which compares standard
costs and revenues with actual results to obtain variances which are used to stimulate
improved performance.’
Marginal costing – It is the accounting system in which variable costs are charged to cost
units and fixed costs of the period are written-off in full against the aggregate
contribution. Its special value is in decision making.
Direct costing – Under direct costing, only direct costs are assigned to a cost unit. All indirect
costs are charged to the Profit and Loss account of the period in which they arise.
Absorption costing – It is a technique that assigns all costs, i.e., both fixed and variable
cost to product cost or cost of service rendered.
Uniform costing – CIMA defines it as ‘the use by several undertakings of the same
costing system, i.e., the same basic costing methods, principles and techniques.’
Figure B4.1
A sample production order
Job order number: The production planning department assigns each production order a
number, which is called the job order number.
Job Cost Card: The job cost card (or job cost sheet) is designed to collect the cost of materials, labor
and factory overhead applicable to a specific job and is the most important document in a job costing
system (see Figure B4.2).
Figure B4.2
A sample job card
When a job is completed, the cost is totaled on the job cost sheet and is used as the basis
for transferring the cost of a particular job order from work-in-process account to the finished
goods account or the cost of sales account.
Calculates billable amounts using either the percentage markup of cost or the unit price
Allocates overhead on a per job basis, based on labor hours, labor dollars, material
dollars or total job cost; can also record overhead directly to a phase
Allocates employee benefits on a per-job basis, either as a percentage of labor dollars or
as a flat rate per labor hour
Uses job-related labor hours and costs, from Payroll or daily timecard or timesheet
entries
Saves job configurations as templates to simplify bidding and budgeting on subsequent
jobs.
The Job Costing module monitors current jobs with budgeted and actual labor, material, and
overhead rates which can be used in structuring new projects.
The job does not disrupt normal activity levels, which are as follows:
Cooling – Labor hours
Batch Costing: Batch costing is essentially a variation of job costing. Instead of a single job,
a number of similar product units are processed or manufactured in a group as a batch.
Batch costing is used in following situations:
Economic batch size: The size of a batch chosen for a production item is likely to be a critical
factor in achieving a least-cost operation. The economic batch size may be determined by
using the economic production batch size formula (similar to the EOQ formula) (see Figure B4.3).
Figure B4.3
Economic production batch size
Illustration 14:
Asian Paints Company has an annual demand from a single customer for 50,000 liters of a
paint product. The total demand can be made up of a range of colors and will be produced in
a continuous production run after which a set-up of the machinery will be required to
accommodate the color change. The output of each shade of color will be stored and then
delivered to the customer as a single load immediately before producing the next color. The
costs are $100 per set-up (outsourced). The holding costs are incurred on rented storage
space which costs $50 per sq. meter per annum. Each square meter can hold 250 liters,
suitably stacked.
We have to:
Calculate the total cost per year where batches may range from 4,000 to 10,000 liters in
multiples of 1,000 liters and the production batch size which will minimize total cost.
Calculate the batch size using the economic batch size formula for lowest total cost.
Solution:
a)
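A minimal sketch of the calculation, assuming an average stock of half the batch held in rented storage (the usual EOQ-style assumption); it is offered as a check on the formal solution rather than a replacement for it.

```python
import math

annual_demand = 50000                 # liters per year
setup_cost = 100.0                    # $ per set-up
holding_cost_per_liter = 50.0 / 250   # $50 per sq m per annum, 250 liters per sq m = $0.20

def total_cost(batch):
    setups = annual_demand / batch * setup_cost
    holding = batch / 2 * holding_cost_per_liter    # average stock = half the batch
    return setups + holding

for q in range(4000, 11000, 1000):
    print(f"{q:6d} liters: total cost = ${total_cost(q):,.2f}")

ebq = math.sqrt(2 * annual_demand * setup_cost / holding_cost_per_liter)
print(f"Economic batch size ~ {ebq:,.0f} liters")   # about 7,071 liters
```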
B5 Process costing
Process Costing represents a type of cost procedure suitable for continuous and mass
production industries producing homogeneous products. It is a cost accumulation system
appropriate when products are manufactured through a continuous process. Each
department/cost center prepares a cost of production report or performance report.
Overview of process costing
Manufacturing costs are accumulated in processing departments in a process costing
system. A processing department is any location in the organization where work is
performed on a product and where materials, labor, and overhead costs are added to the
product. Processing departments should also have two other features. First, the activity
performed in the processing department should be essentially the same for all units that
pass through the department. Second, the output of the department should be
homogeneous. In process costing, the average cost of processing units for a period is
assigned to each unit passing through the department.
Objectives
To determine how manufacturing costs should be allocated to units completed, or to units started
but not completed, and to compute total unit costs for income determination.
Process costing and job-order costing are similar in several respects:
The same basic purpose exists in both systems in assigning material, labor, and
overhead cost to products
Both systems use the same basic manufacturing accounts: Manufacturing Overhead,
Raw Materials, Work in Process, and Finished Goods
The flow of costs through the manufacturing accounts is basically the same in both
systems.
The differences between the job-order and process costing occur because the flow of units
in a process costing system is more or less continuous and the units are essentially
indistinguishable from one another. Under process costing:
A single homogenous product is produced on a continuous basis over a long period of
time. This differs from job-order costing in which many different products may be
produced in a single period.
Total costs are accumulated by the department, rather than by an individual job
The department production report is the key document showing the accumulation and
disposition of cost, (instead of the job-cost sheet)
The pattern of cost accumulations under process costing is illustrated through a flow chart in
Figure B5.1.
Figure B5.1
Cost accumulations under process costing
o – Closing WIP
Step No. 2 – The inventory costing method to be followed
The effect on the unit cost of the process will differ according to whether the LIFO, FIFO or
average method is used.
Step No. 3 – The stage of introducing the material into the process
Material can be introduced in the beginning, in the middle or at the end of the process. The
stage at which material is introduced will significantly affect the cost per unit of the process.
Step No. 4 – Work done on unfinished units should be expressed in terms of
‘equivalent production’
In order to calculate the average cost per unit, the total number of units must be determined.
Partially completed units pose a difficulty that is overcome using the concept of equivalent
units. Equivalent units are the equivalent, in terms of completed units, of partially completed
units. The formula for computing equivalent units is:
Equivalent units = (Number of partially completed units × Percentage completion)
Equivalent units are the number of complete, whole units one could obtain from the
materials and effort contained in partially completed units.
A company which had no beginning inventory completed 100 units last month. In addition,
the company worked on another 60 units that were 40% complete. In terms of totally
completed units, the amount of effort expended was equivalent to the production of 124 units
(100 + [60 × 40%]).
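A minimal sketch of the equivalent-units calculation, extending the example above to a cost per equivalent unit; the total process cost used is hypothetical.

```python
# Equivalent units = completed units + (partially completed units x % completion)
completed_units = 100
partial_units = 60
completion = 0.40

equivalent_units = completed_units + partial_units * completion   # 124 units

process_cost = 6200.0            # hypothetical total cost charged to the process
cost_per_equivalent_unit = process_cost / equivalent_units        # $50 per equivalent unit
print(equivalent_units, cost_per_equivalent_unit)
```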
Step No. 5 – Determining element-wise details of total cost of process to be
accounted for
This can be done by preparing a statement of cost for each element.
The element-wise details of cost should be collected, and each element’s cost should be divided by the
number of equivalent units to arrive at the cost per unit for that element.
Step No. 6 – Apportionment of process cost
A statement showing the apportionment of process cost is prepared as follows:
Determination of cost of process with no process loss
All material, labor, direct expenses and the proportionate share of overhead are charged to the
process; the cost of the output transferred is treated as the raw material cost of the next process.
Process costing with normal loss
Certain production techniques are of such a nature that some loss is inherent to the
production. If the loss is within the specified limit, it is referred to as normal loss. The cost of
normal loss in process is absorbed as additional cost by good production in the process. If
the loss forms a particular percentage of production in the process, then production is
determined from the following equation:
Production = Opening stock + Units transferred into the process – Closing stock
If the scrap fetches some value then the process cost per unit of the process is determined
as follows;
Cost per unit = (Cost transferred to the process + Additional cost in the process – Scrap
value of units representing normal loss) / (Units in process – Units scrapped)
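The equation above translates directly into a small function; the figures used in the call below are hypothetical and are not taken from the illustration that follows.

```python
def cost_per_unit(cost_transferred, additional_cost,
                  scrap_value_of_normal_loss, units_in_process, units_scrapped):
    """Cost per good unit when a normal loss with scrap value is absorbed by production."""
    net_cost = cost_transferred + additional_cost - scrap_value_of_normal_loss
    good_units = units_in_process - units_scrapped
    return net_cost / good_units

# Hypothetical figures: 2,000 units in process, 10% normal loss sold at $1 per unit.
print(cost_per_unit(5000.0, 1000.0, 200 * 1.0, 2000, 200))   # about $3.22 per unit
```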
Process costing with abnormal loss
Abnormal loss refers to the loss which is not inherent to manufacturing operations. All cost
relating to abnormal loss is debited to abnormal loss account and credited to process cost
account so that the cost, (which could have been avoided according to norms of operations),
is kept separately to facilitate control action to be taken. The following steps are suggested
for valuation of abnormal loss in the process:
Material $4,000
Labor $1,000
The normal loss has been estimated @ 10% of the process input. Units representing normal
loss can be sold @ $1 per unit. Actual production in the process is 1700 units. We have to
prepare process B account.
Notes:
The producer has assessed the cost at $6.77 per kg based on input expenditure and the
finished output. Estimated profit is $36,500 with a selling price at $7.50 per kg. Normal
wastage is to be considered as 5% for each stage and no excess wastage is to be allowed
to inflate the cost of the end product.
Figure B6.1
Activity-based costing system
In order to achieve the major goals of business process improvement, process simplification
and improvement, it is essential to fully understand the cost, time, and quality of activities
performed by employees or machines throughout an entire organization. ABC methods
enable managers to cost out measures for business simplification and process
improvement (see Figure B6.1).
Activity Based Costing is a modeling technique where costs are expressed in terms of
Resources, Activities, and Products. Work (activities) is performed to create products, and
resources are consumed by the work.
Figure B6.2
ABC model
Reasons for the introduction of ABC
Traditional costing is based on the assumption of a relationship between overhead and a
volume-based measure
It often fails to highlight the inter-relationship among activities in different departments
It arbitrarily allocates overhead to the cost objects
The company’s total overhead is allocated to the products based on a volume-based measure,
e.g. labor hours or machine hours
Traditional costing is based on averages and estimation
To determine the ‘true’ cost for a cost object (product, job, service, or customer).
In any business the ‘true’ cost of a product is important
To find an economic break-even point
To compare different options
To discover opportunities for cost improvement
To prepare and actualize a business plan
To improve strategic decision making
Basics of activity based costing
Identify the major activities such as material handling, mechanical insertion of parts,
manual insertion of parts, wave soldering, quality testing etc
Determine the ‘cost drivers’ for each activity. The cost driver is the underlying factor(s)
which causes the incurrence of cost relating to that activity. Cost drivers link activities
and resource-consumption and thereby generate less arbitrary costs for decision making
Create cost pools to collect activity costs having the same cost driver
Attribute the cost of activities in the cost pools to products/services based on the cost
drivers
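The four steps above can be illustrated with a small sketch. All of the activities, cost drivers, products and figures below are hypothetical.

```python
# Activity based costing: trace activity cost pools to products via cost drivers.
# All figures are hypothetical.

cost_pools = {            # activity           : (pool cost, cost driver)
    "material_handling":   (20000.0, "number_of_moves"),
    "machine_set_up":      (15000.0, "number_of_set_ups"),
    "quality_testing":     (10000.0, "number_of_inspections"),
}

driver_volumes = {        # total driver volume for the period
    "number_of_moves": 400, "number_of_set_ups": 50, "number_of_inspections": 200,
}

products = {              # driver consumption by each product
    "Product A": {"number_of_moves": 300, "number_of_set_ups": 10, "number_of_inspections": 50},
    "Product B": {"number_of_moves": 100, "number_of_set_ups": 40, "number_of_inspections": 150},
}

driver_rates = {drv: cost / driver_volumes[drv]
                for _, (cost, drv) in cost_pools.items()}

for product, usage in products.items():
    overhead = sum(driver_rates[drv] * qty for drv, qty in usage.items())
    print(f"{product}: overhead traced = ${overhead:,.2f}")
```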
Illustration 16:
Figure B6.3
Two types of cost drivers – resource drivers and activity drivers (see Figure B6.3)
Identify activities
Determine cost for each activity
Determine cost drivers
Collect activity data
Calculate product cost
Illustration 17:
Traditional costing
A company makes two products: product A and product B
Set-up $10,000
Machining $40,000
Receiving $10,000
Packing $10,000
Engineering $30,000
Number of Deliveries
Engineering Hours
Step-4: Activity data
ABC example
ABC is a powerful tool for measuring business performance, determining the cost of
business process outputs, and is used as a means of identifying opportunities to improve
business process effectiveness and efficiency. Below is a process diagram with the sub-activities
shown in sequential order for the Correspondence process, which comes under the Executive
Secretariat in the Office of the Administrator. For the ABC analysis, the sub-activities were then
analyzed and a cost derived for each (see Figure B6.4).
Figure B6.4
A process diagram
Disadvantages of ABC
B7 Backflush costing
Backflush costing, introduced in February 1991, focuses on output and then works back to apply
manufacturing costs to units sold and to inventories. It is an accounting system that applies
costs to products only when production is complete. Backflush costing attempts to remove
non-value-added activities from costing systems; for many companies the cost of tracking
work-in-process exceeds the benefits, since the inventory of work-in-process is typically small
compared to the cost of goods produced and sold.
Backflush costing is suitable only for a JIT production system with virtually no direct
material inventory and minimal WIP inventories.
B8 Lifecycle costing
For understanding lifecycle costing, it is necessary to understand product life cycle. The
product life cycle starts from the time of initial research and development to the time, at
which sales and support to customers are withdrawn. Lifecycle costing tracks and
accumulates the actual cost attributable to each product from its initial research and
development to its final resourcing and support in the market place. It focuses on total costs
(capital cost + revenue cost) over the products life including design and development,
acquisition, operation, maintenance and servicing. CIMA defines lifecycle costing ‘as the
practice of obtaining over their life-times, the best use of physical assets at the lowest cost to
the entity.’ It is achieved through a combination of management, financial, engineering and
other disciplines. Lifecycle costing emphasizes relating the total lifecycle costs to identifiable
units of performance in order to arrive at the optimum decision.
Example:
We want to compare the cost of different power supply options such as photovoltaic, fueled
generators, or extended utility power lines. The initial costs of these options will be different
as will the costs of operation, maintenance, and repair or replacement. An LCC analysis can
help compare the power supply options. The LCC analysis consists of finding the present
worth of any expense expected to occur over the reasonable life of the system. To be
included in the LCC analysis, any item must be assigned a cost, even though there are
considerations to which a monetary value is not easily attached. For instance, the cost of a
gallon of diesel fuel may be known, the cost of storing the fuel at the site may be estimated
with reasonable confidence, but, the cost of pollution caused by the generator may require
an educated guess. Also, the competing power systems will differ in performance and
reliability. To obtain a good comparison, the reliability and performance must be the same.
This can be done by upgrading the design of the least reliable system to match the power
availability of the best. In some cases, we may have to include the cost of redundant
components to make the reliability of the two systems equal. For instance, if it takes one
month to completely rebuild a diesel generator, we should include the cost of a replacement
unit in the LCC calculation. A meaningful LCC comparison can only be made if each system
can perform the same work with the same reliability.
that may occur once or twice during the life of a PV system. Normally, these costs occur in
specific years and the entire cost is included in those years.
The salvage value (S) of a system is its net worth in the final year of the lifecycle period. It is
common practice to assign a salvage value of 20 percent of original cost for mechanical
equipment that can be moved. This rate can be modified depending on other factors such as
obsolescence and condition of equipment.
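The present-worth arithmetic described above can be sketched as follows. The cash flows, system life and discount rate used are hypothetical.

```python
# Lifecycle cost (LCC): sum of the present worth of all expected costs,
# less the present worth of the salvage value. All figures are hypothetical.

def present_worth(amount, discount_rate, year):
    return amount / (1 + discount_rate) ** year

discount_rate = 0.08
capital_cost = 12000.0                    # year 0
annual_om = [600.0] * 20                  # operation & maintenance, years 1-20
replacement = {10: 3000.0}                # a component replaced in year 10
salvage = 0.20 * capital_cost             # 20% of original cost, realized in year 20

lcc = capital_cost
lcc += sum(present_worth(c, discount_rate, yr) for yr, c in enumerate(annual_om, start=1))
lcc += sum(present_worth(c, discount_rate, yr) for yr, c in replacement.items())
lcc -= present_worth(salvage, discount_rate, 20)

print(f"Lifecycle cost (present worth) = ${lcc:,.2f}")
```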
B9 Marginal costing
CIMA defines marginal cost as ‘the cost of one unit of product or service which would be
avoided if that unit were not produced or provided.’ Marginal costing is not a distinct method
of costing like job costing or process costing. It is a technique which provides presentation of
cost data in such a way that a true cost-volume-profit relationship is revealed. It is an
accounting system in which variable cost is charged to the cost units and fixed costs of the
period are written-off in full against the aggregate contribution. Its special value is in
decision-making. It is presumed that costs can be divided in two categories, i.e., fixed and
variable cost.
B9.2 Break-even point
The break-even point is the level of sales at which a company makes neither profit nor loss. The
marginal costing technique is based on the idea that the difference between sales and the variable cost of
sales provides a fund, which is referred to as contribution. At the break-even point, the
contribution is just enough to provide for fixed cost. If the actual sales level is above the break-even
point, the company will make a profit. If actual sales are below the break-even point, the company
will incur a loss. When the cost-volume-profit relationship is presented graphically, the point at
which total cost line and total sales line intersect each other will be the break-even point (see
Figure B9.1).
Figure B9.1
Break-even point
P/V ratio = Change in contribution / Change in sales
We have to calculate:
P/V Ratio,
Break-even sales,
Sales to earn profit of $ 1,000
Profit at sales of $ 6,000
New break-even sales, if sales price is reduced by 10%.
Solution
Sales – Variable costs = Fixed costs + Profit
Multiplying and dividing the left-hand side by S:
S × (S – V)/S = F + P
or, S × P/V ratio = Contribution (= F + P)
At break-even sales, contribution is equal to fixed cost, so:
Break-even sales × P/V ratio = Fixed cost
By putting the values: S × 40/100 = 800, S = $2,000 or 100 units.
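The break-even relationships used in the solution can be reproduced with a few lines. The selling price of $20 and variable cost of $12 per unit are inferred from the figures above (a 40% P/V ratio and break-even sales of $2,000, i.e. 100 units); treat them as assumptions rather than quoted data.

```python
# Break-even and P/V ratio calculations (marginal costing).
selling_price = 20.0      # inferred: $2,000 break-even sales = 100 units
variable_cost = 12.0      # inferred from the 40% P/V ratio
fixed_cost = 800.0

pv_ratio = (selling_price - variable_cost) / selling_price       # 0.40
break_even_sales = fixed_cost / pv_ratio                          # $2,000

sales_for_1000_profit = (fixed_cost + 1000) / pv_ratio            # $4,500
profit_at_6000_sales = 6000 * pv_ratio - fixed_cost               # $1,600

new_price = selling_price * 0.9                                   # price cut by 10%
new_pv_ratio = (new_price - variable_cost) / new_price            # 1/3
new_break_even_sales = fixed_cost / new_pv_ratio                  # $2,400

print(break_even_sales, sales_for_1000_profit,
      profit_at_6000_sales, new_break_even_sales)
```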
Illustration 18:
ABC Ltd. is considering renting additional factory space to make two products, L-1 & L-2.
Company’s monthly budget is as follows:
The fixed overheads in the budget can only be avoided if neither product is manufactured.
Facilities are fully interchangeable between products.
As an alternative to the manual production process assumed in the budget, the company
has the option of adopting a computer-aided process. This process would cut variable costs
of production by 15% and increase fixed costs by $12,000 per month.
The management is confident about the cost forecasts, but there is considerable uncertainty
over demand for the new products. The management believes the company will have to
depart from its usual cash sales policy in order to sell L-2. An average of three months credit
would be given and bad-debts and administration costs would probably amount to 4% of
sales revenue for this product. Both the products will be sold at the price assumed in the
budget. The company has a cost of capital of 2% per month. No stocks will be held.
We have to calculate:
The sales revenue at which operations will break-even for each process (manual and
computer-aided) and the sales revenues at which ABC Ltd. will be indifferent between
the two processes.
If L-1 alone is sold;
If L-1 & L-2 units are sold in the ratio 4:1, with L-2 being sold on credit
We need to explain the implications of our results with regard to the financial viability of
L-1 and L-2.
Solution:
Manual Production Computer Aided
Break-even point (sales revenue) = Break-even point in unit × Average selling price per
unit.
Average sales revenue per unit = {(4 × 20) + (1 × 50)}/5 = $26
If L-1 alone is sold, budgeted sales are 4,000 units and break-even sales are 6,000 units
(computer-aided process) and 6,300 units (manual process). Therefore, there is little point in
producing L-1 on its own. Even if the two products are substitutes, total budgeted sales are
6,000 units and L-1 is still not worth selling on its own. Only if sales of L-1 alone could reach
$180,000 (the total budgeted sales revenue) would L-1 be worth selling on its own. However, the
presumption that the products are perfect substitutes and that $180,000 of sales can be generated is
likely to be over-optimistic. In other words, a single-product policy is very risky. Launching a
both-products policy is a profitable alternative. Based on the margin of safety, it is better to sell
L-2 in preference to L-1; the margin of safety is 1,375 units of L-1 (34%) and 688 units of L-2 (34%).
It is recommended that both products be sold and the computer-aided process be adopted.
Between 600 and 900 units per month if the price is $ 25 per unit
Between 900 and 1,250 units per month if the price is $ 22 per unit.
The cost of parts required would be $14 per completed component. However, if more than
1,000 units can be sold each month, a discount of 5% would be received from the parts’
suppliers on all purchases. Assembly cost would be $6,000 per month for assembly of up to
750 components. Beyond this level of activity, costs would increase to $7,000 per month. He
has already spent $3,000 on development, which he would write off over the first five years
of the venture on a straight line basis.
Calculate for each of the possible sales levels whether that person could expect to benefit by
going into business on his own. Also calculate the break-even point of the venture for each
of the selling prices.
II. XYZ Inc. manufactures four products, namely A, B, C and D, using the same plant and
process. The following information relates to a production period:
However, investigation into the production overhead activities for the period reveals the
following total:
It is required
To compute an overhead cost per product using activity based costing, tracing
overheads to production units by means of cost drivers
To comment briefly on the differences disclosed between overheads traced by the
present system and those traced by activity based costing.
The first step in developing an estimate is defining the estimating task and planning the work
to be accomplished. The definition and planning stage includes determining the ultimate use
of the estimate, understanding the level of detail required, outlining the total characterization
of the system being estimated, establishing ground rules and assumptions, selecting the
estimating methodologies, and finally, summarizing all of these in an estimating plan. The
task definition and planning is an integral part of any estimate; it represents the initial work
effort and provides the framework for achieving a competent estimate efficiently. The
purpose of the estimate is determined by its ultimate use, which in turn will influence the
level of detail required and the scope it encompasses.
Scope of the estimate
The scope provides boundaries for the development of an estimate. It describes the breadth
of the analysis and provides a time frame for accomplishment. Several factors drive the
scope of the estimate:
Purpose or mission,
Physical characteristics,
Performance characteristics,
Maintenance concept, and
Identification of similar projects.
After learning how the estimate is to be used, the level of detail required, and the character
of the project being estimated, the estimator is in a better position to establish major ground
rules and assumptions (i.e., the conditions upon which an estimate will be based). Ground
rules usually are considered directive in nature and the estimator has no choice but to use
them. In the absence of firm ground rules, assumptions must be made regarding key
conditions which may impact the cost estimate results. The project schedule, if one exists, is
an example of a ground rule. If a schedule does not exist, the estimator must assume one.
Selecting the estimating methodologies to be employed is probably the most difficult part of
planning the estimate since methodology selection is dependent on data availability.
Therefore, the estimating methodologies selected during this planning stage may have to be
modified or even changed completely later on if the available data do not support the
selected technique. It is still helpful, however, to specify desired estimating methods
because doing so provides the estimator with a starting point.
It is important to understand that task definition and planning is an integral part of any
estimate. It represents the beginning work effort and sets the stage for achieving a
competent estimate efficiently.
B10.2.2 Research, collect, and analyze data
During the data research and analysis step, the estimator fine-tunes his estimating plan.
Planned methodologies may turn out to be unusable due to lack of data. New
methodologies may have to be developed or new models acquired. Cost research may
reveal better methodologies or analogies than those identified in the original plan. During this
step, also, the estimator normalizes the data so that it is useable for the estimate.
During the process of data research, collection, and analysis, the estimating team should
adopt a disciplined approach to data management. The key to data research is to narrow the
focus in order to achieve a viable database in the time available to collect and analyze it.
Data collection should be organized, systematic, and well documented to permit easy
updating. The objective of data analysis is to ensure that the data collected are applicable to
the estimating task at hand and to normalize the data for proper application.
Figure B10.1
Estimating methodology
Entering data and methodologies into the physical structure of the estimate (the WBS)
Time phasing the estimate
Dealing with inflation.
Entering data and methodologies into the physical structure of the estimate
A computer program is essential to the task of assembling the estimate. Programs allow
efficient processing of data, electronic calculations, easier documentation, and simpler
updating. There are myriad software tools available to facilitate this process. The most
commonly used and widely available program, however, is the electronic spreadsheet. The
WBS is the structure of the estimate. The first step is to enter the WBS into the computer
program. Next, estimating methodologies are entered directly into the spreadsheet, or the
spreadsheet takes an input from a separate model.
Time phasing the estimate
Estimates reflect tasks that occur over time. Obviously, cost estimates will vary with the time
period (in which the work occurs), due to changes in labor rates and other factors. For example, the
number of man-hours needed to complete a software development effort may be higher if
the development time is shortened, or lower if it is lengthened. Time phasing is essential in
order to determine resource requirements, apply inflation factors, and arrange for resource
availability. Determining resource requirements is an important program management task.
The program manager must also ensure that money will be available to pay for the people
and the materials at the time they are needed. The first step to doing this successfully is the
scheduling step. The estimator estimates inflation for the future by using projected inflation
rates and time phases the rates over the period of performance of the task. This will let the
program manager know how much a task will cost in the dollars relevant at a future time.
This is essential for preparing a realistic budget.
Dealing with inflation
One of the primary purposes of time phasing estimates is that they may be expressed in
current dollars and included in budget requests. Dealing with inflation is the process of
translating base year estimates into ‘then-year’ dollars through the application of index numbers.
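A minimal sketch of applying index numbers to convert a time-phased, base-year estimate into then-year dollars; the years, amounts and projected inflation rates are hypothetical.

```python
# Convert a base-year cost estimate into 'then-year' (current) dollars by
# applying inflation index numbers. Rates and phasing below are hypothetical.

base_year_cost = {2024: 400000.0, 2025: 350000.0, 2026: 250000.0}   # time-phased estimate
inflation_rate = {2024: 0.00, 2025: 0.03, 2026: 0.028}              # projected rates

index = 1.0
then_year_cost = {}
for year in sorted(base_year_cost):
    index *= 1 + inflation_rate[year]        # cumulative index from the base year
    then_year_cost[year] = base_year_cost[year] * index

total = sum(then_year_cost.values())
print({y: round(c, 2) for y, c in then_year_cost.items()}, round(total, 2))
```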
The cost estimator should understand that every estimate will not be documented to this
level of detail. Documentation must be tailored to align with the size and visibility of the
program estimates. Consequently, when documenting smaller programs or projects, this
tailoring provision would be employed to downscope the content structure provided below.
Specifics of this downscoping would be dictated by the size and nature of the program or
project involved. However, the requirement for enough detail to support replication must be
sustained by the tailored documentation.
Introduction
This portion of the cost estimate document will provide the reader a thumbnail sketch of the
program estimated, who estimated it, how it was estimated, and the data used in developing
the estimate. The introduction is a highly valuable overview for managers and an extremely
useful reference for estimators attempting to determine the applicability of the document’s
main body to a current estimate or research study.
The introduction should address the following areas:
Purpose of the Estimate – State why the estimate was done and whether it is an initial
estimate or an update of a prior estimate; if an update, identify the prior estimate.
Direction – Identify the requesting organization, briefly state the specific tasking, and cite
relevant correspondence. Copies of tasking messages can be included here, in the main
body, or as an appendix to the documentation package.
Team Composition – Identify each team member, his or her organization, and area of
responsibility.
Program Background and System Description – Characterize significant program and
system aspects and status in terms of work accomplished to date, current position, and
work remaining. Include information such as detailed technical and programmatic
descriptions, pictures of the system and major components, performance parameters,
support concepts, contract types, acquisition strategies, and other information that will
assist the document user in fully understanding the system estimated.
Scope of the Estimate – Describe acquisition phases, appropriations, and time periods
encompassed by the estimate. Further, if specific areas were not addressed by the
estimate, state the reason.
Program Schedule – Include the master schedule for development, production, and
deployment, as well as a detailed delivery schedule.
Ground Rules and Assumptions – List all technical and programmatic conditions that
formed the basis for the estimate.
Inflation Rates – Simply state which set of inflation rates were used for the basic
estimate. A detailed table portraying the rates used can be included either in the main
body or as an appendix to the documentation package.
Estimate Summary – Identify the primary methodology and techniques that were
employed to construct the estimate, along with a general statement that relates the
rationale for having selected these particular methodologies and techniques. Also, briefly
describe the actual cost data and its sources that were used to develop or verify the
estimate. The final portion of this section should portray estimate results by major cost
elements, in both constant year and current year dollars. A bottom-line track to the
previous estimate also should be included, if applicable. For each major cost element, a
page reference to the main body of the documentation where a complete description of
its estimate can be located should be included.
Main Body Overview – Provide an overview of how the document’s main body is
organized and describe any of its aspects that may facilitate its use.
Main body
This portion of the documentation should describe the derivation of the cost estimate in
sufficient detail to allow a cost estimator, independent of the original estimating team, to
replicate the estimate. Developing this portion of the document properly requires that
documentation be written in parallel with developing the estimate. The main body should be
divided into sections using the content areas and titles shown below. Following these
guidelines, pertaining to the document’s main body content structure, will allow the
estimating team to develop a comprehensive document efficiently.
Estimate description
This provides a detailed description of the primary methods, techniques, and data used to
generate each element of the estimate.
Data – Show all data used, its source (e.g., actuals on current contract/analogous
program), and normalization procedure.
Labor Rates – Identify direct and indirect labor rates as industrial averages or contractor
specific, their content, and how they were developed.
Labor Hours – Discuss how functional labor hours were developed (e.g., contractor
proposal, build-up from analogous program, engineering assessments).
Material/Subcontracts – Depict the material, purchased parts, and subcontracted items
that are required, and the development of their cost (e.g., vendor quotes, negotiated
subcontracts, catalog prices).
Cost Improvement Curves – Include the method used to describe the curve selected in
terms of its slope, source, and relevance to the cost element and program being
estimated. Any unique aspects of curve application must be included in this section.
Factors and Cost Estimating Relationships (CERs) – Provide the basis, development,
and/or source of all factors and CERs used for areas such as support equipment, data,
training, etc. This discussion must include a description of how the factor was applied
(e.g., against recurring manufacturing labor costs) and its relevance to the program
being estimated.
Cost Models – Describe all models used and their relevance to the estimate, along with
complete details regarding parametric input and output (include detailed runs here or as
an appendix to the documentation package) and any calibration performed to ensure the
model served as an appropriate estimating tool for the cost element and program
involved.
Inflation Index – Document the specific indices and computations used in the estimate
including those employed to normalize historical data. A detailed table portraying the
rates used can be included either here or as an appendix to the documentation package.
Timephasing – Identify/describe the approach used to phase the estimate.
Sufficiency Reviews and Acceptance – Discuss the process used for reviewing an existing
cost element estimate to determine its sufficiency and acceptability for incorporation into
the estimate. This process should be applied to existing government and contractor
estimates that are accepted as throughput to the estimate.
Estimator Judgment – Document the logic and rationale that led to specific conclusions
reached by the estimator regarding various aspects of the estimate.
Risk and Confidence – Show the details of all risk analysis conducted and how it formed
the basis for reaching conclusions regarding estimate confidence.
Documentation format
Documentation must be organized logically with clearly titled, easy-to-follow sections. The following considerations will contribute towards achieving high quality, usable cost estimate documentation:
The documentation package should include the program name, reason for the estimate,
the identity of both the tasking organization (and office symbol) as well as the
organization that accomplished the estimate, and the ‘as of’ date.
A table of contents should be included that identifies the titles of each numbered section
and subsection along with page numbers.
Pages should be numbered either sequentially or sequentially within each section.
Where the same data or method is used repeatedly, it should be described in detail at
the point of original use, and referenced by page number thereafter.
All terms and acronyms should be defined fully at the point of first use.
All figures and tables should be identified by numbers and clear descriptive titles.
Cross-references should be used to assist the reader in understanding where areas
addressing the same subject are located in the document.
The first time documented information is used, its source should be cited and added to
the reference list contained at the end of the documentation package. When the same
source is used thereafter, only the reference number needs to be cited.
Documentation process
To carry out the documentation process effectively, the team leader should develop an
outline. This estimate specific outline will provide a road map that depicts to the team the
planned structure and content of the final documentation package. With this blueprint and
the documentation requirements established in this chapter, the estimator can develop notes
that will form the basis for the estimate’s documentation. If accomplished properly, the time
to clean up and refine the estimator’s notes into final documentation form will be minimized.
A flow diagram (see Figure B10.2) of the documentation process is given below.
Figure B10.2
A flow diagram of the documentation process
ECO applies to both development and production and varies by both program and fiscal
year.
Requirements uncertainty refers to the variation in cost estimates caused by changes in the
general configuration or nature of an end item. This would include deviations or changes to
specifications, hardware characteristics, program schedule, operational/deployment
concepts, and support concepts.
Cost estimating uncertainty refers to variations in cost estimates when the configuration of an end item remains constant. This uncertainty results from errors in historical data, cost estimating relationships, input parameter specification, analogies, extrapolation, or differences among analysts.
Point estimates versus interval estimates
Development of a cost estimate usually involves applying a variety of techniques to produce estimates of the individual elements of the item. The summation of these individual estimates becomes the single, best, most likely estimate of the total system and is referred to as a point estimate. The point estimate provides no information about uncertainty other than that its value is judged more likely to occur than any other value. A confidence interval, by contrast, provides a range within which the actual cost should fall, given the confidence level specified.
Uncertainty in decision making
The point estimate provides a best single value, but with no consideration of uncertainty. In
contrast, the interval estimate provides significant information about the uncertainty but little
about the single value itself. However, when both measures are taken together, they provide
valuable information to the decision maker.
An example of the value of this information is in situations involving choice among
alternatives, as in the case of source selection. For instance, suppose systems A and B are
being evaluated; and because of equal technical merit, the choice will be made on the basis
of estimated cost. According to Paul Dienemann, in his report Estimating Cost Uncertainty Using Monte Carlo Techniques, if the choice is made solely on the basis of the most probable cost, the decision may be a poor one, depending upon which of the four situations in Figure B11.1 applies.
Figure B11.1
Cost uncertainty in decision making
In situation I, there is no problem in the choice, since all possible costs for A are lower than
B. A’s most probable cost is the obvious choice.
Situation II is not quite so clear because there is some chance of A’s costs being higher than
B’s. If this chance is low, A’s most probable cost is still the best choice. However, if the
overlap is great, then the most probable cost is no longer a valid criterion.
In situation III, both estimates are the same, but the uncertainty ranges are different. At this
point, it is the decision maker’s disposition towards risk that decides. If the preference is
willingness to risk possible high cost for the chance of obtaining a low cost system, then B is
the choice. If the preference is to minimize risk, then A is the appropriate choice.
Finally, situation IV poses a more complicated problem, since the most probable cost of B is
lower but with much less certainty than A. If the manager uses only the point estimates in
this case, the most probable choice would be the less desirable alternative. In the preceding
situations, uncertainty information was a method used to select between alternatives. A quite
different use of uncertainty information is when a point estimate must be adjusted for
uncertainty.
One particularly effective method of portraying the uncertainty implications of alternative
choices is to depict the estimate and its related uncertainty in the form of a cumulative
probability distribution, as shown in Figure B11.2 below. The utility of this approach is the
easy-to-understand, convenient manner in which the information is presented to the decision
maker. In the figure, panel A shows the cost estimate as it might normally be depicted with
the most likely value (point estimate at the center); panel B shows the same information in
the form of a cumulative curve. It is easy to see, for instance, that the selected funding level, F, is at the 75 percent point, which means that there is only a 25 percent chance of the actual cost exceeding this funding level. The manager can see the implications of a
particular choice immediately.
Figure B11.2
Cumulative probability distribution
Dealing with uncertainty
When actually treating uncertainty in an estimate, several approaches are available, ranging
from very subjective judgment calls to rather complex statistical approaches. The order of
presentation of these techniques is intentional, because it tends to portray the evolution that
has taken place in terms of the tools used to handle uncertainty.
Subjective estimator judgment
This is perhaps one of the oldest methods of accounting for uncertainty and is the basis for most other approaches. Under this approach the estimator reflects upon the assumptions and judgments that were made during the development of the estimate. After evaluating all the 'ingredients,' a final adjustment is made to the estimate, usually as a percentage increase. This yields a revised total cost, which explicitly recognizes the existence of uncertainty. The logic supporting this approach is that the estimator is more aware of the uncertainty in the estimate than anyone else, especially if the estimator is a veteran with experience in systems or items similar to the one being estimated. One method for assisting estimators is to use a questionnaire, which provides a yardstick of their uncertainty beliefs when arriving at their judgment.
Expert judgment/executive jury
This is a technique wherein an independent jury of experts is gathered to review, understand, and discuss the system and its costs. The strengths of this approach are related directly to the diversity, experience, and availability of the group members. The use of such panels or juries requires careful planning, guidance, and control to ensure that the product of the group is objective and reflects the best efforts of each member. Approaches have been designed to contend with the group dynamics of such panels. One classical approach is the Delphi technique, which was originally suggested by RAND. With this technique, a panel of experts is drawn together to evaluate a particular subject and submit their answers anonymously. A composite feedback of all answers is communicated to each panelist, and a second round begins. This process may be repeated a number of times and, ideally, convergence toward a single best solution takes place. Because identities remain anonymous, rather than being debated in a committee session, the panelists can change their minds more easily after each round and provide better assessments, rather than defending their initial evaluation. The principal drawback of Delphi is that it is cumbersome, and the time elapsed in processing inputs can make it difficult for respondents to recall the reasons for their ratings. However, it is possible to automate the process with online computer terminals for automatic processing and immediate feedback.
Sensitivity analysis
Another common approach is to measure how sensitive the system cost is to variations in
non-cost system parameters. For instance, if system weight is a critical issue, then weight
would be varied over its relevant range, and the influence on cost could be observed.
Analysis of this type helps to identify major sources of uncertainty.
It also highlights;
to total system cost estimates. The most likely values establish the central tendency of the system cost, while the sums of the lowest possible values and the highest possible values determine the uncertainty range for the cost estimate. This high/low approach exaggerates the uncertainty of system cost estimates, because it is unlikely that all system element costs will be at their lowest (or highest) values at the same time. While the high/low approach is plausible, its shortcoming is that it restricts measurement to three points, without consideration of intermediate values.
Mathematical approaches
Mathematical approaches use beta or triangular distributions to describe the probability distribution of each cost element. The individual cost elements and their measures of uncertainty are then combined into a total estimate of cost and uncertainty, either by summation of moments or by Monte Carlo simulation.
The beta distribution
This distribution is particularly useful in describing cost risk because it is finite, continuous, can easily accommodate a unimodal shape requirement (α > 0, β > 0), and allows virtually any degree of kurtosis and skewness. Kurtosis characterizes the relative peakedness or flatness of a distribution compared to the normal distribution. Skewness characterizes the degree of asymmetry of a distribution around its mean. A few of the many shapes of the beta are shown in Figure B11.3.
Figure B11.3
The beta distribution
The generalized beta family of distributions is defined over an interval (a, a + b) as in the equation below.
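One common way of writing this density, with shape parameters α > 0 and β > 0 and B(α, β) the beta function, is the standard location–scale form of the beta distribution (the exact parameterization may differ between references):

f(x) = (x − a)^(α−1) (a + b − x)^(β−1) / [B(α, β) b^(α+β−1)],   for a ≤ x ≤ a + b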
Specific beta distributions, illustrating different degrees of skew, are shown in Figure B11.4.
Figure B11.4
Specific beta distributions
Triangular distribution
An alternative approach to assigning a distribution shape to a cost element is the triangular distribution. It can take on virtually any combination of skew and kurtosis, but the distribution is represented by a triangle rather than the smoother curve of the beta, as shown in the figure below. The triangular distribution is specified by the lowest value, the most likely value (usually the point estimate), and the highest value. Any point within the range of the distribution can be chosen to locate the mode, and the relationship among the three values specifies the amount of kurtosis. The triangular distribution is much easier to use and produces equally satisfactory results (see Figure B11.5).
Figure B11.5
The triangular distribution
The summation of moments
This method takes the approach of measuring or describing a distribution through the use of moment statistics. The first moment is the mean (x̄), and the second, third, and fourth moments (about the mean) take the form of the equation below.
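In standard notation (a standard form is shown here; the original equation may differ slightly), the r-th moment about the mean of n observations x1, ..., xn is:

m_r = Σ (x_i − x̄)^r / n,   for r = 2, 3, 4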
The second moment is the variance. The third and fourth moments are used to calculate two
measures that provide additional insight into the shape of a particular distribution.
Those measures are:
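In the usual definitions (the source figure may use a slightly different scaling, for example kurtosis minus 3):

Skewness = m_3 / (m_2)^(3/2)
Kurtosis = m_4 / (m_2)^2

where m_2, m_3, and m_4 are the second, third, and fourth moments about the mean defined above; the third moment measures asymmetry and the fourth measures peakedness relative to the spread.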
Figure B11.6
Summation of moments
The critical assumption in this approach is that the total cost distribution will be normal even though the individual cost element distributions may not be. This is shown above.
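A minimal sketch of this roll-up, assuming independent cost elements whose means and standard deviations are hypothetical, and invoking the normal-total assumption:

    import math

    # Hypothetical cost elements: (mean, standard deviation) in $K
    elements = [(1200, 150), (800, 120), (450, 60)]

    total_mean = sum(mean for mean, sd in elements)
    total_sd = math.sqrt(sum(sd * sd for mean, sd in elements))  # variances add if elements are independent

    # Under the normal-total assumption, an approximate 80% confidence interval for total cost:
    low, high = total_mean - 1.28 * total_sd, total_mean + 1.28 * total_sd
    print(total_mean, round(total_sd), (round(low), round(high)))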
Monte Carlo simulation
An alternative to the summation approach is to use the Monte Carlo simulation technique. In this approach, the distribution defined for each cost element (beta, triangular, or an empirical distribution) is treated as a population from which several random samples are drawn.
For example, suppose a single cost element has been estimated and its uncertainty described as shown in panel A of Figure B11.7 below. From the probability density function, Y = f(X), a cumulative distribution is plotted, as shown in panel B of Figure B11.7.
Figure B11.7
Monte Carlo simulation
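A minimal Monte Carlo sketch, assuming each cost element is described by a triangular distribution; the (low, most likely, high) values in $K are hypothetical:

    import random

    elements = [(900, 1200, 1800), (600, 800, 1100), (350, 450, 700)]  # (low, most likely, high) per element
    trials = 10_000

    totals = sorted(
        sum(random.triangular(low, high, mode) for low, mode, high in elements)
        for _ in range(trials)
    )
    point_estimate = sum(mode for low, mode, high in elements)   # sum of the most likely values
    p50, p80 = totals[int(0.50 * trials)], totals[int(0.80 * trials)]
    print(point_estimate, round(p50), round(p80))   # the 80th percentile is the cost at roughly 80% confidence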
Qualitative indices of uncertainty
The use of qualitative indices has been proposed as a method of communicating a cost
estimate’s goodness, accuracy, or believability. Most of the indices are based upon the
quality of the data and the quality of the estimating methodology.
John D. Hwang proposed a rating scheme using a two-digit code, with ratings of 1 to 5 for data and for methodology.
Figure B11.8
Qualitative indices of uncertainty
Figure B12.1
Estimation methodology
into a post processor phase. The post processor allows for the conversion of parametric output into a cost proposal (see Figure B12.2).
Figure B12.2
Parametric cost estimation
Anomalies
Historical cost data should be adjusted for anomalies (unusual events), prior to CER
analysis, when it is not reasonable to expect these unusual costs to be present in the new
projects. The adjustments and judgments used in preparing the historical data for analysis
should be fully documented.
Improved technology
Cost changes, due to changes in technology, are a matter of judgment and analysis. All
bases for such adjustments should be documented and disclosed.
Inflation
Inflation indices are influenced by internal considerations as well as external inflation rates. Therefore, while generalized inflation indices may be used, it may also be possible to tailor and negotiate the indices used on an individual basis to specific labor rate agreements and the actual materials used on the project. The index should be based on the cost of materials and labor on a unit basis (per piece, pound, or hour) and should not include other considerations such as changes in manpower loading or the amount of material used per unit of production. The key to inflation adjustments is consistency.
Learning curve
The learning curve analyses labor hours over successive production units of a manufactured
item. The curve is defined by the following equation:
Hours/Unit = First Unit Hours × U^b
or
Fixed Year Cost/Unit = First Unit Cost × U^b
Where: U = Unit number
b = Slope of the curve
In parametric models, the learning curve is often used to analyze the direct cost of successively manufactured units. Direct cost equals the cost of both touch labor and direct materials, in fixed year dollars. Sometimes this is called an improvement curve. The slope is calculated using hours or constant year dollars.
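A short sketch of the unit curve above, using a hypothetical first-unit figure and a 90% slope:

    import math

    first_unit_hours = 1000.0                  # hypothetical first-unit hours
    slope = 0.90                               # 90% learning curve
    b = math.log(slope) / math.log(2)          # curve exponent, about -0.152

    def unit_hours(unit_number: int) -> float:
        # Hours/Unit = First Unit Hours x U^b
        return first_unit_hours * unit_number ** b

    print(unit_hours(1), unit_hours(2), unit_hours(4))  # each doubling drops hours to 90% of the previous value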
Production rate
Production rate effects (changes in production rate, i.e., units/months) can be calculated in
various ways. For example, by adding another term to the learning or improvement curve
equation we would obtain:
Hours/Unit = First Unit Hours × U^b × R^r
or,
Fixed Yr $/Unit = First Unit $ × U^b × R^r
Where:
U = Unit number
b = Learning curve slope
R = Production rate
r = Production rate curve slope
The net effect of adding the production rate term (R^r) is to adjust the First Unit $ for rate. The equation will also yield a different ‘b’ value. The rate effect can vary considerably depending on what was required to effect the change. When preparing a cost estimate, it is preferable to use primary sources of data. The two types of data are:
Primary data is obtained from the original source. Primary data is considered the best in
quality, and ultimately the most useful.
Secondary data is derived from primary data. It is not obtained directly from the source.
Since it was derived (actually changed) from the original data, it may be of lower overall
quality and usefulness.
There are nine main sources of data and they are listed in the chart below.
The information needed to use a parametric data model is listed below:
Size of database
Range of database
Realistic estimates of the most likely range for independent variable values
Top functional experts knowledgeable about the program you are estimating
Figure B12.3
Matching aggregation levels of CERs
Uses of CERs
CERs are used to estimate costs at many points in the acquisition cycle when little is known
about the cost to be estimated. CERs are of greatest use in the early stages of a system’s
development and can play a valuable role in estimating the cost of a design approach.
CERs can serve as checks for reasonableness on bids proposed by contractors. Even after
the start of the development and production phases, CERs can be used to estimate the
costs of non-hardware elements. For example, they can be used to make estimates of O&S
costs. This may be especially important when trying to determine downstream costs of
alternative design, performance, logistic, or support choices that must be made early in the
development process.
Developing CERs
A CER is a mathematical equation that relates one variable such as cost (a dependent
variable) to one or more other cost drivers (independent variables). The objective of
constructing the equation is to use the independent variables about which information is
available or can be obtained to predict the value of the dependent variable that is unknown.
When developing a CER, the analyst must first hypothesize a logical estimating relationship.
After developing a hypothetical relationship, the analyst needs to assemble a database.
Hypothesis testing of a logical CER
The analyst must structure the forecasting model and formulate the hypothesis to be tested.
The work may take several forms depending upon forecasting needs. It involves discussions
with engineers to identify potential cost driving variables, scrutiny of the technical and cost
proposals, and identification of cost relationships. Only with an understanding of hardware
requirements can an analyst attempt to hypothesize a forecasting model necessary to
develop a CER.
The CER model
Once the database is developed and a hypothesis determined, the analyst is ready to mathematically model the CER. The model may be linear or curvilinear; here we consider one simple model, the least squares best fit (LSBF) model.
When to use a CER
When a CER has been built from an assembled database based on a hypothesized logical statistical relationship, one is ready to apply the CER. It may be used to forecast future costs, or it may be used as a cross check of an estimate made with another estimating technique. CERs are a fundamental estimating tool used by cost analysts.
Strengths and weaknesses of CERs
Some of the more common ones are presented below:
Strengths
One of the principal strengths of CERs is that they are quick and easy to use. Given a CER equation and the required input data, one can generally turn out an estimate quickly.
A CER can be used with limited system information. Consequently, CERs are especially
useful in the RDT&E phase of a program.
A CER is an excellent (statistically sound) predictor if derived from a sound database,
and can be relied upon to produce quality estimates.
Weaknesses
CERs are sometimes too simplistic to forecast costs. Generally, if detailed information is available, it may be used to produce more reliable estimates, and another estimating approach may be selected rather than a CER.
Problems with the database may mean that a particular CER should not be used. A cost
model should not be used without reviewing its source documentation.
Regression analysis
The purpose of regression analysis is to improve the ability to predict the next ‘real world’ occurrence of our dependent variable. Regression analysis may be defined as determining the mathematical nature of the association between two variables. The association is expressed in the form of a mathematical equation, which provides the ability to predict one variable on the basis of knowledge of the other. The variable whose value is to be predicted is called the dependent variable. The variable about which knowledge is available or can be obtained is called the independent variable; the dependent variable depends upon the value of the independent variable(s). The relationships between variables may be linear or curvilinear. A linear relationship is one whose functional form can be described graphically (on a common X-Y coordinate system) by a straight line and mathematically by the common form:
y = a + bx
where y = the calculated value of the dependent variable
x = the independent variable
b = the slope of the line, the change in y divided by the corresponding change in x
a and b are constants for any values of x and y.
For the bi-variate regression equation - the linear relationship of two variables can be
described by an equation, which consists of two distinctive parts, the functional part and the
random part. The equation for a bi-variate regression population is:
Y = A + BX + E
where;
A + BX is the functional part (a straight line) and E is the random part.
A and B are parameters of the population that exactly describe the intercept and slope of the
relationship.
E represents the ‘error’ part of the equation i.e., the errors of assigning values, the errors of
measurement, and errors of observation due to human limitations, and the limitations
associated with real world events.
The above equation is adjusted to the form:
y = a + bx + e,
where;
a + bx represents the functional part of the equation and e represents the random part.
Curve fitting
There are two standard methods of curve fitting. In one method, the analyst plots the data and fits a smooth curve to it by eye; this is known as the graphical method. The other method uses a ‘best-fit’ approach in which an appropriate theoretical curve is assumed and mathematical procedures are used to provide the one ‘best-fit’ curve; this is known as the least squares best fit (LSBF) method.
We are going to work with the simplest model to handle, the straight line, which is expressed as:
Y = a + bx
Graphical method
To apply the graphical method, the data must first be plotted on graph paper. No attempt should be made to make the smooth curve actually pass through the data points that have been plotted; rather, the curve should pass between the data points, leaving approximately an equal number of points on either side of the line. For linear data, a clear ruler or other straightedge may be used to fit the curve. The objective is to ‘best fit’ the curve to the data points plotted; that is, each data point is equally important and the curve you fit must consider each and every data point.
LSBF method
The LSBF method specifies the one line which best fits the data set we are working with. The method does this by minimizing the sum of the squared deviations between the observed values of Y and the calculated values of Y, i.e., (Y1 − YC1)² + (Y2 − YC2)² + (Y3 − YC3)² + ... + (Yn − YCn)² (see Figure B12.4).
Figure B12.4
LSBF line
Therefore, the LSBF line is the one that can be defined as:
ΣE² is a minimum, because Σ(Y − YC)² = ΣE²
For a straight line,
Y = a + bx
and, with N points, the sum of the squares of the distances is a minimum if:
ΣY = aN + bΣX and
ΣXY = aΣX + bΣX²
These two equations are called the normal equations of the LSBF line. LSBF regression
properties are:
X      Y      X × Y      X²      Y²
X1     Y1     X1 × Y1    X1²     Y1²
X2     Y2     X2 × Y2    X2²     Y2²
X3     Y3     X3 × Y3    X3²     Y3²
...    ...    ...        ...     ...

For example, with the data set:

X      Y      X × Y      X²      Y²
4      10     40         16      100
3      8      24         9       64
9      12     108        81      144
7      9      63         49      81
2      3      6          4       9
where the coefficients a and b are obtained by solving the normal equations above. Therefore, the regression equation (calculated y) is:
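As a check on the computation, the sketch below solves the normal equations for the sample data in the table above; under the standard LSBF formulas it gives approximately Yc = 3.84 + 0.91X (our own calculation, since the original result is not shown).

    # Least squares best fit for the sample data table above
    xs = [4, 3, 9, 7, 2]
    ys = [10, 8, 12, 9, 3]
    n = len(xs)

    sum_x, sum_y = sum(xs), sum(ys)
    sum_xy = sum(x * y for x, y in zip(xs, ys))
    sum_x2 = sum(x * x for x in xs)

    # Solve the normal equations for slope b and intercept a
    b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
    a = (sum_y - b * sum_x) / n
    print(f"Yc = {a:.2f} + {b:.2f}X")   # approximately Yc = 3.84 + 0.91X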
Multiple regressions
In simple regression analysis, a single independent variable (X) is used to estimate the
dependent variable (Y), and the relationship is assumed to be linear (a straight line). This is
the most common form of regression analysis used in contract pricing. However, there are
more complex versions of regression equations that can be used to consider the effects of
more than one independent variable on Y. That is, multiple regression analysis assumes that
the change in Y can be better explained by using more than one independent variable. For
example, the number of miles driven may largely explain automobile gasoline consumption.
However, it might be better explained if we also considered factors such as the weight of the
automobile. In this case, the value of Y would be explained by two independent variables.
Yc = A + B1X1 + B2X2
where:
Yc = the calculated or estimated value for the dependent variable
A = the Y intercept, the value of Y when X = 0
X1 = the first independent (explanatory) variable
B1 = the slope related to X1, the amount by which Yc changes when X1 changes by one unit
X2 = the second independent variable
B2 = the slope related to X2, the amount by which Yc changes when X2 changes by one unit.
Curvilinear regression
In some cases, the relationship between the dependent variable and the independent variable(s) may not be linear. Instead, a graph of the relationship on ordinary graph paper would depict a curve.
‘Goodness’ of fit, R and R²
After developing the LSBF regression equation, we need to determine how good a forecast we will get by using it. In order to answer this question, we must consider a check for the ‘goodness’ of fit: the coefficient of correlation (R) and the related coefficient of determination (R²).
Correlation analysis
One indicator of the ‘goodness’ of fit of an LSBF regression equation is correlation analysis. Correlation analysis considers how closely the observed points fall to the LSBF regression line: the more closely the observed values lie to the regression line, the better the fit and hence the more confidence in the forecasting capability. Correlation analysis refers only to the ‘goodness’ of fit, that is, how closely the observed values lie to the regression line; it says nothing about cause and effect.
Coefficient of determination
The coefficient of determination (R²) represents the proportion of variation in the dependent variable that has been explained or accounted for by the regression line. The value of the coefficient of determination may vary from zero to one. A coefficient of determination of zero indicates that none of the variation in Y is explained by the regression equation, whereas a coefficient of determination of one indicates that 100 percent of the variation in Y has been explained by the regression equation. Graphically, when R² is zero, the observed values appear as in Figure B12.7 (bottom), and when R² is one, the observed values all fall right on the regression line as in Figure B12.8 (top).
Figure B12.7
Coefficient of determination
In order to calculate R², we need to use the following equation:
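A standard form of this equation (the notation may differ slightly from the source) is:

R² = Σ(YC − Ȳ)² / Σ(Y − Ȳ)² = 1 − Σ(Y − YC)² / Σ(Y − Ȳ)²

i.e., explained variation divided by total variation.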
R² tells us the proportion of total variation that is explained by the regression line. Thus R² is a relative measure of the ‘goodness’ of fit of the observed data points to the regression line. If the regression line perfectly fits all the observed data points, then all residuals will be zero, which means that R² = 1.00. The lowest value of R² is 0, which means that none of the variation in Y is explained by the observed values of X.
Coefficient of correlation
The coefficient of correlation (R) measures both the strength and the direction of the relationship between X and Y. R takes the same sign as the slope: if b is positive, use the positive root of R², and vice versa. For example, if R² = 0.81, then R = ±0.9, and we determine whether R takes the positive root (+) or the negative root (−) by noting the sign of b. If b is negative, then we use the negative root of R² to determine R. So, to calculate R, it is necessary to know the sign of the slope of the line. R indicates only whether the relationship is direct or inverse, and the strength of the association.
Learning curve effect
Wright first observed the learning curve in the 1930s in the American aircraft industry, and Crawford confirmed his pioneering work in the 1940s. The learning effect is not concerned with the reduction in unit cost that occurs simply because production increases and/or production facilities are scaled up to manufacture larger batches of products.
Learning effect
The learning effect is concerned with cumulative production over time, not the manufacture of a single product or batch at a particular moment in time, and recognizes that it takes less time to assemble a product the more often that product is made by the same worker or group of workers. That is the learning effect.
Cost reduction tool
It is important to appreciate that the learning curve is not in itself a cost-reduction technique, even though the learning curve model can predict the rate of future time reduction accurately. Cost reduction only occurs if management action is taken, for example, to increase the rate of time reduction by providing additional training, better tools, etc. The learning effect occurs because people are inventive, learn from earlier mistakes, and are generally keen to take less time to complete tasks, for a variety of reasons. It should also be noted that the learning process might occur consciously and/or intuitively. The learning curve consequently reflects human behavior.
Learning curve sectors
While the learning curve can be applied to many sectors, its impact is most pronounced in sectors that have repetitive, complex operations where people, not machines, principally determine the pace of work. Examples of sectors where the learning effect is pronounced include:
Aerospace
Electronics
Shipbuilding
Construction
Defense
The learning curve is also being utilized by rail operators who, for example, seek to extend the lives of their assets cost-effectively. Another sector which makes considerable use of this technique is the space industry. NASA uses the learning curve to estimate costs for the production of space shuttles, the time to complete tasks in space, etc.
Learning curve model
Wright observed that the cumulative average time per unit decreases by a fixed percentage
each time cumulative production doubles over time. The following table illustrates this effect:
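By way of illustration, starting from a hypothetical 100 hours for the first unit, a 90% curve gives:

Cumulative production (units)    Cumulative average time per unit (hours)
1                                100.0
2                                90.0
4                                81.0
8                                72.9
16                               65.6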
The above table indicates that the cumulative average time per unit falls by 10% each time
cumulative production doubles, i.e. it is depicting a 90% learning curve. The above
relationship between cumulative output and time can be represented by the following
formula:
Basic form of the ‘learning curve’ equation is:
y = a·x^b or Log y = Log a + b Log x
where:
y = Cost of Unit #x (or average for x units)
a = Cost of first unit
b = Learning curve coefficient
The equation Log y = Log a + b Log x has precisely the same form as the linear equation y = a + bx. This means that it can be graphed as a straight line on log-log graph paper, and all the regression formulae apply to it just as they do to y = a + bx. In order to derive a learning curve from cost data (units or lots), the regression equations need to be used, whether the calculations are performed manually or with a statistical package; the learning curve equation is simply a special case of the LSBF technique.
Since in learning curve methodologies cost is assumed to decrease by a fixed percentage each time the quantity doubles, this constant is called the learning curve ‘slope’ or percentage (e.g., 90%). For example,
For unit #1:
Y1 = A(1)^b = A (first unit cost), and
For unit #2:
Y2 = A(2)^b = second unit cost
So,
Y2/Y1 = A(2)^b / A = 2^b = a constant, or ‘slope’
Slope = 2^b, and Log Slope = b Log 2
Therefore, b = Log Slope / Log 2
For a 90% ‘slope,’
b = Log 0.9 / Log 2 = −0.152
If we assume that A = 1.0, then the relative cost between any units can be computed:
Y3 = (3)^−0.152 = 0.8462
Y6 = (6)^−0.152 = 0.7616
Note that:
Y6/Y3 = 0.7616/0.8462 = 0.9
A statistical package such as StatView can perform all the calculations shown here, greatly simplifying the work.
Decision-making
While a great deal has been written about the use of the learning curve for control purposes,
this technique can also be used to determine costs for potential contracts in sectors which exhibit the learning effect. For example, Above & Beyond Ltd, which produces high-technology guidance systems, is preparing a tender for the Aurora project, the new generation of space shuttles. The guidance systems for the Aurora project will be very similar to those recently supplied by the company for the Dark Star project, an experimental Stealth aircraft capable of flying outside the earth’s atmosphere. The company has been
asked to submit a tender to install ten guidance systems for this project. While a tender
would take account of all costs that would apply to a particular project, one of the key costs
for a high-technology project is labor time, as highly skilled personnel (who are highly paid)
are required to assemble and test such systems. The following analysis will focus on the
labor time required for this project.
Aurora project
Above & Beyond Ltd’s engineers believe it is possible to estimate the time required to install
the guidance systems for the new generation of space shuttles from the learning derived
from the Dark Star project. The same system will be installed in the space shuttles. The
following data was consequently obtained in respect of the Stealth project:
It is now possible to estimate the time required to install the guidance systems for this
project.
Please note that the learning curve would produce an estimate of 75,715 hours to install the guidance systems, using a learning rate of 87%, if no account was taken of the learning derived from Dark Star. This difference of 18,783 hours is extremely significant, since the cost of hiring and supporting specialist engineers could be £100+ per hour, i.e. £1.8m+.
Limitations, errors and caveats of LSBF techniques
Extrapolation
A LSBF equation is truly valid only over the same range as the one from which the
sample data was initially taken.
Cause and Effect
Regression and correlation analysis can in no way determine cause and effect.
Illustration of CER use
Assume an item of equipment in 1980 cost $28,000 when an appropriate Consumer Price
Index (CPI) was 140. If the current index is now 190 and an offer to sell the equipment for
$40,000 has been suggested, how much of the price increase is due to inflation? How much
of the price increase is due to other factors?
Solution
Escalating the 1980 price by the index ratio gives $28,000 × (190/140) = $38,000, so $38,000 now is roughly the equivalent of $28,000 in 1980. Hence the price difference due to inflation is $38,000 − $28,000 = $10,000. The difference due to other causes is $2,000 ($40,000 − $38,000).
The above example illustrates the use of CPI numbers for a material cost analysis. The steps were:
If we know what the price of an item was in the past, and we know the index numbers for
both that time period and today, we can then predict what the price of that item should
be now based on inflation alone.
If we have the same information as above, and we have a proposed price, we can
compare that price to what it should be based on inflation alone. If the proposed price is
higher or lower than we expect with inflation, then we must investigate further to
determine why a price or cost is higher or lower.
Illustration:
Consider a house to be purchased. Historical data for other houses purchased may be examined during an analysis of proposed prices for a newly designed house. The table below shows data collected on five house plans, so that we can determine a fair and reasonable price for a house of 2,100 square feet and 2.5 baths.
Solution:
Using this data, we can demonstrate a procedure for developing a CER.
Step 1: Designation and definition of the dependent variable. In this case we will attempt to
directly estimate the cost of a new house.
Step 2: Selection of item characteristics to be tested for estimating the dependent variable. A
variety of home characteristics could be used to estimate cost. These include such
characteristics as: square feet of living area, exterior wall surface area, number of baths, and
others.
Step 3: Collection of data concerning the relationship between the dependent and
independent variable.
Step 4: Building the relationship between the independent and dependent variables.
Step 5: Determining the relationship that best predicts the dependent variable.
Figure B12.8 graphically depicts the relationship between the number of baths in the house
and the price of the house. The relationship may not be a good estimating tool, since three
houses with a nearly $8,000 price difference (12 percent of the most expensive house) have
the same number of baths.
Figure B12.8
Relationship between number of baths and prices of a house
Figure B12.9 graphically relates square feet of living area to price. In this graph, there
appears to be a strong linear relationship between house price and living area.
Figure B12.9
Square feet of living area to price
Figure B12.10 graphically depicts the relationship between price and exterior wall surface
area. Again, there appears to be a linear relationship between house price and this
independent variable.
Figure B12.10
Relationship between price and exterior wall surface area
Based on this graphic analysis, it appears that square feet of living area and exterior wall
surface have the most potential for development of a cost estimating relationship. We may
now develop a ‘line-of-best-fit’ graphic relationship by drawing a line through the average of
the x values and the average of the y values and minimizing the vertical distance between
the data points and the line (Figure B12.11 and B12.12).
Figure B12.11
Linear trend of cost to living area (Sq. Ft.)
Viewing both these relationships, we might question whether the Ambassador model data
should be included in developing our CER. However, we should not eliminate data just to get
a better-looking relationship. Here, we find that the Ambassador’s size is substantially
different from the other houses for which we have data and price estimation. This substantial
difference in size might logically affect the relative construction cost. The trend relationship
using the data for the four other houses would be substantially different than relationships
using the Ambassador data. Based on this information, we may decide not to consider the
Ambassador data in CER development.
Figure B12.12
Linear trend of cost to exterior wall surface (Sq. Ft.)
If we eliminate the Ambassador data, we will find that the fit of a straight-line relationship of
price to the exterior wall surface is improved.
Figure B12.13
Relationship of price to square feet of living area
If we have to choose one relationship, we would probably select the one shown in the table
(square feet of living area) over the relationship involving exterior wall surface because there
is so little variance shown about the trend line. If the analysis of these characteristics does
not reveal a useful predictive relationship, we may consider combining two or more of the
characteristics already discussed, or explore new characteristics. However, since the
relationship between living area and price is so close, we may reasonably use it for our CER.
In documenting our findings, we can relate the process involved in selecting the living area
for price estimation. We can use the graph developed as an estimating tool. The cost of the
house could be calculated by using the same regression analysis formula discussed herein:
Y = $117,750 + $45,500
Common CERs
A list of some common CERs used to predict prices or costs of certain items is given below. In addition to CERs used for estimating total cost and prices, others may be used to
estimate and evaluate individual elements of cost. CERs are frequently used to estimate
labor hours. Tooling costs may be related to production labor hours, or some other facet of
production. Other direct costs may be directly related to the labor effort involved in a
program.
Figure B12.14
CER development process
When the analogy cost estimating method is employed, the new system is broken down into components, which are compared to similar existing components. The basis for comparison can be in terms of capabilities, size, weight, reliability, material composition, or design complexity. Analogy cost estimating usually requires the services of technical specialists.
Figure B12.15
Analogy estimate activities
Activity A
Determine estimate needs and ground rules
Cost estimates range widely, from detailed life cycle cost estimates, which cover activities occurring over periods of up to 20 years, to simple estimates for a one-time purchase of a single piece of equipment hardware of an existing design. Estimates also differ by the level of detail or accuracy required. Some estimates need to be detailed so that costs can be tracked and managed at a lower level. Ground rules and assumptions (e.g., inflation rates to be used, buy quantities, schedules, interactions with other programs, test requirements, etc.) must be defined. Estimate objectives, assumptions, and ground rules should be documented at the outset and agreed upon by all, especially program management.
Activity B
Define the system
Defining the system includes determining:
Design or physical parameters such as weight, size, material type, and design approach
Required performance characteristics such as speed, range, computation speed,
reliability and maintainability
Interface requirements with other systems, equipment, and organizations
Unusual training, operations, and support requirements
Unusual testing or certification requirements
Level of technology advance, if any, required
Known similar systems.
Activity C
Plan breakout of system for analogy estimating
The overall objective of this activity is to break out the overall system into components in the
following way:
Collect prior system component cost data
The prior system component cost data must be for the same items for which the design and performance data was collected. Cost values should meet the following requirements:
Obtain complexity factor values
This activity is the foundation of the analogy estimating methodology and should result in an understandable and traceable rationale for each complexity factor developed.
Activity M
Obtain miniaturization factor values
The smaller the subsystem is for a given level of performance, the more costly it is to
produce. The question of ‘how much more’ should be presented in terms of the ratio of the
expected cost of the new system to the expected cost of designing a new system with the
same level of performance but with no space and weight constraints.
Activity N
Obtain productivity improvement factor values
Productivity improvements should drive costs down or at least somewhat offset inflation cost
increases. Technical specialists should be asked if there has been significant productivity
improvement between production of the prior and new systems. It is very desirable to obtain
separate factor judgments for complexity, miniaturization, and productivity changes.
Activity O
Apply factors to obtain new system costs
In applying the factors, the following equation is used. Where two or more factors are
combined, the equation will change accordingly.
CN = CP × FC × FM × FP
where
CN = the equivalent cost for the new system
CP = any T1, FSD, or production nonrecurring cost for the prior system or system
component
FC = Complexity factor ratio
FM = Miniaturization factor ratio
FP = Productivity factor ratio
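A minimal sketch of this factor application, using hypothetical values for the prior cost and the three ratios:

    # CN = CP x FC x FM x FP, with hypothetical inputs
    prior_cost = 5_000_000       # CP: nonrecurring cost of the analogous prior component ($)
    complexity = 1.20            # FC: new component judged 20% more complex
    miniaturization = 1.10       # FM: tighter space and weight constraints
    productivity = 0.95          # FP: modest productivity improvement since the prior program

    new_cost = prior_cost * complexity * miniaturization * productivity
    print(new_cost)              # equivalent cost for the new system component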
Activity P
Develop new system PME cost estimates
Recurring and nonrecurring costs are added to develop total prime mission equipment
(PME) costs for each WBS component addressed. Costs for the various components or
groups of components involved are summed to get the total new system PME cost for FSD
and the specified production quantity of interest.
Activity Q
Develop other new system costs with factors
A common approach is to use the differences in characteristics to extrapolate from the PME
costs of prior systems to the PME costs of the new system. When this is done, other
elements of cost such as Systems engineering/Program management, spares, support
equipment, training, and data must be added to complete the estimate for the new system.
Activity R
Develop total program costs
Completion of activities P and Q should provide cost data that can be summed to get the
total cost for a contractor to provide the new system. If the program has several contractors,
the total program cost must combine the costs associated with all contractors.
Activity S
Review the estimate
Analogy cost estimates should be reviewed before preparing the final documentation; the review is best performed by other cost estimators or supervisors experienced in analogy cost estimating.
Activity T
Document the estimate
Analogy cost estimate documentation has much in common with documentation required for
any cost estimate.
Engineering estimating
This method generally involves a more detailed examination of the new system and program. Engineering estimates prepared by contractors usually do not include other government costs or engineering change costs. Most significant estimating efforts are a combination of several methods; the best combination is the one which makes the best possible use of the most recent and applicable historical data and system description information, and which follows sound logic to extrapolate from historical cost data to estimated costs for future activities. The smaller the extrapolation gap in terms of technology, time, and activity scope, the better.
This describes a linear cost relationship of the form y = a + bx, where a and b are positive constants to be determined on the basis of historical data. A fixed cost of y = a at x = 0 is implied, as shown in Figure B12.18. In general, this relationship is applicable only over a certain range of the variable x, such as between x = c and x = d. If the values of y corresponding to x = c and x = d are known, then the cost of a facility corresponding to any x within the specified range may be obtained by linear interpolation.
Figure B12.18
Linear cost relationship with economies of scale
A nonlinear cost relationship between the facility capacity x and construction cost y can often
be represented as:
Taking the logarithm of both sides in this equation, a linear relationship can be obtained as
follows:
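In standard notation (which may differ in detail from the original), the power form and its logarithmic linearization are:

y = a x^b,   so that   log y = log a + b log x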
A nonlinear cost relationship often used in estimating the cost of a new industrial processing
plant from the known cost of an existing facility of a different size is known as the exponential
rule. Let yn be the known cost of an existing facility with capacity Qn, and y be the estimated
cost of the new facility, which has a capacity Q.
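The exponential rule referred to here is commonly written as:

y = y_n (Q / Q_n)^m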
where m usually varies from 0.5 to 0.9, depending on the specific type of facility. A value of m = 0.6 is often used for chemical processing plants.
The exponential rule can be reduced to a linear relationship by taking the logarithm of the equation:
log y = log y_n + m (log Q − log Q_n)
The exponential rule can be applied to estimate the total cost of a complete facility or the
cost of some particular component of a facility.
Cost indexes
An index is a dimensionless number that indicates how a cost or a price has changed with time (typically escalated) with respect to a base year. It shows how prices and costs vary with time; it is a measurement of inflation or deflation. Changes usually occur as a result of:
Technological advances
Availability (scarcity) of labor and materials
Changes in consumer buying patterns
An index establishes a reference from some base time period (i.e., a base year). When compared with the current-year index, it measures the percentage change from the base period.
Examples:
Engineering News-Record Construction Index
Producer Prices and Price Indexes
Consumer Price Index Detailed Report
Yn = Yk (In/Ik)
where:
Yn = estimated cost or price of item in year n
Yk = cost or price of item in year k for k < n
In = index value in year n
Ik = index value in year k
Developing indexes
For a single item, the index value is simply the ratio of the cost of the item in the current year
to the cost of the same item in the reference year, multiplied by the reference year factor.
Averaging the ratios of selected item costs in a particular year to the cost of the same items
in a reference year can create a composite index. Weights can be assigned to the items
according to their contribution to total cost.
Unit technique
Utilizes a ‘per unit’ factor that can be estimated effectively.
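In its simplest form this can be written (as an assumption, since the original equation is not spelled out) as:

C = (cost per unit) × n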
where n is the number of units.
Factor technique
An extension of the unit method where the sum of products of component quantities and
corresponding unit costs plus component costs is estimated directly.
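The variables listed below appear to belong to the familiar cost–capacity relationship, which is commonly written as:

C_A = C_B (S_A / S_B)^X

with C_A and C_B the costs of items A and B,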
where:
SA =size of item A
SB =size of item B
X =cost-capacity factor
Illustration
The total construction cost of a refinery with a production capacity of 200,000 bbl/day in
Indiana, completed in 2001 was $100 million. It is proposed that a similar refinery with a
production capacity of 300,000 bbl/day be built in California, for completion in 2003. For the
additional information given below, make an order of magnitude estimate of the cost of the
proposed plant.
In the total construction cost for the Indiana plant, there was an item of $5 million for site
preparation, which is not typical for other plants.
The variation of sizes of the refineries can be approximated by the exponential rule, with
m = 0.6.
The inflation rate is expected to be 8% per year from 1999 to 2003.
The location index was 0.92 for Indiana and 1.14 for California in 1999. These indices
are deemed to be appropriate for adjusting the costs between these two cities.
New air pollution equipment for the LA plant costs $7 million in 2003 dollars (not required
in the Indiana plant).
The contingency cost due to inclement weather delay will be reduced by the amount of
1% of total construction cost because of the favorable climate in LA (compared to
Indiana).
Solution
On the basis of the above conditions, the estimate for the new project may be obtained as
follows:
Typical cost excluding special item at Indiana is
$100 million − $5 million = $95 million
Adjustment for capacity based on the exponential law yields
($95)(300,000/200,000)0.6 = (95)(1.5)0.6 = $121.2 million
Adjustment for inflation leads to the cost in 2003 dollars as
($121.2)(1.08)4 = $164.6 million
Adjustment for location index gives
($164.6)(1.14/0.92) = $204.6 million
Adjustment for new pollution equipment at the California plant gives
$204.6 + $7 = $211.6 million
Reduction in contingency cost yields
($211.6)(1−0.01) = $209.5 million
Since there is no adjustment for the cost of construction financing, the order of magnitude
estimate for the new project is $209.5 million.
Depreciation: $0.210
Gasoline and oil: $0.059
Finance charges (based on 20% down and 48 months financing at A.P.R.): $0.065
Insurance costs (including collision): $0.060
Taxes, license and registration fees: $0.015
Tire costs: $0.011
(a) If a person who owns this ‘average’ automobile plans to drive 15,000 miles during 1998,
how much would it cost to own and operate the automobile?
(b) If the person actually drives 30,000 miles in 1998, give some reasons why his/her actual
cost may not be twice the answer obtained in part (a).
(c) Attempt to develop an estimate of the cost per mile of owning and operating this
automobile in the year 2002.
2. You must build a new and larger factory. You know that 30 years ago it cost $10 million to build a 10,000 sq ft facility. Today, you wish to build a 20,000 sq ft facility. Assume that the cost index was 200 thirty years ago and is 1,200 today. Let X = 0.6. What is the cost to build the new and larger factory?
Heavy and highway projects – Roads and bridges, dams and canals and power plants
Equipment intensive and engineering design
Residential Contractor
General Building Contractor
Specialty Contractor (Subcontractors)
Heavy and Highway Contractor
B12.7.1 Project estimation
Project estimation is the determination of the quantities of materials, equipment, and labor for a given project, and the application of proper unit costs to these items.
Why estimation?
To estimate:
The probable real cost to build a project (direct costs, indirect costs, contingency, profit)
The probable real time to build a project (activity duration and project duration)
Construction phases and type of estimates
Feasibility estimates
May be based on the total cost of the project, including land cost, professional fees, finance cost, construction cost, and operating costs
Considerable construction knowledge, experience and good judgment required.
Approximate estimates
Also called preliminary, conceptual, or budget estimates; used to evaluate design modifications (value engineering) and contractors’ bids.
Flow chart of estimating process
Figure B12.19
Estimating process
Quantity take-off (Quantity surveying)
Quantity take-off is the determination of the quantity of work to be performed, based on the drawings and specifications for a proposed project. It is the most basic and most important element of the estimating process, since the contract amount depends largely on its accuracy and efficiency. The choice of work items or cost items for quantity take-off should relate to those used for pricing the work, for planning and scheduling, and for job control.
Basic process of quantity take-off
Step 1) Identification of specific packages of work
Estimating formats (organization of the estimate)
1. Construction Specifications Institute (CSI) format (16 divisions) – building construction
2. Standard forms (WBS) – heavy and highway construction agencies, companies, contractors
CSI 16 divisions
General requirements
Site work
Concrete
Masonry
Metals
Wood & plastics
Thermal & moisture protection
Doors & windows
Finishes
Specialties
Equipment
Furnishings
Special construction
Conveying systems
Mechanical
Electrical
Work breakdown structure - Work items
Same productivity
Identical operation
Step 2) Define units of measure for work items
– Floor plans
– Elevation drawings
– Section drawings
c. Plumbing
e. Electrical
FORMS:
b. General estimate sheet - Take-off & pricing
A well-designed WBS should:
Be compatible with how the work will be done and how costs and schedules will be
managed
Give visibility to important or risky work efforts
Allow mapping of requirements, plans, testing, and deliverables
Foster clear ownership by managers and task leaders
Provide data for performance measurement and historical databases
Make sense to the workers and accountants.
There are usually many ways to design a WBS for a particular project, and there are
sometimes as many views as people in the process.
The WBS was initially developed by the U.S. defense establishment, and it is described in a Military Standard as follows:
‘A work breakdown structure is a product-oriented family tree composed of hardware,
software, services, data and facilities.... It displays and defines the product(s) to be
developed and/or produced and relates the elements of work to be accomplished to each
other and to the end product(s).’
A task-oriented WBS can be developed by beginning with a simple ‘to-do’ list and then clustering the items in a logical way. The logical theme could be project phases, functional areas, or major end products.
A sample of a standard WBS is shown in Figure B12.20.
Figure B12.20
A standard WBS
A WBS for a large project will have multiple levels of detail, and the lowest WBS element will
be linked to functional area cost accounts that are made up of individual work packages.
Whether it is three levels or seven, work packages should add up through each WBS level to
form the project total.
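This roll-up of work-package costs through the WBS levels can be pictured with a small Python sketch; the element names and costs below are invented purely for illustration:

def rollup(element):
    # An element's cost is its own work-package cost plus the costs of all its children.
    return element.get("cost", 0) + sum(rollup(c) for c in element.get("children", []))

project = {                                        # Level 1: project end objective
    "name": "1.0 Process plant",
    "children": [
        {"name": "1.1 Civil works",                # Level 2: major segment
         "children": [
             {"name": "1.1.1 Earthworks",  "cost": 400_000},    # Level 3: work packages
             {"name": "1.1.2 Foundations", "cost": 650_000}]},
        {"name": "1.2 Mechanical",
         "children": [
             {"name": "1.2.1 Piping",      "cost": 900_000},
             {"name": "1.2.2 Equipment",   "cost": 1_200_000}]},
    ],
}

print(f"Project total: ${rollup(project):,}")      # prints $3,150,000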
B12.7.3 Basics of WBS
Fundamental structure of a work breakdown structure
A WBS is a numerical, graphic representation that completely defines a project by relating
elements of work in that project to each other and to the end product. The WBS comprises discrete work packages, called elements, each describing a specific item of hardware, service, or data. Descending levels of the WBS provide elements of greater and greater
detail. The number of levels of a WBS depends on the size and complexity of the project.
Examples of the first three levels of a WBS are as follows.
Level 1 contains only the project end objective. The product at this level shall be directly identifiable with elements of the Budget and Reporting Classification Structure.
Level 2 contains the major product segments or subsections of the end objective. Major
segments are often defined by location or by the purpose served.
Level 3 contains definable components, subsystems or subsets, of the Level 2 major
segments.
A WBS shows the relationship of all elements of a project. This provides a sound basis for cost and schedule control. During the period from a project’s inception to its completion, a number of diverse financial activities must take place. These activities include cost estimating, budgeting, accounting, reporting, controlling and auditing. A WBS establishes a common frame of reference for relating job tasks to each other and for relating project costs at a summary level of detail. Since the WBS divides the project into work packages, it can also be used to interrelate the schedule and costs. The work packages or their activities can be used as the schedule’s activities. This enables resource loading of a
schedule, resource budgeting against time, and the development of a variety of cost budgets
plotted against time.
Preparing a work breakdown structure
The initial WBS prepared for a project is the project summary work breakdown structure
(PSWBS), which contains the top three levels only. Lower-level elements may be included to
clearly communicate all project requirements.
As the detail of a project increases, more detail levels can be developed beyond these summary levels.
The code of accounts (COA) is used during the estimate stage to organize the costs. As a project progresses, the same COA is used, but the elements of data are updated. By comparing the changes in the elements of the COA, variances and trends can be identified. Using the same COA once
construction work begins will provide consistency between the estimate and actual cost data
for cost control purposes.
Fundamental structure of a cost code system
A direct cost system generally includes three levels of codes. The ‘first-level’ codes,
sometimes called ‘primary levels,’ represent the major cost categories. The major
components or categories of work for each of the primary levels are listed and assigned a
‘second-level’ or sub-summary code. These ‘second-level’ codes are then broken down into work elements or bills of material, each of which is assigned a ‘third-level’ or fine-detail code. The cost estimate will list the labor and material required at the ‘third-level’ codes, and then all ‘third-level’ codes will be summarized by their respective ‘second-level’ codes. Likewise, all ‘second-level’ codes will be summarized by their respective ‘primary-level’ codes. The ‘primary-level’ codes will then be summarized for each ‘subproject’ or ‘project’ to obtain the overall ‘project’ cost estimate.
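The same summarization logic can be sketched in Python for a three-level direct cost code system; the code numbers and amounts below are hypothetical and serve only to show how third-level items roll up to sub-summary and primary totals:

from collections import defaultdict

# Third-level (fine-detail) estimates, keyed as "primary.second.third"
detail_costs = {
    "03.100.010": 12_500,    # e.g. concrete / footings / formwork
    "03.100.020": 18_200,    # e.g. concrete / footings / reinforcement
    "03.200.010": 41_000,    # e.g. concrete / slabs / place and finish
    "05.100.010": 27_300,    # e.g. metals / structural steel / erection
}

second_level = defaultdict(int)
primary_level = defaultdict(int)
for code, cost in detail_costs.items():
    primary, second, _ = code.split(".")
    second_level[f"{primary}.{second}"] += cost    # summarize third-level codes
    primary_level[primary] += cost                 # summarize second-level codes

print(dict(second_level))                          # sub-summary totals
print(dict(primary_level))                         # primary-level totals
print("Project total:", sum(primary_level.values()))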
Subproject designation
Subproject is a term used to divide a project into separately manageable portions. A subproject is generally used to identify each separately capitalizable entity, such as a building. A subproject can also be used to identify a specific geographical area or
separate physical features of a project. A matrix should be drawn for each project listing the
subprojects designated and indicating all the second-level cost codes for the construction
work required by each.
Interface of systems
Even though the numeric systems established for the WBS and COA differ, they are both
based on a structure that increases in detail as the levels increase. A correlation exists
between the WBS and COA levels. This relationship is inherent since there are costs
associated with the execution of each work package or element of the WBS. This correlation
is shown in Figure B12.21.
Incorporating the cost codes into the WBS will provide:
A framework for basic uniformity in estimating and accounting for the costs of
construction work
A means for detecting omission and duplication of items in budget estimates
A basis for comparing the cost of similar work in different projects or at different locations
A record of actual costs incurred on completed projects in a form that will be useful in the
preparation of estimates for other projects
A means of establishing the cost of property record units for continuing property
accounting records.
WBS and code of account relationship (see Figure B12.21)
Figure B12.21
WBS & COA relationship
B12.8.1 MicroFASTE
The MicroFASTE model helps the analyst develop a parametric model for estimating the costs associated with implementing a project. The project may be the production and installation of a hardware system, a software system (or a combination of both), a financial funding program, the construction and operation of an underground coal or uranium mine, or the construction of nuclear power stations, radar systems or manned space stations; all of these are possible through the techniques of parametric systems analysis. However, the MicroFASTE model is intended exclusively for performing parametric analyses of hardware systems. MicroFASTE classifies common implementation phases into the following categories and subcategories:
Equipment acquisition phases and life cycle (O&S)
Engineering
Design/Drafting: Involves the detail design engineering and drafting effort that
implements the governing specification
Systems Engineering: Establishes the equipment’s design, performance and test
specifications, predicated on the controlling requirements
Documentation: The recording of engineering activities, preparation of equipment
manuals and required management reports
Prototype and Testing: Covers all charges connected with the manufacture and testing
of engineering prototypes, and includes all brass and breadboard models
Special Engineering Tooling: Embodies the special tooling charges affiliated with the
prototype efforts. It does not include capital or amortized equipment that may be related
to the tooling. When there are no prototypes, there will be no tooling charges
Project Management: Takes in the overall management of all areas connected with the
engineering efforts such as planning, budgeting, operations and financial controls.
Production
Manufacturing: Involves the direct production charges. This is the same cost value as
calculated when total production is specified without the detail breakdowns.
Engineering Support: Embodies the engineering effort that is related to the
manufacturing activity such as material design reviews, corrections of defects, etc.
Documentation: The recording of production events as well as changes to equipment
manuals as necessitated by design modifications caused by production problems.
Production Tooling: Covers special required tooling. It does not include the cost of
capital equipment, or tools that are amortized in overhead accounts.
Project Management: Takes in the management of all areas associated with production
such as planning, budgeting, operations and financial control.
B12.8.2 Price parametric models
PRICE models use the parametric approach to cost estimating. Parametric cost modeling is
based on cost estimating relationships (CERs) that make use of product characteristics
(such as hardware weight and complexity) to estimate costs and schedules. The CERs in
the PRICE models have been determined by statistically analyzing thousands of completed
projects where the product characteristics and project costs and schedules are known. An
example of parametric modeling is the technique used for estimating the cost of a house.
The actual cost of building a house is, of course, the total cost of the materials and labor.
However, defining the required materials and labor for developing this cost estimate is time-consuming and expensive. So a parametric model that considers the characteristics of the house is used to estimate the cost quickly. The characteristics are defined quantitatively (floor area, number of rooms, etc.) and qualitatively (style of architecture, location, etc.). PRICE does not require labor hours or a bill of material. This early estimating capability makes PRICE a tool for design engineers and can provide them with the cost information needed to develop minimum-cost designs.
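To make the idea of a cost estimating relationship concrete, the Python sketch below fits a simple power-law CER (cost as a function of weight) to a handful of invented historical data points; commercial models such as PRICE rest on far larger databases and many more parameters:

import math

weights = [5, 12, 20, 35, 60]        # kg, completed products (invented data)
costs   = [40, 78, 110, 170, 250]    # $k, corresponding recorded costs (invented data)

# Fit log(cost) = log(a) + b * log(weight) by ordinary least squares.
xs = [math.log(w) for w in weights]
ys = [math.log(c) for c in costs]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = math.exp(ybar - b * xbar)

new_weight = 25                       # characteristic of the proposed product
print(f"CER: cost = {a:.1f} * weight^{b:.2f}")
print(f"Estimated cost for a {new_weight} kg item: ${a * new_weight ** b:.0f}k")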
Descriptions of the PRICE models
PRICE H is used to estimate the cost of developing and producing hardware. Most
manufactured items and assemblies can be estimated using PRICE H. PRICE H uses
product characteristics to develop the cost estimate. This makes the model a good tool to
use at the product concept stage, when there is insufficient definition to quantify the product
labor and material required for a conventional estimate. Key inputs to the PRICE H Model
are:
Weight - tells the model the size of the product being estimated.
Manufacturing Complexity - a coded value that characterizes product and process
technologies and the past performance of the organization.
Platform - a coded value that characterizes the quality, specification level, and reliability
requirements of the product application.
Quantities - the number of prototypes and production items to be estimated.
Schedule - the dates for the start and completion of the development and production
phases may be specified. The model will compute any dates that are not specified. Only
the date for the start of development is required.
Development Costs - effort associated with drafting, design engineering, systems
engineering, project management, data, prototype manufacturing, prototype tooling, and
test equipment.
Production Costs - effort associated with drafting, design engineering, project
management, data, production tooling, manufacturing, and test equipment.
All costs are reported at the material, labor, overhead, and dollar level.
With PRICE H, engineers and managers are able to develop cost estimates for each
alternative to select minimum cost product designs. It can compute the unit production cost
of a product.
B12.8.3 SEER
The SEER cost model estimates hardware cost and schedules and includes a tool for risk
analysis. It is sensitive to differences in electronic versus mechanical parameters and makes
estimates based on each hardware item’s unique design characteristics. The SEER
hardware life cycle model (SEER HLC) evaluates the life cycle cost effects due to variations in reliability, mean time to repair and repair turnaround times. SEER HLC complements SEER H, and both models will run on a personal computer. The models are based on actual data, utilize user-friendly graphical interfaces, and possess built-in knowledge bases and
databases that allow for estimates from minimal inputs (see Figure B12.22).
Figure B12.22
SEER
SEER-H can generate the best possible estimate at any stage in the hardware acquisition
process. SEER-H contains both the estimating software and the knowledge bases to provide
expert inputs and estimating acuity. The knowledge bases are built on extensive real-world
data and expertise. These can be used to form an unbiased expert opinion, particularly when
specific knowledge is not yet available. Early on, the knowledge bases save estimators from guesswork. As a project progresses and more specific design data becomes available, estimates can be quickly updated (see Figure B12.23).
Figure B12.23
Parameters of SEER-H
SEER DFM (Design for Manufacturability) is a tool designed to assist the engineer to produce and assemble products efficiently, in a manner that exploits the best practices of an organization. Two fundamental analysis steps are taken in a DFM regime: gross and detailed trade-off analyses.
Gross analysis involves product design decisions, and also fundamental process and tooling
decisions. Factors that influence gross analysis include the quantity of the planned product,
the rate at which it will need to be produced, and the investment required. There are also
machinery, assembly and setup costs to contend with.
Detailed analysis takes place once many of the primary production parameters, such as
design and basic processes, have been fixed. Factors that can be adjusted for and balanced
at the detailed level include tolerances, the proportion of surface finishes, secondary
machining options and the age and degree of process.
SEER DFM integrates the following models:
Machining (turning, boring, milling, shaping, chemical milling, grinding...): The model
explores the tradeoffs from starting with raw stock vs. sand or investment casting, etc.
The material may also be varied. Tooling, setup and other costs hinge on these choices.
Sheet metal fabrication (presses, shears, die press).
Mechanical assembly (spot welding, bolting, bracing).
Electrical assembly (PC board assembly, parts preparation, soldering & wave soldering,
fasteners).
Injection molding: Parameters include weight, cycle time, and cavities in the mould.
SEER DFM performs the analysis from standard work measurements (standard times).
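As a toy illustration of estimating from standard times (all operation times, counts and the labour rate below are assumed values, and this is not the SEER DFM algorithm itself), an assembly labour estimate can be built up as follows:

standard_minutes = {          # standard time per operation, in minutes (assumed)
    "insert PC board":  0.8,
    "fit fastener":     0.3,
    "solder joint":     0.1,
    "final inspection": 2.0,
}
operation_counts = {"insert PC board": 4, "fit fastener": 24,
                    "solder joint": 120, "final inspection": 1}
labour_rate = 48.0            # fully burdened $ per hour (assumed)

minutes_per_unit = sum(standard_minutes[op] * n for op, n in operation_counts.items())
print(f"Assembly time per unit: {minutes_per_unit:.1f} minutes")
print(f"Assembly labour per unit: ${minutes_per_unit / 60 * labour_rate:.2f}")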
Appendix C
Reference cases
Index of cases
1. Byrne vs van Tienhoven, (1880)
2. Carlill vs Carbolic Smoke Ball Co., (1893)
3. D. & C. Builders Ltd vs Rees, (1965)
4. Dickinson vs Dodds, (1876)
5. Felthouse vs Bindley, (1862)
6. Fisher vs Bell, (1961)
7. Hadley vs Baxendale, (1854)
8. Hedley Byrne & Co. Ltd vs Heller & Partners Ltd, (1963)
9. Hyde vs Wrench, (1840)
10. Pharmaceutical Society of Great Britain vs Boots Cash Chemists (Southern) Ltd, (1953)
11. Roscorla vs Thomas, (1842)
12. Stevenson vs McLean, (1880)
13. Victoria Laundry Ltd vs Newman Industries Ltd, (1949).
14. March Construction Limited v Christchurch City Council, (1994)
15. Blackpool & Fylde Aero Club v Blackpool Borough Council, (1990)
16. The Queen in Right of Ontario v Ron Engineering, (1981)
17. Ben Bruinsma v Chatham, (1985)
18. Megatech Contracting Ltd v Municipality of Ottawa-Carleton, (1989)
19. Chinook Aggregates Ltd v District of Abbotsford, (1990)
20. Davis Contractors v Fareham UDC, (1956)
plaintiffs then said: “We have no choice but to accept.” Mrs. Rees gave the plaintiffs a check
and insisted on a receipt “in completion of the account”.
The plaintiffs, being worried, brought an action for the balance. The defense was bad
workmanship and also that there was a binding settlement. The question of settlement was
tried as a preliminary issue and the judge, following Goddards v. O’Brien, (1880) 9 Q.B.D.33,
decided that a check for a smaller amount was a good discharge of the debt, this being the
generally accepted view of the law since that date. On appeal it was held (per The Master of
the Rolls, Lord Denning) that Goddards v. O’Brien was wrongly decided. A smaller sum in
cash could be no settlement of a larger sum and “no sensible distinction could be drawn
between the payment of a lesser sum by cash and the payment of it by check.”
In the course of his judgment Lord Denning said of the High Trees case: “It is worth noting that the
principle may be applied, not only so as to suspend strict legal rights, but also so as to
preclude the enforcement of them. This principle has been applied to cases where a creditor
agrees to accept a lesser sum, in discharge of a greater. So much so that we can now say
that, when a creditor and a debtor enter on a course of negotiation, which leads the debtor to
suppose that, on payment of the lesser sum, the creditor will not enforce payment of the
balance, and on the faith thereof the debtor pays the lesser sum, and the creditor accepts it
as satisfaction: then the creditor will not be allowed to enforce payment of the balance when
it would be inequitable to do so.... But he is not bound unless there has been truly an accord between them.” In the present case there was no true accord. “The debtor’s wife had held the creditors to ransom, and there was no reason in law or Equity why the plaintiffs should not enforce the full amount of the debt.”
the parties, and the property in the horse was not vested in the plaintiff at the time of the
auction sale.
D.8 Hedley Byrne & Co. Ltd v. Heller & Partners Ltd, (1963) 2 All
E.R. 575
The appellants were advertising agents and the respondents were bankers. The appellants had a client called Easipower Ltd, which was a customer of the respondents. The appellants
had contracted to place orders for advertising Easipower’s products on television and in
newspapers, and since this involved giving Easipower credit, they asked the respondents,
who were Easipower’s bankers, for a reference as to the creditworthiness of Easipower.
Hellers replied ‘without responsibility on the part of the bank or its officials’ that “Easipower is
a respectably constituted company, considered good for its ordinary business engagements.
Your figures are larger than we are accustomed to see”.
Bankers normally use careful terms when giving these references, but Hellers’ language was so guarded that only a very suspicious person might have appreciated that he was being warned not to give credit to the extent of £100,000. In fact Hellers were trying to alert the appellants, because Easipower had an overdraft with Hellers which Hellers knew they were about to call in, and Easipower might have difficulty in meeting the payment. One week
after the reference was given Hellers began to press Easipower to reduce their overdraft.
Relying on this reply, the appellants placed orders for advertising time and space for
Easipower Ltd, and the appellants assumed personal responsibility for payment to the
television and newspaper companies concerned. Easipower Ltd went into liquidation, and
the appellants lost over £17,000 on the advertising contracts. The appellants sued the
respondents for the amount of the loss, alleging that the respondents had not informed
themselves sufficiently about Easipower Ltd before writing the statement, and were therefore
liable in negligence.
Held - In the present case the respondents’ disclaimer was adequate to exclude the
assumption by them of the legal duty of care, but, in the absence of the disclaimer, the
circumstances would have given rise to a duty of care in spite of the absence of a contract or
fiduciary relationship. The dissenting judgment of Denning, L.J., in Candler v. Crane, Christmas & Co. (1951), was approved, and the majority judgment in that case was disapproved.
The defendant received the telegram at 10.01 a.m. but did not reply, so the plaintiffs, by
telegram sent at 1.34 p.m., accepted the defendant’s original offer. The defendant had
already sold the iron to a third party, and informed the plaintiffs of this by a telegram
dispatched at 1.25 p.m., arriving at 1.46 p.m. The plaintiffs had therefore accepted the offer
before the defendant’s revocation had been communicated to them. If, however, the
plaintiffs’ first telegram constituted a counter offer, then it would amount to a rejection of the
defendant’s original offer.
Held - The plaintiffs’ first telegram was not a counter offer, but a ‘mere enquiry’ for different
terms which did not amount to a rejection of the defendant’s original offer, so that the offer
was still open when the plaintiffs accepted it. (Note: the defendant’s offer was not revoked merely by the sale of the iron to another person.)
D.15 Blackpool & Fylde Aero Club v Blackpool Borough Council
[1990] 1 WLR 1195
A local authority put out to tender an airport concession to operate pleasure flights. They sent
tender documents to the previous concessionaire and six other parties. The form of tender
stated that the Council “do not bind themselves to accept all or any part of any tender. No
tender which is received after the last date and time specified shall be admitted for
consideration.” The plaintiffs submitted a tender within time, but the Council’s staff failed to empty the letterbox when they should have, and as a result the plaintiffs’ tender came to their attention too late. A tender was accepted that was lower than the plaintiffs’ tender. The
plaintiffs sued.
Held - On the issue of liability it was found that the express request for tenders by the
Council gave rise to an implied obligation on them to perform the service of considering all
tenders that were properly submitted.
had acted improperly. It said that the naming of subcontractors was an important and
fundamental aspect of the tendering process in that it was intended to prevent ‘bid
shopping’. It said that the lowest tender should have been rejected as invalid.
The Court held that the discretionary provisions in the instructions to tenderers combined
with the fact that the price was significantly lower and the tender complied substantially with
the tender document requirements made the awarding of the contracts entirely proper. The
employer’s actions were said not to have been arbitrary. They did not, at any point, breach
the evaluation process set out in the tender specifications it distributed. The Court laid
considerable emphasis on provisions in the instructions to tenderers that “the Corporation
reserves the right to reject any or all tenders or to accept any tender should it be deemed in
the interest of the Corporation so to do...” and that tenders ‘that contain irregularities of any
kind may be rejected as informal’. Without these provisions the result may well have been
different.