Chapter 7
Project management is the discipline of planning, organizing, motivating, and controlling resources to
achieve specific goals. A project is a temporary endeavor with a defined beginning and end undertaken to meet
unique goals and objectives, typically to bring about beneficial change or added value.
The primary challenge of project management is to achieve all of the project goals and objectives while
honoring the preconceived constraints. The primary constraints are scope, time, quality, and budget. The
secondary and more ambitious challenge is to optimize the allocation of necessary inputs and integrate them to
meet pre-defined objectives.
Two forefathers of project management are Henry Gantt, called the father of planning and control
techniques, who is famous for his use of the Gantt chart as a project management tool, and Henri Fayol, for his
creation of the five management functions that form the foundation of the body of knowledge associated with
project and program management.
In 1956, the American Association of Cost Engineers (now AACE International; the Association for the
Advancement of Cost Engineering) was formed by early practitioners of project management and the associated
specialties of planning and scheduling, cost estimating, and cost/schedule control (project control).
The International Project Management Association (IPMA) was founded in Europe in 1967, as a federation of
several national project management associations.
Tactical Measure | Question Answered | Sample Indicator
Time | How are we doing against the schedule? | Schedule Performance Index (SPI) = Earned Value ÷ Planned Value
Cost | How are we doing against the budget? | Cost Performance Index (CPI) = Earned Value ÷ Actual Cost
Resources | Are we within anticipated limits of staff-hours spent? | Amount of hours overspent per software iteration
Quality | Are the quality problems being fixed? | Number of defects fixed per user acceptance test
Action Items | Are we keeping up with our action item list? | Number of action items behind schedule for resolution
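For example, the two index indicators in the table above can be computed directly from earned-value figures. The following small Python sketch uses hypothetical values for planned value, earned value, and actual cost:

# Earned-value indicators from the table above (illustrative figures only).
planned_value = 120.0   # budgeted cost of work scheduled to date
earned_value = 100.0    # budgeted cost of work actually performed
actual_cost = 125.0     # actual cost of work performed

spi = earned_value / planned_value   # Schedule Performance Index; < 1 means behind schedule
cpi = earned_value / actual_cost     # Cost Performance Index; < 1 means over budget

print(f"SPI = {spi:.2f}, CPI = {cpi:.2f}")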
Metrics are an inevitable part of any piece of work being performed: they provide a system for measuring the
quality and performance of the work delivered. Work that is not controlled and measured can prove equivalent
to delivering incorrect work. Because technology grows at a tremendous pace, enterprises constantly strive to
keep well-defined project metrics.
Project metrics can be stated as pre-defined or identified measures or benchmarks that a deliverable
is supposed to attain in order to provide the expected value. With clearly defined project metrics, business groups
are able to assess the success of a project. Although certain unstated measures, such as whether the project was
delivered on time and within budget, have existed ever since the advent of enterprises, the need for more analytics
in this area has risen sharply. Different types of project metric analysis systems are in place across the industry,
based on cost, resources, hours, and so on. The following are some common project metrics related to the
person-hours delivered in a project.
Effort Variance (Ev): This is a derived metric that alerts you when a project is slipping out of control. Suppose
project A currently has the following attributes: planned effort = 100 hrs, actual effort spent so far = 150 hrs, and
work completed = 50% (i.e., 50 hrs' worth of the planned work has been earned).
The predicted effort at completion is then
X = (100 * 150)/50 = 300 hrs, where X is the effort within which the project is predicted to complete.
Hence, the variance Ev = ((Actual - Planned)/Planned)*100, using the predicted effort at completion as the actual:
Ev = ((300 - 100)/100)*100 = 200%
A variance of this size indicates that the project requires attention; otherwise it will complete at a much higher
cost in terms of the effort delivered.
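A minimal sketch of this calculation, using the figures from the example above:

# Effort Variance for project A (figures from the example above).
planned_effort = 100.0   # planned effort (hrs)
actual_effort = 150.0    # hours actually spent so far
earned_hours = 50.0      # planned hours' worth of work actually completed (50%)

# Predicted effort at completion: X = (Planned * Actual) / Earned
predicted_effort = planned_effort * actual_effort / earned_hours   # 300.0 hrs

effort_variance = (predicted_effort - planned_effort) / planned_effort * 100.0   # 200.0 %
print(f"X = {predicted_effort} hrs, Ev = {effort_variance} %")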
Schedule Variance (Sv): The schedule variance is calculated in the same way, using the number of days instead of
hours.
Weighted Defect Rate (WDR): a defect metric calculated from weights assigned to the reported bugs. The weight
depends on two factors: severity and reporter.
The rate is then calculated against the total planned hours for the project.
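The exact weighting scheme is organisation-specific; the sketch below assumes a hypothetical weight table by severity and reporter to show the shape of the calculation:

# Weighted Defect Rate with an assumed (hypothetical) weighting scheme.
severity_weight = {"critical": 5, "major": 3, "minor": 1}
reporter_weight = {"client": 2, "internal": 1}   # client-reported bugs weigh more

bugs = [
    ("critical", "client"),
    ("major", "internal"),
    ("minor", "internal"),
]

weighted_defects = sum(severity_weight[s] * reporter_weight[r] for s, r in bugs)
total_planned_hours = 400.0
wdr = weighted_defects / total_planned_hours * 100.0   # rate against planned hours
print(f"WDR = {wdr:.2f}")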
Quality Costs:
Cost of Quality: It's the total time spent on review activities in the project. For example, requirements review,
design review, code review, test plan review, team meetings for clarifications and client calls etc.
Cost of Detection: The total time spent on testing activity is considered as the cost of detection.
Cost of Failure: The total time spent on rework in the project is considered the cost of failure. Rework includes
bug fixing, design changes, test plan changes, and so on.
Cost of Failure (Cost of Poor Quality, CoPQ) = (Total rework or bug-fixing hours / Total project planned hours)*100
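A small sketch showing how these three costs could be derived from booked hours, expressing each as a percentage of planned hours in the same style as the CoPQ formula (all figures are hypothetical):

# Quality costs expressed as percentages of total planned project hours.
total_planned_hours = 1000.0
review_hours = 80.0    # requirements/design/code/test-plan reviews, clarification meetings
testing_hours = 200.0  # all testing activity
rework_hours = 120.0   # bug fixing, design changes, test-plan changes

cost_of_quality = review_hours / total_planned_hours * 100.0
cost_of_detection = testing_hours / total_planned_hours * 100.0
cost_of_failure = rework_hours / total_planned_hours * 100.0   # CoPQ

print(f"CoQ = {cost_of_quality}%  CoD = {cost_of_detection}%  CoPQ = {cost_of_failure}%")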
Estimation Technique
Estimation of the resources, cost, and schedule of software development is very important.
Making a good estimate requires the experience and expertise to convert qualitative measures into quantitative form.
Factors such as project size and the amount of risk involved affect the accuracy and efficacy of estimates.
Decomposition Technique
Here we subdivide the problem into smaller problems. When all of the small problems are solved, the main problem
is solved.
Lines of Code
Function Point
LOC (Lines of Code) and FP (Function Point) estimation methods use size as the measure. In LOC, the cost is
calculated from the number of lines of code; in FP, the cost is calculated from the number and complexity of the
various functions in the program.
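As an illustration, a size-based estimate might look like the sketch below; the productivity and cost rates are assumed for the example, not standard values:

# Size-based estimation: cost from LOC and from function points (assumed rates).
estimated_loc = 12000            # estimated lines of code
loc_per_person_month = 400       # assumed team productivity
cost_per_person_month = 8000.0   # assumed fully loaded cost

effort_pm = estimated_loc / loc_per_person_month
cost_from_loc = effort_pm * cost_per_person_month

function_points = 120            # estimated function points
cost_per_fp = 900.0              # assumed historical cost per function point
cost_from_fp = function_points * cost_per_fp

print(f"LOC-based: {effort_pm:.1f} PM, ${cost_from_loc:,.0f}; FP-based: ${cost_from_fp:,.0f}")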
Empirical Estimation Models
Estimation models use empirically derived formulas to predict estimates. Here we study a set of completed projects
and, from those observations, derive statistical formulas that can then be used to estimate the cost of other
projects. The structure of empirical estimation models is a formula, derived from data collected from past software
projects, that uses software size to estimate effort. Size, itself, is an estimate, described as either lines of
code (LOC) or function points (FP).
COCOMO Models
One very widely used algorithmic software cost model is the Constructive Cost Model (COCOMO).
The basic COCOMO model has a very simple form:
MAN-MONTHS = K1 * (Thousands of Delivered Source Instructions)^K2
Where K1 and K2 are two parameters dependent on the application and development environment.
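For the published basic COCOMO 81 model, the (K1, K2) parameters depend on the project mode (organic, semi-detached, or embedded). The sketch below applies those standard coefficient pairs:

# Basic COCOMO 81: effort in person-months from size in KDSI (thousands of
# delivered source instructions). The (K1, K2) pairs are the published coefficients.
COEFFICIENTS = {
    "organic": (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded": (3.6, 1.20),
}

def basic_cocomo_effort(kdsi: float, mode: str = "organic") -> float:
    k1, k2 = COEFFICIENTS[mode]
    return k1 * kdsi ** k2

print(f"{basic_cocomo_effort(32, 'organic'):.1f} person-months")  # a 32 KDSI organic project, ~91 PM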
Estimates from the basic COCOMO model can be made more accurate by taking into account other factors
concerning the required characteristics of the software to be developed, the qualification and experience of the
development team, and the software development environment. Some of these factors are:
Complexity of the software
Required reliability
Size of the database
Required efficiency (memory and execution time)
Analyst and programmer capability
Experience of team in the application area
Experience of team with the programming language and computer
Use of tools and software engineering practices
Many of these factors affect the person months required by an order of magnitude or more. COCOMO assumes that
the system and software requirements have already been defined, and that these requirements are stable. This is
often not the case.
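In the intermediate form of COCOMO, factors like those listed above are expressed as effort multipliers whose product scales the nominal estimate. The multiplier values in the sketch below are illustrative placeholders, not the published COCOMO tables:

# Adjusting a nominal COCOMO estimate with cost-driver effort multipliers.
# The individual multiplier values here are illustrative, not the published ones.
nominal_effort_pm = 91.0   # e.g., from the basic model above

effort_multipliers = {
    "product_complexity": 1.15,   # more complex than nominal
    "required_reliability": 1.10,
    "analyst_capability": 0.86,   # strong analysts reduce effort
    "language_experience": 0.95,
}

eaf = 1.0
for value in effort_multipliers.values():
    eaf *= value   # effort adjustment factor is the product of the multipliers

adjusted_effort_pm = nominal_effort_pm * eaf
print(f"EAF = {eaf:.2f}, adjusted effort = {adjusted_effort_pm:.1f} person-months")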
The COCOMO model is a regression model. It is based on the analysis of 63 selected projects, and its primary input
is KDSI. Its problems are:
In the early phases of the system life cycle, the size is estimated with great uncertainty, so an accurate cost
estimate cannot be arrived at.
The cost estimation equation is derived from the analysis of 63 selected projects, and it usually has problems
outside of that particular environment. For this reason, recalibration is necessary.
The first version of the COCOMO model was originally developed in 1981. It has since experienced increasing
difficulty in estimating the cost of software developed with new life-cycle processes and capabilities, including
rapid-development process models, reuse-driven approaches, object-oriented approaches, and the software process
maturity initiative.
For these reasons, the newest version, COCOMO 2.0, was developed. The major new modeling capabilities
of COCOMO 2.0 are a tailorable family of software size models, involving object points, function points and source
lines of code; nonlinear models for software reuse and reengineering; an exponent-driver approach for modeling
relative software diseconomies of scale; and several additions, deletions, and updates to previous COCOMO effort-
multiplier cost drivers. This new model is also serving as a framework for an extensive current data collection and
analysis effort to further refine and calibrate the model's estimation capabilities.
There are some cost estimation methods which are based on a function point type of measurement, such as
ESTIMACS and SPQR/20. SPQR/20 is based on a modified function point method. Whereas traditional function
point analysis is based on evaluating 14 factors, SPQR/20 separates complexity into three categories: complexity
of algorithms, complexity of code, and complexity of data structures. ESTIMACS is a proprietary system designed to
give a development cost estimate at the conception stage of a project, and it contains a module that estimates
function points as a primary input for estimating cost.
The advantages of function point analysis based model are:
Function points can be estimated from requirements specifications or design specifications, thus making it
possible to estimate development cost in the early phases of development.
Function points are independent of the language, tools, or methodologies used for implementation.
Non-technical users have a better understanding of what function points are measuring, since function points are
based on the system user's external view of the system.
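For reference, the sketch below shows a minimal unadjusted-to-adjusted function point count using the standard IFPUG average-complexity weights and the 14 general system characteristics mentioned above; the component counts and ratings are hypothetical:

# Function point counting (average-complexity IFPUG weights).
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

counts = {   # hypothetical counts for a small system
    "external_inputs": 20,
    "external_outputs": 15,
    "external_inquiries": 10,
    "internal_logical_files": 8,
    "external_interface_files": 4,
}

ufp = sum(WEIGHTS[k] * counts[k] for k in WEIGHTS)   # unadjusted function points
gsc_ratings = [3] * 14                               # the 14 factors, each rated 0-5
vaf = 0.65 + 0.01 * sum(gsc_ratings)                 # value adjustment factor
adjusted_fp = ufp * vaf
print(f"UFP = {ufp}, adjusted FP = {adjusted_fp:.0f}")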
LOC
Lines of code (often referred to as Source Lines of Code, SLOC or LOC) is a software metric used to measure the
amount of code in a software program. LOC is typically used to estimate the amount of effort that will be required
to develop a program, as well as to estimate productivity once the software is produced. Measuring software size
by the number of lines of code has been in practice since the inception of software.
There are two major types of LOC measures: physical LOC and logical LOC. The most common definition of
physical LOC is a count of "non-blank, non-comment lines" in the text of the program's source code. Logical LOC
measures attempt to measure the number of "statements", but their specific definitions are tied to specific
computer languages (one simple logical LOC measure for C-like languages is the number of statement-terminating
semicolons). It is much easier to create tools that measure physical LOC, and physical LOC definitions are easier to
explain. However, physical LOC measures are sensitive to logically irrelevant formatting and style conventions,
while logical LOC is less sensitive to formatting and style conventions. Unfortunately, LOC measures are often
stated without giving their definition, and logical LOC can often be significantly different from physical LOC.
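As a simplified illustration of the difference, the sketch below counts physical LOC as non-blank, non-comment lines and logical LOC as statement-terminating semicolons; these are toy counters, not a production measurement tool:

# Physical vs. logical LOC for C-like source (simplified).
def physical_loc(source: str) -> int:
    """Count non-blank, non-comment lines (C-style /* */ block comments are not handled here)."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("//"):
            count += 1
    return count

def logical_loc(source: str) -> int:
    """Approximate the statement count as the number of terminating semicolons."""
    return source.count(";")

code = """int total = 0;           // accumulator
for (int i = 0; i < n; i++) { total += data[i]; }
"""
print(physical_loc(code), logical_loc(code))   # 2 physical lines, 4 logical statements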
There are several cost, schedule, and effort estimation models which use LOC as an input parameter, including the
widely-used Constructive Cost Model (COCOMO) series of models invented by Dr. Barry Boehm. While these
models have shown good predictive power, they are only as good as the estimates (particularly the LOC estimates)
fed to them.