
Project Cost Estimation

Project Cost Estimation Principles

Include all activities when making estimates.
 The time required for all development activities must be taken into account, including:
 Prototyping, designing, debugging, testing, writing user documentation, deployment, etc.
Base your estimates on past experience combined with knowledge of the current project.
 If you are developing a project that has many similarities with a past project, you can expect it to take a similar amount of work.
 Base your estimates on the personal judgement of your experts.
 Use algorithmic models developed in the software industry as a whole by analyzing a wide range of projects.
 These models take into account various aspects of a project's size and complexity, and provide formulas to compute the anticipated cost.
Project Cost Estimation Principles
Anticipate the worst case and plan for contingencies.
Develop the most critical use cases first.
 If the project runs into difficulty, the critical features are more likely to have been completed.
Make three estimates, combined as in the sketch below:
 Optimistic (O)
 Imagining everything going perfectly
 Likely (L)
 Allowing for typical things going wrong
 Pessimistic (P)
 Accounting for everything that could go wrong
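The slides do not give a combining formula; a common choice is the PERT weighted average E = (O + 4L + P) / 6, which the minimal Python sketch below assumes (the task values are illustrative):

def pert_estimate(optimistic: float, likely: float, pessimistic: float) -> float:
    """Combine three-point estimates with the PERT weighted average."""
    # The likely estimate is weighted four times as heavily as the extremes.
    return (optimistic + 4 * likely + pessimistic) / 6

# Illustrative task estimated at 4 (O), 6 (L), and 11 (P) person-days.
print(pert_estimate(4, 6, 11))  # -> 6.5 person-days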
Project Cost Estimation Principles
Combine multiple independent estimates.
Use several different techniques and compare the results.
If there are discrepancies, analyze your calculations to discover which factors are causing the differences.
Revise and refine estimates as work progresses:
As you add detail.
As the requirements change.
As the risk management process uncovers problems.
Project Estimating Techniques
Following are some of the project estimation techniques:
 Guesstimating
 Delphi Technique
 Time Boxing
 Analogous OR Top-Down Estimating
 Bottom-Up Estimating
 Lines of Code (LOC)
 Function Points
 Heuristics
 Constructive Cost Model (COCOMO)
 Basic
 Intermediate
 Detailed
Guesstimating
Estimation by guessing.
It is quick and easy.
Based on feeling rather than evidence.
Tends to be over-optimistic.
Just a guess, with little real calculation behind it.
Confidence in the result is very low.
Delphi Technique
Technique for group decision making.
Involves multiple experts who arrive at a consensus.
Several individuals initially make cost estimates in private.
They then share their estimates to discover the discrepancies.
Each individual repeatedly adjusts his or her estimate until a consensus is reached.
All experts make estimates, which are then compared:
 If the estimates are reasonably close, they are averaged to reach a decision.
 If the estimates are widely spread, a decision is finalized after discussion.
Very effective.
Time-consuming and very costly.
Delphi Technique Steps
The Delphi technique has eight basic steps:
1. Identify the team that needs to perform the estimation activity. In an estimation meeting, three distinct groups of people need to be present:
 Estimation experts
 Estimation coordinator
 Author
2. The author presents the project details, including the client's needs and the system requirements, to the group of experts.
3. The author and experts agree on a specific variance value beyond which an estimate will not be accepted.
4. The coordinator prepares a list of tasks jointly decided by the team and distributes the list to all experts.
Delphi Technique Steps
5. The experts independently make their estimates for each task. After recording their estimates, they hand them over to the coordinator.
6. The coordinator prepares a summary of the estimates for each task in a table. After calculating the percentage of variance, the coordinator marks each task as accepted or not accepted based on the agreed variance value.
7. The coordinator hands the summary over to the group of experts and the author. The experts and the author discuss the tasks and assumptions where the percentage of variance is more than the acceptable level.
8. Revert to step 5 and repeat until every task has an estimate whose variance falls within the accepted level. A sketch of this acceptance check follows.
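A minimal sketch of the acceptance check in steps 6 to 8. It assumes variance is measured as the spread of the expert estimates relative to their mean; the slides do not fix an exact formula, so that definition and the 10 percent threshold are illustrative:

def variance_percent(estimates: list[float]) -> float:
    """Spread of the expert estimates as a percentage of their mean."""
    mean = sum(estimates) / len(estimates)
    return (max(estimates) - min(estimates)) / mean * 100

def review_tasks(task_estimates: dict[str, list[float]],
                 threshold: float = 10.0) -> dict[str, str]:
    """Mark each task accepted or not, as the coordinator does in step 6."""
    return {task: "accepted" if variance_percent(ests) <= threshold
            else "not accepted"
            for task, ests in task_estimates.items()}

# Illustrative estimates (person-days) from three experts for two tasks.
round_one = {"login screen": [5.4, 5.5, 5.6], "report module": [8, 14, 20]}
print(review_tasks(round_one))
# -> {'login screen': 'accepted', 'report module': 'not accepted'}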
Time Boxing
A time box is assigned to each activity.
Not guesswork; the time box is set on the basis of the requirements.
If used effectively, time boxing can help focus the project team's effort on an important and critical task.
If used inappropriately or too often, the project team members become burned out and frustrated.
May result in long hours and pressure to succeed.
Top-down Estimating
Also called analogous estimating, which refers to developing estimates based on the judgement that there is a significant similarity between the current project and others.
A very effective approach for cost and schedule planning (Royce).
Involves estimating the schedule and/or cost of the entire project in terms of how long it should take or how much it should cost.
Starts with the largest item of the project and then breaks it into subordinate items.
Top-down estimating is very common and often results from a mandate made by higher management.
The schedule and/or cost estimate is a product of some strategic plan.
After the overall time/cost allocation, the project manager assigns time and cost to each activity/task.
Top-down estimating works well when the target objectives are reasonable, realistic, and achievable.
Bottom-Up Estimating
Most real-world estimating is done using bottom-up estimating (Royce).
Involves dividing the project into smaller modules and then directly estimating the time and effort in terms of person-hours, person-weeks, or person-months for each module.
The WBS (work breakdown structure) is the basis of this technique.
The total time and associated cost of all the activities provide the basis for the project's target schedule and budget, as in the sketch below.
The project manager, in consultation with the project team, can provide reasonable time estimates for each activity.
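A minimal sketch of rolling WBS activity estimates up into a target schedule and budget; the module names, hours, and labor rate are illustrative, not from the slides:

# Each WBS work package with its estimated effort in person-hours.
wbs_estimates = {
    "requirements": 80,
    "design": 120,
    "coding": 300,
    "testing": 160,
    "documentation": 60,
}
hourly_rate = 50  # illustrative blended labor rate

total_hours = sum(wbs_estimates.values())
total_cost = total_hours * hourly_rate
print(f"Total effort: {total_hours} person-hours, budget: {total_cost}")
# -> Total effort: 720 person-hours, budget: 36000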
Lines of Code (LOC)
Counting the number of lines of code in computer programs is the most traditional and widely used software metric for sizing the application product.
It is also the most controversial:
Do we include comments?
Do we include variable declarations?
Coding technique matters: a senior programmer can write less, more efficient code for the same functionality than a junior programmer.
It is difficult to estimate in advance how many lines of code will be required to develop a given piece of software.
Heuristics
Heuristics are rules of thumb.
Heuristic approaches rely on the fact that the same basic activities will be required for a typical software development project, and that these activities will require a predictable percentage of the overall effort (Roetzheim and Beasley).
For example, when estimating the schedule for a software development task one may, based on previous projects, assign percentages of the total effort as follows (applied in the sketch after this list):
 30 percent Planning
 20 percent Coding
 25 percent Component Testing
 25 percent System Testing
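A minimal sketch applying those heuristic percentages to an overall estimate; the 400-hour total is illustrative:

# Rule-of-thumb effort split from previous projects (shares sum to 1.0).
effort_split = {
    "planning": 0.30,
    "coding": 0.20,
    "component testing": 0.25,
    "system testing": 0.25,
}
total_effort_hours = 400  # illustrative overall effort estimate

for activity, share in effort_split.items():
    print(f"{activity}: {share * total_effort_hours:.0f} hours")
# -> planning: 120, coding: 80, component testing: 100, system testing: 100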
Function Points
Focus on the functionality and complexity of an application system or a particular module.
Independent of the computer language, development methodology, or technology.
The most popular technique used to estimate the size of a software project.
Function point analysis should be conducted at various stages of the project life cycle:
 A function point analysis conducted based on the project's scope definition can be used for estimation and for developing the project's plan.
 During the analysis and design phases, function points can be used to manage and report progress and to monitor scope creep.
 A function point analysis conducted during or after the project's implementation can be useful for determining whether all of the functionality was delivered.
Function Points
Function point analysis is a structured technique for breaking up or modularizing an application into categories or classes based on functionality.
A function point count provides an idea of the size and complexity of a particular application or module of that application.
Do you think:
An application with 4,000 LOC has twice as many lines of code as a 2,000 LOC application. But will the 4,000 LOC application take twice as long and cost twice as much to build as the 2,000 LOC application?
Function Point Analysis
A 4,000 function point application will, in fact, be larger, have more functionality, and be more complex than a 2,000 function point application. Since function points are independent of the technology, we can compare these two applications regardless of the fact that one is written in Java and the other in COBOL.
The size of the application is based upon functionality in terms of:
 Inputs
 Outputs
 Inquiries
 Internal files
 External interfaces
 The complexity of the general characteristics of the system
Function points can be useful for:
 Managing scope
 Scope changes will change an application's total function point count. As a result, the project manager and the project sponsor/client can use function point analysis to determine the impact of a proposed scope change on the project's schedule and budget.
 Benchmarking
 The value of function point analysis is that its data can be collected and compared to other projects.
 Reliability
 Once knowledgeable and experienced in function point counting, different people can count function points for the same application and obtain the same measure within an acceptable margin of error.
 The process of conducting a function point analysis can be summarized in seven steps:
1. Determine the function type count to be conducted.
2. Define the boundary of the application.
3. Define all data functions and their degree of complexity.
4. Define all transactional functions and their complexity.
5. Calculate the Unadjusted Function Point Count.
6. Calculate the Value Adjustment Factor based on a set of General System Characteristics.
7. Calculate the final Adjusted Function Point Count.
Components of Function Points
 Data Functions:
 Internal Logical File (ILF)
 External Interface File (EIF)
 Transactional Functions:
 External Input (EI)
 External Output (EO)
 External Query (EQ)
Internal Logical File (ILF)
 An ILF is a logical file that stores data within the application boundary.
 For example, each entity in an Entity Relationship Diagram (ERD) would be considered an ILF. The complexity of an ILF can be classified as low, average, or high based on the number of data elements and subgroups of data elements maintained by the ILF. An example of a subgroup would be new customers for an entity called customer. Examples of data elements would be customer number, name, address, phone number, and so forth. In short, ILFs with fewer data elements and subgroups will be less complex than ILFs with more data elements and subgroups.
 Logical groupings of data
 Databases or data sets, master files, tables
 Maintained by an end user
 Utilized within the application boundary
 Represent the application's maintainable data storage requirements
 Exist while the software is in use; dynamic, not hard coded
 Not considered ILFs:
 Temporary files, work files, sort files; backup, recovery, or archive files (unless required for legal or regulatory reasons)
External Interface File (EIF)
 An EIF is similar to an ILF; however, an EIF is a file maintained by another application system. The complexity of an EIF is determined using the same criteria used for an ILF.
 Externally maintained logical groups of data
 Data resides in another system (each EIF is an ILF in another system)
 Outside the application boundary
 The user of the system being counted requires the data for reference purposes only
 Examples: help messages, error messages, reference data
 Not considered EIFs: transaction data, data format/processing
ILF/EIF Complexity Matrix (IFPUG)
External Input (EI)
 An EI refers to processes or transactional data that originate outside the application and cross the application boundary from outside to inside.
 The data generally are added, deleted, or updated in one or more files internal to the application (i.e., internal logical files).
 A common example of an EI would be a screen that allows the user to input information using a keyboard and a mouse. Data can, however, pass through the application boundary from other applications. For example, a sales system may need a customer's current balance from an accounts receivable system. Based on its complexity, in terms of the number of internal files referenced, the number of data elements (i.e., fields) included, and any other human factors, each EI is classified as low, average, or high.
 Maintains internally stored data coming into the application boundary
 Gives the user the capability to add, change, or delete data contents (e.g., via mouse, touch screen, sensor)
EI Complexity Matrix (IFPUG)
External Output (EO)
 An EO is a process or transaction that allows data to exit the application boundary.
 Examples of EOs include reports, confirmation messages, derived or calculated totals, and graphs or charts. This data could go to screens, printers, or other applications. After the EOs are counted, they are rated based on their complexity, like the external inputs (EI).
 Data passing out of the application boundary
 Gives the user the ability to produce outputs (e.g., reports, graphics, displays)
EO Complexity Matrix (IFPUG)
External Inquiry (EQ)
 An EQ is a process or transaction that includes a combination of inputs and outputs for retrieving data from either the internal files or from files external to the application.
 EQs do not update or change any data stored in a file; they only read this information. Queries with different processing logic or a different input or output format are counted as separate EQs.
 Once the EQs are identified, they are classified based on their complexity as low, average, or high, according to the number of files referenced and the number of data elements included in the query.
 Combination of input (request) and output (retrieval)
 Allows the user to select and display specific data from files
 Once all of the ILFs, EIFs, EIs, EOs, and EQs are identified and rated, the Unadjusted Function Point Count can be calculated.
EQ Complexity Matrix (IFPUG)
Application Boundary
Function Point Calculations
 Identify the unadjusted function points.
 Calculate the total of the GSCs (General System Characteristics).
 Calculate the Value Adjustment Factor (VAF).
 Apply a formula to calculate the Adjusted Function Points (AFP).
Function Point Calculation Table
Function    Low    Average    High
ILF          7       10        15
EIF          5        7        10
EI           3        4         6
EO           4        5         7
EQ           3        4         6
Example
 After reviewing an application system, the following was determined:
 ILF: 3 Low, 2 Average, 1 High
 EIF: 2 Average
 EI: 3 Low, 5 Average, 4 High
 EO: 4 Low, 2 Average, 1 High
 EQ: 2 Low, 5 Average, 3 High
Complexity    Low    Average    High    Total
ILF           3*7     2*10      1*15      56
EIF            0      2*7        0        14
EI            3*3     5*4       4*6       53
EO            4*4     2*5       1*7       33
EQ            2*3     5*4       3*6       44
Total Unadjusted Function Points (UFP)   200
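A minimal sketch that reproduces the UFP total above from the weight table and the example counts:

# IFPUG weights from the calculation table: (low, average, high).
weights = {
    "ILF": (7, 10, 15),
    "EIF": (5, 7, 10),
    "EI": (3, 4, 6),
    "EO": (4, 5, 7),
    "EQ": (3, 4, 6),
}
# Counts from the example: (low, average, high).
counts = {
    "ILF": (3, 2, 1),
    "EIF": (0, 2, 0),
    "EI": (3, 5, 4),
    "EO": (4, 2, 1),
    "EQ": (2, 5, 3),
}

ufp = sum(c * w
          for func in weights
          for c, w in zip(counts[func], weights[func]))
print(ufp)  # -> 200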
Example:
 Computation of the Value Adjustment Factor (VAF).
 The VAF is based on the Degrees of Influence (DI), often called the Processing Complexity Adjustment (PCA), and is derived from the fourteen General System Characteristics (GSC) by rating each one on the following scale (applied in the sketch below):
 0 = not present or no influence
 1 = incidental influence
 2 = moderate influence
 3 = average influence
 4 = significant influence
 5 = strong influence
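A minimal sketch of steps 6 and 7, assuming the standard IFPUG formulas VAF = 0.65 + 0.01 × (sum of the 14 GSC ratings) and AFP = UFP × VAF; these formulas are not shown on the slides, and the ratings below are illustrative:

def adjusted_function_points(ufp: int, gsc_ratings: list[int]) -> float:
    """Apply the standard IFPUG adjustment: VAF = 0.65 + 0.01 * total DI."""
    assert len(gsc_ratings) == 14, "IFPUG defines exactly fourteen GSCs"
    assert all(0 <= r <= 5 for r in gsc_ratings), "each rating is 0..5"
    total_di = sum(gsc_ratings)   # Total Degrees of Influence
    vaf = 0.65 + 0.01 * total_di  # VAF ranges from 0.65 to 1.35
    return ufp * vaf

# Illustrative ratings for the fourteen GSCs on the 0-5 scale above.
ratings = [3, 2, 4, 3, 1, 0, 2, 5, 3, 2, 1, 4, 3, 2]
print(adjusted_function_points(200, ratings))  # -> 200 * 1.00 = 200.0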
General System Characteristics