SPM_UNIT-2.2

The document outlines the syllabus for a Software Project Management course, focusing on project life cycles and effort estimation techniques. It discusses various software estimation methods, including algorithmic models, expert judgment, and analogy, as well as the importance of function points and COCOMO models for estimating project effort. Additionally, it highlights the challenges of software estimation and the need for effective sizing and measurement of software projects.

SOFTWARE PROJECT MANAGEMENT
(KOE-068)
Faculty: K P Singh
Course: B.Tech 6th Semester
Session 2021-22
Department of Computer Science & Engineering
University Syllabus
Unit-2: Project Life Cycle and Effort Estimation:

Software process and Process Models – Choice of Process models – Rapid Application development – Agile methods – Dynamic System Development Method – Extreme Programming – Managing interactive processes – Basics of Software estimation – Effort and Cost estimation techniques – COSMIC Full function points – COCOMO II – a Parametric Productivity Model.
UNIT-2 LECTURE
Effort Estimation
Faculty: K P Singh
Department of Computer Science & Engineering
Software Effort Estimation
 A successful project is one in which the system is delivered on time, within budget, and with the required quality.
Difficulties in Software Estimation
 Novel Applications of Software
 Changing Technology
 Lack of Homogeneity of Project Experience
 Subjective Nature of Estimating
 Political Implications
Note (project data): SLOC = Source Lines of Code; WM = Work-Months
Where are estimates done?
 Estimates are carried out at various stages of a software project:
 Strategic Planning
 Decide priority to each project.
 Feasibility Study
 Benefits of potential system
 System Specification
 Detailed requirement analysis at design stage.
 Evaluation of Suppliers Proposals
 Tender Management
 Project Planning
 Detailed estimates of smaller work components
during implementation.
Software Effort Estimation Techniques
 Algorithmic Models - use ‘effort drivers’ representing
characteristics of the target system and the implementation
environment to predict effort
 Expert Judgment - Advice of knowledgeable staff is solicited
 Analogy – Similar Completed Project
 Parkinson – Staff Effort available to do project
 Price to Win – Sufficiently low to win a contract.
 Top-down – Overall estimate is formulated
 Bottom-up – Individual components are aggregated
Expert Judgment
 Asking for an estimate of task effort from someone who is knowledgeable about either the application or the development environment.
 Experts typically combine an informal analogy approach, in which similar projects from the past are identified, with bottom-up estimating.
Estimating by Analogy
 Called “Case-Based Analogy”
 The estimator identifies completed projects (source cases) with characteristics similar to the new project (target case)
 The effort of the source case is used as a base estimate for the target
 Tool support: the ANGEL software tool, which measures the Euclidean distance between the cases (a sketch of this idea follows below)
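As an illustrative sketch of analogy-based estimation (this is not the ANGEL tool itself; the project attributes and numbers are hypothetical), the nearest completed project can be found by computing the Euclidean distance between attribute vectors and reusing its effort as the base estimate:

```python
import math

# Hypothetical source cases: (name, attribute vector, actual effort in person-days).
# The attributes could be, e.g., number of inputs and number of outputs.
completed_projects = [
    ("Project A", (30, 12), 250),
    ("Project B", (55, 25), 480),
    ("Project C", (20, 8), 170),
]

def euclidean_distance(a, b):
    """Distance between two attribute vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def estimate_by_analogy(target_attributes):
    """Return the closest completed project and its effort as the base estimate."""
    name, _, effort = min(
        completed_projects,
        key=lambda project: euclidean_distance(project[1], target_attributes),
    )
    return name, effort

closest, base_estimate = estimate_by_analogy((28, 10))
print(f"Closest source case: {closest}, base estimate: {base_estimate} person-days")
```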
Problems with Over and Under Estimates
 Parkinson’s Law
 “Given an easy target staff will work less hard”

 Brooks' Law
 Effort required to implement a project will go up
disproportionately with the number of staff
assigned to the project
 “ Putting more people on a late job makes it later”
Bottom-up Estimating
 Work Breakdown Structure
 Assumptions about characteristics of the final system
 Number and size of software modules
 Appropriate at detailed stages of project planning
 Used when a project is completely novel or no historical data is available
Top-down Approach and Parametric Models
 Effort = (system size) * (productivity rate)
 System size is expressed in KLOC
 Example: productivity rate = 40 days per KLOC
 Software module to be constructed is 2 KLOC
 Effort = 2 * 40 = 80 days
Note:
KLOC - Thousands of Lines of Code (a short code sketch of this calculation follows)
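A minimal sketch of this top-down parametric calculation, using the figures from the slide (a 2 KLOC module at 40 days per KLOC); the function name is illustrative only:

```python
def top_down_effort(size_kloc: float, productivity_days_per_kloc: float) -> float:
    """Effort = system size (KLOC) * productivity rate (days per KLOC)."""
    return size_kloc * productivity_days_per_kloc

# Figures from the slide: a 2 KLOC module at 40 days per KLOC.
print(top_down_effort(2, 40))  # 80 days
```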
Measure of Work
 Measures such as:
🞑 SLOC (Source Lines of Code)
🞑 KLOC (Thousand Lines of Code)
Need for Software Sizing
1. Estimation and Budgeting
2. Phasing Development Work
3. Prioritization of Work
4. Monitoring the Progress
5. Bidding for Projects
6. Allocating Testing Resources
7. To measure and Manage Productivity
8. Risk Assessment
9. Software Asset Valuation
10. CMMI Levels 2 and 3 require that a valid sizing method be used.
Software Sizing – Lines of Code
The easiest and historically the most common method of sizing a software project has been counting the number of lines of code and/or the number of screens.
Advantages
The counting process can be automated
Intuitive, as the measurements are easily understood
Disadvantages
Lines of Code depends on the language used and the skill set of the developer
The coding phase is only around 30-35% of the actual software development
Lack of counting standards (what about comments, notes, etc.?)
Lines of Code
• Language Dependent
• Skill dependent
• Unknown until written
• No Standards
• Function Points are better
History
1979 – Function Points introduced by Allan Albrecht
1984 – First FP guidelines
1986 – First IFPUG Board of Directors
1994 – CPM Release 4.0
2003 – ISO Standard
Best in Class Software Companies
• Higher Productivity
• Higher Quality
• Less Overtime
• Less Redundancy
• More Specialization
• Software Measurement Programs

Typical effort distribution by phase:
Best in Class – 30% Requirements, 35% Design, 25% Coding, 10% Testing
Worst in Class – 10% Requirements, 15% Design, 37% Coding, 38% Testing
Objectives of Function Point
Measures software by quantifying the functionality
requested by and provided to the customer based
primarily on logical design.
Measures software development and
maintenance independently of technology used
for implementation.
Measures software development and
maintenance consistently across all projects and
organizations.
Types of Function Point Counts
Development
All Phases through development
Forms a baseline

Enhancement
In Production,
has a baseline
Count the size of the successive enhancements

Application
In Production; where no baseline exists, this count forms the baseline
Major App Sizes (2010)
Application – Approximate Size in Function Points
Star Wars Missile Defense – 350,000
ERP (SAP, Oracle, etc.) – 300,000
Microsoft Windows Vista – 159,000
Microsoft Office 2007 – 98,000
Airline Reservation System – 50,000
NASA Space Shuttle – 25,000
Changes to Requirements
Function point counts grow as requirements change through the life cycle:
Requirements: 100 FPs
Functional Design: 120 FPs – state code input screen changed (3 FPs), interface to N&A file added (10 FPs), N&A inquiry and state code inquiry added (7 FPs); impact: +1 month effort, +2 weeks schedule, +$5K cost
Detail Design: 130 FPs – table added (10 FPs); impact: +0.5 month effort, +1 week schedule, +$2.5K cost
Delivered Application: 135 FPs – new regulatory summary report added (5 FPs); impact: +0.25 month effort, +2.5 days schedule, +$1.25K cost

Source: International Function Point Users Group, 2001
Steps in FP Counting
1. Determine Type of Count
2. Identify Counting Scope and Application
Boundary
3. Count Data Functions
4. Count Transaction Functions
5. Determine Unadjusted Function Points
6. Determine Value Adjustment Factor
7. Calculate Adjusted Function Point Count
FP - Functionalities
Data Functionality
◦ Internal Logical Files (ILF)
◦ External Interface Files (EIF)
Transaction Functionality
◦ External Inputs (EI)
◦ External Outputs (EO)
◦ External Queries (EQ)
FP Data & Transaction Functionality
[Figure: an Accounts Payable application boundary. Users add/change invoices and payments (External Inputs), the Purchase Order system supplies purchase order information (External Interface File), users query payment status (External Query) and receive paid-invoice output (External Output), and the Invoices, Vendor and Payments tables inside the boundary are Internal Logical Files.]
Albrecht Function Point Analysis
 Top-down method devised by Allan Albrecht (IBM)
 Developed the idea of Function Points (FPs)
 Function point analysis is based on five components:
🞑 External Input Types
🞑 External Output Types
🞑 External Inquiry Types – US spelling of “enquiry”
🞑 Logical Internal File Types – data stores
🞑 External Interface File Types – data passed to and from other systems (e.g. BACS)
BACS – Bankers' Automated Clearing Services
External Input Types are input transactions that update internal computer files.
External Interface Files (EIF):
The second Data Function a system provides an end user is also
related to logical groupings of data. In this case the user is not responsible for
maintaining the data. The data resides in another system and is maintained
by another user or system. The user of the system being counted requires this
data for reference purposes only.
For example, it may be necessary for a pilot to reference position data from a
satellite or ground-based facility during flight.
The pilot does not have the responsibility for updating data at these sites but
must reference it during the flight.

Groupings of data from another system that are used only for reference
purposes are defined as External Interface Files (EIF).
Transactional Functions
These functions address the user's capability to access the data contained in ILFs
and EIFs. This capability includes maintaining, inquiring and outputting of data.
These are referred to as Transactional Functions.
External Inputs (EI)
This Transactional Function allows a user to maintain Internal Logical Files
(ILFs) through the ability to add, change and delete the data. For example, a
pilot can add, change and delete navigational information prior to and during
the mission.

In this case the pilot is utilizing a transaction referred to as an External Input (EI). An External Input gives the user the capability to maintain the data in ILFs through adding, changing and deleting its contents.
External Output (EO)
This Transactional Function gives the user the ability to produce outputs. For example
a pilot has the ability to separately display ground speed, true air speed and calibrated
air speed. The results displayed are derived using data that is maintained and data
that is referenced. In function point terminology the resulting display is called an
External Output (EO).

External Queries (EQ)

This Transactional Function addresses the requirement to select and display specific data from files. To accomplish this, a user inputs selection information that is used to retrieve data that meets the specific criteria. In this situation there is no manipulation of the data; it is a direct retrieval of information contained on the files. For example, if a pilot displays terrain clearance data that was previously set, the resulting output is the direct retrieval of stored information. These transactions are referred to as External Inquiries (EQ).
External Queries (EQ)
Input Side:
• Click of the mouse
• Search values
• Action keys (command buttons)
• Error messages
• Confirmation messages (searching)
• Clicking on an action key
• Scrolling
Output Side:
• Values read from an internal logical file or external interface file
• Color or font changes on the screen
• Error messages
• Confirmation messages
Recursive fields are counted only once.
Albrecht Complexity Multipliers
Unadjusted Function Point Calculator
Complexity of Components

Component Type            Low         Average       High         L+A+H      Total
External Inputs           3 x3 = 9    1 x4  = 4     1 x6  = 6    9+4+6      19
External Outputs          5 x4 = 20   0 x5  = 0     1 x7  = 7    20+0+7     27
External Queries          5 x3 = 15   0 x4  = 0     1 x6  = 6    15+0+6     21
Internal Logical Files    1 x7 = 7    2 x10 = 20    1 x15 = 15   7+20+15    42
External Interface Files  0 x5 = 0    0 x7  = 0     1 x10 = 10   0+0+10     10
Total Unadjusted Function Points                                            119
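A minimal sketch of this unadjusted function point (UFP) calculation, using the weights shown in the table above; the component counts are the example values from the same table:

```python
# Weights (low, average, high) per component type, as in the table above.
WEIGHTS = {
    "External Inputs": (3, 4, 6),
    "External Outputs": (4, 5, 7),
    "External Queries": (3, 4, 6),
    "Internal Logical Files": (7, 10, 15),
    "External Interface Files": (5, 7, 10),
}

# Example counts (low, average, high) from the worked example above.
COUNTS = {
    "External Inputs": (3, 1, 1),
    "External Outputs": (5, 0, 1),
    "External Queries": (5, 0, 1),
    "Internal Logical Files": (1, 2, 1),
    "External Interface Files": (0, 0, 1),
}

def unadjusted_function_points(counts, weights):
    """Sum of (count * weight) over every component type and complexity level."""
    return sum(
        count * weight
        for component in counts
        for count, weight in zip(counts[component], weights[component])
    )

print(unadjusted_function_points(COUNTS, WEIGHTS))  # 119
```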
IFPUG File Type Complexity
Function Points Mark II
 Sponsored by the CCTA (Central Computer and Telecommunications Agency)
 Mark II – proposed as an improvement on and replacement for the Albrecht method
 As in the Albrecht method:
 Information Processing Size is first measured in UFPs (Unadjusted Function Points)
 Then a TCA (Technical Complexity Adjustment) is applied
Model of a Transaction
[Figure: input from the user flows into a process, which reads from and writes to a data store; output is returned to the user.]
For each transaction, UFPs are calculated as:
 UFPs = Wi * (number of input data element types) + We * (number of entity types referenced) + Wo * (number of output data element types)
 Wi, We and Wo are weightings derived by asking developers about the proportions of effort spent on inputs, entity accesses and outputs.
 FP counters normally use the industry-average weightings:
 Wi = 0.58
 We = 1.66
 Wo = 0.26
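A minimal sketch of the Mark II calculation for a single transaction, using the industry-average weightings quoted above; the transaction counts in the example call are hypothetical:

```python
# Industry-average Mark II weightings from the slide above.
W_INPUT, W_ENTITY, W_OUTPUT = 0.58, 1.66, 0.26

def mark2_ufps(input_types: int, entities_referenced: int, output_types: int) -> float:
    """UFPs = Wi*inputs + We*entities referenced + Wo*outputs, per transaction."""
    return (W_INPUT * input_types
            + W_ENTITY * entities_referenced
            + W_OUTPUT * output_types)

# Hypothetical transaction: 10 input data element types, 3 entities referenced,
# 6 output data element types.
print(round(mark2_ufps(10, 3, 6), 2))  # 12.34
```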
COSMIC Full Function Points
 COSMIC deals with decomposing the system architecture into a hierarchy of software layers.
 Inputs and outputs are aggregated into data groups.
 Each data group brings together data items that relate to the same object of interest.
 Data groups can be moved in 4 ways:
 Entries (E)
 Exits (X)
 Reads (R)
 Writes (W)
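As a sketch of how a COSMIC count might be tallied (the rule assumed here, that each data movement contributes one COSMIC Function Point, is the standard COSMIC convention rather than something stated on the slide; the movements listed are hypothetical):

```python
from collections import Counter

# Hypothetical data movements identified for one functional process:
# Entry (E), Exit (X), Read (R) or Write (W) of a data group.
movements = ["E", "R", "W", "X", "E", "R", "X"]

def cosmic_size(data_movements):
    """Assume each data movement contributes 1 CFP (COSMIC Function Point)."""
    counts = Counter(data_movements)
    return counts, sum(counts.values())

counts, total_cfp = cosmic_size(movements)
print(counts)     # Counter({'E': 2, 'R': 2, 'X': 2, 'W': 1})
print(total_cfp)  # 7 CFP
```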
The COCOMO Model
 B. W. Boehm introduced the COCOMO model in 1981.
 It is based on a cost database of more than 60 different projects.
 This model estimates the total effort in terms of “person-months” of the technical project staff.
 COCOMO is a hierarchy of cost estimation models; it includes three forms: the basic, intermediate and detailed sub-models.
Projects can be of 3 types (modes):
1. Organic mode:
Relatively simple and small projects with a small team are handled. Such a team should have good application experience working with less rigid requirements; the project is relatively small and requires little innovation.
2. Semidetached mode:
For intermediate software projects (a little more complex than organic-mode projects in terms of size) in which teams with mixed experience levels must work to a set of rigid and less-than-rigid requirements.
3. Embedded mode:
When the software project must be developed within a tight set of hardware, software and operational constraints.
Example of a complex (embedded) project: an air traffic control system.
Comparison of the three COCOMO modes
Organic: project size typically 2-50 KLOC; small projects with experienced developers working in a familiar, in-house environment; little innovation; deadline not tight.
Semi-detached: project size typically 50-300 KLOC; medium-size projects with average previous experience on similar projects; medium innovation; medium deadline; medium development environment.
Embedded: project size typically over 300 KLOC; large projects with complex interfaces and very little previous experience; significant innovation; tight deadline; complex hardware / customer interfaces required.
• The basic model aims at estimating, in a quick and rough
fashion, most of the small to medium sized software
projects.
• Depending on the problem at hand, the team might
include a mixture of experienced and less experienced
people with only a recent history of working together.
• It does not account for differences in hardware constraints,
personnel quality and experience, use of modern tools and
techniques, and other project attributes known to have a
significant influence on software costs, which limits its
accuracy
The Basic COCOMO equations take the form:
E = a_b (KLOC)^(b_b)  person-months
D = c_b (E)^(d_b)  months
SS = E/D  persons
P = KLOC/E
where:
E = effort applied, in person-months
D = TDEV = development time, in months
SS = staff size
P = productivity
a_b, b_b, c_b, d_b = coefficients
Basic COCOMO Coefficients
Project              a_b    b_b    c_b    d_b
Organic mode         2.4    1.05   2.5    0.38
Semidetached mode    3.0    1.12   2.5    0.35
Embedded mode        3.6    1.20   2.5    0.32
Example:
Suppose that a project was estimated to be 400 KLOC. Calculate the effort and development time for each of the three modes, i.e. organic, semidetached and embedded.
The Basic COCOMO equations take the form:
E = a (KLOC)^b
D = c (E)^d
Solution
Estimated size of the project = 400 KLOC
1. Organic Mode
• E = 2.4 x (400)^1.05 = 1295.31 PM
• D = 2.5 x (1295.31)^0.38 = 38.07 M
2. Semidetached Mode
• E = 3.0 x (400)^1.12 = 2462.79 PM
• D = 2.5 x (2462.79)^0.35 = 38.45 M
3. Embedded Mode
• E = 3.6 x (400)^1.20 ≈ 4772 PM
• D = 2.5 x (4772)^0.32 ≈ 37.6 M
Example: Consider a software project using semidetached mode with 30,000 lines of code. We obtain the estimates for this project as follows:
(1) Effort estimation
E = a_b (KLOC)^(b_b) person-months
E = 3.0 x (30)^1.12, where lines of code = 30,000 = 30 KLOC
E = 135 person-months

(2) Duration estimation
D = c_b (E)^(d_b) months
D = 2.5 x (135)^0.35
D = 14 months

(3) Person estimation
SS = E/D = 135/14
SS = 10 persons approx.
Example: We have determined our project fits the characteristics of semidetached mode and we estimate our project will have 32,000 Delivered Source Instructions (DSI).
Using the formulas, we can estimate:
 Effort = 3.0 x (32)^1.12 = 146 man-months
 Duration = 2.5 x (146)^0.35 = 14 months
 Productivity = 32,000 DSI / 146 MM = 219 DSI/MM
 Person estimation = 146 MM / 14 months ≈ 10 FSP
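A minimal sketch implementing the Basic COCOMO equations with the coefficient table above; it reproduces the 32 KDSI semidetached example (values rounded):

```python
# Basic COCOMO coefficients (a_b, b_b, c_b, d_b) per mode, from the table above.
COEFFICIENTS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str):
    """Return (effort in person-months, development time in months, staff size)."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b       # E = a_b * KLOC^b_b
    duration = c * effort ** d   # D = c_b * E^d_b
    staff = effort / duration    # SS = E / D
    return effort, duration, staff

effort, duration, staff = basic_cocomo(32, "semidetached")
print(f"Effort   ≈ {effort:.0f} PM")   # ≈ 146 PM
print(f"Duration ≈ {duration:.0f} M")  # ≈ 14 months
print(f"Staff    ≈ {staff:.0f} FSP")   # ≈ 10
```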
Merits of the Basic COCOMO model:
The Basic COCOMO model is good for quick, early, rough order-of-magnitude estimates of software projects.

Limitations:

1. The accuracy of this model is limited because it does not consider certain factors for cost estimation of software, such as hardware constraints, personnel quality and experience, and modern techniques and tools.

2. The estimates of the Basic COCOMO model are within a factor of 1.3 only 29% of the time and within a factor of 2 only 60% of the time.
In the Intermediate model Boehm introduced an additional set of 15 predictors called cost drivers to take account of the software development environment. Cost drivers are used to adjust the nominal cost of a project to the actual project environment, increasing the accuracy of the estimate.
The cost drivers are grouped into 4 categories:-
1. Product attributes
a. Required software reliability (RELY)
b. Database size (DATA)
c. Product complexity (CPLX)
2. Computer attributes
a. Execution time constraint (TIME)
b. Main store constraint (STOR)
c. Virtual machine volatility (VIRT)
d. Computer turnaround time (TURN)
3. Personnel attributes
a. Analyst capability (ACAP)
b. Application experience (AEXP)
c. Programmer capability (PCAP)
d. Virtual machine experience (VEXP)
e. Programming Language experience (LEXP)
4. Project attributes
a. Modern programming practices (MODP)
b. Use of software tool (TOOL)
c. Required development schedule (SCED)
Each cost driver is rated for a given project environment. The rating
uses a scale very low, low, nominal, high, very high, extra high which
describes to what extent the cost driver applies to the project being
estimated.
This model Identifies personnel, product, computer and project
attributes which affect cost
The Intermediate COCOMO equations take the form:
E = a_i (KLOC)^(b_i) * EAF
D = c_i (E)^(d_i)
SS = E/D  persons
P = KLOC/E
where:
EAF = Effort Adjustment Factor (the product of the selected cost-driver ratings)
E = effort
D = development time
SS = staff size
P = productivity
a_i, b_i, c_i, d_i = coefficients

Coefficients for Intermediate COCOMO
Project              a_i    b_i    c_i    d_i
Organic mode         3.2    1.05   2.5    0.38
Semidetached mode    3.0    1.12   2.5    0.35
Embedded mode        2.8    1.20   2.5    0.32
Example:
A new project with an estimated 400 KLOC embedded system has to be developed. The project manager has a choice of hiring from two pools of developers: developers with very high application experience but very little experience in the programming language being used, or developers with very low application experience but a lot of experience with the programming language. What is the impact of hiring all developers from one pool or the other?
Solution
This is the case of embedded mode.
Hence E = a_i (KLOC)^(b_i) * EAF
D = c_i (E)^(d_i)
Case 1: Developers with very high application experience and very little experience in the programming language being used.
EAF = 0.82 x 1.14 = 0.9348
E = 2.8 x (400)^1.20 x 0.9348 = 3470 PM
D = 2.5 x (3470)^0.32 = 33.9 M
Case 2: Developers with very low application experience but a lot of experience with the programming language.
EAF = 1.29 x 0.95 = 1.22
E = 2.8 x (400)^1.20 x 1.22 = 4528 PM
D = 2.5 x (4528)^0.32 = 36.9 M
Case 2 requires more effort and time. Hence, a lot of programming-language experience combined with low application experience cannot match very high application experience combined with little programming-language experience.
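A minimal sketch of the Intermediate COCOMO calculation used in this example; the multiplier values (0.82, 1.14, 1.29, 0.95) are the two cost-driver ratings quoted above, not a full cost-driver table:

```python
from math import prod

# Intermediate COCOMO coefficients (a_i, b_i, c_i, d_i) per mode, from the table above.
INTERMEDIATE_COEFFICIENTS = {
    "organic":      (3.2, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (2.8, 1.20, 2.5, 0.32),
}

def intermediate_cocomo(kloc: float, mode: str, cost_driver_multipliers):
    """E = a_i * KLOC^b_i * EAF and D = c_i * E^d_i, where EAF is the product of the multipliers."""
    a, b, c, d = INTERMEDIATE_COEFFICIENTS[mode]
    eaf = prod(cost_driver_multipliers)
    effort = a * kloc ** b * eaf
    duration = c * effort ** d
    return effort, duration

# Case 1: very high application experience (0.82), very low language experience (1.14).
print(intermediate_cocomo(400, "embedded", [0.82, 1.14]))  # ≈ (3470 PM, 33.9 M)
# Case 2: very low application experience (1.29), very high language experience (0.95).
# ≈ (4549 PM, 37.0 M); the slide rounds EAF to 1.22, giving 4528 PM.
print(intermediate_cocomo(400, "embedded", [1.29, 0.95]))
```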
Detailed COCOMO Model
A large amount of work was done by Boehm to capture all significant aspects of software development. The detailed model offers a means of processing all the project characteristics to construct a software estimate.

Capabilities of the Detailed Model:
• Phase-Sensitive Effort Multipliers
• Three-Level Product Hierarchy
Phase-Sensitive Effort Multipliers:
Some phases (design, programming, integration/test) are
more affected than others by factors defined by the cost
drivers. This helps in determining the manpower allocation for each phase of the project.

Three-Level Product Hierarchy:
Three product levels are defined. These are module,
subsystem and system levels. The rating of the cost drivers are
done at appropriate level; that is, the level at which it is most
susceptible to variation.
A software development is carried out in four successive
phases:-
1. Plan/ requirements: This is the first phase of the
development cycle. The requirement is analyzed, the product plan
is set up and a full product specification is generated. This phase
consumes from 6% to 8% of the effort and 10% to 40% of the
development time.
2. Product Design: The second phase of the COCOMO
development cycle is concerned with the determination of the product
architecture and the specification of the subsystem. This phase
requires from 16% to 18% of the nominal effort and can last from 19%
to 38% of the development time.
3. Programming: The third phase of the COCOMO development
cycle is divided into two sub phases: detailed design and code/unit
test. This phase requires from 48% to 68% of the effort and lasts
from 24% to 64% of the development time.

4. Integration/test: This phase of the COCOMO development
cycle occurs before delivery. This mainly consists of putting the tested parts together and then testing the final product. This phase requires from 16% to 34% of the nominal effort and can last from 18% to 34% of the development time.
COCOMO 2 models
• COCOMO 2 incorporates a range of sub-models that produce
increasingly detailed software estimates.
• The sub-models in COCOMO 2 are:
– Application composition model. Used when software is composed
from existing parts.
– Early design model. Used when requirements are available but design
has not yet started.
– Reuse model. Used to compute the effort of integrating reusable
components.
– Post-architecture model. Used once the system architecture has been
designed and more information about the system is available.
Application composition model
• Supports prototyping projects and projects where there is extensive
reuse.
• Based on standard estimates of developer productivity in application
(object) points/month.
• Takes CASE tool use into account.
• Formula is
– PM = (NAP x (1 - %reuse/100)) / PROD
– PM is the effort in person-months, NAP is the number of application points and PROD is the productivity.
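A minimal sketch of the application-composition formula above; the application point count, reuse percentage and productivity figure in the example are hypothetical:

```python
def application_composition_effort(nap: int, percent_reuse: float, prod: float) -> float:
    """PM = (NAP * (1 - %reuse/100)) / PROD, with PROD in application points per month."""
    return (nap * (1 - percent_reuse / 100)) / prod

# Hypothetical project: 50 application points, 20% reuse, productivity of 13 points/month.
print(round(application_composition_effort(50, 20, 13), 1))  # ≈ 3.1 person-months
```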
Early design model
• Estimates can be made after the requirements have been agreed.
• Based on a standard formula for algorithmic models
– PM = A x Size^B x M, where
– M = PERS x RCPX x RUSE x PDIF x PREX x FCIL x SCED;
– A = 2.94 in the initial calibration, Size in KLOC;
– B varies from 1.1 to 1.24 depending on the novelty of the project, development flexibility, risk management approaches and the process maturity.
Estimate of person-months
 PM = A x (Size)^SF x EM1 x EM2 x EM3 x ... x EMn
🞑 PM = effort in person-months
🞑 A = constant (2.94 in the 2000 calibration)
🞑 Size = KDSI (thousands of delivered source instructions)
🞑 SF = exponent scale factor
🞑 EM1 ... EMn = effort multipliers
 The exponent scale factor is derived as:
🞑 SF = B + 0.01 x ∑(exponent driver ratings)
🞑 B = constant (0.91)
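A minimal sketch of this COCOMO II effort calculation, using the constants quoted above (A = 2.94, B = 0.91); the size, scale-factor ratings and effort multipliers in the example call are hypothetical:

```python
A = 2.94  # constant from the 2000 calibration (slide above)
B = 0.91  # constant used in the exponent scale factor (slide above)

def scale_factor(driver_ratings):
    """SF = B + 0.01 * sum of the exponent driver ratings."""
    return B + 0.01 * sum(driver_ratings)

def cocomo2_effort(size_kdsi, driver_ratings, effort_multipliers):
    """PM = A * Size^SF * EM1 * EM2 * ... * EMn."""
    pm = A * size_kdsi ** scale_factor(driver_ratings)
    for em in effort_multipliers:
        pm *= em
    return pm

# Hypothetical project: 100 KDSI, five driver ratings summing to 16 (so SF = 1.07),
# and two effort multipliers close to nominal.
print(round(cocomo2_effort(100, [4, 1, 5, 3, 3], [1.0, 1.1])))  # ≈ 446 person-months
```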
Multipliers
• Multipliers reflect the capability of the developers, the non-functional
requirements, the familiarity with the development platform, etc.
Driver   Very Low   Low    Nominal   High   Very High   Extra High
PREC     6.20       4.96   3.72      2.48   1.24        0.00
FLEX     5.07       4.05   3.04      2.03   1.01        0.00
RESL     7.07       5.65   4.24      2.83   1.41        0.00
TEAM     5.48       4.38   3.29      2.19   1.10        0.00
PMAT     7.80       6.24   4.68      3.12   1.56        0.00
 Precedentedness(PREC)
 Development Flexibility(FLEX)
 Risk Resolution(RESL)
 Team Cohesion(TEAM)
 Process Maturity(PMAT)
The reuse model
• Takes into account black-box code that is reused without
change and code that has to be adapted to integrate it
with new code.
• There are two versions:
– Black-box reuse where code is not modified. An effort
estimate (PM) is computed.
– White-box reuse where code is modified. A size estimate
equivalent to the number of lines of new source code is
computed. This then adjusts the size estimate for new
code.
Reuse model estimates
• For generated code:
– PM = (ASLOC * AT/100)/ATPROD
– ASLOC is the number of lines of generated code
– AT is the percentage of code automatically
generated.
– ATPROD is the productivity of engineers in
integrating this code.
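A minimal sketch of the generated-code reuse estimate above; the ASLOC, AT and ATPROD figures in the example are hypothetical:

```python
def reuse_generated_effort(asloc: int, at_percent: float, atprod: float) -> float:
    """PM = (ASLOC * AT/100) / ATPROD for automatically generated code."""
    return (asloc * at_percent / 100) / atprod

# Hypothetical: 20,000 adapted SLOC, 30% automatically generated,
# engineers integrating generated code at 2,400 SLOC per month.
print(reuse_generated_effort(20_000, 30, 2_400))  # 2.5 person-months
```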
Post-architecture level
• Uses the same formula as the early design model but with
17 rather than 7 associated multipliers.
• The code size is estimated as:
– Number of lines of new code to be developed;
– Estimate of equivalent number of lines of new code
computed using the reuse model;
– An estimate of the number of lines of code that have to
be modified according to requirements changes.
The exponent term
• This depends on 5 scale factors (see next slide). Their sum/100 is added to 1.01.
• Example: a company takes on a project in a new domain. The client has not defined the process to be used and has not allowed time for risk analysis. The company has a CMM level 2 rating.
– Precedentedness - new project (4)
– Development flexibility - no client involvement - Very high (1)
– Architecture/risk resolution - no risk analysis - Very low (5)
– Team cohesion - new team - Nominal (3)
– Process maturity - some control - Nominal (3)
• The scale factor is therefore 1.01 + (4 + 1 + 5 + 3 + 3)/100 = 1.17.
Exponent scale factors
Precedentedness: Reflects the previous experience of the organisation with this type of project. Very low means no previous experience; Extra high means that the organisation is completely familiar with this application domain.
Development flexibility: Reflects the degree of flexibility in the development process. Very low means a prescribed process is used; Extra high means that the client only sets general goals.
Architecture/risk resolution: Reflects the extent of risk analysis carried out. Very low means little analysis; Extra high means a complete and thorough risk analysis.
Team cohesion: Reflects how well the development team know each other and work together. Very low means very difficult interactions; Extra high means an integrated and effective team with no communication problems.
Process maturity: Reflects the process maturity of the organisation. The computation of this value depends on the CMM Maturity Questionnaire, but an estimate can be achieved by subtracting the CMM process maturity level from 5.
