SE Unit 2 Learning Material

UNIT –II

Syllabus:
Planning a software project
Effort Estimation, COCOMO Models, Project schedule and staffing, Quality planning, Risk
management planning, Metrics for size estimation.

Learning Material

Planning a Software Project


 Planning is the most important Project Management Activity.
 It has two basic objectives
 To establish reasonable cost, schedule, and quality goals for the project, and
 To draw up a plan to deliver the project goals.
 A project succeeds if it meets its cost, schedule, and quality goals.
 Without the project goals being defined, it is not possible to even declare if a project
has succeeded.

I. Effort estimation
 Overall effort and schedule estimates are essential prerequisites for planning the
project.
 These estimates are needed before development is initiated, as they establish the cost
and schedule goals of the project.
 Effort and schedule estimates are also required for determining the staffing level
(number of employees needed) for a project during different phases, for the detailed
plan, and for project monitoring.
 Proper software requirements analysis consumes about 20% of the total project effort, so requirements analysis plays an important role in effort estimation.
 There are two popular approaches to effort estimation:
1. Top down estimation approach
2. Bottom up estimation approach

1. Top down estimation approach


 The Top down Estimation Approach considers effort as a function of size.
 The larger the project, the greater is the effort requirement.
 The top down approach utilizes this and considers effort as a function of project size.
 Let P be the productivity (in KLOC per person-month); then the effort estimate for the project is SIZE/P person-months (PM).
 But this approach can work only if the size and type of the project are similar to the
set of projects from which the productivity P was obtained.
 A more general function for determining effort from size that is commonly used is of
the form
Effort = a × SIZE^b
 where a and b are constants, and
 project size is generally in KLOC (size could also be in another size measure
called function points which can be determined from requirements).

 Values for these constants for an organization are determined through regression
analysis, which is applied to data about the projects that have been performed in the
past.
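The two top-down formulas above can be sketched in a few lines. The productivity value and the constants a and b below are made-up placeholders, since real values come from regression over an organization's past-project data.

```python
# Top-down estimation: effort as a function of size.
# The constants below are illustrative placeholders, not calibrated values.

def effort_from_productivity(size_kloc, productivity_kloc_per_pm):
    """Effort = SIZE / P, in person-months (PM)."""
    return size_kloc / productivity_kloc_per_pm

def effort_from_size(size_kloc, a=3.0, b=1.12):
    """More general form: Effort = a * SIZE^b."""
    return a * size_kloc ** b

print(effort_from_productivity(20, 0.5))  # 20 KLOC at 0.5 KLOC/PM -> 40.0 PM
print(round(effort_from_size(20), 1))
```

Note that the exponent b is typically slightly above 1, capturing the observation that effort grows faster than linearly with size.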

2. Bottom up estimation approach


 In this approach, the project is first divided into tasks and then estimates for the
different tasks of the project are obtained.
 From the estimates of the different tasks, the overall estimate is determined.
 The overall estimate of the project is derived from the estimates of its parts.
 This type of approach is also called Activity-Based Estimation.
 In this approach, the size and complexity of the project is captured in the set of tasks
the project has to perform.
 The bottom-up approach lends itself to direct estimation of effort.
 Once the project is partitioned into smaller tasks, it is possible to directly estimate the
effort required for them, especially if tasks are relatively small.

Procedure for bottom up approach

1. Identify modules in the system and classify them as simple, medium, or complex.
2. Determine the average coding effort for simple/medium/complex modules.
3. Get the total coding effort using the coding effort of different types of modules and
the counts for them.
4. Using the effort distribution for similar projects, estimate the effort for other tasks and
the total effort.
5. Refine the estimates based on project-specific factors.
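The five-step procedure can be sketched as follows; the module counts, per-module coding efforts, and the 40% coding share are hypothetical numbers used only to show the mechanics.

```python
# Bottom-up (activity-based) estimation following the steps above.
# All figures below are hypothetical.

# Steps 1-2: classified module counts and average coding effort (PM) per type
module_counts = {"simple": 4, "medium": 3, "complex": 2}
coding_effort_pm = {"simple": 0.5, "medium": 1.5, "complex": 3.0}

# Step 3: total coding effort from counts and per-type averages
coding = sum(module_counts[t] * coding_effort_pm[t] for t in module_counts)

# Step 4: scale to total effort using a past effort distribution in which
# coding is assumed to account for 40% of the project effort
total = coding / 0.40

print(coding, round(total, 2))  # 12.5 PM coding, 31.25 PM total
```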

Disadvantage

 The list of tasks may omit some activities, so the estimate obtained by summing the task estimates may understate the overall effort.
 If architecture of the system to be built has been developed and if past information
about how effort is distributed over different phases is known, then the bottom-up
approach need not completely list all the tasks, and a less tedious approach is possible.

II. COCOMO Models


 COCOMO (COnstructive COst MOdel) is an algorithmic cost estimation technique
proposed by Boehm.
 This model estimates effort based upon project size and certain project parameters.
 The value of these parameters depends on the project type:
 Organic Projects are very simple and can be developed with a small-size (2
KLOC – 50 KLOC ) team. The team should have good application experience
and should be familiar with the application environments. A simple data
processing system is a good example of the organic category.
 Embedded Projects are very complex (> 300 KLOC) and have stringent
constraints. For example, a flight control system for aircraft.
 Semidetached Projects are intermediate (50 KLOC – 300 KLOC) in size and
complexity. The team should have mixed experience to meet the mix of rigid
and less-than-rigid requirements. A Transaction Processing System with fixed
requirements for terminal hardware and database software is an example of the
semi-detached category.
 It is designed to provide some mathematical equations to estimate software projects.
 These mathematical equations are based on historical data and use project size in the
form of KLOC (Kilo Lines of Code).
 The COCOMO Model uses a multivariable model for effort estimation.
 A Multi Variable model depends on several variables, such as the development environment, user involvement, memory constraints, techniques used, etc.
 A Single Variable model, in contrast, is based only upon the size of the project and is given as
Effort = a × SIZE^b
 The constants a and b are derived from the historical data of the past projects in the
organizations.
 The values of a and b in COCOMO Model vary across the three categories of the
projects: organic, semi detached and embedded as shown in table below.

Project Category    a      b
Organic             3.2    1.05
Semi-detached       3.0    1.12
Embedded            2.8    1.20

 COCOMO Estimation is a family of hierarchical models, which includes


 Basic
 Intermediate and
 Detailed COCOMO Models

Basic COCOMO Model


 The Basic COCOMO Model estimates effort as a function of the estimated
KLOC in the proposed project.
 The basic COCOMO Model is very simple, quick, and applicable to small to
medium organic-type projects.
 It is given as follows:
Development Effort (E) = a × (KLOC)^b PM
Development Time (T) = c × (E)^d Months
 Where a, b, c, and d are constants and these values are determined from the
historical data of the past projects.
 The Development Time (T) is calculated from the Initial Development Effort
(E).
 The values of c and d for different types of projects are shown in table below.

Project Category    c      d
Organic             2.5    0.38
Semi-detached       2.5    0.35
Embedded            2.5    0.32

Example
Assume that a system for student course registration is planned to be
developed and its estimated size is approximately 10,000 lines of code. The
organization is proposed to pay Rs. 25000 per month to software engineers.
Compute the development effort, development time, and the total cost for
product development.
Solution
The project can be considered an organic project. Thus, from the basic
COCOMO model,
Development Effort (E) = 3.2 × (10)^1.05 = 35.90 PM
Development Time (T) = 2.5 × (35.90)^0.38 = 9.747 Months
Total Product Development Cost
= Development Time × Salary of Engineers
= 9.747 × 25000
= Rs. 2,43,675
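The worked example above can be reproduced in a few lines of Python; minor differences in the last digits come from intermediate rounding in the hand calculation.

```python
# Basic COCOMO for the example above: organic project, 10 KLOC,
# Rs. 25000 per month as in the example.
a, b = 3.2, 1.05   # effort constants (organic)
c, d = 2.5, 0.38   # schedule constants (organic)

kloc = 10
effort = a * kloc ** b        # development effort, PM
time = c * effort ** d        # development time, months
cost = time * 25000           # total cost, computed as in the example

print(round(effort, 2), round(time, 3), round(cost))
```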

Intermediate COCOMO Model


 Boehm has introduced 15 cost drivers, considering the various aspects of
product development environment.
 These cost drivers are used to adjust the project complexity for estimation of
effort and these are termed as effort adjustment factors (EAF).
 These cost drivers are classified as Computer Attributes, Product Attributes,
Project Attributes, and Personnel Attributes.
 The intermediate COCOMO model computes software development effort as
a function of the program size and a set of cost drivers.
Figure: Effort Multipliers for Different Cost Drivers
 The intermediate COCOMO model estimates the initial effort using the basic
COCOMO model.
 Then the EAF is calculated as the product of 15 cost drivers.
 Total effort is determined by multiplying the initial effort with the total value
of EAF.

 The computation steps are summarized below.

Development effort (E):


Initial effort (Ei) = a × (KLOC)^b
Effort Adjustment Factor (EAF) = EAF1 × EAF2 × ... × EAFn
Total development effort (E) = Ei × EAF
Development time (T) = c × (E)^d

Example
Suppose a library management system (LMS) is to be designed for an
academic institution. From the project proposal, the following five major
components are identified:
Online data entry - 1.0 KLOC
Data update - 2.0 KLOC
File input and output - 1.5 KLOC
Library reports - 2.0 KLOC
Query and search - 0.5 KLOC
The database size and application experience are very important in this
project. The use of the software tool and the main storage is highly
considerable. The virtual machine experience and its volatility can be kept
low. All other cost drivers have nominal requirements. Use the COCOMO
model to estimate the development effort and the development time.
Solution
The LMS project can be considered an organic category project. The
total size of the modules is 7 KLOC. The development effort and development
time can be calculated as follows:
Development effort
Initial effort (Ei) = 3.2 × (7)^1.05 = 24.6889 PM
Effort Adjustment Factor (EAF )
=1.16 × 0.82 × 0.91 × 1.06 × 1.10 × 0.87
= 0.8780
Total effort (E) = Ei * EAF
=24.6889 × 0.8780
=21.6785PM
Development time (T) = 2.5 × (E)^0.38 months
= 2.5 × (21.6785)^0.38 months
= 8.0469 months
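A quick computational check of the LMS example; the six multiplier values are the ones the example uses, and all other cost drivers are taken as nominal (1.0).

```python
# Intermediate COCOMO for the LMS example: organic, 7 KLOC total.
initial = 3.2 * 7 ** 1.05                      # Ei from the basic model
multipliers = [1.16, 0.82, 0.91, 1.06, 1.10, 0.87]

eaf = 1.0
for m in multipliers:                          # EAF = product of cost drivers
    eaf *= m

effort = initial * eaf                         # total development effort, PM
time = 2.5 * effort ** 0.38                    # development time, months

print(round(initial, 4), round(eaf, 4), round(effort, 4), round(time, 4))
```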

Detailed COCOMO Model


 The detailed COCOMO model inherits all the features of the intermediate
COCOMO model for the overall estimation of the project cost.
 The detailed COCOMO model uses different effort multipliers (cost drivers)
for each phase of the project.
 Phase-wise effort multipliers provide better estimates than the intermediate
model.
 The detailed COCOMO model defines five life cycle phases for effort
distribution:
 Plan and Requirement,
 System Design,
 Detailed Design,
 Code and Unit Test, and
 Integration Test.
 In the detailed COCOMO model, effort is calculated as a function of size in
terms of KLOC and the value of a set of cost drivers according to each phase
of the software life cycle.
 If the project size differs significantly from the sizes used in the phase-wise
distribution table, then an interpolation formula can be applied to find a more
appropriate percentage value.
 The detailed COCOMO model illustrates the importance of recognizing
different levels of predictability at each phase of the development cycle.

Figure: Phase-wise Effort Distribution for Detailed COCOMO Model

Example
Compute the phase-wise development effort for the problem discussed
in Example above.
Solution
There are five components in the organic project discussed in Example
above:
Online Data Entry,
Data Update,
File Input and Output,
Library Reports,
Query and Search.
The estimated effort (E) is 21.6785 PM. The total size is 7 KLOC,
which is between 2 KLOC and 32 KLOC. Thus, the actual percentage of effort
can be calculated as follows:
Phase % = H.KLOC DV + [(L.KLOC DV - H.KLOC DV) / (H.KLOC - L.KLOC)] × Size
Where
H.KLOC DV = the phase's effort percentage (distribution value) at the maximum project size
L.KLOC DV = the phase's effort percentage (distribution value) at the minimum project size
H.KLOC = maximum size of the project range (32 KLOC here)
L.KLOC = minimum size of the project range (2 KLOC here)
Size = actual size of the project

Plan and Requirement (%) = 6 + (6 - 6) / (32 - 2) × 7 = 6%


Effort = 0.06 × 21.6785 PM = 1.30071 PM
System Design = 16 + (16 - 16) / (32 - 2) × 7 = 16%
Effort = 0.16 × 21.6785 PM = 3.46856 PM
Detailed Design = 24 + (26 - 24) / (32 - 2) × 7 = 25%
Effort = 0.25 × 21.6785 PM = 5.419625 PM
Code and Unit Test = 38 + (42 - 38) / (32 - 2) × 7= 39%
Effort = 0.39 × 21.6785 PM = 8.454615 PM
Integration Test = 22 + (16 - 22) / (32 - 2) × 7 = 20.6% ≈ 21%
Effort = 0.21 × 21.6785 PM = 4.5525 PM
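The interpolation above can be automated. The sketch below applies the formula as given in the text and prints the unrounded percentages, which the hand calculation rounds; the distribution values are those for an organic project at the 2 KLOC and 32 KLOC reference sizes.

```python
# Phase-wise effort split for the 7 KLOC example using the text's formula:
# Phase % = H.KLOC DV + (L.KLOC DV - H.KLOC DV) / (H.KLOC - L.KLOC) * Size
H_KLOC, L_KLOC, SIZE = 32, 2, 7
TOTAL_EFFORT = 21.6785  # PM, from the intermediate COCOMO example

phase_dv = {                      # (DV at 32 KLOC, DV at 2 KLOC)
    "Plan and Requirement": (6, 6),
    "System Design": (16, 16),
    "Detailed Design": (24, 26),
    "Code and Unit Test": (38, 42),
    "Integration Test": (22, 16),
}

for phase, (hdv, ldv) in phase_dv.items():
    pct = hdv + (ldv - hdv) / (H_KLOC - L_KLOC) * SIZE
    print(f"{phase}: {pct:.2f}% -> {pct / 100 * TOTAL_EFFORT:.3f} PM")
```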

III. Project Schedule and Staffing


 With the effort estimate (in person-months), it may be tempting to pick any project
duration based on convenience and then fix a suitable team size to ensure that the total
effort matches the estimate.
 For a project with some estimated effort, multiple schedules (or project duration) are
indeed possible.
 For example, consider a project whose effort estimate is 56 person-months:
 A total schedule of 8 months is possible with 7 people.
 A schedule of 7 months with 8 people is also possible,
 As is a schedule of approximately 9 months with 6 people.
 But a schedule of 1 month with 56 people is not possible.
 Similarly, no one would execute the project in 28 months with 2 people.
 Once the effort is fixed, there is some flexibility in setting the schedule by
appropriately staffing the project, but this flexibility is not unlimited.
 A schedule cannot be simply obtained from the overall effort estimate by deciding on
average staff size and then determining the total time requirement by dividing the
total effort by the average staff size.
 Empirical data also suggests that no simple equation between effort and schedule fits
well.
 The objective is to fix a reasonable schedule that can be achieved (if suitable number
of resources are assigned).
 In a project, the scheduling activity can be broken into two subactivities:
 Determining the overall schedule (the project duration) with major milestones.
 Developing the detailed schedule of the various tasks.
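The 56 person-month example above can be checked with simple division, treating duration as effort divided by team size. This is a simplification: as the text notes, effort and schedule are not freely interchangeable, so only some of the printed pairs are realistic.

```python
# Candidate (team size, duration) pairs for a 56 person-month estimate.
effort_pm = 56
for people in (2, 6, 7, 8, 56):
    months = effort_pm / people
    print(f"{people} people -> {months:.1f} months")
```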

Overall Scheduling
 One method to determine the overall schedule is to determine it as a function of effort.
 Such function can be determined from data from completed projects using statistical
techniques like fitting a regression curve through the scatter plot obtained by plotting
the effort and schedule of past projects.
 This curve is generally nonlinear because the schedule does not grow linearly with
effort.
 The IBM Federal Systems Division found that the total duration, M, in calendar
months can be estimated by M = 4.1 × E^0.36.
 In COCOMO, the schedule equation for an organic type of software is
M = 2.5 × E^0.38.
 As schedule is not a function solely of effort, the schedule determined in this manner
is essentially a guideline.
 Another method for checking a schedule for medium-sized projects is the rule of
thumb called the Square Root Check.
 This check suggests that the proposed schedule can be around the square root of the
total effort in person months.
 This schedule can be met if suitable resources are assigned to the project.
 For example, if the effort estimate is 50 person-months, a schedule of about 7 to 8
months will be suitable.
 To determine the milestones, we must first understand the manpower ramp-up that
usually takes place in a project.
 The number of people that can be gainfully utilized in a software project tends to
follow the Rayleigh curve.
 In the beginning and the end, few people are needed on the project; the Peak Team
Size (PTS) is needed somewhere near the middle of the project; and again fewer
people are needed after that.
Figure: Manpower ramp-up in a project
 This occurs because only a few people are needed and can be used in the initial phases
of requirements analysis and design.
 The human resources requirement peaks during coding and unit testing, and during
system testing and integration, again fewer people are required.
 Given the effort estimate for a phase, we can determine the duration of the phase if we
know the manpower ramp-up.
 For these three major phases, the percentage of the schedule consumed in the build
phase is smaller than the percentage of the effort consumed because this phase
involves more people.
 Similarly, the percentage of the schedule consumed in the design and testing phases
exceeds their effort percentages.
 The exact schedule depends on the planned manpower ramp-up, and how many
resources can be used effectively in a phase on that project.
 Design requires about a quarter of the schedule, build consumes about half, and
integration and system testing consume the remaining quarter.
 COCOMO gives 19% for design, 62% for programming, and 18% for integration.
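The overall-schedule guidelines above can be compared numerically. They are rules of thumb fitted to different project populations, so they can disagree for the same effort value.

```python
import math

def ibm_schedule(effort_pm):
    """IBM Federal Systems Division fit: M = 4.1 * E^0.36 (months)."""
    return 4.1 * effort_pm ** 0.36

def cocomo_schedule(effort_pm):
    """COCOMO organic schedule equation: M = 2.5 * E^0.38 (months)."""
    return 2.5 * effort_pm ** 0.38

def square_root_check(effort_pm):
    """Rule of thumb: schedule ~ sqrt(total effort in PM)."""
    return math.sqrt(effort_pm)

E = 50  # person-months, as in the square root check example
print(round(square_root_check(E), 1))  # ~7.1 months
print(round(cocomo_schedule(E), 1))    # ~11.1 months
print(round(ibm_schedule(E), 1))
```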

Detailed Scheduling
 Once the milestones and the resources are fixed, it is time to set the detailed
scheduling.
 For detailed schedules, the major tasks fixed while planning the milestones are broken
into small schedulable activities in a hierarchical manner.
 For example, the detailed design phase can be broken into tasks for developing the
detailed design for each module, review of each detailed design, fixing of defects
found, and so on.
 For each detailed task, the project manager estimates the time required to complete it
and assigns a suitable resource so that the overall schedule is met.
 At each level of refinement, the project manager determines the effort for the overall
task from the detailed schedule and checks it against the effort estimates.
 If the detailed schedule is not consistent with the overall schedule and effort estimates, it
must be changed.
 If it is found that the best detailed schedule cannot match the milestone effort and
schedule, then the earlier estimates must be revised.
 Thus, Scheduling is an iterative process.
 Generally, the project manager refines the tasks to a level so that the lowest-level
activity can be scheduled to occupy no more than a few days from a single resource.
 Activities related to tasks such as project management, coordination, database
management, and configuration management may also be listed in the schedule, even
though these activities have less direct effect on determining the schedule because
they are ongoing tasks rather than schedulable activities.
 Nevertheless, they consume resources and hence are often included in the project
schedule.
 Rarely will a project manager complete the detailed schedule of the entire project all
at once.
 Once the overall schedule is fixed, detailing for a phase may only be done at the start
of that phase.
 For detailed scheduling, tools like Microsoft Project or a spreadsheet can be very
useful.
 For each lowest-level activity, the project manager specifies the effort, duration, start
date, end date, and resources.
 Dependencies between activities, due either to an inherent dependency (for example,
you can conduct a unit test plan for a program only after it has been coded) or to a
resource-related dependency (the same resource is assigned two tasks) may also be
specified.
 From these tools the overall effort and schedule of higher level tasks can be
determined.
 A detailed project schedule is never static.

Team Structure
 The number of resources should be fixed when the schedule is being planned.
 Detailed scheduling is done only after actual assignment of people has been done, as
task assignment needs information about the capabilities of the team members.
 The project's team is led by a project manager, who does the planning and task
assignment.
 This form of hierarchical team organization is fairly common, and was earlier called
the Chief Programmer Team.
 In this hierarchical organization, the project manager is responsible for all major
technical decisions of the project.
 Project Manager does most of the design and assigns coding of the different parts of
the design to the programmers.
 The team typically consists of programmers, testers, a configuration controller, and
possibly a librarian for documentation.
 There may be other roles like database manager, network manager, backup project
manager, or a backup configuration controller.
 These are all logical roles and one person may do multiple such roles.
 For a small project, a one-level hierarchy suffices.
 For larger projects, this organization can be extended easily by partitioning the project
into modules and having module leaders who are responsible for all tasks related to
their module, each with a team for performing these tasks.
 A different team organization is the egoless team [114]: Egoless teams consist of ten
or fewer programmers.
 The goals of the group are set by consensus, and input from every member is taken
for major decisions.
 Group leadership rotates among the group members.
 Due to their nature, egoless teams are sometimes called Democratic Teams.

 This structure allows input from all members, which can lead to better decisions for
difficult problems.
 This structure is well suited for long-term research-type projects that do not have time
constraints.
 It is not suitable for regular tasks that have time constraints; for such tasks, the
communication in democratic structure is unnecessary and results in inefficiency.

IV. Quality Planning


 The quality plan is the set of quality related activities that a project plans to do to
achieve the quality goal.
 To plan for quality, let us first understand the defect injection and removal cycle, as it
is defects that determine the quality of the final delivered software.
 Quality goals are specified in terms of acceptance criteria— the delivered software
should finally work for all the situations and test cases in the acceptance criteria.
 There may even be an acceptance criterion on the number of defects that can be found
during the acceptance testing.
 For example, an acceptance criterion may require that not more than n defects are uncovered by acceptance testing.
 The quality plan focuses mostly on planning suitable quality control tasks for
removing defects.
 Software development is a highly people-oriented activity and hence it is error-
prone.
 In a software project, we start with no defects (there is no software to contain defects).
 Defects are injected into the software being built during the different phases in the
project.
 That is, during the transformation from user needs to software to satisfy those needs,
defects are injected in the transformation activities undertaken.
 These injection stages are primarily the requirements specification, the high-level
design, the detailed design, and coding.
 To ensure that high-quality software is delivered, these defects are removed through
the quality control (QC) activities.
 The QC activities for defect removal include requirements reviews, design reviews,
code reviews, unit testing, integration testing, system testing, acceptance testing, etc.
 Figure 5.3 shows the process of defect injection and removal.

 The task of quality management is to plan suitable quality control activities and then
to properly execute and control them so the project's quality goals are achieved.
 With respect to quality control the terms Verification and Validation are often used.
 Verification is the process of determining whether or not the products of a given
phase of software development fulfil the specifications established during the
previous phase.
 Validation is the process of determining whether the final software complies with the software requirements.
 Both activities need to be performed for high reliability.
 They are often called V&V activities together.
 The major V&V activities for software development are Inspection and Testing
(both static and dynamic).
 The quality plan identifies the different V&V tasks for the different phases and
specifies how these tasks contribute to the project V&V goals.
 The methods to be used for performing these V&V activities, the responsibilities and
milestones for each of these activities, inputs and outputs for each V&V task, and
criteria for evaluating the outputs are also specified.
 Quality Planning has two main objectives:
1. Reduce the Defects being injected
 This is often done through standards, methodologies, following of good
processes, etc., which help reduce the chances of errors by the project
personnel.
2. Increase the Defects being removed
 This is done by planning effective quality control activities, such as reviews and
testing, which detect and remove the defects that do get injected.

Procedural Quality Management Approach


 Reviews and Testing are two most common QC activities.
 Reviews are structured, human-oriented processes, while testing is the process of
executing software (or parts of it) in an attempt to identify defects.
 In the Procedural Approach To Quality Management, procedures and guidelines for
the review and testing activities are planned.
 During project execution, they are carried out according to the defined procedures.
 The procedural approach is the execution of certain processes at defined points to
detect defects.
 The procedural approach does not allow claims to be made about the percentage of
defects removed or the quality of the software after the procedures are completed.
 Merely executing a set of defect removal procedures does not provide a basis for
judging their effectiveness or assessing the quality of the final code.
 Such an approach is highly dependent on the quality of the procedure and the quality
of its execution.
 For example, if the test planning is done carefully and the plan is thoroughly
reviewed, the quality of the software after testing will be better than if testing was
done but using a test plan that was not carefully thought out or reviewed.

Quantitative Quality Management Approach


 Even when a procedural approach is used for ensuring quality, making quantitative claims about quality can be quite hard.
 To better assess the effectiveness of the defect detection processes, metrics based
evaluation is necessary.
 Based on analysis of the data, we can decide whether more testing or reviews are
needed.
 If controls are applied during the project based on quantitative data to achieve
quantitative quality goals, then we say that a Quantitative Quality Management
Approach is being applied. One approach to quantitative quality management is
defect prediction.
 In this approach, the quality goal is set in terms of delivered defect density.
 Intermediate goals are set by estimating the number of defects that may be identified
by various defect detection activities.
 Then the actual number of defects is compared to the estimated defect levels.
 The effectiveness of this approach depends on how well you can predict the defect
levels at various stages of the project.
 An approach like this requires past data for estimation.
 Another approach is to use Statistical Process Control (SPC) for managing quality.
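A minimal sketch of defect prediction as described above; the estimated and actual defect counts and the 25% deviation threshold are all hypothetical, chosen only to show the comparison step.

```python
# Compare defects actually found by each QC activity against levels
# estimated from past data; flag large deviations for investigation.
# All counts and the 25% threshold are hypothetical.
estimated = {"requirements review": 10, "design review": 20, "unit testing": 40}
actual = {"requirements review": 9, "design review": 11, "unit testing": 43}

flagged = [
    activity
    for activity, est in estimated.items()
    if abs(actual[activity] - est) / est > 0.25
]
for activity in flagged:
    print(f"{activity}: actual {actual[activity]} vs estimated "
          f"{estimated[activity]} -> investigate")
```

A flagged activity does not by itself mean poor quality; it is a signal to decide whether more testing or reviews are needed, as the text describes.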

Quality Plan
 The quality plan for a project drives the quality activities in the project.
 The sophistication of the plan depends on the type of data or prediction models
available.
 At the simplest, the quality plan specifies the quality control tasks that will be
performed in the project.
 These will be schedulable tasks in the detailed schedule of the project.
 For example, it will specify what documents will be inspected, what parts of the code
will be inspected, and what levels of testing will be performed.
 The plan will be considerably enhanced if some sense of defect levels that are
expected to be found for the different quality control tasks are mentioned.
 These can then be used for monitoring the quality as the project proceeds.
 Much of the quality plan revolves around testing and reviews.
 Effectiveness of reviews depends on how they are conducted.
 One particular process of conducting reviews is called inspection.
 This process can be applied to any work product, like requirement specifications,
design documents, test plans, project plans, and code.

V. Risk Management Planning


 A software project is a complex undertaking.
 Unforeseen events may have an adverse impact on a project's cost, schedule, or
quality.
 Risk management is an attempt to minimize the chances of failure caused by
unplanned events.
 The aim of risk management is not to avoid getting into projects that have risks but to
minimize the impact of risks in the projects that are undertaken.
 A risk is a probabilistic event—it may or may not occur.
 Social and organizational factors may lead to risks and discourage their clear identification.
 Ignoring risks in this way gets the project into trouble if the risk events materialize,
something that is likely to happen in a large project.
 A materialized risk is a risk that happened in the project.
 Risk Management is considered first among the best practices for managing large
software projects.
 It first came to the forefront with Boehm's tutorial on risk management.
 Risk is defined as an exposure to the chance of injury or loss.
 Risk Management is the area that tries to ensure that the impact of risks on cost,
quality, and schedule is minimal.
 The commonly expected events, such as people going on leave or some requirements
changing, are handled by normal project management.
 Risk Management begins where normal project management ends.
 It deals with events that are infrequent, somewhat out of the control of the project
management, and which can have a major impact on the project.
 Most projects have risk.
 The idea of risk management is to minimize the possibility of risks materializing, if possible,
or to minimize the effects if risks actually materialize.
 For example, when constructing a building, there is a risk that the building may later collapse
due to an earthquake. That is, the possibility of an earthquake is a risk.

Figure: Risk Management Activities

 So the risk management revolves around Risk Assessment and Risk Control.
 For each of these major activities, some sub activities must be performed.
 A breakdown of these activities is given in the Figure above.

Risk Assessment
 Risk Assessment is an activity that must be undertaken during project planning.
 This involves identifying the risks, analyzing them, and prioritizing them on the basis
of the analysis.
 Due to the nature of a software project, uncertainties are highest near the beginning of
the project.
 Risk Assessment should be done throughout the project, but it is most needed in the
starting phases of the project.
 The goal of risk assessment is to prioritize the risks so that attention and resources can
be focused on the more risky items.
 Risk Identification is the first step in risk assessment, which identifies all the different
risks for a particular project.
 These risks are project-dependent and identifying them is an exercise in envisioning
what can go wrong.
 Methods that can aid risk identification include checklists of possible risks, surveys,
meetings and brainstorming, and reviews of plans, processes, and work products.
 Checklists of frequently occurring risks are the most common tool for risk
identification.
 Most organizations prepare a list of commonly occurring risks for projects, prepared
from a survey of previous projects.
 Such a list can form the starting point for identifying risks for the current project.
 Based on surveys of experienced project managers, Boehm has produced a list of the
top 10 risk items likely to compromise the success of a software project.
 Using the checklist of the top 10 risk items is one way to identify risks.
 This approach is likely to suffice in many projects.
 The other methods are decision driver analysis, assumption analysis, and
decomposition.
Fig: Top risk items and techniques for managing them.
 Gold plating refers to adding features in the software that are only marginally useful.
 This adds unnecessary risk to the project because gold plating consumes resources
and time with little return.
 Decision Driver Analysis involves questioning and analyzing all the major decisions
taken for the project.
 If a decision has been driven by factors other than technical and management reasons,
it is likely to be a source of risk in the project.
 Such decisions may be driven by politics, marketing, or the desire for short-term gain.
 Decomposition implies breaking a large project into clearly defined parts and then
analyzing them.
 Many software systems have the phenomenon that 20% of the modules cause 80% of
the project problems.
 Decomposition will help identify these modules.
 In Risk Analysis, the probability of occurrence of a risk has to be estimated, along
with the loss that will occur if the risk does materialize.
 This is often done through discussion, using experience and understanding of the
situation, though structured approaches also exist.
 Once the probabilities of risks materializing and losses due to materialization of
different risks have been analyzed, they can be prioritized.
 One approach for prioritization is through the concept of Risk Exposure (RE), which
is sometimes called Risk Impact.
 RE is defined by the relationship
RE = Prob(UO) * Loss(UO)
 where Prob(UO) is the probability of the risk materializing (i.e., undesirable outcome)
and Loss(UO) is the total loss incurred due to the unsatisfactory outcome.
 The loss is not only the direct financial loss that might be incurred but also any loss in
terms of credibility, future business, and loss of property or life.
 The RE is the expected value of the loss due to a particular risk.
 For risk prioritization using RE, the rule is: the higher the RE, the higher the priority of
the risk item.
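As a sketch of this prioritization, the RE computation can be expressed in a few lines of code. The risk names, probabilities, and loss figures below are purely illustrative, not from the text:

```python
# Illustrative sketch: computing Risk Exposure (RE) and prioritizing risks.
# All risk items, probabilities, and loss figures are hypothetical.

risks = [
    # (risk item, Prob(UO), Loss(UO) in some monetary unit)
    ("Personnel shortfall",     0.30, 120000),
    ("Unrealistic schedule",    0.50,  80000),
    ("Requirements volatility", 0.20, 150000),
]

# RE = Prob(UO) * Loss(UO): the expected value of the loss due to the risk.
prioritized = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, prob, loss in prioritized:
    print(f"{name}: RE = {prob * loss:.0f}")
```

Here the schedule risk (RE = 40000) would be addressed before the personnel risk (RE = 36000), even though its individual loss figure is the smallest of the three.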
Risk Control
 The main objective of risk management is to identify the top few risk items and then
focus on them.
 Once a project manager has identified and prioritized the risks, the top risks can be
easily identified.
 Knowing the risks is of value only if you can prepare a plan so that their
consequences are minimal - that is the basic goal of risk management.
 One strategy is Risk Avoidance, which entails taking actions that will avoid the risk
altogether, like the earlier example of shifting the building site to a zone that is not
earthquake-prone.
 Another strategy is to perform actions that will either reduce the probability of the
risk materializing or reduce the loss due to the risk materializing.
 These are called Risk Mitigation Steps.
 To decide what mitigation steps to take, a list of commonly used risk mitigation steps
for various risks is very useful.
 Unlike risk assessment, which is largely an analytical exercise, risk mitigation
comprises active measures that have to be performed to minimize the impact of risks.
 In other words, selecting a risk mitigation step is not just an intellectual exercise.
 The risk mitigation steps must be executed.
 To ensure that the needed actions are executed properly, they must be incorporated
into the detailed project schedule.
 Risk prioritization and consequent planning are based on the risk perception at the
time the risk analysis is performed.
 In addition to monitoring the progress of the planned risk mitigation steps, a project
must periodically revisit the risk perception and modify the risk mitigation plans.
 Risk Monitoring is the activity of monitoring the status of various risks and their
control activities.

A Practical Risk Management Planning Approach
 Although the concept of risk exposure is rich, a simple practical way of doing risk
planning is to categorize risks and their impacts into a few levels and then use these
levels for prioritization.
 This approach is used in many organizations.
 In this approach, the probability of a risk occurring is categorized as low, medium, or
high.
 The risk impact can also be classified as low, medium, or high.
Table: Risk Management Plan for a project.
1. For each risk, rate the probability of its happening as low, medium, or high.
2. For each risk, assess its impact on the project as low, medium, or high.
3. Rank the risks based on the probability and effects on the project.
 For example, a high-probability, high-impact item will have a higher rank than a risk
item with a medium probability and high impact.
 In case of conflict, use judgment.
 Select the top few risk items for mitigation and tracking.
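The three steps above can be sketched in code as follows; the risk items and their ratings are hypothetical examples, not taken from the text:

```python
# Sketch of the categorized approach: rate each risk's probability and
# impact as low/medium/high, then rank. The risk items are hypothetical.
LEVEL = {"low": 1, "medium": 2, "high": 3}

risks = [
    # (risk item, probability, impact)
    ("Attrition of key personnel", "medium", "high"),
    ("Unclear requirements",       "high",   "high"),
    ("Unfamiliar technology",      "low",    "medium"),
]

# A high-probability, high-impact item ranks above a medium-probability,
# high-impact item; genuine ties are resolved by judgment.
ranked = sorted(risks, key=lambda r: (LEVEL[r[1]], LEVEL[r[2]]), reverse=True)

for name, prob, impact in ranked:
    print(f"{name}: probability={prob}, impact={impact}")
```

The top few items of `ranked` would then be selected for mitigation and tracking.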

VI. METRICS FOR PROJECT SIZE ESTIMATION
• The size of a project is obviously not the number of bytes that the source code
occupies, neither is it the size of the executable code.
• The project size is a measure of the problem complexity in terms of the effort
and time required to develop the product.
• Currently, two metrics are popularly being used to measure size
1. Lines of code (LOC) and
2. function point (FP)
Each of these metrics has its own advantages and disadvantages.
Lines of code (LOC)
• LOC is possibly the simplest among all metrics available to measure project size.
• This metric measures the size of a project by counting the number of source
instructions in the developed program.
• While counting the number of source instructions, comment lines, and header lines
are ignored.
• Determining the LOC count at the end of a project is very simple. However, accurate
estimation of the LOC count at the beginning of a project is very difficult.
• LOC metric has several shortcomings when used to measure problem size
1. LOC is a measure of coding activity alone.
2. LOC count depends on the choice of specific instructions.
3. LOC measure correlates poorly with the quality and efficiency of the code.
4. LOC metric penalizes use of higher-level programming languages and code reuse.
5. LOC metric measures the lexical complexity of a program and does not address
the more important issues of logical and structural complexities.
6. It is very difficult to accurately estimate LOC of the final program from problem
specification.

Disadvantage of the LOC metric: The LOC count is very difficult to estimate during the
project planning stage, and can only be accurately computed after the software
development is complete.
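As an illustration of the counting rule described above (comment lines and blank lines are ignored), a minimal LOC counter for a language with `#`-style comments might look like this sketch:

```python
# Minimal sketch of a LOC counter: counts source instructions, ignoring
# blank lines and full-line comments (here, '#'-style comments).
def count_loc(source: str) -> int:
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            loc += 1
    return loc

program = """
# header comment - ignored
x = 1

y = x + 2   # a trailing comment does not disqualify the line
print(y)
"""
print(count_loc(program))  # counts 3 source instructions
```

A production-grade counter would also have to handle block comments and string literals, which is one reason LOC counts vary with the counting conventions used.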

Function Point (FP) Metric
• Function point metric was proposed by Albrecht in 1983.
• A function point is a "unit of measurement" to express the amount of business
functionality of a product.
• Function points are used to compute a functional size measurement (FSM) of
software.
• The cost (in dollars or hours) of a single unit is calculated from past projects.
• The size of the project is estimated on the basis of functions or services requested by
the customer.
• It does not deal with the size of the LOC.
• It relies on the product features.
Function point (FP) metric computation
• The size of a software product (in units of function points or FPs) is computed using
different characteristics of the product identified in its requirements specification. It is
computed using the following three steps:
Step 1: Compute the unadjusted function point (UFP)
Step 2: Refine UFP
Step 3: Compute FP by further refining UFP
Step 1: UFP computation
• The unadjusted function point (UFP) count is computed as the weighted sum of five
characteristics of a product:
Number of inputs: Each data item input by the user is counted.
Number of outputs: The outputs considered include reports printed, screen outputs,
error messages produced, etc.
Number of inquiries: An inquiry is a user command (without any data input) and
only requires some actions to be performed by the system.
Number of files: The files are logical files.
Number of interfaces: The interfaces denote the different mechanisms that are used
to exchange information with other systems. Examples of such interfaces are data
files on tapes, disks, communication links with other systems, etc.
Step 2: Refine parameters
• UFP computed at the end of step 1 is a gross indicator of the problem size. This UFP
needs to be refined.
• For example, some input values may be extremely complex, some very simple, etc.
• The complexity of each parameter is graded into three broad categories—simple,
average, or complex.
• The weights for the different parameters are determined based on the numerical
values.
Step 3: Refine UFP based on complexity of the overall project
• In the final step, several factors that can impact the overall project size are considered
to refine the UFP computed in step 2.
• Albrecht identified 14 parameters that can influence the development effort.
• Each of these 14 parameters is assigned a value from 0 (not present or no influence) to
6 (strong influence).
0 (not present)
1 (incidental)
2 (moderate)
3 (average)
4 (significant)
5 (highly essential)
6 (strong influence)
• By using these values, we can compute the Degree of Influence (DI).
• DI = Complexity Adjustment Attribute (CAA) * Value of Influence
• To compute Technical Complexity Factor,
TCF = 0.65+0.01*DI
Finally,
FP = UFP * TCF
Where
FP – Function points
UFP – Unadjusted Function Points
CAA – Complexity Adjustment Attribute
DI – Degree of Influence
TCF – Technical Complexity Factor
Simplified FP Calculation Process
1. Calculate UFP by using weighting factor table.
2. Find out the value of CAA from the given problem.
3. Find out the value of Influence (I) from 0 to 6.
4. Compute DI, i.e., (DI = CAA * I).
5. Compute TCF, i.e., (TCF = 0.65+0.01*DI).
6. Compute FP, i.e., (FP = UFP * TCF).
Example
• Compute the FP value for the grade calculation of students. Assume that it is an
average complexity size project. The information domain values are as follows:
Number of user inputs = 13
Number of user outputs = 4
Number of user inquiries = 2
Number of files = 5
Number of external interfaces = 2
The total value of complexity adjustment attribute is 13.
Solution
Given that
• It is an average complexity size project.
• Information domain values are
Number of user inputs = 13
Number of user outputs = 4
Number of user inquiries = 2
Number of files = 5
Number of external interfaces = 2
CAA = 13.
Domain                   Count   Weighting factor             Weighted
Characteristics                 Simple  Average  Complex      Count

No. of user inputs        13      3        4        6       13 * 4  = 52
No. of user outputs        4      4        5        7        4 * 5  = 20
No. of user inquiries      2      3        4        6        2 * 4  =  8
No. of files               5      7       10       15        5 * 10 = 50
No. of external
  interfaces               2      5        7       10        2 * 7  = 14

                                                   UFP value: 144

• CAA = 13
• Value of Influence = 3
• DI = CAA * Influence
DI = 13 * 3
DI = 39
• TCF = 0.65 + 0.01 * DI
TCF = 0.65 + 0.01 * 39 = 0.65 + 0.39
TCF = 1.04
• FP = UFP * TCF
FP =144*1.04
FP = 149.76
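The calculation above can be reproduced with a short script, using the average-complexity weights from the weighting factor table:

```python
# Sketch reproducing the FP calculation above. The weights are the
# average-complexity values from the weighting factor table.
AVERAGE_WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4,
                   "files": 10, "interfaces": 7}

counts = {"inputs": 13, "outputs": 4, "inquiries": 2,
          "files": 5, "interfaces": 2}

# Step 1: UFP is the weighted sum of the five characteristics.
ufp = sum(counts[k] * AVERAGE_WEIGHTS[k] for k in counts)

# Steps 2-3: refine UFP using CAA and the value of influence (average = 3).
caa, influence = 13, 3
di = caa * influence            # Degree of Influence
tcf = 0.65 + 0.01 * di          # Technical Complexity Factor
fp = ufp * tcf                  # Function Points

print(ufp, di, round(tcf, 2), round(fp, 2))  # 144 39 1.04 149.76
```

Changing the per-characteristic complexity from average to simple or complex only changes the weights in `AVERAGE_WEIGHTS`; the three steps stay the same.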
Advantages
• It can be computed early, from the requirements specification, and helps estimate the
project cost, project duration, and staffing size.
Disadvantages
• This method is suitable only for business systems.
• Many aspects of this method are not validated.
• A function point has no direct physical meaning; it is just a numerical value.

Assignment-Cum-Tutorial Questions

A. Questions testing the remembering / understanding level of students

I) Objective Questions

1) __________________________ is a size measurement unit of a software project.
2) The COCOMO model was introduced in the book Software Engineering Economics
authored by __________________________.
3) Risk management requires that risks be __________________________.
4) According to the COCOMO model, for estimating a project, the manager needs to
consider [ ]
a) Characteristics of the product
b) Experience of the development team
c) Characteristics of the development environment
d) All of the above
5) Which of the following is an activity that distributes estimated effort across the
planned project duration by allocating the effort to specific software engineering
tasks? [ ]
a) Software Macroscopic schedule b) Software Project scheduling
c) Software Detailed schedule d) None of the mentioned
6) Risk management is the responsibility of the [ ]
a) Customer
b) Investor
c) Developer
d) Project team
e) Production team

II) Descriptive Questions

1) Explain the top-down effort estimation approach.
2) State how project scheduling and staffing goals are met, and explain the manpower
distribution over the Rayleigh curve.
3) Explain risk assessment and risk control activities.
4) Explain how planning plays an important role in project management, and describe
quality planning for a software project.
B. Question testing the ability of students in applying the concepts.

I) Multiple Choice Questions:

1) As a software manager, when will you decide the number of people required for a
software project? [ ]
a) Before the scope is determined.
b) Before an estimate of the development effort is made
c) After an estimate of the development effort is made.
d) None of the above
2) As a tester, which of the following will come under product risk if you are testing an
e-commerce website? [ ]
a) Shortage of testers
b) Many changes in SRS that caused changes in test cases
c) Delay in fixing defects by development team
d) Failure to transfer a user to secure gateway while paying
e) All of the above
3) In the COCOMO model, if the project size is typically 2-50 KLOC, then which mode is
to be selected? [ ]
a) Organic b) Semi Detached
c) Embedded d) None of the Above
4) Which of the following is not a ‘concern’ during the management of a software
project? [ ]
a) Money
b) Time
c) Product quality
d) Project/product information
e) Product quantity
5) How does a software project manager need to act to minimize the risk of software
failure? [ ]
a) Double the project team size
b) Request a large budget
c) Form a small software team
d) Track progress
e) Request a longer period of time.
II) Problems:

1) An Event Registration System, whose size is estimated at 190 KLOC, is to be
developed for an institution. The values of the cost drivers are as follows: low
reliability -> 0.78, high product complexity -> 1.15, low application experience ->
1.13, high programming language experience -> 0.85, all other cost drivers assumed
to be nominal -> 1.00. Compute the overall effort and schedule estimates.
2) Suppose a Student Helpdesk System is to be developed with the following services:
Registration (0.7K), Data Entry (1.5K), Student Reports (1.6K), Online Query (0.9K),
and Search (1.3K). All cost drivers are assumed to be nominal with their impact value
as 1.00. Compute the initial effort, development effort and time, and phase-wise
development effort.
3) Assume a system for Student Course Registration is planned to be developed and its
estimated size is approximately 10,000 lines of code. The organization proposes to
pay Rs 25,000 per month to software engineers. Compute the development effort,
development time, and the total cost for product development.

4) Suppose a Library Management System (LMS) is to be designed for an academic
Institution. From the project proposal, the following five major components are
identified:
Online Data Entry - 1.0 KLOC
Data Update - 2.5 KLOC
File Input and Output - 1.5 KLOC
Library Reports - 2.0 KLOC
Query and Search - 1.5 KLOC
The database size and application experience are very important in this project. The
use of the software tool and the main storage is highly considerable. The virtual
machine experience and its volatility can be kept low. All other cost drivers have
nominal requirements. Use the COCOMO model to estimate the development effort.
5) Consider a Database System needed for an Office Automation Project. The
Requirements Document shows 4 modules needed. Sizes are estimated as follows
Data entry - 0.6 KLOC
Data update - 0.6 KLOC
Query - 0.8 KLOC
Report gen - 1.0 KLOC
The project is judged to be organic (a = 3.2, b = 1.05). The manager rates the project
details as follows:
Characteristic            Level   EAF
Complexity                high    1.15
Storage                   high    1.06
Experience                low     1.13
Programmer capabilities   low     1.17
Calculate the Development Effort.

C. Questions testing the analyzing / evaluating ability of student

1) Select the mode of the project by analyzing the given data, and calculate the effort
and development time based on the mode of the project. Consider a database system
needed for an office automation project. The requirements document shows 4
modules are needed. Sizes are estimated as follows:
Data entry 0.6KDSI
Data update 0.6 KDSI
Query 0.8 KDSI
Report gen 1.0 KDSI

D. Previous GATE. Questions:

1) Consider the basic COCOMO model where E is the effort applied in person-months,
D is the development time in chronological months, KLOC is the estimated number of
delivered lines of code (in thousands), and a_b, b_b, c_b, d_b have their usual
meanings. The basic COCOMO equations are of the form (GATE 2015)
a) E = a_b (KLOC)^(b_b), D = c_b (E)^(d_b)
b) D = a_b (KLOC)^(b_b), E = c_b (E)^(d_b)
c) E = a_b^(b_b), D = c_b (KLOC)^(d_b)
d) D = a_b^(d_b), E = c_b (KLOC)^(b_b)
