UNIT V - ASE
Software Project Management: Estimation – LOC- and FP-Based Estimation, Make/Buy Decision, COCOMO I & II Models –
Project Scheduling – Scheduling, Earned Value Analysis – Planning – Project Plan, Planning Process, RFP – Risk
Management – Identification, Projection – Risk Management, Risk Identification, RMMM Plan – CASE Tools
Software metric
• Any type of measurement which relates to a software system, process or related
documentation
• Examples include the lines of code in a program, the Fog index, and the number of person-days
required to develop a component.
• Allow the software and the software process to be quantified.
• May be used to predict product attributes or to control the software process.
• Product metrics can be used for general predictions or to identify anomalous components.
Metrics assumptions
• A software property can be measured.
• A relationship exists between what we can measure and what we want to know. We can
only measure internal attributes, but we are often more interested in external software attributes.
• This relationship has been formalised and validated.
• It may be difficult to relate what can be measured to desirable external quality attributes.
Data collection
• A metrics programme should be based on a set of product and process data.
• Data should be collected immediately (not in retrospect) and, if possible, automatically.
• Three types of automatic data collection
• Static product analysis;
• Dynamic product analysis;
• Process data collation.
Data accuracy
• Don't collect unnecessary data
• The questions to be answered should be decided in advance and the required data
identified.
• Tell people why the data is being collected.
• It should not be part of personnel evaluation.
• Don't rely on memory
• Collect data when it is generated not after a project has finished.
Product metrics
• A quality metric should be a predictor of product quality.
• Classes of product metric
• Dynamic metrics which are collected by measurements made of a program in
execution;
• Static metrics which are collected by measurements made of the system
representations;
• Dynamic metrics help assess efficiency and reliability; static metrics help assess
complexity, understandability and maintainability.
Depth of inheritance tree – This represents the number of discrete levels in the inheritance
tree where sub-classes inherit attributes and operations (methods) from super-classes. The
deeper the inheritance tree, the more complex the design. Many different object classes may
have to be understood to understand the object classes at the leaves of the tree.

Method fan-in/fan-out – This is directly related to fan-in and fan-out as described above
and means essentially the same thing. However, it may be appropriate to make a distinction
between calls from other methods within the object and calls from external methods.

Weighted methods per class – This is the number of methods that are included in a class,
weighted by the complexity of each method. Therefore, a simple method may have a
complexity of 1 and a large and complex method a much higher value. The larger the value
for this metric, the more complex the object class. Complex objects are more likely to be
difficult to understand. They may not be logically cohesive, so cannot be reused effectively
as super-classes in an inheritance tree.

Number of overriding operations – This is the number of operations in a super-class that
are overridden in a sub-class. A high value for this metric indicates that the super-class
used may not be an appropriate parent for the sub-class.
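The depth-of-inheritance-tree metric can be sketched directly in code. A minimal illustration in Python (the class hierarchy is hypothetical), counting the number of superclass levels between a class and the root:

```python
# Minimal sketch of the depth of inheritance tree (DIT) metric.
# Hypothetical hierarchy: C inherits from B, which inherits from A.
class A: pass
class B(A): pass
class C(B): pass

def dit(cls):
    # __mro__ lists the class itself, its ancestors, and the implicit
    # 'object' root; exclude the class and 'object' to count levels.
    return len(cls.__mro__) - 2

print(dit(A), dit(B), dit(C))  # deeper classes score higher
```

Classes deep in the tree, like C here, require understanding every ancestor, which is exactly why a large DIT signals design complexity.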
Measurement analysis
• It is not always obvious what the collected data means.
• Analysing collected data is very difficult.
• Professional statisticians should be consulted if available.
• Data analysis must take local circumstances into account.
Measurement surprises
• Reducing the number of faults in a program leads to an increased number of help desk calls
• The program is now thought of as more reliable and so has a wider more diverse
market. The percentage of users who call the help desk may have decreased but the
total may increase;
• A more reliable system is used in a different way from a system where users work
around the faults. This leads to more help desk calls.
ZIPF’s Law
• Zipf's Law is "the observation that the frequency of occurrence of some event (P), as a function
of the rank (i), when the rank is determined by the above frequency of occurrence, is a power-
law function P_i ~ 1/i^a with the exponent a close to unity (1)."
• Let P (a random variable) represent the frequency of occurrence of a keyword in a
program listing.
• It applies to computer programs written in any modern computer language.
• Even without empirical proof (it is an obvious finding), any computer program written
in any programming language shows a power-law distribution, i.e., some keywords are used
more than others.
• Frequency of occurrence of events is inversely proportional to the rank in this frequency of
occurrence.
• When both are plotted on a log scale, the graph is a straight line.
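The rank–frequency relationship can be checked on a keyword listing. A hedged sketch in Python, using made-up counts for a hypothetical Java listing; under Zipf's law with exponent a ≈ 1, frequency × rank should stay roughly constant:

```python
from collections import Counter

# Hypothetical token stream from a program listing (illustrative counts only).
tokens = (["public"] * 40 + ["int"] * 20 + ["return"] * 13 +
          ["if"] * 10 + ["for"] * 8 + ["while"] * 7)

ranked = Counter(tokens).most_common()  # keywords by descending frequency

for rank, (keyword, freq) in enumerate(ranked, start=1):
    # If P_i ~ 1/i^a with a close to 1, then freq * rank is roughly constant,
    # and a log-log plot of freq against rank is a straight line.
    print(rank, keyword, freq, freq * rank)
```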
• we create entities that don't exist except in computer memory at run time; we create logic
nodes that will never be tested because it's impossible to test every logic branch; we create
information flows in quantities that are humanly impossible to analyze with a glance;
• A software application is the combination of keywords within the context of a solution, not
just the quantity used in a program. Determining context is not a trivial task because the
context of an application is attached to the problem being solved, and every problem to solve
is different and must have a specific program to solve it.
• Although a program could be syntactically correct, it doesn't mean that the algorithms
implemented solve the problem at hand. What's more, a correct program can solve the wrong
problem. Let's say we have the simple requirement of printing "Hello, World!" A
syntactically correct solution in Java looks as follows:
public class SayHello {
    public static void main(String[] args) {
        System.out.println("John Sena!");
    }
}
• This solution is obviously wrong because it doesn't solve the original requirement. This
means that the context of the solution within the problem being solved needs to be
determined to ensure its quality. In other words, we need to verify that the output matches the
original requirement.
• Zipf's Law also says little about larger systems.
Software productivity
• A measure of the rate at which individual engineers involved in software development
produce software and associated documentation.
• Not quality-oriented although quality assurance is a factor in productivity assessment.
• Essentially, we want to measure useful functionality produced per time unit.
Productivity measures
• Size related measures based on some output from the software process. This may be lines of
delivered source code, object code instructions, etc.
• Function-related measures based on an estimate of the functionality of the delivered
software. Function-points are the best known of this type of measure.
Measurement problems
• Estimating the size of the measure (e.g. how many function points).
• Estimating the total number of programmer months that have elapsed.
• Estimating contractor productivity (e.g. documentation team) and incorporating this
estimate in overall estimate.
Lines of code
• The measure was first proposed when programs were typed on cards with one line per card;
• How does this correspond to statements in a language such as Java, where a statement can
span several lines or several statements can appear on one line?
Productivity comparisons
• The lower level the language, the more productive the programmer
• The same functionality takes more code to implement in a lower-level language than
in a high-level language.
• The more verbose the programmer, the higher the productivity
• Measures of productivity based on lines of code suggest that programmers who write
verbose code are more productive than programmers who write compact code.
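The paradox is easy to show with numbers. A sketch with illustrative figures (assumptions, not measurements): the same feature implemented in assembler and in Java, each taking the same effort:

```python
# LOC-based productivity for the same functionality in two languages.
# All figures are illustrative assumptions.
def loc_productivity(loc, person_months):
    return loc / person_months

asm_prod = loc_productivity(2000, 2)   # assembler version: 2000 LOC
java_prod = loc_productivity(500, 2)   # Java version: 500 LOC

# The assembler programmer appears 4x more "productive" by this measure,
# even though both delivered identical functionality.
print(asm_prod, java_prod)
```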
COCOMO model
• An empirical model based on project experience.
• Well-documented, 'independent' model which is not tied to a specific software vendor.
• Long history from initial version published in 1981 (COCOMO-81) through various
instantiations to COCOMO 2.
• COCOMO 2 takes into account different approaches to software development, reuse, etc.
COCOMO 81 and COCOMO 2
• COCOMO 81 was developed with the assumption that a waterfall process would be used and
that all software would be developed from scratch.
• Since its formulation, there have been many changes in software engineering practice and
COCOMO 2 is designed to accommodate different approaches to software development.
COCOMO 2 models
• COCOMO 2 incorporates a range of sub-models that produce increasingly detailed software
estimates.
• The sub-models in COCOMO 2 are:
• Application composition model. Used when software is composed from existing
parts.
• Early design model. Used when requirements are available but design has not yet
started.
• Reuse model. Used to compute the effort of integrating reusable components.
• Post-architecture model. Used once the system architecture has been designed and
more information about the system is available.
Multipliers
• Multipliers reflect the capability of the developers, the non-functional requirements, the
familiarity with the development platform, etc.
• RCPX - product reliability and complexity;
• RUSE - the reuse required;
• PDIF - platform difficulty;
• PREX - personnel experience;
• PERS - personnel capability;
• SCED - required schedule;
• FCIL - the team support facilities.
Post-architecture level
• Uses the same formula as the early design model but with 17 rather than 7 associated
multipliers.
• The code size is estimated as:
• Number of lines of new code to be developed;
• Estimate of equivalent number of lines of new code computed using the reuse model;
• An estimate of the number of lines of code that have to be modified according to
requirements changes.
The exponent term
• This depends on 5 scale factors; their sum divided by 100 is added to 1.01 to give the exponent.
• A company takes on a project in a new domain. The client has not defined the process to be
used and has not allowed time for risk analysis. The company has a CMM level 2 rating.
• Precedentedness – new project (4)
• Development flexibility – no client involvement – very high (1)
• Architecture/risk resolution – no risk analysis – very low (5)
• Team cohesion – new team – nominal (3)
• Process maturity – some control – nominal (3)
• The exponent is therefore 1.01 + (4 + 1 + 5 + 3 + 3)/100 = 1.17.
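The exponent calculation can be written out in code. A sketch assuming the standard COCOMO II form PM = A × Size^E × ΠEM, with the commonly quoted calibration constant A = 2.94 (an assumption here, not given in the text):

```python
import math

def exponent(scale_factors):
    # COCOMO II: sum of the five scale factors, divided by 100, added to 1.01.
    return 1.01 + sum(scale_factors) / 100

# Ratings from the worked example: precedentedness 4, flexibility 1,
# risk resolution 5, team cohesion 3, process maturity 3.
E = exponent([4, 1, 5, 3, 3])   # 1.17, as in the text

def effort_person_months(ksloc, E, multipliers=()):
    # A = 2.94 is the usual COCOMO II.2000 calibration constant (assumed).
    em = math.prod(multipliers) if multipliers else 1.0
    return 2.94 * ksloc ** E * em

print(E, effort_person_months(100, E))
```

Note how the exponent above 1 penalises large projects: a 100 KSLOC project with E = 1.17 needs proportionally more effort per KSLOC than a small one.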
Multipliers
• Product attributes
• Concerned with required characteristics of the software product being developed.
• Computer attributes
• Constraints imposed on the software by the hardware platform.
• Personnel attributes
• Multipliers that take the experience and capabilities of the people working on the
project into account.
• Project attributes
• Concerned with the particular characteristics of the software development project.
Delphi method
The Delphi method is a systematic, interactive forecasting method which relies on a panel
of experts. The experts answer questionnaires in two or more rounds. After each round, a
facilitator provides an anonymous summary of the experts' forecasts from the previous round as
well as the reasons they provided for their judgments. Thus, experts are encouraged to revise
their earlier answers in light of the replies of other members of their panel. It is believed that
during this process the range of the answers will decrease and the group will converge towards
the "correct" answer. Finally, the process is stopped after a pre-defined stop criterion (e.g.
number of rounds, achievement of consensus, stability of results) and the mean or median scores
of the final rounds determine the results.
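The convergence idea can be simulated. A toy sketch under a simplifying behavioural assumption (each expert moves halfway toward the previous round's median; real panels are not this mechanical):

```python
from statistics import median

def delphi_rounds(estimates, rounds=3):
    # Assumed behaviour: each round, every expert revises halfway
    # toward the anonymous median fed back by the facilitator.
    for _ in range(rounds):
        m = median(estimates)
        estimates = [e + (m - e) / 2 for e in estimates]
    return estimates

initial = [10, 30, 60, 20]          # hypothetical effort estimates
final = delphi_rounds(initial)
# The spread shrinks each round while the median stays stable,
# mimicking convergence toward a consensus value.
print(final, median(final))
```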
The Delphi Technique is an essential project management technique that refers to an
information-gathering approach in which the opinions of those whose opinions are most
valuable, traditionally industry experts, are solicited, with the ultimate goal of attaining a
consensus. Typically, the polling of these industry experts is done on an anonymous basis, in
the hope of obtaining opinions that are unfettered by fears of identifiability. The experts are
presented with a series of questions regarding the project, typically, but not always,
presented by a third-party facilitator, in the hope of eliciting new ideas about specific
project points. The responses from all experts are typically combined in the form of an
overall summary, which is then provided to the experts for review and for the opportunity to
make further comments. This process typically results in consensus within a number of rounds,
and the technique helps minimize bias and minimizes the possibility that any one person can
have too much influence on the outcomes.
Key characteristics
The following key characteristics of the Delphi method help the participants to focus on
the issues at hand and separate Delphi from other methodologies:
• Structuring of information flow
The initial contributions from the experts are collected in the form of answers to
questionnaires and their comments on these answers. The panel director controls the interactions
among the participants by processing the information and filtering out irrelevant content. This
avoids the negative effects of face-to-face panel discussions and solves the usual problems of
group dynamics.
• Regular feedback
Participants comment on their own forecasts, the responses of others and on the progress
of the panel as a whole. At any moment they can revise their earlier statements. While in regular
group meetings participants tend to stick to previously stated opinions and often conform too
much to the group leader, the Delphi method prevents this.
• Anonymity of the participants
Usually all participants maintain anonymity. Their identity is not revealed even after the
completion of the final report. This stops them from dominating others in the process using their
authority or personality, frees them to some extent from their personal biases, minimizes the
"bandwagon effect" or "halo effect", allows them to freely express their opinions, and
encourages open critique and admitting errors by revising earlier judgments.
The first step is to set up a steering committee (if one is needed) and a management team
with sufficient capacity for the process. Expert panels to prepare and formulate the
statements are then helpful, unless it is decided to let the management team do that. The
whole procedure has to be fixed in advance: Are panel meetings needed, or do the teams work
virtually? Is the questionnaire electronic or paper-based? This means that logistics (from
Internet programming to typing up the results from the paper versions) have to be organised.
Will there be follow-up workshops, interviews or presentations? If so, these also have to be
organised and prepared. Printing of brochures, leaflets, questionnaires and reports must also
be considered. The last organisational point is the interface with the financing organisation,
if this is different from the management team.
Scheduling
Scheduling Principles
• compartmentalization—define distinct tasks
• interdependency—indicate task interrelationship
• effort validation—be sure resources are available
• defined responsibilities—people must be assigned
• defined outcomes—each task must have an output
• defined milestones—review for quality
Effort
• The effort applied E_a relates to the nominal effort E_o when the nominal delivery time t_d
is changed to an actual schedule t_a:
E_a = E_o × (t_d / t_a)^4
• This follows from Putnam's software equation below: for fixed size and productivity,
E × t^4 is constant, so effort is highly sensitive to small changes in delivery time.
Empirical Relationship: P vs E
Given Putnam's Software Equation (5-3),
E = L^3 / (P^3 t^4)
Consider a project estimated at 33 KLOC and 12 person-years of effort; with a productivity
parameter P of 10K, the completion time would be 1.3 years.
If the deadline can be extended to 1.75 years,
E = L^3 / (P^3 t^4) ≈ 3.8 person-years vs 12 person-years
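The arithmetic of this example can be checked directly. A sketch of Putnam's equation with the figures from the text (L = 33 KLOC, P = 10K):

```python
def putnam_effort(loc, productivity, t_years):
    # Putnam's software equation solved for effort: E = L^3 / (P^3 * t^4)
    return loc ** 3 / (productivity ** 3 * t_years ** 4)

e_nominal = putnam_effort(33_000, 10_000, 1.3)    # about 12.6 person-years
e_extended = putnam_effort(33_000, 10_000, 1.75)  # about 3.8 person-years

# Extending the deadline from 1.3 to 1.75 years cuts the effort by more
# than a factor of three, because effort falls with the fourth power of t.
print(round(e_nominal, 1), round(e_extended, 1))
```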
Timeline Charts
Effort Allocation
• "front end" activities
• customer communication
• analysis
• design
• review and modification
• construction activities
• coding or code generation
• testing and installation
• unit, integration
• white-box, black box
• regression
Problem
• Assume you are a software project manager and that you've been asked to compute earned
value statistics for a small software project. The project has 56 planned work tasks that are
estimated to require 582 person-days to complete. At the time that you've been asked to do
the earned value analysis, 12 tasks have been completed. However, the project schedule
indicates that 15 tasks should have been completed. The following scheduling data (in
person-days) are available:
Task   Planned Effort   Actual Effort
 1     12               12.5
 2     15               11
 3     13               17
 4     8                9.5
 5     9.5              9.0
 6     18               19
 7     10               10
 8     4                4.5
 9     12               10
10     6                6.5
11     5                4
12     14               14.5
13     16               –
14     6                –
15     8                –
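With the standard earned-value definitions (BCWS = planned value of work scheduled, BCWP = planned value of work actually completed, SPI = BCWP/BCWS), the statistics for this problem can be computed as follows. This is a sketch of the usual textbook formulas applied to the data above:

```python
# Planned effort for the 15 tasks that should have been completed,
# and actual effort for the 12 tasks that were completed (person-days).
planned = [12, 15, 13, 8, 9.5, 18, 10, 4, 12, 6, 5, 14, 16, 6, 8]
actual = [12.5, 11, 17, 9.5, 9.0, 19, 10, 4.5, 10, 6.5, 4, 14.5]

BAC = 582                 # budget at completion (all 56 tasks)
BCWS = sum(planned)       # work scheduled to date: 156.5
BCWP = sum(planned[:12])  # work actually performed: 126.5
ACWP = sum(actual)        # actual cost of work performed: 127.5

SPI = BCWP / BCWS         # schedule performance index, about 0.81 (behind schedule)
SV = BCWP - BCWS          # schedule variance: -30 person-days
CPI = BCWP / ACWP         # cost performance index, about 0.99 (roughly on budget)
percent_complete = BCWP / BAC  # about 22% of the project budget earned
print(SPI, SV, CPI, percent_complete)
```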
Error Tracking
• Schedule Tracking
• conduct periodic project status meetings in which each team member reports progress
and problems.
• evaluate the results of all reviews conducted throughout the software engineering
process.
• determine whether formal project milestones have been accomplished by the
scheduled date.
• compare actual start-date to planned start-date for each project task listed in the
resource table
• meet informally with practitioners to obtain their subjective assessment of progress to
date and problems on the horizon.
• use earned value analysis to assess progress quantitatively.
• Progress on an OO Project-I
• Technical milestone: OO analysis completed
• All classes and the class hierarchy have been defined and reviewed.
• Class attributes and operations associated with a class have been defined and
reviewed.
• Class relationships (Chapter 8) have been established and reviewed.
• A behavioral model (Chapter 8) has been created and reviewed.
• Reusable classes have been noted.
• Technical milestone: OO design completed
• The set of subsystems (Chapter 9) has been defined and reviewed.
• Classes are allocated to subsystems and reviewed.
• Task allocation has been established and reviewed.
• Responsibilities and collaborations (Chapter 9) have been identified.
• Attributes and operations have been designed and reviewed.
• The communication model has been created and reviewed.
• Progress on an OO Project-II
• Technical milestone: OO programming completed
• Each new class has been implemented in code from the design model.
• Extracted classes (from a reuse library) have been implemented.
• Prototype or increment has been built.
• Technical milestone: OO testing
• The correctness and completeness of OO analysis and design models has been
reviewed.
• A class-responsibility-collaboration network (Chapter 8) has been developed and
reviewed.
• Test cases are designed and class-level tests (Chapter 14) have been conducted for
each class.
• Test cases are designed and cluster testing (Chapter 14) is completed and the classes
are integrated.
• System level tests have been completed.
Elements of SCM
• Component element
- Tools coupled with file management
• Process element
-Procedures define change management
• Construction element
-Automate construction of software
• Human elements
-Give guidance for activities and process features
Baselines
• A work product becomes a baseline only after it is reviewed and approved.
• Before baseline – changes informal
• Once a baseline is established each change request must be evaluated and verified before it is
processed.
Software Configuration Items
• SCI
• Document
• Test cases
• Program component
• Editors, compilers, browsers
– Used to produce documentation.
Importance of evolution
• Organizations have huge investments in their software systems - they are critical business
assets.
• To maintain the value of these assets to the business, they must be changed and updated.
• The majority of the software budget in large companies is devoted to evolving existing
software rather than developing new software.
Software change
• Software change is inevitable
• New requirements emerge when the software is used;
• The business environment changes;
• Errors must be repaired;
• New computers and equipment is added to the system;
• The performance or reliability of the system may have to be improved.
• A key problem for organisations is implementing and managing change to their existing
software systems.
Lehman’s laws
Law – Description
Continuing change – A program that is used in a real-world environment necessarily must
change or become progressively less useful in that environment.
Increasing complexity – As an evolving program changes, its structure tends to become more
complex. Extra resources must be devoted to preserving and simplifying the structure.
Large program evolution – Program evolution is a self-regulating process. System attributes
such as size, time between releases and the number of reported errors are approximately
invariant for each system release.
Organisational stability – Over a program's lifetime, its rate of development is
approximately constant and independent of the resources devoted to system development.
Conservation of familiarity – Over the lifetime of a system, the incremental change in each
release is approximately constant.
Continuing growth – The functionality offered by systems has to continually increase to
maintain user satisfaction.
Declining quality – The quality of systems will appear to be declining unless they are
adapted to changes in their operational environment.
Feedback system – Evolution processes incorporate multi-agent, multi-loop feedback systems,
and they have to be treated as feedback systems to achieve significant product improvement.
Software maintenance
• Modifying a program after it has been put into use or delivered.
• Maintenance does not normally involve major changes to the system's architecture.
• Changes are implemented by modifying existing components and adding new components to
the system.
• Maintenance is inevitable
• The system requirements are likely to change while the system is being developed because
the environment is changing. Therefore a delivered system won't meet its requirements!
• Systems are tightly coupled with their environment. When a system is installed in an
environment it changes that environment and therefore changes the system requirements.
• Systems MUST be maintained therefore if they
are to remain useful in an environment.
Types of maintenance
• Maintenance to repair software faults
• Code, design and requirements errors.
• Code and design errors are cheap to fix; requirements errors are the most expensive.
• Maintenance to adapt software to a different operating environment
• Changing a system's hardware and other support so that it operates in a different
environment (computer, OS, etc.) from its initial implementation.
• Maintenance to add to or modify the system's functionality
• Modifying the system to satisfy new requirements arising from organisational or
business change.
Maintenance costs
• Usually greater than development costs (2× to 100× depending on the application).
• Affected by both technical and non-technical factors.
• Increases as software is maintained. Maintenance corrupts the software structure so makes
further maintenance more difficult.
• Ageing software can have high support costs
(e.g. old languages, compilers etc.).
Development/maintenance costs
Maintenance prediction
• Maintenance prediction is concerned with assessing which parts of the system may cause
problems and have high maintenance costs
• Change acceptance depends on the maintainability of the components affected by
the change;
• Implementing changes degrades the system structure and reduces its
maintainability;
• Maintenance costs depend on the number of changes and costs of change depend
on maintainability.
Change prediction
• Predicting the number of changes requires an understanding of the relationships between a
system and its environment.
• Tightly coupled systems require changes whenever the environment is changed.
• Factors influencing this relationship are
• Number and complexity of system interfaces;
• Number of inherently volatile system requirements;
• The business processes where the system is used.
Complexity metrics
• Predictions of maintainability can be made by assessing the complexity of system
components.
• Studies have shown that most maintenance effort is spent on a relatively small number of
components of a complex system.
• Reduce maintenance cost – replace complex components with simple alternatives.
• Complexity depends on
• Complexity of control structures;
• Complexity of data structures;
• Object, method (procedure) and module size.
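Control-structure complexity of the kind listed above can be approximated automatically. A rough sketch for Python source, counting decision points as a stand-in for cyclomatic complexity (a simplification of McCabe's metric):

```python
import ast

def control_complexity(src):
    # 1 + the number of branching constructs: a crude cyclomatic-style count.
    tree = ast.parse(src)
    decisions = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

simple = "def f(x):\n    return x + 1"
branchy = "def g(x):\n    if x > 0:\n        return x\n    return -x"

# The branchy component scores higher and is the better candidate
# for replacement by a simpler alternative.
print(control_complexity(simple), control_complexity(branchy))
```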
Process metrics
• Process measurements may be used to assess maintainability
• Number of requests for corrective maintenance;
• Average time required for impact analysis;
• Average time taken to implement a change request;
• Number of outstanding change requests.
• If any or all of these are increasing, this may indicate a decline in maintainability.
• COCOMO2 model maintenance = understand existing code + develop new code.
Project management
Objectives
• To explain the main tasks undertaken by project managers
• To introduce software project management and to describe its distinctive characteristics
• To discuss project planning and the planning process
• To show how graphical schedule representations are used by project management
• To discuss the notion of risks and the risk management process
Software project management
• Concerned with activities involved in ensuring that software is delivered on time and
within budget, and in accordance with the requirements of the organisations developing
and procuring the software.
• Project management is needed because software development is always subject to budget
and schedule constraints that are set by the organisation developing the software.
Project planning
• Probably the most time-consuming project management activity.
• Continuous activity from initial concept through to system delivery. Plans must be
regularly revised as new information becomes available.
• Various different types of plan may be developed to support the main software project
plan that is concerned with schedule and budget.
Plan – Description
Quality plan – Describes the quality procedures and standards that will be used in a project.
Validation plan – Describes the approach, resources and schedule used for system validation.
Configuration management plan – Describes the configuration management procedures and
structures to be used.
Maintenance plan – Predicts the maintenance requirements of the system, maintenance costs
and effort required.
Development plan – Describes how the skills and experience of the project team members
will be developed.
Project scheduling
• Split project into tasks and estimate time and resources required to complete each task.
• Organize tasks concurrently to make optimal
use of workforce.
• Minimize task dependencies to avoid delays
caused by one task waiting for another to complete.
• Dependent on the project manager's intuition and experience.
The project scheduling process
Scheduling problems
• Estimating the difficulty of problems and hence the cost of developing a solution is hard.
• Productivity is not proportional to the number of people working on a task.
• Adding people to a late project makes it later because of communication overheads.
• The unexpected always happens. Always allow contingency in planning.
[Activity network diagram omitted: shows tasks and durations (e.g. T10 – 10 days, T12 – 25 days),
milestone M5, task T8, and a finish date of 19/9/03.]
Activity timeline
[Timeline (bar) chart omitted: shows tasks T1–T12 and milestones M1–M8 against weekly dates
from 4/7 to 19/9, from start to finish.]
Staff allocation
[Staff allocation chart omitted: Fred – T4, T8, T11, T12; Jane – T1, T3, T9; Anne – T2, T6, T10;
Jim – T7; Mary – T5; plotted against dates from 4/7 to 19/9.]
Risk management
• Risk management - identifying risks and drawing up plans to minimise their effect on a
project.
• A risk is a probability that some adverse circumstance will occur
• Project risks : affect schedule or resources. eg: loss of experienced designer.
• Product risks: affect the quality or performance of the software being developed.
eg: failure of purchased component.
• Business risks : affect organisation developing software. Eg: competitor
introducing new product.
Software risks
Risk identification
• Discovering possible risks:
• Technology risks.
• People risks.
• Organisational risks.
• Tool risk.
• Requirements risks.
• Estimation risks.
Risk analysis
• Make a judgement about the probability and seriousness of each identified risk.
• Made by experienced project managers.
• Probability may be very low (<10%), low (10–25%), moderate (25–50%), high (50–75%) or
very high (>75%). This is not a precise value, only a range.
• Risk effects might be catastrophic, serious, tolerable or insignificant.
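The probability bands can be captured in a tiny lookup. A sketch mapping a numeric probability estimate to the categories used above (the boundary handling is an assumption; the text gives ranges, not boundary rules):

```python
def probability_band(p):
    # Bands from the text: very low (<10%), low (10-25%),
    # moderate (25-50%), high (50-75%), very high (>75%).
    if p < 0.10:
        return "very low"
    if p < 0.25:
        return "low"
    if p < 0.50:
        return "moderate"
    if p <= 0.75:
        return "high"
    return "very high"

print(probability_band(0.05), probability_band(0.4), probability_band(0.9))
```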
Risk planning
• Consider each identified risk and develop a strategy to manage that risk.
• categories
• Avoidance strategies
• The probability that the risk will arise is reduced;
• Minimisation strategies
• The impact of the risk on the project will be reduced;
• Contingency plans
• If the risk arises, contingency plans are plans to deal with that risk. eg: financial
problems
Risk monitoring
• Assess each identified risk regularly to decide whether or not it is becoming more or less
probable.
• Also assess whether the effects of the risk have changed.
• Risks cannot be observed directly, but the factors affecting them give clues.
• Each key risk should be discussed at management progress meetings and reviews.
Risk indicators