
Software process and project management

SOFTWARE PROCESS AND PROJECT MANAGEMENT (PROFESSIONAL ELECTIVE – III)

B.Tech. IV Year I Sem.
Course Code: CS734PE
L T P C: 3 0 0 3
Course Objectives:

• To acquire knowledge on software process management


• To acquire managerial skills for software project development
• To understand software economics
Course Outcomes:

• Gain knowledge of software economics, phases in the life cycle of software development,
project organization, and project control and process instrumentation

• Analyze the major and minor milestones, artifacts and metrics from management and
technical perspective

• Design and develop software product using conventional and modern principles of software
project management

UNIT I

Software Process Maturity


Software maturity Framework, Principles of Software Process Change, Software Process Assessment,
The Initial Process, The Repeatable Process, The Defined Process, The Managed Process, The
Optimizing Process.
Process Reference Models
Capability Maturity Model (CMM), CMMI, PCMM, PSP, TSP.

UNIT II

Software Project Management Renaissance


Conventional Software Management, Evolution of Software Economics, Improving Software Economics,
The old way and the new way.
Life-Cycle Phases and Process Artifacts
Engineering and Production stages, inception phase, elaboration phase, construction phase, transition
phase, artifact sets, management artifacts, engineering artifacts and pragmatic artifacts, model-based
software architectures.

UNIT III
Workflows and Checkpoints of process
Software process workflows, Iteration workflows, Major milestones, minor milestones, periodic
status assessments.
Process Planning
Work breakdown structures, Planning guidelines, cost and schedule estimating process, iteration
planning process, Pragmatic planning.
UNIT IV
Project Organizations
Line-of-business organizations, project organizations, evolution of organizations, process
automation.
Project Control and process instrumentation
The seven-core metrics, management indicators, quality indicators, life-cycle expectations,
Pragmatic software metrics, metrics automation.

UNIT V

CCPDS-R Case Study and Future Software Project Management Practices


Modern Project Profiles, Next-Generation software Economics, Modern Process Transitions.

UNIT I

Software Process Maturity


Software maturity Framework, Principles of Software Process Change, Software Process
Assessment, The Initial Process, The Repeatable Process, The Defined Process, The Managed
Process, The Optimizing Process.
Process Reference Models
Capability Maturity Model (CMM), CMMI, PCMM, PSP, TSP.

INTRODUCTION
Software project management:
It refers to the branch of project management dedicated to the planning, scheduling, resource
allocation, execution, tracking and delivery of software and web projects.
Project management:
It is the practice of initiating, planning, executing, controlling, and closing the work of a team to
achieve specific goals and meet specific success criteria at the specified time. This information is
usually described in project documentation, created at the beginning of the development
process.
The role and responsibility of a software project manager:

Software project managers may have to do any of the following tasks:

1. Planning: This means putting together the blueprint for the entire project from ideation
to fruition. It will define the scope, allocate necessary resources, propose the timeline,
delineate the plan for execution, lay out a communication strategy, and indicate the steps
necessary for testing and maintenance.

2. Leading: A software project manager will need to assemble and lead the project team,
which likely will consist of developers, analysts, testers, graphic designers, and technical
writers. This requires excellent communication, people and leadership skills.

3. Time management: Staying on schedule is crucial to the successful completion of any


project, but it’s particularly challenging when it comes to managing software projects
because changes to the original plan are almost certain to occur as the project evolves.

Software project managers must be experts in risk management and contingency


planning to ensure forward progress when roadblocks or changes occur.

4. Execution: The project manager will participate in and supervise the successful
execution of each stage of the project. This includes monitoring progress, frequent team
check-ins and creating status reports.

5. Budget: Like traditional project managers, software project managers are tasked with
creating a budget for a project, and then sticking to it as closely as possible, moderating
spend and re-allocating funds when necessary.

6. Maintenance: Software project management typically encourages constant product


testing in order to discover and fix bugs early, adjust the end product to the customer’s
needs, and keep the project on target. The software project manager is responsible for
ensuring proper and consistent testing, evaluation and fixes are being made.

Project Management:
Project Management is the application of knowledge, skills, tools and techniques to project
activities to meet the project requirements.
Project Management Process consists of the following 4 stages:
 Feasibility study
 Project Planning
 Project Execution
 Project Termination

Feasibility study:

Feasibility study explores system requirements to determine project feasibility. There are several
fields of feasibility study including economic feasibility, operational feasibility, and technical
feasibility. The goal is to determine whether the system can be implemented or not. The process
of feasibility study takes as input the requirement details as specified by the user and other
domain-specific details. The output of this process simply tells whether the project should be
undertaken or not and if yes, what would the constraints be. Additionally, all the risks and their
potential effects on the projects are also evaluated before a decision to start the project is taken.

Project Planning:

A detailed plan stating stepwise strategy to achieve the listed objectives is an integral part of any
project.
Planning consists of the following activities:
 Set objectives or goals
 Develop strategies
 Develop project policies
 Determine courses of action
 Making planning decisions
 Set procedures and rules for the project
 Develop a software project plan
 Prepare budget
 Conduct risk management
 Document software project plans
This step also involves the construction of a work breakdown structure (WBS). It also includes
size, effort, schedule, and cost estimation using various techniques.
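The size, effort, and cost estimation mentioned above can be sketched with the basic COCOMO model, one conventional estimation technique. The coefficients below are the standard published organic-mode values, and the 32 KLOC project size is a hypothetical input, not a figure from these notes:

```python
def basic_cocomo(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    """Return (effort in person-months, schedule in months) for organic mode."""
    effort = a * (kloc ** b)      # person-months
    schedule = c * (effort ** d)  # elapsed calendar months
    return effort, schedule

effort, schedule = basic_cocomo(32)  # hypothetical 32 KLOC project
print(f"effort:   {effort:.1f} person-months")
print(f"schedule: {schedule:.1f} months")
print(f"avg team: {effort / schedule:.1f} people")
```

Other techniques (function points, Wideband Delphi, analogy-based estimation) fit the same planning step; the point is that size drives effort, and effort drives schedule and cost.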
Project Execution:

A project is executed by choosing an appropriate software development lifecycle model (SDLC).


It includes a number of steps including requirements analysis, design, coding, testing,
implementation, delivery, and maintenance. There are a number of factors that need to be
considered while doing so including the size of the system, the nature of the project, time and
budget constraints, domain requirements, etc. An inappropriate SDLC can lead to failure of the
project.

Project Termination:

There can be several reasons for the termination of a project. Though a project is
conventionally expected to terminate after successful completion, at times a project may
terminate without completion. Projects have to be closed down when the requirements are not
fulfilled within the given time and cost constraints.

Some of the reasons for failure include:


 Fast changing technology
 Project running out of time
 Organizational politics
 Too much change in customer requirements
 Project exceeding budget or funds

Once the project is terminated, a post-performance analysis is done. Also, a final report is
published describing the experiences, lessons learned, and recommendations for handling future
projects.
SOFTWARE PROCESS MATURITY
Process Improvement Cycle:
1. Initialize
 Establish sponsorship
 Create vision and strategy
 Establish improvement structure
2. For each maturity level
 Characterize current practice in terms of key process areas
 Assessment recommendations
 Revise strategy (generate action plans and prioritize key process areas)
3. For each key process area
 Establish process action teams
 Implement tactical plan, define processes, plan and execute pilot(s), plan and execute
institutionalization
 Document and analyze lessons
 Revise organizational approach

Software Process Improvement:


To improve their software capabilities, organizations must take five steps:
(1) understand the current status of their development process or processes,
(2) develop a vision of the desired process,
(3) establish a list of required process improvement actions in order of priority,
(4) produce a plan to accomplish these actions, and
(5) commit the resources to execute the plan.
The maturity framework developed at the SEI addresses these five steps by characterizing a
software process into one of five maturity levels. By establishing their organization's position in
this maturity structure, software professionals and management can more readily identify those
areas where improvement actions are most likely to produce results.

Process Maturity Levels:



1. Initial (chaotic, ad hoc, individual heroics) - the starting point for use of a new or
undocumented process.
2. Repeatable - the process is at least documented sufficiently that repeating the same steps
may be attempted.
3. Defined - the process is defined and confirmed as a standard business process.
4. Managed - the process is quantitatively managed in accordance with agreed-upon metrics.
5. Optimizing - process management includes deliberate process optimization and improvement.
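As an illustrative sketch (not part of any standard), the five levels can be modeled as an ordered type; the member names follow the detailed level descriptions used later in this unit, and `next_target` encodes the rule, discussed below, that levels should not be skipped:

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    INITIAL = 1
    REPEATABLE = 2
    DEFINED = 3
    MANAGED = 4
    OPTIMIZING = 5

def next_target(current: MaturityLevel) -> MaturityLevel:
    """Levels must not be skipped: the next improvement goal is current + 1."""
    if current is MaturityLevel.OPTIMIZING:
        return current  # already at the top level
    return MaturityLevel(current + 1)

print(next_target(MaturityLevel.INITIAL).name)
```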

Maturity Level Details:

Maturity levels consist of a predefined set of process areas. The maturity levels are measured by
the achievement of the specific and generic goals that apply to each predefined set of process
areas. The following sections describe the characteristics of each maturity level in detail.

Maturity Level 1 - Initial

At maturity level 1, processes are usually ad hoc and chaotic. The organization usually does not
provide a stable environment. Success in these organizations depends on the competence and
heroics of the people in the organization and not on the use of proven processes.
Maturity level 1 organizations often produce products and services that work; however, they
frequently exceed the budget and schedule of their projects.

Maturity level 1 organizations are characterized by a tendency to overcommit, to abandon
processes in times of crisis, and to be unable to repeat their past successes.

Maturity Level 2 - Managed

At maturity level 2, an organization has achieved all the specific and generic goals of the
maturity level 2 process areas. In other words, the projects of the organization have ensured that
requirements are managed and that processes are planned, performed, measured, and controlled.

The process discipline reflected by maturity level 2 helps to ensure that existing practices are
retained during times of stress. When these practices are in place, projects are performed and
managed according to their documented plans.

At maturity level 2, requirements, processes, work products, and services are managed. The
status of the work products and the delivery of services are visible to management at defined
points.

Commitments are established among relevant stakeholders and are revised as needed. Work
products are reviewed with stakeholders and are controlled.

The work products and services satisfy their specified requirements, standards, and objectives.

Maturity Level 3 - Defined


RojaRamani.Adapa Page 9
Asst.Prof
Dept of CSE
Teegala Krishna Reddy Engineering College

At maturity level 3, an organization has achieved all the specific and generic goals of the
process areas assigned to maturity levels 2 and 3.

At maturity level 3, processes are well characterized and understood, and are described in
standards, procedures, tools, and methods.

A critical distinction between maturity level 2 and maturity level 3 is the scope of standards,
process descriptions, and procedures. At maturity level 2, the standards, process descriptions, and
procedures may be quite different in each specific instance of the process (for example, on a
particular project). At maturity level 3, the standards, process descriptions, and procedures for a
project are tailored from the organization's set of standard processes to suit a particular project or
organizational unit. The organization's set of standard processes includes the processes addressed
at maturity level 2 and maturity level 3. As a result, the processes that are performed across the
organization are consistent except for the differences allowed by the tailoring guidelines.

Another critical distinction is that at maturity level 3, processes are typically described in more
detail and more rigorously than at maturity level 2. At maturity level 3, processes are managed
more proactively using an understanding of the interrelationships of the process activities and
detailed measures of the process, its work products, and its services.

Maturity Level 4 - Quantitatively Managed

At maturity level 4, an organization has achieved all the specific goals of the process areas
assigned to maturity levels 2, 3, and 4 and the generic goals assigned to maturity levels 2 and 3.

At maturity level 4, subprocesses are selected that significantly contribute to overall process
performance. These selected subprocesses are controlled using statistical and other quantitative
techniques.

Quantitative objectives for quality and process performance are established and used as criteria
in managing processes. Quantitative objectives are based on the needs of the customer, end
users, organization, and process implementers. Quality and process performance are understood
in statistical terms and are managed throughout the life of the processes.

For these processes, detailed measures of process performance are collected and statistically
analyzed. Special causes of process variation are identified and, where appropriate, the sources of
special causes are corrected to prevent future occurrences.

Quality and process performance measures are incorporated into the organization's measurement
repository to support fact-based decision making in the future.

A critical distinction between maturity level 3 and maturity level 4 is the predictability of process
performance. At maturity level 4, the performance of processes is controlled using statistical and
other quantitative techniques, and is quantitatively predictable. At maturity level 3, processes are
only qualitatively predictable.
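The statistical control that distinguishes level 4 can be sketched with simple 3-sigma control limits: points outside the limits computed from a stable baseline suggest a special cause of variation. The defect-density figures below are invented for illustration:

```python
import statistics

def control_limits(samples, k=3.0):
    """Return (lower, upper) k-sigma control limits from a stable baseline."""
    mean = statistics.mean(samples)
    sigma = statistics.pstdev(samples)
    return mean - k * sigma, mean + k * sigma

# defects per KLOC for one subprocess across successive builds (invented)
history = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.1, 9.5]
lcl, ucl = control_limits(history[:-1])  # baseline: the stable builds
latest = history[-1]
if not (lcl <= latest <= ucl):
    print(f"special cause suspected: {latest} outside [{lcl:.2f}, {ucl:.2f}]")
```

A level 4 organization would investigate the flagged build for an assignable cause (a new tool, a staffing change) rather than treating the spike as normal variation.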

Maturity Level 5 - Optimizing

At maturity level 5, an organization has achieved all the specific goals of the process areas
assigned to maturity levels 2, 3, 4, and 5 and the generic goals assigned to maturity levels 2 and
3.

Processes are continually improved based on a quantitative understanding of the common causes
of variation inherent in processes.

Maturity level 5 focuses on continually improving process performance through both


incremental and innovative technological improvements.
Quantitative process-improvement objectives for the organization are established, continually
revised to reflect changing business objectives, and used as criteria in managing process
improvement.

The effects of deployed process improvements are measured and evaluated against the
quantitative process-improvement objectives. Both the defined processes and the organization's
set of standard processes are targets of measurable improvement activities.

Optimizing processes that are agile and innovative depends on the participation of an empowered
workforce aligned with the business values and objectives of the organization. The organization's
ability to rapidly respond to changes and opportunities is enhanced by finding ways to accelerate
and share learning. Improvement of the processes is inherently part of everybody's role, resulting
in a cycle of continual improvement.

A critical distinction between maturity level 4 and maturity level 5 is the type of process
variation addressed. At maturity level 4, processes are concerned with addressing special causes
of process variation and providing statistical predictability of the results. Though processes may
produce predictable results, the results may be insufficient to achieve the established objectives.
At maturity level 5, processes are concerned with addressing common causes of process variation
and changing the process (that is, shifting the mean of the process performance) to improve
process performance (while maintaining statistical predictability) to achieve the established
quantitative process- improvement objectives.

Maturity Levels Should Not be Skipped:

Each maturity level provides a necessary foundation for effective implementation of processes at
the next level.

 Higher level processes have less chance of success without the discipline provided
by lower levels.
 The effect of innovation can be obscured in a noisy process.

Higher maturity level processes may be performed by organizations at lower maturity levels,
with the risk of not being consistently applied in a crisis.

Principles of Software Process Change:



Software Process Assessment:


A software process assessment is a disciplined examination of the software processes used by
an organization, based on a process model. A self-assessment (first-party assessment) is
performed internally by an organization's own personnel.

The Initial Process:


 The Initial Process could properly be called ad hoc, and it is often even chaotic.
 Here, the organization typically operates without formalized procedures, cost
estimates, and project plans.
 Tools are neither well integrated with the process nor uniformly applied.
 Change control is lax and there is little senior management exposure to or
understanding of the problems and issues.
 Since problems are often deferred or even forgotten, software installation
and maintenance often present serious problems.
 While organizations at this level may have formal procedures for project control, there
is no management mechanism to ensure they are used.
 The best test is to observe how such an organization behaves in a crisis.
 If it abandons established procedures and reverts to merely coding and testing, it is likely
to be at the Initial Process level.
 After all, if the techniques and methods are appropriate, they must be used in a crisis; and
if they are not appropriate, they should not be used at all.
 One reason organizations behave chaotically is that they have not gained sufficient
experience to understand the consequences of such behavior. Because many effective
software actions such as design and code reviews or test data analysis do not appear to
directly support shipping the product, they seem expendable.
 It is much like driving an automobile. Few drivers with any experience will continue
driving for very long when the engine warning light comes on, regardless of their rush.
Similarly, most drivers starting on a new journey will, regardless of their hurry, pause to
consult a map.
 They have learned the difference between speed and progress.
 In software, coding and testing seem like progress, but they are often only wheel-spinning.
 While they must be done, there is always the danger of going in the wrong direction.
Without a sound plan and a thoughtful analysis of the problems, there is no way to
know.

Organizations at the Initial Process level can improve their performance by instituting basic
project controls. The most important are:
1. Project management. The fundamental role of a project-management system is to ensure
effective control of commitments. This requires adequate preparation, clear responsibility, a
public declaration, and a dedication to performance. For software, this starts with an
understanding of the job's magnitude. In any but the simplest projects, a plan must then be
developed to determine the best schedule and the resources required. In the absence of such an
orderly plan, no commitment can be better than an educated guess.
2. Management oversight. A disciplined software-development organization must have senior
management oversight. This includes review and approval of all major development plans
before official commitment. Also, a quarterly review should be conducted of facility-wide
process compliance, installed-quality performance, schedule tracking, cost trends, computing
service, and quality and productivity goals by project. The lack of such reviews typically
results in uneven and generally inadequate implementation of the process, as well as in
frequent overcommitments and cost surprises.
3. Quality assurance. A quality assurance group is charged with assuring management that the
software-development work is actually done the way it is supposed to be done. To be effective,
the assurance organization must have an independent reporting line to senior management and
sufficient resources to monitor performance of all key planning, implementation, and
verification activities. This generally requires an organization of about 5 to 6 percent the
size of the development organization.
4. Change control. Control of changes in software development is fundamental to business and
financial control as well as to technical stability. To develop quality software on a
predictable schedule, the requirements must be established and maintained with reasonable
stability throughout the development cycle. Changes will have to be made, but they must be
managed and introduced in an orderly way. While occasional requirements changes are needed,
historical evidence demonstrates that many of them can be deferred and phased in later. If all
changes are not controlled, orderly design, implementation, and testing are impossible and no
quality plan can be effective.
The Repeatable Process:
 The Repeatable Process has one important strength over the Initial Process: It provides
commitment control.
 This is such an enormous advance over the Initial Process that the people in the
organization tend to believe they have mastered the software problem.
 They do not realize that their strength stems from their prior experience at similar work.
Organizations at the Repeatable Process level thus face major risks when they are presented
with new challenges. Examples of the changes that represent the highest risk at this level are:
 New tools and methods will likely affect how the process is performed, thus destroying the
relevance of the intuitive historical base on which the organization relies.

 Without a defined process framework in which to address these risks, it is even possible
for a new technology to do more harm than good.
 When the organization must develop a new kind of product, it is entering new territory.
 For example, a software group that has experience developing compilers will likely have
design, scheduling, and estimating problems if assigned to write a control program.
Similarly, a group that has developed small, self-contained programs will not understand
the interface and integration issues involved in large-scale projects.
 These changes again destroy the relevance of the intuitive historical basis for the
organization's work.
 Major organizational changes can be highly disruptive.
 In the Repeatable Process organization, a new manager has no orderly basis for
understanding what is going on and new team members must learn the ropes through
word of mouth.
 The key actions required to advance from the Repeatable Process to the Defined Process
are:
1. Establish a process group. A process group is a technical group that focuses
exclusively on improving the software development process. In most software
organizations, people are entirely devoted to product work. Until someone is given a full-
time assignment to work on the process, little orderly progress can be made in improving
it. The responsibilities of process groups include defining the development process,
identifying technology needs and opportunities, advising the projects, and conducting
quarterly management reviews of process status and performance. Typically, the process
group should be about 1 to 3 percent the size of the development organization. Because of
the need for a nucleus of skills, groups smaller than about four are unlikely to be fully
effective. Small organizations that lack the experience base to form a process group
should address these issues through specially formed committees of experienced
professionals or by retaining consultants.
2. Establish a software-development process architecture that describes the technical and
management activities required for proper execution of the development process. The
architecture is a structural decomposition of the development cycle into tasks, each of
which has entry criteria, functional descriptions, verification procedures, and exit criteria.
The decomposition continues until each defined task is performed by an individual or
single management unit.
3. If they are not already in place, introduce a family of software-engineering methods
and technologies. These include design and code inspections, formal design methods,
library-control systems, and comprehensive testing methods. Prototyping should also be
considered, along with the adoption of modern implementation languages.
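Item 2's process architecture can be sketched as a data structure: each task carries entry criteria, a functional description, verification procedures, and exit criteria, and decomposition stops when one person or management unit owns the task. The field names and the sample task below are our own illustration, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class ProcessTask:
    name: str
    entry_criteria: list
    description: str
    verification: list
    exit_criteria: list
    owner: str = "unassigned"  # decomposition stops at one person or unit

design_review = ProcessTask(
    name="High-level design inspection",
    entry_criteria=["design document baselined", "inspectors assigned"],
    description="Moderated inspection of the high-level design",
    verification=["inspection log filed", "all major defects resolved"],
    exit_criteria=["rework verified by moderator"],
    owner="design team lead",
)
print(design_review.name, "->", design_review.owner)
```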
The Defined Process:
 With the Defined Process, the organization has achieved the foundation for major
and continuing progress.
 For example, the development group, when faced with a crisis, will likely continue
to use the Defined Process.
 The foundation has now been established for examining the process and deciding how
to improve it.

 As powerful as the Defined Process is, it is still only qualitative: There is little data to
indicate what is going on or how effective the process really is. There is considerable
debate about the value of software-process measurements and the best ones to use.
 This uncertainty generally stems from a lack of process definition and the consequent
confusion about the specific items to be measured. With a defined process, we can focus
the measurements on specific tasks.
 The process architecture is thus an essential prerequisite to effective measurement. The
key steps to advance to the Managed Process are:
1. Establish a minimum, basic set of process measurements to identify the quality and
cost parameters of each process step. The objective is to quantify the relative costs and
benefits of each major process activity, such as the cost and yield of error detection and
correction methods.
2. Establish a process database with the resources to manage and maintain it. Cost and
yield data should be maintained centrally to guard against loss, to make it available for all
projects, and to facilitate process quality and productivity analysis.
3. Provide sufficient process resources to gather and maintain this data and to advise
project members on its use. Assign skilled professionals to monitor the quality of the data
before entry in the database and to provide guidance on analysis methods and
interpretation.
4. Assess the relative quality of each product and inform management where quality
targets are not being met. An independent quality-assurance group should assess the
quality actions of each project and track its progress against its quality plan. When this
progress is compared with the historical experience on similar projects, an informed
assessment generally can be made.
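Steps 1 and 2 above can be sketched as a minimal central "process database" keyed by process step, accumulating cost (hours) and defect yield so that the cost per defect of each activity can be compared. The hour and defect figures are invented for illustration:

```python
from collections import defaultdict

# central database: per-step cost (hours) and defect yield
db = defaultdict(lambda: {"hours": 0.0, "defects_found": 0})

def record(step, hours, defects_found):
    """Accumulate cost and yield data for one process step."""
    db[step]["hours"] += hours
    db[step]["defects_found"] += defects_found

record("code inspection", 40.0, 25)   # invented figures
record("system test", 120.0, 10)

for step, row in db.items():
    cost = row["hours"] / row["defects_found"]
    print(f"{step}: {cost:.1f} hours per defect found")
```

Keeping such data centrally, as step 2 recommends, guards against loss and lets every project draw on the same cost-and-yield history.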
The Managed Process:
 In advancing from the Initial Process via the Repeatable and Defined Processes to the
Managed Process, software organizations typically will experience substantial quality
improvements.
 The greatest potential problem with the Managed Process is the cost of gathering data.
There are an enormous number of potentially valuable measures of software
development and support, but such data is expensive to gather and maintain.
 Therefore, approach data gathering with care and precisely define each piece of data in
advance. Productivity data is generally meaningless unless explicitly defined. For
example, the simple measure of lines of source code per development month can vary by
100 times or more, depending on the interpretation of the parameters.
 The code count could include only new and changed code or all shipped instructions. For
modified programs, this can cause a ten-times variation. Similarly, you can use
noncomment, nonblank lines, executable instructions, or equivalent assembler instructions,
with variations again of up to seven times. Management, test, documentation, and
support personnel may or may not be counted when calculating labor months expended.
 Again, the variations can run at least as high as seven times. When different groups
gather data but do not use identical definitions, the results are not comparable, even if it
made sense to compare them. The tendency with such data is to use it to compare several
groups and put pressure on those with the lowest ranking. This is a misapplication of
process data.

 First, it is rare that two projects are comparable by any simple measures. The variations
in task complexity caused by different product types can exceed five to one. Similarly,
the cost per line of code of small modifications is often two to three times that for new
programs.
 The degree of requirements change can make an enormous difference, as can the design
status of the base program in the case of enhancements. Process data must not be used to
compare projects or individuals. Its purpose is to illuminate the product being developed
and to provide an informed basis for improving the process.
 When such data is used by management to evaluate individuals or teams, the reliability of
the data itself will deteriorate.
 The US Constitution's Fifth Amendment, which protects against self-incrimination, is
based on sound principles:
 Few people can be counted on to provide reliable data on their own performance.
 The two fundamental requirements to advance from the Managed Process to the
Optimizing Process are:
1. Support automatic gathering of process data. Some data cannot be gathered by hand,
and all manually gathered data is subject to error and omission.
2. Use this data to both analyze and modify the process to prevent problems and improve
efficiency.
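The definitional ambiguity discussed above is easy to demonstrate: the same source fragment gives different "lines of code" counts under different counting rules, so counts from groups using different definitions are not comparable. The fragment below is an invented example:

```python
SOURCE = """\
# compute totals
total = 0

for x in items:
    total += x  # accumulate
"""

lines = SOURCE.splitlines()
physical = len(lines)
nonblank = [l for l in lines if l.strip()]
# noncomment, nonblank (NCNB): additionally drop pure comment lines
ncnb = [l for l in nonblank if not l.strip().startswith("#")]

print("physical lines:", physical)       # 5
print("nonblank lines:", len(nonblank))  # 4
print("NCNB lines:", len(ncnb))          # 3
```

Even on five lines the counts differ by two-thirds; across real programs, with modified versus new code and differing labor-month definitions, the variation compounds as the text describes.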
The Optimizing Process:
 In varying degrees, process optimization goes on at all levels of process maturity. With
the step from the Managed to the Optimizing Process, however, there is a paradigm shift.
Up to this point, software development managers have largely focused on their products
and will typically only gather and analyze data that directly relates to product
improvement.
 In the Optimizing Process, the data is available to actually tune the process itself. With a
little experience, management will soon see that process optimization can produce major
quality and productivity improvements.
 For example, many errors can be identified and fixed far more economically by code
inspections than through testing. Unfortunately, there is little published data on the costs
of finding and fixing errors. However, I have developed a useful rule of thumb from
experience:
 It takes about one to four working hours to find and fix a bug through inspections and
about 15 to 20 working hours to find and fix a bug in function or system test. It is thus
clear that testing is not a cost-effective way to find and fix most bugs.
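The arithmetic behind this rule of thumb can be sketched as follows; the defect count and the per-defect hours are illustrative values taken from the ranges quoted above, not measured data:

```python
# Illustration of the rule of thumb above: finding and fixing a defect
# takes roughly 1-4 hours by inspection and 15-20 hours in test.
# The defect count of 100 is an invented example.

def removal_effort(defects, hours_per_defect):
    """Total working hours to find and fix `defects` bugs."""
    return defects * hours_per_defect

defects = 100
by_inspection = removal_effort(defects, 4)   # pessimistic inspection cost
by_testing = removal_effort(defects, 15)     # optimistic test cost

# Even comparing worst-case inspection with best-case test,
# inspections need far less total effort.
print(by_inspection, by_testing, by_testing / by_inspection)  # 400 1500 3.75
```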
 However, some kinds of errors are either uneconomical or almost impossible to find
except by machine.
 Examples are errors involving spelling and syntax, interfaces, performance, human
factors, and error recovery.
 It would thus be unwise to eliminate testing completely because it provides a useful
check against human frailties.
 The data that is available with the Optimizing Process gives us a new perspective on
testing. For most projects, a little analysis shows that there are two distinct activities
involved.
 The first is the removal of bugs. To reduce this cost, inspections should be emphasized
together with any other cost-effective techniques. The role of functional and system
testing should then be changed to one of finding symptoms that are further explored to
see if the bug is an isolated problem or if it indicates design problems that require more
comprehensive analysis.
 In the Optimizing Process, the organization has the means to identify the weakest
elements of the process and fix them. At this point in process improvement, data is
available to justify the application of technology to various critical tasks and numerical
evidence is available on the effectiveness with which the process has been applied to any
given product.
 We no longer need reams of paper to describe what is happening because simple yield
curves and statistical plots provide clear and concise indicators. It is now possible to
assure the process and hence have confidence in the quality of the resulting products.
People in the process:
 Any software development process is dependent on the quality of the people who
implement it. Even with the best people, however, there is always a limit to what they can
accomplish.
 When engineers are already working 50 to 60 hours a week, it is hard to see how they
could handle the vastly greater challenges of the future.
 The Optimizing Process helps in several ways: It helps managers understand where help
is needed and how best to provide the people with the support they require. It lets
professionals communicate in concise, quantitative terms.
 This facilitates the transfer of knowledge and minimizes the likelihood of their wasting
time on problems that have already been solved.
 It provides the framework for the professionals to understand their work performance and
to see how to improve it.
 This results in a highly professional environment and substantial productivity benefits,
and it avoids the enormous amount of effort that is generally expended in fixing and
patching other people’s mistakes.
 The difference between a disciplined environment and a regimented one is that discipline
controls the environment and methods to specific standards while regimentation defines
the actual conduct of the work. Discipline is required in large software projects to ensure,
for example, that the people involved use the same conventions, don’t damage each
other’s products, and properly synchronize their work.
 Discipline thus enables creativity by freeing the most talented software professionals
from the many crises that others have created.
The need:
 There are many examples of disasters caused by software problems, ranging from
expensive missile aborts to enormous financial losses.
 As the computerization of our society continues, the public risks due to poor-quality code
will become untenable. Not only are our systems being used in increasingly sensitive
applications, but they are also becoming much larger and more complex. While proper
questions can be raised about the size and complexity of current systems, they are human
creations and they will, alas, continue to be produced by humans - with all their failings
and creative talents.
 While many of the currently promising technologies will undoubtedly help, there is an
enormous backlog of needed functions that will inevitably translate into vast amounts of
code. More code means increased risk of error and, when coupled with more complexity,
these systems will become progressively less testable.
 The risks will thus increase astronomically as we become more efficient at producing
prodigious amounts of new code. As well as being a management issue, quality is an
economic one. It is always possible to do more inspections or to run more tests, but it
costs time and money to do so.
 It is only with the Optimizing Process that the data is available to understand the costs
and benefits of such work. The Optimizing Process thus provides the foundation for
significant advances in software quality and simultaneous improvements in
productivity.
There is little data on how long it takes for software
organizations to advance through these maturity levels toward the Optimizing Process. Based
on my experience, transition from level 1 to level 2 or from level 2 to level 3 takes from one to
three years, even with a dedicated management commitment to process improvement. To date,
no complete organizations have been observed at levels 4 or 5. To meet society’s needs for
increased system functions while simultaneously addressing the problems of quality and
productivity, software managers and professionals must establish the goal of moving to the
Optimizing Process.
PROCESS REFERENCE MODELS
Capability Maturity Model (CMM):

The Capability Maturity Model (CMM) is a methodology used to develop and refine an
organization's software development process. The model describes a five-level evolutionary path
of increasingly organized and systematically more mature processes. CMM was developed and is
promoted by the Software Engineering Institute (SEI), a research and development center
sponsored by the U.S. Department of Defense (DoD). SEI was founded in 1984 to address
software engineering issues and, in a broad sense, to advance software engineering
methodologies. More specifically, SEI was established to optimize the process of developing,
acquiring, and maintaining heavily software-reliant systems for the DoD. Because the processes
involved are equally applicable to the software industry as a whole, SEI advocates industry-wide
adoption of the CMM.

The CMM is similar to ISO 9001, one of the ISO 9000 series of standards specified by the
International Organization for Standardization (ISO). The ISO 9000 standards specify an
effective quality system for manufacturing and service industries; ISO 9001 deals specifically
with software development and maintenance. The main difference between the two systems lies
in their respective purposes: ISO 9001 specifies a minimal acceptable quality level for software
processes, while the CMM establishes a framework for continuous process improvement and is
more explicit than the ISO standard in defining the means to be employed to that end.

CMM's Five Maturity Levels of Software Processes

 At the initial level, processes are disorganized, even chaotic. Success is likely to depend on
individual efforts, and is not considered to be repeatable, because processes would not be
sufficiently defined and documented to allow them to be replicated.

 At the repeatable level, basic project management techniques are established, and successes
could be repeated, because the requisite processes would have been established,
defined, and documented.

 At the defined level, an organization has developed its own standard software process
through greater attention to documentation, standardization, and integration.

 At the managed level, an organization monitors and controls its own processes through data
collection and analysis.

 At the optimizing level, processes are constantly being improved through monitoring
feedback from current processes and introducing innovative processes to better serve the
organization's particular needs.

Difference between CMM and CMMI


CMM is a reference model of mature practices in a specified discipline, such as the Systems
Engineering CMM, Software CMM, People CMM, and Software Acquisition CMM, but these
models were difficult to integrate as and when needed. CMMI is the successor of the CMM: it
evolved as a more mature set of guidelines and was built by combining the best components of
the individual CMM disciplines (Software CMM, People CMM, etc.). It can be applied to product
manufacturing, people management, software development, and so on. The CMM describes
software engineering alone, whereas CMM Integrated covers both software and systems
engineering. CMMI also incorporates Integrated Process and Product Development and
supplier sourcing.

CMMI:
What is CMMI?
The CMM Integration project was formed to sort out the problem of using multiple CMMs. The
CMMI product team's mission was to combine three Source Models into a single improvement
framework for organizations pursuing enterprise-wide process improvement. These three
Source Models are:
 Capability Maturity Model for Software (SW-CMM) v2.0 Draft C
 Electronic Industries Alliance Interim Standard (EIA/IS) 731, Systems Engineering
 Integrated Product Development Capability Maturity Model (IPD-CMM) v0.98
CMM Integration:
 Builds an initial set of integrated models.
 Improves best practices from source models based on lessons learned.
 Establishes a framework to enable integration of future models.
CMMI and Business Objectives:

The objectives of CMMI are very obvious. They are as follows:


 Produce quality products or services: The process-improvement concept in CMMI models
evolved out of the Deming, Juran, and Crosby quality paradigm: quality products are a result of
quality processes. CMMI has a strong focus on quality-related activities including requirements
management, quality assurance, verification, and validation.
 Create value for the stockholders: Mature organizations are more likely to make better cost
and revenue estimates than those with less maturity, and then perform in line with those
estimates. CMMI supports quality products, predictable schedules, and effective measurement
to support management in making accurate and defensible forecasts. This process maturity can
guard against project performance problems that could weaken the value of the organization in
the eyes of investors.
 Enhance customer satisfaction: Meeting cost and schedule targets with high-quality products
that are validated against customer needs is a good formula for customer satisfaction. CMMI
addresses all of these ingredients through its emphasis on planning, monitoring, and measuring,
and the improved predictability that comes with more capable processes.
 Increase market share: Market share is a result of many factors, including quality products
and services, name identification, pricing, and image. Customers like to deal with suppliers who
have a reputation for meeting their commitments.
 Gain industry-wide recognition for excellence: The best way to develop a reputation for
excellence is to consistently perform well on projects, delivering quality products and services
within cost and schedule parameters. Having processes that conform to CMMI requirements can
enhance that reputation.
The CMMI is structured as follows:
 Maturity Levels (staged representation) or Capability Levels (continuous representation)
 Process Areas
 Goals: Generic and Specific
 Common Features
 Practices: Generic and Specific
This chapter discusses the two CMMI representations; the rest of the subjects are covered in
subsequent chapters. A representation allows an organization to pursue different improvement
objectives. An organization can follow one of the following two improvement paths.
Staged Representation:
The staged representation is the approach used in the Software CMM. It uses predefined sets
of process areas to define an improvement path for an organization. This improvement path is
described by a model component called a Maturity Level. A maturity level is a well-defined
evolutionary plateau towards achieving improved organizational processes.
CMMI Staged Representation:
 Provides a proven sequence of improvements, each serving as a foundation for the next.
 Permits comparisons across and among organizations by the use of maturity levels.
 Provides an easy migration from the SW-CMM to CMMI.
 Provides a single rating that summarizes appraisal results and allows comparisons among
organizations.
Thus the Staged Representation provides a pre-defined roadmap for organizational
improvement based on a proven grouping and ordering of processes and associated
organizational relationships. You cannot divert from the sequence of steps.

CMMI Staged Structure:


Following picture illustrates CMMI Staged Model Structure.

Continuous Representation:
The continuous representation is the approach used in the SECM and the IPD-CMM. This
approach allows an organization to select a specific process area and make improvements
based on it. The continuous representation uses Capability Levels to characterize
improvement relative to an individual process area.
CMMI Continuous Representation
 Allows you to select the order of improvement that best meets your organization's business
objectives and mitigates your organization's areas of risk.
 Enables comparisons across and among organizations on a process-area-by-process-area basis.
 Provides an easy migration from EIA 731 (and other models with a continuous representation)
to CMMI. Thus Continuous Representation provides flexibility to organizations to choose the
processes for improvement, as well as the amount of improvement required.
CMMI Continuous Structure


The following picture illustrates the CMMI Continuous Model Structure.

Maturity levels for services


The process areas below and their maturity levels are listed for the CMMI for services model:
Maturity Level 2 - Managed

 CM - Configuration Management
 MA - Measurement and Analysis
 PPQA - Process and Product Quality Assurance
 REQM - Requirements Management
 SAM - Supplier Agreement Management
 SD - Service Delivery
 WMC - Work Monitoring and Control
 WP - Work Planning
Maturity Level 3 - Defined

 CAM - Capacity and Availability Management


 DAR - Decision Analysis and Resolution
 IRP - Incident Resolution and Prevention
 IWM - Integrated Work Management
 OPD - Organizational Process Definition
 OPF - Organizational Process Focus


 OT - Organizational Training
 RSKM - Risk Management
 SCON - Service Continuity
 SSD - Service System Development
 SST - Service System Transition
 STSM - Strategic Service Management
Maturity Level 4 - Quantitatively Managed

 OPP - Organizational Process Performance


 QWM - Quantitative Work Management
Maturity Level 5 - Optimizing

 CAR - Causal Analysis and Resolution


 OPM - Organizational Performance Management

Advantages of CMMI
There are numerous benefits of implementing CMMI in an IT / Software Development
Organization, some of these benefits are listed below:

 A culture of maintaining quality in projects takes root at every level, from junior
programmers to senior programmers and project managers
 A centralised QMS for implementation in projects ensures uniformity in
documentation, which means a shorter learning cycle for new resources and better
management of project status and health
 Incorporation of Software Engineering Best Practices in the Organizations as
described in CMMI Model
 Cost saving in terms of lesser effort due to less defects and less rework
 This also results in increased Productivity
 On-Time Deliveries
 Increased Customer Satisfaction
 Overall increased Return on Investment
 Decreased Costs
 Improved Productivity

Disadvantages of CMMI
 CMMI-DEV may not be suitable for every organization.
 It may add overhead in terms of documentation.
 Smaller organizations may need additional resources and knowledge to initiate
CMMI-based process improvement.
 It may require a considerable amount of time and effort to implement.
 It may require a major shift in organizational culture and attitude.
PCMM:

Implementation support covers 5 points:

1. Undertaking gap analysis to assess the current level of maturity of the organization as
against the PCMM® levels.
2. Establishing the level of P-CMM® at which the organization is.
3. Recommending whether to go for up-gradation in maturity level and
suggesting timelines for such up-gradation.
4. Recommending actions for addressing key findings.
5. Developing a roadmap and action plan for up-gradation to the next level along with
specific timelines.

Undertaking Gap Analysis to assess the current level of maturity of the organization as against
PCMM®.

This implementation support covers assessing People CMM® Practices, including the concepts,
principles, and framework, and helps in understanding how an organization should assess the
status of the People CMM® Goals and Practices being implemented. The process of
assessment is as follows:

 Preparing Phase – Preparing for assessment


 Surveying Phase – Conducting the People CMM® Practices (People Policies
and Procedures etc.,) survey
 Assessment phase – Conducting onsite assessment
 Reporting Phase – Reporting the assessment Results

2. Establishing the level of PCMM® at which the organization is.

Guidelines are provided on how to map the assessment results to the structural components of
the People CMM®.

Components in the People CMM®:

Based on the maturity level determined, a presentation of the pros and cons is to be made to
the authorities, with recommendations on whether to go for up-gradation in maturity level and
suggested timelines for such up-gradation. This facilitates easy decision making by the
authorities.
3. Recommendation whether to go for up-gradation in maturity level and suggesting timelines


for such up-gradation.

4. Recommendations for addressing key findings.

Recommendations for addressing key findings at;

 Process area wise Goals level


 Process area wise Implementation Practices
 Process area wise Institutionalization Practices

Recommendations would include policies, procedures, practices, guidelines for Implementation


and institutionalization of the same.

5. Develop a roadmap and action plan for up-gradation to the next level along with
specific timelines.

A roadmap and action plan for up-gradation to the next level is developed with the
organizational authorities, with specific, date-wise timelines.

10 Principles of People Capability Maturity


Model (PCMM)
1. In mature organizations, workforce capability is directly related to business performance.
2. Workforce capability is a competitive issue and a source of strategic advantage.
3. Workforce capability must be defined in relation to the organization’s strategic business objectives.
4. Knowledge-intense work shifts the focus from job elements to workforce competencies.
5. Capability can be measured and improved at multiple levels, including individuals, workgroups,
workforce competencies, and the organization.
6. An organization should invest in improving the capability of those workforce competencies that are critical
to its core competency as a business.
7. Operational management is responsible for the capability of the workforce.
8. The improvement of workforce capability can be pursued as a process composed from proven practices
and procedures.
9. The organization is responsible for providing improvement opportunities, while individuals are responsible
for taking advantage of them.
10. Since technologies and organizational forms evolve rapidly, organizations must continually evolve
their workforce practices and develop new workforce competencies.

Level 2
Staffing
Communication and Coordination
Work Environment
Performance Management
Training and Development
Compensation
Level 3

Competency Analysis
Workforce Planning
Competency Development
Career Development
Competency-Based Practices
Workgroup Development
Participatory Culture
Level 4

Competency Integration
Empowered Workgroups
Competency-Based Assets
Quantitative Performance Management
Organizational Capability Management
Mentoring
Level 5

Continuous Capability Improvement


Organizational Performance Alignment
Continuous Workforce Innovation

PSP:
The Personal Software Process (PSP) is a structured software development process that is
designed to help software engineers better understand and improve their performance by
bringing discipline to the way they develop software and tracking their predicted and actual
development of the code. It clearly shows developers how to manage the quality of their
products, how to make a sound plan, and how to make commitments. It also offers them the data
to justify their plans. They can evaluate their work and suggest improvement direction by
analyzing and reviewing development time, defects, and size data. The PSP was created by
Watts Humphrey to apply the underlying principles of the Software Engineering Institute's (SEI)
Capability Maturity Model (CMM) to the software development practices of a single developer.
It claims to give software engineers the process skills necessary to work on a team software
process (TSP) team.
Objectives
The PSP aims to provide software engineers with disciplined methods for improving personal
software development processes. The PSP helps software engineers to:

 Improve their estimating and planning skills.


 Make commitments they can keep.
 Manage the quality of their projects.
 Reduce the number of defects in their work.
PSP structure
PSP training follows an evolutionary improvement approach: an engineer learning to integrate
the PSP into his or her process begins at the first level – PSP0 – and progresses in process
maturity to the final level – PSP2.1. Each level has detailed scripts, checklists, and templates to
guide the engineer through the required steps and to help the engineer improve his or her own
personal software process. Humphrey encourages proficient engineers to customize these
scripts and templates as they gain an understanding of their own strengths and weaknesses.
Process
The input to the PSP is the requirements; the requirements document is completed and
delivered to the engineer.
PSP0, PSP0.1 (Introduces process discipline and measurement)
PSP0 has 3 phases: planning, development (design, code, compile, test), and a post mortem. A
baseline of the current process is established by measuring: time spent on programming, faults
injected/removed, and the size of a program. In a post mortem, the engineer ensures all data
for the projects has been properly recorded and analysed. PSP0.1 advances the process by adding a
coding standard, a size measurement and the development of a personal process improvement
plan (PIP). In the PIP, the engineer records ideas for improving his own process.
PSP1, PSP1.1 (Introduces estimating and planning)
Based upon the baseline data collected in PSP0 and PSP0.1, the engineer estimates how large a
new program will be and prepares a test report (PSP1). Accumulated data from previous projects
is used to estimate the total time. Each new project will record the actual time spent. This
information is used for task and schedule planning and estimation (PSP1.1).
PSP2, PSP2.1 (Introduces quality management and design)
PSP2 adds two new phases: design review and code review. Defect prevention and removal of
them are the focus at the PSP2. Engineers learn to evaluate and improve their process by
measuring how long tasks take and the number of defects they inject and remove in each phase
of development. Engineers construct and use checklists for design and code reviews. PSP2.1
introduces design specification and analysis techniques.
(PSP3 is a legacy level that has been superseded by TSP.)

PSP process flow:


One of the core aspects of the PSP is using historical data to analyze and improve
process performance. PSP data collection is supported by four main elements:

 Scripts
 Measures
 Standards
 Forms
The PSP scripts provide expert-level guidance to following the process steps and they
provide a framework for applying the PSP measures. The PSP has four core measures:

 Size – the size measure for a product part, such as lines of code (LOC).
 Effort – the time required to complete a task, usually recorded in minutes.
 Quality – the number of defects in the product.
 Schedule – a measure of project progression, tracked against planned and actual
completion dates.
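As a minimal sketch (the record layout and field names are assumptions, not a real PSP tool's schema), the four core measures for one task might be logged like this:

```python
# Hypothetical log entry capturing the four core PSP measures for one task.
from dataclasses import dataclass

@dataclass
class TaskRecord:
    size_loc: int        # Size: lines of code produced
    effort_min: int      # Effort: minutes spent on the task
    defects: int         # Quality: defects found in the product
    planned_done: str    # Schedule: planned completion date (ISO)
    actual_done: str     # Schedule: actual completion date (ISO)

rec = TaskRecord(size_loc=250, effort_min=480, defects=6,
                 planned_done="2024-05-10", actual_done="2024-05-12")

# A simple derived measure: productivity in LOC per hour.
print(rec.size_loc / (rec.effort_min / 60))   # 31.25
```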
Applying standards to the process can ensure the data is precise and consistent. Data is
logged in forms, normally using a PSP software tool. The SEI has developed a PSP tool
and there are also open source options available, such as Process Dashboard.
The key data collected in the PSP tool are time, defect, and size data – the time spent
in each phase; when and where defects were injected, found, and fixed; and the size of
the product parts. Software developers use many other measures that are derived
from these three basic measures to understand and improve their performance.
Derived measures include:

 estimation accuracy (size/time)


 prediction intervals (size/time)
 time in phase distribution
 defect injection distribution
 defect removal distribution
 productivity
 reuse percentage
 cost performance index
 planned value
 earned value
 predicted earned value
 defect density
 defect density by phase
 defect removal rate by phase
 defect removal leverage
 review rates
 process yield
 phase yield
 failure cost of quality (COQ)
 appraisal COQ
 appraisal/failure COQ ratio
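A few of the derived measures above can be computed directly from the basic time, defect, and size data. The following sketch uses conventional PSP-style formulas with invented sample numbers:

```python
# Hedged sketch of three derived PSP measures. The formulas follow the
# conventional PSP definitions; the sample numbers are invented.

size_loc = 2000                  # new and changed LOC
total_minutes = 3600             # total development time
defects_total = 24               # all defects found
defects_before_compile = 18      # removed in design and code reviews

productivity = size_loc / (total_minutes / 60)            # LOC per hour
defect_density = defects_total / (size_loc / 1000)        # defects per KLOC
process_yield = 100 * defects_before_compile / defects_total  # percent

print(productivity, defect_density, process_yield)  # ~33.3, 12.0, 75.0
```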

Planning and tracking


Logging time, defect, and size data is an essential part of planning and tracking PSP
projects, as historical data is used to improve estimating accuracy.
The PSP uses the PROxy-Based Estimation (PROBE) method to improve a developer's
estimating skills for more accurate project planning. For project tracking, the PSP uses the
earned value method.
The PSP also uses statistical techniques, such as correlation, linear regression, and
standard deviation, to translate data into useful information for improving estimating,
planning and quality. These statistical formulas are calculated by the PSP tool.
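As an illustration of the regression step (a simplified stand-in for PROBE's calculations, with invented historical data), one can fit a line to past (estimated size, actual time) pairs and project the time for a new size estimate:

```python
# Simplified sketch of linear regression as used in PSP estimating.
# Historical data pairs are invented; real PROBE also computes
# prediction intervals, which are omitted here.

def linreg(xs, ys):
    """Ordinary least squares fit y = b0 + b1*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
         sum((x - mx) ** 2 for x in xs)
    b0 = my - b1 * mx
    return b0, b1

est_size = [100, 180, 250, 320, 400]      # estimated LOC, past projects
act_time = [6.0, 10.5, 14.0, 18.5, 23.0]  # actual hours, past projects

b0, b1 = linreg(est_size, act_time)
new_estimate = 300                         # LOC estimate for the next program
print(round(b0 + b1 * new_estimate, 1))    # 17.2 projected hours
```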

Using the PSP


The PSP is intended to help a developer improve their personal process; therefore PSP
developers are expected to continue adapting the process to ensure it meets their
personal needs.

PSP and the TSP


In practice, PSP skills are used in a TSP team environment. TSP teams consist of PSP-
trained developers who volunteer for areas of project responsibility, so the project is
managed by the team itself. Using personal data gathered with their PSP skills, the
team makes the plans and estimates and controls the quality.
Using PSP process methods can help TSP teams to meet their schedule commitments
and produce high quality software. For example, according to research by Watts
Humphrey, a third of all software projects fail,[3] but an SEI study on 20 TSP projects in
13 different organizations found that TSP teams missed their target schedules by an
average of only six percent.[4]
Successfully meeting schedule commitments can be attributed to using historical data
to make more accurate estimates, so projects are based on realistic plans – and by
using PSP quality methods, they produce low-defect software, which reduces time spent
on removing defects in later phases, such as integration and acceptance testing.

PSP and other methodologies


The PSP is a personal process that can be adapted to suit the needs of the individual
developer. It is not specific to any programming or design methodology; therefore it can
be used with different methodologies, including Agile software development.
Software engineering methods can be considered to vary from predictive through
adaptive. The PSP is a predictive methodology, and Agile is considered adaptive, but
despite their differences, the TSP/PSP and Agile share several concepts and approaches
– particularly in regard to team organization. They both enable the team to:

 Define their goals and standards.


 Estimate and schedule the work.
 Determine realistic and attainable schedules.
 Make plans and process improvements.
Both Agile and the TSP/PSP share the idea of team members taking responsibility for
their own work and working together to agree on a realistic plan, creating an
environment of trust and accountability. However, the TSP/PSP differs from Agile in its
emphasis on documenting the process and its use of data for predicting and defining
project schedules.

Quality
High-quality software is the goal of the PSP, and quality is measured in terms of
defects. For the PSP, a quality process should produce low-defect software that meets
the user needs.
The PSP phase structure enables PSP developers to catch defects early. By catching
defects early, the PSP can reduce the amount of time spent in later phases, such as
Test.
The PSP theory is that it is more economical and effective to remove defects as close as
possible to where and when they were injected, so software engineers are encouraged
to conduct personal reviews for each phase of development. Therefore, the PSP phase
structure includes two review phases:

 Design Review
 Code Review
To do an effective review, you need to follow a structured review process. The PSP
recommends using checklists to help developers to consistently follow an orderly
procedure.
The PSP follows the premise that when people make mistakes, their errors are usually
predictable, so PSP developers can personalize their checklists to target their own
common errors. Software engineers are also expected to complete process
improvement proposals, to identify areas of weakness in their current performance that
they should target for improvement. Historical project data, which exposes where time
is spent and defects introduced, help developers to identify areas to improve.
PSP developers are also expected to conduct personal reviews before their work
undergoes a peer or team review.

TSP:
In combination with the personal software process (PSP), the team software process (TSP)
provides a defined operational process framework that is designed to help teams of managers and
engineers organize projects and produce software products that range in size from small
projects of several thousand lines of code (KLOC) to very large projects of greater than half a
Software process and project management

million lines of code. The TSP is intended to improve the levels of quality and productivity of a
team's software development project, in order to help them better meet the cost and schedule
commitments of developing a software system.
The initial version of the TSP was developed and piloted by Watts Humphrey in the late 1990s [5],
and the Technical Report [6] for TSP, sponsored by the U.S. Department of Defense, was published
in November 2000. Watts Humphrey's book, Introduction to the Team Software Process [7],
presents a view of the TSP intended for use in academic settings that focuses on the process of
building a software production team, establishing team goals, distributing team roles, and other
teamwork-related activities.
The primary goal of the TSP is to create a team
environment for establishing and maintaining a self-directed team, and to support disciplined
individual work on the base of the PSP framework. A self-directed team manages
itself: it plans and tracks its work, manages the quality of its work, and works proactively to
meet team goals. TSP has two principal components: team-building and team-working. Team-
building is a process that defines roles for each team member and sets up teamwork through the TSP
launch and periodic relaunches. Team-working is a process that deals with the engineering processes
and practices utilized by the team. In short, the TSP provides engineers and managers with a way
to establish and manage their team so that it produces high-quality software on schedule and
within budget.
How the TSP Works
Before engineers can participate in the TSP, it is required that they have already learned about
the PSP, so that the TSP can work effectively. Training is also required for other team members,
the team lead and management. The TSP software development cycle begins with a planning
process called the launch, led by a coach who has been specially trained, and is either certified or
provisional.[8][9] The launch is designed to begin the team building process, and during this time
teams and managers establish goals, define team roles, assess risks, estimate effort, allocate
tasks, and produce a team plan. During an execution phase, developers track planned and actual
effort, schedule, and defects, meeting regularly (usually weekly) to report status and revise plans.
A development cycle ends with a Post Mortem to assess performance, revise planning
parameters, and capture lessons learned for process improvement.
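The weekly planned-versus-actual tracking described above can be sketched as a small computation. The task names, hour values, and the simple earned-value scheme below are invented for illustration; real TSP teams use the forms and tools defined by their process:

```python
# Hypothetical task plan: (task, planned_hours, actual_hours_so_far, done?)
plan = [
    ("requirements",      12.0, 14.0, True),
    ("high-level design", 20.0, 18.5, True),
    ("detailed design",   16.0,  6.0, False),
    ("code",              30.0,  0.0, False),
]

total_planned = sum(p for _, p, _, _ in plan)

def earned_value(plan):
    """Percent of total planned hours credited for *completed* tasks only --
    a simple earned-value scheme of the kind a team reviews weekly."""
    earned = sum(p for _, p, _, done in plan if done)
    return 100.0 * earned / total_planned

def hours_over_plan(plan):
    """Actual minus planned hours for completed tasks (positive = over plan)."""
    return sum(a - p for _, p, a, done in plan if done)

print(f"earned value: {earned_value(plan):.1f}%")
print(f"hours over plan so far: {hours_over_plan(plan):+.1f}")
```

At the postmortem, deviations like these feed back into the planning parameters for the next cycle.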
The coach role focuses on supporting the team and the individuals on the team as the process
expert, while being independent of direct project management responsibility. [10][11] The team
leader role is different from the coach role in that team leaders are responsible to management
for products and project outcomes, while the coach is responsible for developing individual and
team performance.
Software process and project management

UNIT II

Software Project Management Renaissance


Conventional Software Management, Evolution of Software Economics, Improving Software
Economics, The old way and the new way.
Life-Cycle Phases and Process artifacts
Engineering and Production stages, inception phase, elaboration phase, construction phase,
transition phase, artifact sets, management artifacts, engineering artifacts and pragmatic artifacts,
model-based software architectures.

Conventional Software Management: The waterfall model, conventional software management
performance.
Evolution of Software Economics: Software economics, pragmatic software cost estimation
1. Conventional software management

Conventional software management practices are sound in theory, but practice is still tied to
archaic (outdated) technology and techniques.
Conventional software economics provides a benchmark of performance for conventional
software management principles.
The best thing about software is its flexibility: it can be programmed to do almost anything.
The worst thing about software is also its flexibility: the "almost anything"
characteristic has made it difficult to plan, monitor, and control software
development.
Three important analyses of the state of the software engineering industry are

1. Software development is still highly unpredictable. Only about 10% of software projects
are delivered successfully within initial budget and schedule estimates.
2. Management discipline is more of a discriminator in success or failure than
are technology advances.
3. The level of software scrap and rework is indicative of an immature process.
All three analyses reached the same general conclusion: The success rate for software projects
is very low. The three analyses provide a good introduction to the magnitude of the software
problem and the current norms for conventional software management performance.

THE WATERFALL MODEL


Most software engineering texts present the waterfall model as the source of the
"conventional" software process.
IN THEORY
It provides an insightful and concise summary of conventional software management.
The three primary points are:
1. There are two essential steps common to the development of computer programs:
analysis and coding.

Waterfall Model part 1: The two basic steps to building a program (Analysis, Coding).
Analysis and coding both involve creative work that directly contributes to the usefulness
of the end product.
2. In order to manage and control all of the intellectual freedom associated with software
development, one must introduce several other "overhead" steps, including system
requirements definition, software requirements definition, program design, and
testing. These steps supplement the analysis and coding steps. The figure below illustrates
the resulting project profile and the basic steps in developing a large-scale program.

Dept of CSE Page 47


Teegala Krishna Reddy Engineering College

[Figure: the basic steps in developing a large-scale program (Requirements, Design, Coding,
Testing, Operations).]

3. The basic framework described in the waterfall model is risky and invites failure. The

testing phase that occurs at the end of the development cycle is the first event for
which timing, storage, input/output transfers, etc., are experienced as distinguished
from analyzed. The resulting design changes are likely to be so disruptive that the
software requirements upon which the design is based are likely violated. Either the
requirements must be modified or a substantial design change is warranted.

Five necessary improvements for the waterfall model are:

1. Program design comes first. Insert a preliminary program design phase between the
software requirements generation phase and the analysis phase. By this technique, the
program designer assures that the software will not fail because of storage,
timing, and data flux (continuous change). As analysis proceeds in the succeeding
phase, the program designer must impose on the analyst the storage, timing, and
operational constraints in such a way that he senses the consequences. If the total
resources to be applied are insufficient or if the embryonic (in an early stage of
development) operational design is wrong, it will be recognized at this early stage and
the iteration with requirements and preliminary design can be redone before final
design, coding, and test commences. How is this program design procedure
implemented?

The following steps are required:


Begin the design process with program designers, not analysts or programmers.
Design, define, and allocate the data processing modes even at the risk of being
wrong. Allocate processing functions, design the database, allocate execution
time, define interfaces and processing modes with the operating system, describe
input and output processing, and define preliminary operating procedures.
Write an overview document that is understandable, informative, and current so that
every worker on the project can gain an elemental understanding of the system.

2. Document the design. The amount of documentation required on most software


programs is quite a lot, certainly much more than most programmers, analysts, or
program designers are willing to do if left to their own devices. Why do we need so
much documentation? (1) Each designer must communicate with interfacing
designers, managers, and possibly customers. (2) During early phases, the
documentation is the design. (3) The real monetary value of documentation is to
support later modifications by a separate test team, a separate maintenance team,
and operations personnel who are not software literate.


3. Do it twice. If a computer program is being developed for the first time, arrange matters
so that the version finally delivered to the customer for operational deployment is actually the
second version insofar as critical design/operations are concerned. Note that this is simply the
entire process done in miniature, to a time scale that is relatively small with respect to the
overall effort. In the first version, the team must have a special broad competence where they
can quickly sense trouble spots in the design, model them, model alternatives, forget the
straightforward aspects of the design that aren't worth studying at this early point, and,
finally, arrive at an error-free program.

4. Plan, control, and monitor testing. Without question, the biggest user of project resources-
manpower, computer time, and/or management judgment-is the test phase. This is the phase
of greatest risk in terms of cost and schedule. It occurs at the latest point in the schedule,
when backup alternatives are least available, if at all. The previous three recommendations
were all aimed at uncovering and solving problems before entering the test phase. However,
even after doing these things, there is still a test phase and there are still important things to
be done, including: (1) employ a team of test specialists who were not responsible for the
original design; (2) employ visual inspections to spot the obvious errors like dropped minus
signs, missing factors of two, jumps to wrong addresses (do not use the computer to detect
this kind of thing, it is too expensive); (3) test every logic path; (4) employ the final checkout
on the target computer.

5. Involve the customer. It is important to involve the customer in a formal way so that he has
committed himself at earlier points before final delivery. There are three points following
requirements definition where the insight, judgment, and commitment of the customer can
bolster the development effort. These include a "preliminary software review" following the
preliminary program design step, a sequence of "critical software design reviews" during
program design, and a "final software acceptance review".

IN PRACTICE
Some software projects still practice the conventional software management approach.
It is useful to summarize the characteristics of the conventional process as it has typically
been applied, which is not necessarily as it was intended. Projects destined for trouble frequently
exhibit the following symptoms:

 Protracted integration and late design breakage.


 Late risk resolution.
 Requirements-driven functional decomposition.
 Adversarial (conflict or opposition) stakeholder relationships.
 Focus on documents and review meetings.

Protracted Integration and Late Design Breakage


For a typical development project that used a waterfall model management process, Figure 1-2
illustrates development progress versus time. Progress is defined as percent coded, that is,
demonstrable in its target form.


The following sequence was common:

 Early success via paper designs and thorough (often too thorough) briefings.
 Commitment to code late in the life cycle.
 Integration nightmares (unpleasant experience) due to unforeseen
implementation issues and interface ambiguities.
 Heavy budget and schedule pressure to get the system working.
 Late shoe-horning of suboptimal fixes, with no time for redesign.
 A very fragile, unmaintainable product delivered late.

In the conventional model, the entire system was designed on paper, then
implemented all at once, then integrated. Table 1-1 provides a typical profile of cost
expenditures across the spectrum of software activities.


Late risk resolution

A serious issue associated with the waterfall lifecycle was the lack of early risk
resolution. Figure 1-3 illustrates a typical risk profile for conventional waterfall model
projects. It includes four distinct periods of risk exposure, where risk is defined as
the probability of missing a cost, schedule, feature, or quality goal. Early in the life
cycle, as the requirements were being specified, the actual risk exposure was highly
unpredictable.

Requirements-Driven Functional Decomposition: This approach depends on specifying


requirements completely and unambiguously before other development activities
begin. It naively treats all requirements as equally important, and depends on those
requirements remaining constant over the software development life cycle. These
conditions rarely occur in the real world. Specification of requirements is a difficult
and important part of the software development process.


Another property of the conventional approach is that the requirements were typically
specified in a functional manner. Built into the classic waterfall process was the fundamental
assumption that the software itself was decomposed into functions; requirements were then
allocated to the resulting components. This decomposition was often very different from a
decomposition based on object-oriented design and the use of existing components. Figure 1-4
illustrates the result of requirements-driven approaches: a software structure that is organized
around the requirements specification structure.

Adversarial Stakeholder Relationships:


The conventional process tended to result in adversarial stakeholder relationships,
in large part because of the difficulties of requirements specification and the
exchange of information solely through paper documents that captured engineering
information in ad hoc formats.

The following sequence of events was typical for most contractual software efforts:
1. The contractor prepared a draft contract-deliverable document that captured
an intermediate artifact and delivered it to the customer for approval.
2. The customer was expected to provide comments (typically within 15 to 30 days).
3. The contractor incorporated these comments and submitted (typically within 15 to
30 days) a final version for approval.
This one-shot review process encouraged high levels of sensitivity on the part of
customers and contractors.

Focus on Documents and Review Meetings:


The conventional process focused on producing various documents that attempted to describe
the software product, with insufficient focus on producing tangible increments of the products
themselves. Contractors were driven to produce literally tons of paper to meet milestones and
demonstrate progress to stakeholders, rather than spend their energy on tasks that would reduce
risk and produce quality software. Typically, presenters and the audience reviewed the simple
things that they understood rather than the complex and important issues. Most design reviews
therefore resulted in low engineering value and high cost in terms of the effort and schedule
involved in their preparation and conduct. They presented merely a facade of progress.
Table 1-2 summarizes the results of a typical design review.

CONVENTIONAL SOFTWARE MANAGEMENT PERFORMANCE

Barry Boehm's "Industrial Software Metrics Top 10 List" is a good, objective characterization
of the state of software development.
1. Finding and fixing a software problem after delivery costs 100 times more than
finding and fixing the problem in early design phases.
2. You can compress software development schedules 25% of nominal, but no more.
3. For every $1 you spend on development, you will spend $2 on maintenance.
4. Software development and maintenance costs are primarily a function of the number of
source lines of code.
5. Variations among people account for the biggest differences in software productivity.
6. The overall ratio of software to hardware costs is still growing. In 1955 it was 15:85;
in 1985, 85:15.
7. Only about 15% of software development effort is devoted to programming.
8. Software systems and products typically cost 3 times as much per SLOC as individual
software programs. Software-system products (i.e., system of systems) cost 9 times as much.
9. Walkthroughs catch 60% of the errors.


10. 80% of the contribution comes from 20% of the contributors.


A Simplified Model of Software Economics

There are several software cost models in use today. The most popular, open, and well-
documented model is the COnstructive COst MOdel (COCOMO), which has been widely used by
industry for 20 years. The latest version, COCOMO II, is the result of a collaborative effort
led by the University of Southern California (USC) Center for Software Engineering, with the
financial and technical support of numerous industry affiliates.
The objectives of this team are threefold:
1. To develop a software cost and schedule estimation model for the life-cycle practices
of the post-2000 era
2. To develop a software project database and tool support for improvement of the cost model
3. To provide a quantitative analytic framework for evaluating software technologies
and their economic impacts
The accuracy of COCOMO II allows its users to estimate cost within 30 percent of actuals,
74 percent of the time. This level of unpredictability in the outcome of a software development
process should be truly frightening to any software project investor, especially in view of the
fact that few projects ever miss their financial objectives by doing better than expected.
The COCOMO II cost model includes numerous parameters and techniques for estimating a wide
variety of software development projects. For the purposes of this discussion, we will abstract
COCOMO II into a function of four basic parameters:
 Complexity. The complexity of the software solution is typically quantified in terms of
the size of human-generated components (the number of source instructions or the number
of function points) needed to develop the features in a usable product.
 Process. This refers to the process used to produce the end product, and in particular
its effectiveness in helping developers avoid non-value-adding activities.
 Team. This refers to the capabilities of the software engineering team, and particularly
their experience with both the computer science issues and the application domain issues
for the project at hand.
 Tools. This refers to the software tools a team uses for development, that is, the extent
of process automation.
The relationships among these parameters in modeling the estimated effort can be expressed
as follows:

Effort = (Team) x (Tools) x (Complexity)^(Process)

Schedule estimates are computed directly from the effort estimate and process parameters.
Reductions in effort generally result in reductions in schedule estimates. To simplify this
discussion, we can assume that the "cost" includes both effort and time. The complete COCOMO II
model includes several modes, numerous parameters, and several equations. This simplified model
enables us to focus the discussion on the more discriminating dimensions of improvement.
What constitutes a good software cost estimate is a very tough question. In our experience,
a good estimate can be defined as one that has the following attributes:
 It is conceived and supported by the project manager, the architecture team, the
development team, and the test team accountable for performing the work.
 It is accepted by all stakeholders as ambitious but realizable.
 It is based on a well-defined software cost model with a credible basis and a database of
relevant project experience that includes similar processes, similar technologies, similar
environments, similar quality requirements, and similar people.
 It is defined in enough detail for both developers and managers to objectively assess the
probability of success and to understand key risk areas.
Although several parametric models have been developed to estimate software costs, they can
all be generally abstracted into the form given above. One very important aspect of software
economics (as represented within today's software cost models) is that the relationship between
effort and size (see the equation above) exhibits a diseconomy of scale. The software development
diseconomy of scale is a result of the "process" exponent in the equation being greater than 1.0.
In contrast to the economics for most manufacturing processes, the more software you build, the
greater the cost per unit item. It is desirable, therefore, to reduce the size and complexity of
a project whenever possible.
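The simplified four-parameter abstraction can be turned into a toy calculation. The multiplier and exponent values below are invented purely to show the shape of the model; they are not calibrated COCOMO II numbers:

```python
def effort(team, tools, complexity, process):
    """Simplified abstraction: Effort = Team * Tools * Complexity^Process.
    team/tools are effort multipliers (lower is better than nominal 1.0);
    complexity is size in KSLOC; process is the scale exponent (> 1.0)."""
    return team * tools * complexity ** process

# Invented example values: nominal team and tools, assumed exponent 1.1.
small = effort(team=1.0, tools=1.0, complexity=10, process=1.1)
large = effort(team=1.0, tools=1.0, complexity=100, process=1.1)

# Diseconomy of scale: 10x the size costs more than 10x the effort.
print(f"{large / small:.2f}x effort for 10x size")  # about 12.59x
```

Because the exponent exceeds 1.0, any improvement that shrinks `complexity` pays off more than proportionally, which is why size reduction heads the list of economic levers later in this unit.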

2. Evolution of Software Economics

SOFTWARE ECONOMICS
Most software cost models can be abstracted into a function of five basic parameters: size,
process, personnel, environment, and required quality.
1. The size of the end product (in human-generated components), which is typically
quantified in terms of the number of source instructions or the number of function
points required to develop the required functionality
2. The process used to produce the end product, in particular the ability of the process to
avoid non-value-adding activities (rework, bureaucratic delays, communications
overhead)
3. The capabilities of software engineering personnel, and particularly their experience
with the computer science issues and the applications domain issues of the project
4. The environment, which is made up of the tools and techniques available to support
efficient software development and to automate the process
5. The required quality of the product, including its features, performance, reliability,
and adaptability

The relationships among these parameters and the estimated cost can be written as follows:

Effort = (Personnel) x (Environment) x (Quality) x (Size)^(Process)

One important aspect of software economics (as represented within today's software cost
models) is that the relationship between effort and size exhibits a diseconomy of scale. The
diseconomy of scale of software development is a result of the process exponent being greater
than 1.0. Contrary to most manufacturing processes, the more software you build, the more
expensive it is per unit item.
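The diseconomy of scale can be made concrete with the equation above. Holding Personnel, Environment, and Quality at a nominal 1.0 (an assumed simplification) and picking an illustrative exponent of 1.2, effort per unit of size grows as size grows:

```python
# Effort = Personnel * Environment * Quality * Size^Process, with the first
# three factors held at a nominal 1.0 and an assumed process exponent of 1.2.
PROCESS_EXPONENT = 1.2

def unit_cost(size_ksloc):
    """Relative effort per KSLOC under the simplified model."""
    return size_ksloc ** PROCESS_EXPONENT / size_ksloc

for size in (10, 100, 1000):
    print(f"{size:>5} KSLOC -> relative unit cost {unit_cost(size):.2f}")
```

A manufacturing process, by contrast, would show unit cost falling (exponent below 1.0) as volume grows.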


Figure 2-1 shows three generations of basic technology advancement in tools, components,
and processes. The required levels of quality and personnel are assumed to be constant. The
ordinate of the graph refers to software unit costs (pick your favorite: per SLOC, per function
point, per component) realized by an organization.
The three generations of software development are defined as follows:
1) Conventional: 1960s and 1970s, craftsmanship. Organizations used custom tools,
custom processes, and virtually all custom components built in primitive languages.
Project performance was highly predictable in that cost, schedule, and quality objectives
were almost always underachieved.
2) Transition: 1980s and 1990s, software engineering. Organizations used more-repeatable
processes and off-the-shelf tools, and mostly (>70%) custom components built in higher-level
languages. Some of the components (<30%) were available as commercial products, including the
operating system, database management system, networking, and graphical user interface.
3) Modern practices: 2000 and later, software production. This book's philosophy is rooted in
the use of managed and measured processes, integrated automation environments, and mostly
(70%) off-the-shelf components. Perhaps as few as 30% of the components need to be custom
built.
Technologies for environment automation, size reduction, and process improvement are not
independent of one another. In each new era, the key is complementary growth in all
technologies. For example, the process advances could not be used successfully without new
component technologies and increased tool automation.


Organizations are achieving better economies of scale in successive technology eras-with very
large projects (systems of systems), long-lived products, and lines of business comprising
multiple similar projects. Figure 2-2 provides an overview of how a return on investment (ROI)
profile can be achieved in subsequent efforts across life cycles of various domains.


PRAGMATIC SOFTWARE COST ESTIMATION


One critical problem in software cost estimation is a lack of well-documented case studies of
projects that used an iterative development approach. Because the software industry has
inconsistently defined metrics and atomic units of measure, the data from actual projects are
highly suspect in terms of consistency and comparability. It is hard enough to collect a
homogeneous set of project data within one organization; it is extremely difficult to homogenize
data across different organizations with different processes, languages, domains, and so on.

There have been many debates among developers and vendors of software cost estimation
models and tools. Three topics of these debates are of particular interest here:


1. Which cost estimation model to use?


2. Whether to measure software size in source lines of code or function points.
3. What constitutes a good estimate?
There are several popular cost estimation models (such as COCOMO, CHECKPOINT, ESTIMACS,
KnowledgePlan, Price-S, ProQMS, SEER, SLIM, SOFTCOST, and SPQR/20); COCOMO is also one of the
most open and well-documented cost estimation models. The general accuracy of conventional cost
models (such as COCOMO) has been described as "within 20% of actuals, 70% of the time."
Most real-world use of cost models is bottom-up (substantiating a target cost) rather than
top-down (estimating the "should" cost). Figure 2-3 illustrates the predominant practice: the
software project manager defines the target cost of the software, and then manipulates the
parameters and sizing until the target cost can be justified. The rationale for the target cost
may be to win a proposal, to solicit customer funding, to attain internal corporate funding, or
to achieve some other goal.
The process described in Figure 2-3 is not all bad. In fact, it is absolutely necessary
to analyze the cost risks and understand the sensitivities and trade-offs objectively.
It forces the software project manager to examine the risks associated with
achieving the target costs and to discuss this information with other stakeholders.
A good software cost estimate has the following attributes:
 It is conceived and supported by the project manager, architecture team,
development team, and test team accountable for performing the work.
 It is accepted by all stakeholders as ambitious but realizable.
 It is based on a well-defined software cost model with a credible basis.
 It is based on a database of relevant project experience that includes similar processes,
similar technologies, similar environments, similar quality requirements, and similar
people.
 It is defined in enough detail so that its key risk areas are understood and the
probability of success is objectively assessed.
Extrapolating from a good estimate, an ideal estimate would be derived from a mature cost
model with an experience base that reflects multiple similar projects done by the same team with
the same mature processes and tools.


Improving Software Economics: Reducing Software product size, improving software


processes, improving team effectiveness, improving automation, Achieving required quality,
peer inspections.
The old way and the new: The principles of conventional software Engineering,
principles of modern software management, transitioning to an iterative process.

3. Improving Software Economics


Five basic parameters of the software cost model are
1. Reducing the size or complexity of what needs to be developed.
2. Improving the development process.
3. Using more-skilled personnel and better teams (not necessarily the same thing).
4. Using better environments (tools to automate the process).
5. Trading off or backing off on quality thresholds.
These parameters are given in priority order for most software domains. Table 3-1 lists some
of the technology developments, process improvement efforts, and management approaches
targeted at improving the economics of software development and integration.


REDUCING SOFTWARE PRODUCT SIZE


The most significant way to improve affordability and return on investment (ROI) is usually to
produce a product that achieves the design goals with the minimum amount of human-generated
source material. Component-based development is introduced as the general term for reducing
the "source" language size to achieve a software solution.
Reuse, object-oriented technology, automatic code production, and higher order programming
languages are all focused on achieving a given system with fewer lines of human-specified
source directives (statements).
Size reduction is the primary motivation behind improvements in higher order languages (such
as C++, Ada 95, Java, Visual Basic), automatic code generators (CASE tools, visual modeling
tools, GUI builders), reuse of commercial components (operating systems, windowing
environments, database management systems, middleware, networks), and object-oriented
technologies (Unified Modeling Language, visual modeling tools, architecture frameworks).
The reduction is defined in terms of human-generated source material. In general, when size-
reducing technologies are used, they reduce the number of human-generated source lines.

LANGUAGES
Universal function points (UFPs) are useful estimators for language-independent, early life-cycle estimates. The basic units of function points are external user inputs, external outputs, internal logical data groups, external data interfaces, and external inquiries. Function point metrics provide a standardized method for measuring the various functions of a software application. SLOC metrics are useful estimators for software after a candidate solution is formulated and an implementation language is known. Substantial data have been documented relating SLOC to function points.


Some of these results are shown in Table 3-2.


Language expressiveness of some of today's popular languages

LANGUAGE        SLOC per UFP
Assembly        320
C               128
FORTRAN77       105
COBOL85         91
Ada83           71
C++             56
Ada95           55
Java            55
Visual Basic    35

Table 3-2
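The SLOC-per-UFP figures in Table 3-2 can be used directly as early size estimators. A minimal sketch (the 400-UFP project below is a hypothetical illustration, not a figure from the text):

```python
# Sketch: early life-cycle size estimation from universal function points
# (UFPs), using the SLOC-per-UFP expressiveness figures of Table 3-2.

SLOC_PER_UFP = {
    "Assembly": 320, "C": 128, "FORTRAN77": 105, "COBOL85": 91,
    "Ada83": 71, "C++": 56, "Ada95": 55, "Java": 55, "Visual Basic": 35,
}

def estimate_sloc(ufp_count: int, language: str) -> int:
    """Estimate human-generated source lines for a given implementation language."""
    return ufp_count * SLOC_PER_UFP[language]

# A hypothetical 400-UFP system: roughly 5-6x smaller in C++ than in assembly.
print(estimate_sloc(400, "C++"))       # 22400
print(estimate_sloc(400, "Assembly"))  # 128000
```

This makes the section's point concrete: the same functionality requires far fewer human-generated source directives in a more expressive language.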

OBJECT-ORIENTED METHODS AND VISUAL MODELING


Object-oriented technology is not germane to most of the software management topics discussed
here, and books on object-oriented technology abound. Object-oriented programming languages
appear to benefit both software productivity and software quality.
The fundamental impact of object-oriented technology is in reducing the overall size of what needs to be developed. People like drawing pictures to explain something to others or to themselves. When they do it for software system design, they call these pictures diagrams or diagrammatic models and the very notation for them a modeling language. Booch described three reasons that object-oriented technology is important; they are interesting examples of the interrelationships among the dimensions of improving software economics:

1. An object-oriented model of the problem and its solution encourages a common vocabulary between the end users of a system and its developers, thus creating a shared understanding of the problem being solved.
2. The use of continuous integration creates opportunities to recognize risk early and make
incremental corrections without destabilizing the entire development effort.
3. An object-oriented architecture provides a clear separation of concerns among
disparate elements of a system, creating firewalls that prevent a change in one part of
the system from rending the fabric of the entire architecture.

Booch also summarized five characteristics of a successful object-oriented project.

1. A ruthless focus on the development of a system that provides a well understood collection
of essential minimal characteristics.
2. The existence of a culture that is centered on results, encourages communication, and yet is not afraid to fail.


3. The effective use of object-oriented modeling.
4. The existence of a strong architectural vision.
5. The application of a well-managed iterative and incremental development life cycle.

REUSE
Reusing existing components and building reusable components have been natural software engineering activities since the earliest improvements in programming languages. Reuse is pursued in order to minimize development costs while achieving all the other required attributes of performance, feature set, and quality. Try to treat reuse as a mundane part of achieving a return on investment.
Most truly reusable components of value are transitioned to commercial products supported
by organizations with the following characteristics:
 They have an economic motivation for continued support.
 They take ownership of improving product quality, adding new features,
and transitioning to new technologies.
 They have a sufficiently broad customer base to be profitable.
The cost of developing a reusable component is not trivial. Figure 3-1 examines the economic
trade-offs. The steep initial curve illustrates the economic obstacle to developing reusable
components.
Reuse is an important discipline that has an impact on the efficiency of all workflows and the
quality of most artifacts.
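Figure 3-1 is not reproduced in these notes, but its trade-off can be illustrated with a deliberately simplified break-even model (the assumptions are mine, not Royce's exact curve: a reusable component costs a multiplier k of a one-off solution, and reusing it afterward is free):

```python
# Simplified reuse economics: building a reusable component costs k times a
# one-off implementation, so it only pays back after roughly k uses. This
# mirrors the "steep initial curve" obstacle the text describes.

def total_cost(n_uses: int, k: float) -> tuple[float, float]:
    """Return (cost of n one-off builds, cost of one reusable build).
    Assumes each one-off re-implementation costs 1.0 unit and that reusing
    the reusable component is free; both assumptions simplify reality."""
    one_off = 1.0 * n_uses   # build it from scratch every time
    reusable = k             # build once at k x cost, reuse thereafter
    return one_off, reusable

# If a reusable component costs 3x a one-off build (an illustrative value),
# it only wins once it is used more than 3 times:
print(total_cost(2, 3.0))  # (2.0, 3.0) -> one-off is cheaper
print(total_cost(5, 3.0))  # (5.0, 3.0) -> reusable is cheaper
```

The economic obstacle is exactly this up-front multiplier, which is why truly reusable components tend to end up as commercial products backed by organizations with the characteristics listed above.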

COMMERCIAL COMPONENTS
A common approach being pursued today in many domains is to maximize integration of
commercial components and off-the-shelf products. While the use of commercial components is
certainly desirable as a means of reducing custom development, it has not proven to be
straightforward in practice. Table 3-3 identifies some of the advantages and disadvantages of
using commercial components.


IMPROVING SOFTWARE PROCESSES


Process is an overloaded term. Three distinct process perspectives are:
 Metaprocess: an organization's policies, procedures, and practices for pursuing a
software-intensive line of business. The focus of this process is on organizational
economics, long-term strategies, and software ROI.
 Macroprocess: a project's policies, procedures, and practices for producing a complete
software product within certain cost, schedule, and quality constraints. The focus of
the macroprocess is on creating an adequate instance of the metaprocess for a
specific set of constraints.
 Microprocess: a project team's policies, procedures, and practices for achieving an
artifact of the software process. The focus of the microprocess is on achieving an
intermediate product baseline with adequate quality and adequate functionality as
economically and rapidly as practical.
Although these three levels of process overlap somewhat, they have different
objectives, audiences, metrics, concerns, and time scales as shown in Table 3-4


In a perfect software engineering world with an immaculate problem description, an obvious


solution space, a development team of experienced geniuses, adequate resources, and
stakeholders with common goals, we could execute a software development process in one
iteration with almost no scrap and rework. Because we work in an imperfect world, however,
we need to manage engineering activities so that scrap and rework profiles do not have an
impact on the win conditions of any stakeholder. This should be the underlying premise for
most process improvements.

IMPROVING TEAM EFFECTIVENESS


Teamwork is much more important than the sum of the individuals. With software teams, a
project manager needs to configure a balance of solid talent with highly skilled people in the
leverage positions. Some maxims of team management include the following:
 A well-managed project can succeed with a nominal engineering team.
 A mismanaged project will almost never succeed, even with an expert team
of engineers.
 A well-architected system can be built by a nominal team of software builders.
 A poorly architected system will flounder even with an expert team of builders.

Boehm's five staffing principles are:


1. The principle of top talent: Use better and fewer people.
2. The principle of job matching: Fit the tasks to the skills and motivation of the
people available.
3. The principle of career progression: An organization does best in the long run
by helping its people to self-actualize.
4. The principle of team balance: Select people who will complement and
harmonize with one another.
5. The principle of phase-out: Keeping a misfit on the team doesn't benefit anyone.


Software project managers need many leadership qualities in order to enhance team
effectiveness. The following are some crucial attributes of successful software project managers
that deserve much more attention:
1. Hiring skills. Few decisions are as important as hiring decisions. Placing the right
person in the right job seems obvious but is surprisingly hard to achieve.
2. Customer-interface skill. Avoiding adversarial relationships among stakeholders is a
prerequisite for success.
3. Decision-making skill. The jillion books written about management have failed to
provide a clear definition of this attribute. We all know a good leader when we run
into one, and decision-making skill seems obvious despite its intangible definition.
4. Team-building skill. Teamwork requires that a manager establish trust, motivate
progress, exploit eccentric prima donnas, transition average people into top
performers, eliminate misfits, and consolidate diverse opinions into a team direction.
5. Selling skill. Successful project managers must sell all stakeholders (including themselves)
on decisions and priorities, sell candidates on job positions, sell changes to the status quo
in the face of resistance, and sell achievements against objectives. In practice, selling
requires continuous negotiation, compromise, and empathy.

IMPROVING AUTOMATION THROUGH SOFTWARE ENVIRONMENTS


The tools and environment used in the software process generally have a linear effect on the
productivity of the process. Planning tools, requirements management tools, visual modeling
tools, compilers, editors, debuggers, quality assurance analysis tools, test tools, and user
interfaces provide crucial automation support for evolving the software engineering artifacts.
Above all, configuration management environments provide the foundation for executing and
instrumenting the process. To first order, the isolated impact of tools and automation generally
allows improvements of 20% to 40% in effort. However, tools and environments must be viewed
as the primary delivery vehicle for process automation and improvement, so their impact can be
much higher.
Automation of the design process provides payback in quality, the ability to estimate
costs and schedules, and overall productivity using a smaller team.
Round-trip engineering describes the key capability of environments that support iterative
development. As we have moved into maintaining different information repositories for the
engineering artifacts, we need automation support to ensure efficient and error-free transition of
data from one artifact to another. Forward engineering is the automation of one engineering
artifact from another, more abstract representation. For example, compilers and linkers have
provided automated transition of source code into executable code.
Reverse engineering is the generation or modification of a more abstract representation from an
existing artifact (for example, creating a visual design model from a source code representation).
Consider the economic improvements associated with tools and environments. It is common for tool vendors
to make relatively accurate individual assessments of life-cycle activities to support claims about
the potential economic impact of their tools. For example, it is easy to find statements such as the
following from companies in a particular tool market:
 Requirements analysis and evolution activities consume 40% of life-cycle costs.
 Software design activities have an impact on more than 50% of the resources.


 Coding and unit testing activities consume about 50% of software development
effort and schedule.
 Test activities can consume as much as 50% of a project's resources.
 Configuration control and change management are critical activities that can
consume as much as 25% of resources on a large-scale project.
 Documentation activities can consume more than 30% of project
engineering resources.
 Project management, business administration, and progress assessment can
consume as much as 30% of project budgets.
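A quick arithmetic sketch on the percentages quoted above shows why such isolated vendor claims cannot simply be added together: taken at face value they cover far more than 100% of a project, because each vendor measures an overlapping slice of the life cycle.

```python
# The activity-cost claims quoted above, taken at face value. Summing them
# shows the claimed coverage far exceeds 100%, so the categories must overlap.

claims_pct = {
    "requirements analysis and evolution": 40,
    "software design": 50,
    "coding and unit testing": 50,
    "test activities": 50,
    "configuration control and change management": 25,
    "documentation": 30,
    "management, administration, and assessment": 30,
}

total = sum(claims_pct.values())
print(f"Claimed coverage: {total}% of life-cycle resources")  # 275%
```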

ACHIEVING REQUIRED QUALITY


Software best practices are derived from the development process and technologies. Table 3-5
summarizes some dimensions of quality improvement.

Key practices that improve overall software quality include the following:
 Focusing on driving requirements and critical use cases early in the life cycle, focusing
on requirements completeness and traceability late in the life cycle, and focusing
throughout the life cycle on a balance between requirements evolution, design
evolution, and plan evolution
 Using metrics and indicators to measure the progress and quality of an architecture as it
evolves from a high-level prototype into a fully compliant product
 Providing integrated life-cycle environments that support early and continuous
configuration control, change management, rigorous design methods, document
automation, and regression test automation
 Using visual modeling and higher level languages that support architectural control,
abstraction, reliable programming, reuse, and self-documentation
 Early and continuous insight into performance issues through demonstration-based
evaluations


Conventional development processes stressed early sizing and timing estimates of computer
program resource utilization. However, the typical chronology of events in performance
assessment was as follows:

 Project inception. The proposed design was asserted to be low risk with adequate
performance margin.
 Initial design review. Optimistic assessments of adequate design margin were based
mostly on paper analysis or rough simulation of the critical threads. In most cases, the
actual application algorithms and database sizes were fairly well understood.
 Mid-life-cycle design review. The assessments started whittling away at the margin, as
early benchmarks and initial tests began exposing the optimism inherent in earlier
estimates.
 Integration and test. Serious performance problems were uncovered, necessitating
fundamental changes in the architecture. The underlying infrastructure was usually the
scapegoat, but the real culprit was immature use of the infrastructure, immature
architectural solutions, or poorly understood early design trade-offs.

PEER INSPECTIONS: A PRAGMATIC VIEW


Peer inspections are frequently over hyped as the key aspect of a quality system. In my
experience, peer reviews are valuable as secondary mechanisms, but they are rarely significant
contributors to quality compared with the following primary quality mechanisms and indicators,
which should be emphasized in the management process:
 Transitioning engineering information from one artifact set to another, thereby assessing the
consistency, feasibility, understandability, and technology constraints inherent in the
engineering artifacts
 Major milestone demonstrations that force the artifacts to be assessed against
tangible criteria in the context of relevant use cases
 Environment tools (compilers, debuggers, analyzers, automated test suites) that
ensure representation rigor, consistency, completeness, and change control
 Life-cycle testing for detailed insight into critical trade-offs, acceptance criteria,
and requirements compliance
 Change management metrics for objective insight into multiple-perspective
change trends and convergence or divergence from quality and progress goals
Inspections are also a good vehicle for holding authors accountable for quality products. All
authors of software and documentation should have their products scrutinized as a natural
by-product of the process. Therefore, the coverage of inspections should be across all
authors rather than across all components.

4. THE OLD WAY AND THE NEW


THE PRINCIPLES OF CONVENTIONAL SOFTWARE ENGINEERING
1. Make quality #1. Quality must be quantified and mechanisms put into place to motivate its
achievement
2. High-quality software is possible. Techniques that have been demonstrated to increase quality
include involving the customer, prototyping, simplifying design, conducting inspections, and hiring
the best people
3. Give products to customers early. No matter how hard you try to learn users' needs during the
requirements phase, the most effective way to determine real needs is to give users a product and
let them play with it
4. Determine the problem before writing the requirements. When faced with what they believe
is a problem, most engineers rush to offer a solution. Before you try to solve a problem, be sure to
explore all the alternatives and don't be blinded by the obvious solution
5. Evaluate design alternatives. After the requirements are agreed upon, you must examine a
variety of architectures and algorithms. You certainly do not want to use an architecture simply
because it was used in the requirements specification.
6. Use an appropriate process model. Each project must select a process that makes the most
sense for that project on the basis of corporate culture, willingness to take risks, application area,
volatility of requirements, and the extent to which requirements are well understood.
7. Use different languages for different phases. Our industry's eternal thirst for simple solutions
to complex problems has driven many to declare that the best development method is one that uses
the same notation throughout the life cycle.
8. Minimize intellectual distance. To minimize intellectual distance, the software's structure
should be as close as possible to the real-world structure
9. Put techniques before tools. An undisciplined software engineer with a tool becomes a
dangerous, undisciplined software engineer
10. Get it right before you make it faster. It is far easier to make a working program run faster
than it is to make a fast program work. Don't worry about optimization during initial coding
11.Inspect code. Inspecting the detailed design and code is a much better way to find errors
than testing


12. Good management is more important than good technology. Good management motivates
people to do their best, but there are no universal "right" styles of management.
13. People are the key to success. Highly skilled people with appropriate experience, talent, and
training are key.
14. Follow with care. Just because everybody is doing something does not make it right for you. It
may be right, but you must carefully assess its applicability to your environment.
15. Take responsibility. When a bridge collapses we ask, "What did the engineers do wrong?"
Even when software fails, we rarely ask this. The fact is that in any engineering discipline, the best
methods can be used to produce awful designs, and the most antiquated methods to produce elegant
designs.
16. Understand the customer's priorities. It is possible the customer would tolerate 90% of the
functionality delivered late if they could have 10% of it on time.
17. The more they see, the more they need. The more functionality (or performance) you provide
a user, the more functionality (or performance) the user wants.
18. Plan to throw one away. One of the most important critical success factors is whether or not a
product is entirely new. Such brand-new applications, architectures, interfaces, or algorithms
rarely work the first time.
19. Design for change. The architectures, components, and specification techniques you use must
accommodate change.
20. Design without documentation is not design. I have often heard software engineers say, "I
have finished the design. All that is left is the documentation."
21. Use tools, but be realistic. Software tools make their users more efficient.
22. Avoid tricks. Many programmers love to create programs with tricks: constructs that perform a
function correctly, but in an obscure way. Show the world how smart you are by avoiding tricky
code.
23. Encapsulate. Information-hiding is a simple, proven concept that results in software that
is easier to test and much easier to maintain.
24. Use coupling and cohesion. Coupling and cohesion are the best ways to measure
software's inherent maintainability and adaptability
25. Use the McCabe complexity measure. Although there are many metrics available to report
the inherent complexity of software, none is as intuitive and easy to use as Tom McCabe's
26. Don't test your own software. Software developers should never be the primary testers
of their own software.
27. Analyze causes for errors. It is far more cost-effective to reduce the effect of an error by
preventing it than it is to find and fix it. One way to do this is to analyze the causes of errors as they
are detected
28. Realize that software's entropy increases. Any software system that undergoes continuous
change will grow in complexity and will become more and more disorganized
29. People and time are not interchangeable. Measuring a project solely by person-months makes
little sense
30. Expect excellence. Your employees will do much better if you have high expectations for them.
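Principle 25 above cites McCabe's measure. As a minimal sketch: the cyclomatic complexity of a routine's control-flow graph is V(G) = E − N + 2P (edges, nodes, connected components), which works out to roughly one plus the number of decision points.

```python
# Cyclomatic complexity V(G) = E - N + 2P for a control-flow graph with
# E edges, N nodes, and P connected components (P = 1 for a single routine).

def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
    return edges - nodes + 2 * components

# Straight-line code (entry -> exit): 2 nodes, 1 edge, so V(G) = 1.
print(cyclomatic_complexity(edges=1, nodes=2))  # 1

# A single if/else (cond -> then | else -> exit): 4 nodes, 4 edges, V(G) = 2,
# matching the intuition that one decision point adds one independent path.
print(cyclomatic_complexity(edges=4, nodes=4))  # 2
```

V(G) is also the number of linearly independent paths through the routine, which is why it is a common (and intuitive) testing and maintainability indicator.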

THE PRINCIPLES OF MODERN SOFTWARE MANAGEMENT

The top 10 principles of modern software management are listed below. (The first five, which are the main
themes of my definition of an iterative process, are summarized in Figure 4-1.)

1. Base the process on an architecture-first approach. This requires that a demonstrable
balance be achieved among the driving requirements, the architecturally significant design
decisions, and the life-cycle plans before the resources are committed for full-scale
development.
2. Establish an iterative life-cycle process that confronts risk early. With today's sophisticated
software systems, it is not possible to define the entire problem, design the entire solution,
build the software, and then test the end product in sequence. Instead, an iterative process
that refines the problem understanding, an effective solution, and an effective plan over
several iterations encourages a balanced treatment of all stakeholder objectives. Major risks
must be addressed early to increase predictability and avoid expensive downstream scrap
and rework.
3. Transition design methods to emphasize component-based development. Moving from a
line-of-code mentality to a component-based mentality is necessary to reduce the amount
of human-generated source code and custom development.

4. Establish a change management environment. The dynamics of iterative development,
including concurrent workflows by different teams working on shared artifacts, necessitate
objectively controlled baselines.

5. Enhance change freedom through tools that support round-trip engineering.
Round-trip engineering is the environment support necessary to automate and synchronize
engineering information in different formats (such as requirements specifications,
design models, source code, executable code, and test cases).
6. Capture design artifacts in rigorous, model-based notation. A model-based approach
(such as UML) supports the evolution of semantically rich graphical and textual design
notations.
7. Instrument the process for objective quality control and progress assessment. Life-cycle
assessment of the progress and the quality of all intermediate products must be integrated
into the process.


8. Use a demonstration-based approach to assess intermediate artifacts.
9. Plan intermediate releases in groups of usage scenarios with evolving levels of
detail. It is essential that the software management process drive toward early and
continuous demonstrations within the operational context of the system, namely its use
cases.
10. Establish a configurable process that is economically scalable. No single process is
suitable for all software developments.

Table 4-1 maps the top 10 risks of the conventional process to the key attributes and principles
of a modern process.

TRANSITIONING TO AN ITERATIVE PROCESS


Modern software development processes have moved away from the conventional waterfall
model, in which each stage of the development process is dependent on completion of the
previous stage.
The economic benefits inherent in transitioning from the conventional waterfall model to an
iterative development process are significant but difficult to quantify. As one benchmark of the
expected economic impact of process improvement, consider the process exponent parameters of
the COCOMO II model. (Appendix B provides more detail on the COCOMO model) This
exponent can range from 1.01 (virtually no diseconomy of scale) to 1.26 (significant diseconomy
of scale). The parameters that govern the value of the process exponent are application
precedentedness, process flexibility, architecture risk resolution, team cohesion, and software
process maturity.
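The diseconomy of scale implied by the process exponent can be sketched numerically. The sketch below is a simplified reading of COCOMO II: nominal effort ≈ A · Size^E with A = 2.94 person-months (the model's published calibration), while the full model's cost-driver effort multipliers and scale-factor derivation of E are omitted.

```python
# Simplified COCOMO II nominal-effort sketch: effort grows as Size**E, where
# E ranges from about 1.01 (little diseconomy of scale) to 1.26 (significant
# diseconomy). Cost-driver effort multipliers are deliberately left out.

def effort_pm(ksloc: float, exponent: float, a: float = 2.94) -> float:
    """Nominal effort in person-months for a project of `ksloc` thousand SLOC."""
    return a * ksloc ** exponent

# Doubling size from 100 to 200 KSLOC multiplies effort by 2**E, so the
# penalty for doubling grows with the exponent:
for e in (1.01, 1.26):
    ratio = effort_pm(200, e) / effort_pm(100, e)
    print(f"E = {e}: doubling size multiplies effort by {ratio:.2f}x")
```

This is the economic benchmark the text refers to: process improvements that lower the exponent reduce the diseconomy of scale, and the savings grow with project size.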
The following paragraphs map the process exponent parameters of COCOMO II to my top
10 principles of a modern process.

 Application precedentedness. Domain experience is a critical factor in understanding how


to plan and execute a software development project. For unprecedented systems, one of the
key goals is to confront risks and establish early precedents, even if they are incomplete or
experimental. This is one of the primary reasons that the software industry has moved to an
iterative life-cycle process. Early iterations in the life cycle establish precedents from which
the product, the process, and the plans can be elaborated in evolving levels of detail.
 Process flexibility. Development of modern software is characterized by such a broad
solution space and so many interrelated concerns that there is a paramount need for
continuous incorporation of changes. These changes may be inherent in the problem
understanding, the solution space, or the plans. Project artifacts must be supported by
efficient change management commensurate with project needs. A configurable process
that allows a common framework to be adapted across a range of projects is necessary to
achieve a software return on investment.
 Architecture risk resolution. Architecture-first development is a crucial theme underlying
a successful iterative development process. A project team develops and stabilizes
architecture before developing all the components that make up the entire suite of
applications components. An architecture-first and component-based development
approach forces the infrastructure, common mechanisms, and control mechanisms to be
elaborated early in the life cycle and drives all component make/buy decisions into the
architecture process.
 Team cohesion. Successful teams are cohesive, and cohesive teams are successful.
Successful teams and cohesive teams share common objectives and priorities. Advances in
technology (such as programming languages, UML, and visual modeling) have enabled
more rigorous and understandable notations for communicating software engineering
information, particularly in the requirements and design artifacts that previously were ad
hoc and based completely on paper exchange. These model-based formats have also
enabled the round-trip engineering support needed to establish change freedom sufficient
for evolving design representations.
 Software process maturity. The Software Engineering Institute's Capability Maturity
Model (CMM) is a well-accepted benchmark for software process assessment. One of its key
themes is that truly mature processes are enabled through an integrated environment that
provides the appropriate level of automation to instrument the process for objective quality
control.

Life cycle phases: Engineering and production stages; inception, elaboration,
construction, and transition phases.
Artifacts of the process: The artifact sets; management artifacts, engineering
artifacts, and pragmatic artifacts.


5. Life cycle phases


A characteristic of a successful software development process is the well-defined separation
between "research and development" activities and "production" activities. Most unsuccessful
projects exhibit one of the following characteristics:
 An overemphasis on research and development
 An overemphasis on production.
Successful modern projects (and even successful projects developed under the
conventional process) tend to have a very well-defined project milestone when there
is a noticeable transition from a research attitude to a production attitude. Earlier
phases focus on achieving functionality. Later phases revolve around achieving a
product that can be shipped to a customer, with explicit attention to robustness,
performance, and finish.

A modern software development process must be defined to support the following:


 Evolution of the plans, requirements, and architecture, together with well
defined synchronization points
 Risk management and objective measures of progress and quality
 Evolution of system capabilities through demonstrations of increasing functionality

ENGINEERING AND PRODUCTION STAGES

To achieve economies of scale and higher returns on investment, we must move toward a
software manufacturing process driven by technological improvements in process automation
and component-based development. Two stages of the life cycle are:

1. The engineering stage, driven by less predictable but smaller teams doing design
and synthesis activities
2. The production stage, driven by more predictable but larger teams
doing construction, test, and deployment activities


The transition between engineering and production is a crucial event for the
various stakeholders. The production plan has been agreed upon, and there is a
good enough understanding of the problem and the solution that all stakeholders
can make a firm commitment to go ahead with production.
The engineering stage is decomposed into two distinct phases, inception and
elaboration, and the production stage into construction and transition. These four
phases of the life-cycle process are loosely mapped to the conceptual framework of
the spiral model, as shown in Figure 5-1.

INCEPTION PHASE
The overriding goal of the inception phase is to achieve concurrence among stakeholders on
the life-cycle objectives for the project.

PRIMARY OBJECTIVES
 Establishing the project's software scope and boundary conditions, including an
operational concept, acceptance criteria, and a clear understanding of what is and is not
intended to be in the product
 Discriminating the critical use cases of the system and the primary scenarios
of operation that will drive the major design trade-offs
 Demonstrating at least one candidate architecture against some of the
primary scenarios
 Estimating the cost and schedule for the entire project (including detailed estimates
for the elaboration phase)
 Estimating potential risks (sources of unpredictability)

ESSENTIAL ACTIVITIES
 Formulating the scope of the project. The information repository should be
sufficient to define the problem space and derive the acceptance criteria for the end
product.
 Synthesizing the architecture. An information repository is created that is sufficient to
demonstrate the feasibility of at least one candidate architecture and an initial baseline
of make/buy decisions so that the cost, schedule, and resource estimates can be derived.
 Planning and preparing a business case. Alternatives for risk management, staffing,
iteration plans, and cost/schedule/profitability trade-offs are evaluated.
PRIMARY EVALUATION CRITERIA
 Do all stakeholders concur on the scope definition and cost and schedule estimates?
 Are requirements understood, as evidenced by the fidelity of the critical use cases?
 Are the cost and schedule estimates, priorities, risks, and development processes
credible?
 Do the depth and breadth of an architecture prototype demonstrate the preceding
criteria? (The primary value of prototyping candidate architecture is to provide a
vehicle for understanding the scope and assessing the credibility of the development
group in solving the particular technical problem.)
 Are actual resource expenditures versus planned expenditures acceptable?

ELABORATION PHASE

At the end of this phase, the "engineering" is considered complete. The elaboration phase
activities must ensure that the architecture, requirements, and plans are stable enough, and the
risks sufficiently mitigated, that the cost and schedule for the completion of the development can
be predicted within an acceptable range. During the elaboration phase, an executable architecture
prototype is built in one or more iterations, depending on the scope, size, and risk.
PRIMARY OBJECTIVES
 Baselining the architecture as rapidly as practical (establishing a configuration-
managed snapshot in which all changes are rationalized, tracked, and maintained)
 Baselining the vision
 Baselining a high-fidelity plan for the construction phase
 Demonstrating that the baseline architecture will support the vision at a reasonable cost in
a reasonable time

ESSENTIAL ACTIVITIES
 Elaborating the vision.
 Elaborating the process and infrastructure.
 Elaborating the architecture and selecting components.

PRIMARY EVALUATION CRITERIA


 Is the vision stable?
 Is the architecture stable?
 Does the executable demonstration show that the major risk elements have been
addressed and credibly resolved?


 Is the construction phase plan of sufficient fidelity, and is it backed up with a credible
basis of estimate?
 Do all stakeholders agree that the current vision can be met if the current plan is
executed to develop the complete system in the context of the current architecture?
 Are actual resource expenditures versus planned expenditures acceptable?

CONSTRUCTION PHASE
During the construction phase, all remaining components and application features are integrated into the
application, and all features are thoroughly tested. Newly developed software is integrated where required. The
construction phase represents a production process, in which emphasis is placed on managing resources and
controlling operations to optimize costs, schedules, and quality.

PRIMARY OBJECTIVES
 Minimizing development costs by optimizing resources and avoiding
unnecessary scrap and rework
 Achieving adequate quality as rapidly as practical
 Achieving useful versions (alpha, beta, and other test releases) as rapidly as practical

ESSENTIAL ACTIVITIES
 Resource management, control, and process optimization
 Complete component development and testing against evaluation criteria
 Assessment of product releases against acceptance criteria of the vision

PRIMARY EVALUATION CRITERIA


 Is this product baseline mature enough to be deployed in the user community?
(Existing defects are not obstacles to achieving the purpose of the next
release.)
 Is this product baseline stable enough to be deployed in the user community?
(Pending changes are not obstacles to achieving the purpose of the next release.)
 Are the stakeholders ready for transition to the user community?
 Are actual resource expenditures versus planned expenditures acceptable?

TRANSITION PHASE
The transition phase is entered when a baseline is mature enough to be deployed in the end-user
domain. This typically requires that a usable subset of the system has been achieved with
acceptable quality levels and user documentation so that transition to the user will provide
positive results. This phase could include any of the following activities:

1. Beta testing to validate the new system against user expectations


2. Beta testing and parallel operation relative to a legacy system it is replacing
3. Conversion of operational databases


4. Training of users and maintainers


The transition phase concludes when the deployment baseline has achieved the complete vision.

PRIMARY OBJECTIVES
 Achieving user self-supportability
 Achieving stakeholder concurrence that deployment baselines are complete
and consistent with the evaluation criteria of the vision
 Achieving final product baselines as rapidly and cost-effectively as practical

ESSENTIAL ACTIVITIES
 Synchronization and integration of concurrent construction increments into
consistent deployment baselines
 Deployment-specific engineering (cutover, commercial packaging and
production, sales rollout kit development, field personnel training)
 Assessment of deployment baselines against the complete vision and
acceptance criteria in the requirements set

EVALUATION CRITERIA
 Is the user satisfied?
 Are actual resource expenditures versus planned expenditures acceptable?

6. Artifacts of the process

THE ARTIFACT SETS


To make the development of a complete software system manageable, distinct collections of
information are organized into artifact sets. An artifact represents cohesive information that
typically is developed and reviewed as a single entity.
Life-cycle software artifacts are organized into five distinct sets that are roughly partitioned
by the underlying language of the set: management (ad hoc textual formats), requirements
(organized text and models of the problem space), design (models of the solution space),
implementation (human-readable programming language and associated source files), and
deployment (machine-processable languages and associated files). The artifact sets are shown in
Figure 6-1.


THE MANAGEMENT SET


The management set captures the artifacts associated with process planning and
execution. These artifacts use ad hoc notations, including text, graphics, or
whatever representation is required to capture the "contracts" among project
personnel (project management, architects, developers, testers, marketers,
administrators), among stakeholders (funding authority, user, software project
manager, organization manager, regulatory agency), and between project
personnel and stakeholders. Specific artifacts included in this set are the work
breakdown structure (activity breakdown and financial tracking mechanism), the
business case (cost, schedule, profit expectations), the release specifications
(scope, plan, objectives for release baselines), the software development plan
(project process instance), the release descriptions (results of release baselines),
the status assessments (periodic snapshots of project progress), the software
change orders (descriptions of discrete baseline changes), the deployment
documents (cutover plan, training course, sales rollout kit), and the environment
(hardware and software tools, process automation, and documentation).
Management set artifacts are evaluated, assessed, and measured through a combination of the
following:
 Relevant stakeholder review
 Analysis of changes between the current version of the artifact and previous versions
 Major milestone demonstrations of the balance among all artifacts and, in
particular, the accuracy of the business case and vision artifacts

THE ENGINEERING SETS


The engineering sets consist of the requirements set, the design set, the implementation set,
and the deployment set.

Requirements Set
Requirements artifacts are evaluated, assessed, and measured through a combination of the
following:

 Analysis of consistency with the release specifications of the management set


 Analysis of consistency between the vision and the requirements models
 Mapping against the design, implementation, and deployment sets to evaluate the
consistency and completeness and the semantic balance between information in the
different sets
 Analysis of changes between the current version of requirements artifacts and
previous versions (scrap, rework, and defect elimination trends)
 Subjective review of other dimensions of quality

Design Set
UML notation is used to engineer the design models for the solution. The design set
contains varying levels of abstraction that represent the components of the solution space (their
identities, attributes, static relationships, dynamic interactions). The design set is evaluated,
assessed, and measured through a combination of the following:
 Analysis of the internal consistency and quality of the design model
 Analysis of consistency with the requirements models
 Translation into implementation and deployment sets and notations (for example,
traceability, source code generation, compilation, linking) to evaluate the consistency
and completeness and the semantic balance between information in the sets
 Analysis of changes between the current version of the design model and previous
versions (scrap, rework, and defect elimination trends)
 Subjective review of other dimensions of quality
Implementation Set
The implementation set includes source code (programming language notations) that represents
the tangible implementations of components (their form, interface, and dependency
relationships).
Implementation sets are human-readable formats that are evaluated, assessed, and measured
through a combination of the following:
 Analysis of consistency with the design models
 Translation into deployment set notations (for example, compilation and linking)
to evaluate the consistency and completeness among artifact sets
 Assessment of component source or executable files against relevant
evaluation criteria through inspection, analysis, demonstration, or testing
 Execution of stand-alone component test cases that automatically compare
expected results with actual results
 Analysis of changes between the current version of the implementation set
and previous versions (scrap, rework, and defect elimination trends)
 Subjective review of other dimensions of quality
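The "stand-alone component test cases that automatically compare expected results with actual results" mentioned above can be sketched as a minimal self-checking test driver. This is an illustrative sketch in Python; the component under test (`parse_record`) and its test data are hypothetical, not part of the original text:

```python
# Hypothetical component under test: parses a "key=value" record line.
def parse_record(line):
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

# Stand-alone test driver: each case pairs an input with its expected
# result, and the driver compares expected against actual automatically.
TEST_CASES = [
    ("name = sensor-1", ("name", "sensor-1")),
    ("rate=44100", ("rate", "44100")),
]

def run_tests():
    """Return the list of failing cases (empty means all passed)."""
    failures = []
    for line, expected in TEST_CASES:
        actual = parse_record(line)
        if actual != expected:
            failures.append((line, expected, actual))
    return failures

print("PASS" if not run_tests() else "FAIL")  # prints "PASS"
```

The output of such drivers is what the text calls the equivalent of a test report: pass/fail results produced by the same tools and notations used for the product itself.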


Deployment Set
The deployment set includes user deliverables and machine language notations, executable
software, and the build scripts, installation scripts, and executable target specific data necessary
to use the product in its target environment.
Deployment sets are evaluated, assessed, and measured through a combination of the following:
 Testing against the usage scenarios and quality attributes defined in the requirements
set to evaluate the consistency and completeness and the semantic balance between
information in the two sets
 Testing the partitioning, replication, and allocation strategies in mapping components
of the implementation set to physical resources of the deployment system (platform
type, number, network topology)
 Testing against the defined usage scenarios in the user manual such as installation,
user- oriented dynamic reconfiguration, mainstream usage, and anomaly management
 Analysis of changes between the current version of the deployment set and previous
versions (defect elimination trends, performance changes)
 Subjective review of other dimensions of quality
Each artifact set is the predominant development focus of one phase of the life cycle; the other
sets take on check and balance roles. As illustrated in Figure 6-2, each phase has a predominant
focus: Requirements are the focus of the inception phase; design, the elaboration phase;
implementation, the construction phase; and deployment, the transition phase. The management
artifacts also evolve, but at a fairly constant level across the life cycle.
Most of today's software development tools map closely to one of the five artifact sets.
1. Management: scheduling, workflow, defect tracking, change
management, documentation, spreadsheet, resource management, and
presentation tools
2. Requirements: requirements management tools
3. Design: visual modeling tools
4. Implementation: compiler/debugger tools, code analysis tools, test coverage
analysis tools, and test management tools
5. Deployment: test coverage and test automation tools, network management tools,
commercial components (operating systems, GUIs, RDBMS, networks, middleware), and
installation tools.


Implementation Set versus Deployment Set


The separation of the implementation set (source code) from the deployment set (executable
code) is important because there are very different concerns with each set. The structure of the
information delivered to the user (and typically the test organization) is very different from the
structure of the source code information. Engineering decisions that have an impact on the
quality of the deployment set but are relatively incomprehensible in the design and
implementation sets include the following:
 Dynamically reconfigurable parameters (buffer sizes, color palettes, number of
servers, number of simultaneous clients, data files, run-time parameters)
 Effects of compiler/link optimizations (such as space optimization versus
speed optimization)
 Performance under certain allocation strategies (centralized versus distributed,
primary and shadow threads, dynamic load balancing, hot backup versus
checkpoint/rollback)
 Virtual machine constraints (file descriptors, garbage collection, heap size,
maximum record size, disk file rotations)
 Process-level concurrency issues (deadlock and race conditions)
 Platform-specific differences in performance or behavior
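The first item in the list above, dynamically reconfigurable parameters, is a deployment-set concern precisely because such values live outside the source code. A minimal sketch of the idea, assuming a hypothetical JSON override file and parameter names:

```python
import json

# Defaults compiled into the implementation set; the deployment set may
# override any of them via a configuration file shipped with the release.
DEFAULTS = {"buffer_size": 4096, "max_clients": 32, "num_servers": 1}

def load_runtime_config(path=None):
    """Merge deployment-time overrides (if any) over built-in defaults."""
    config = dict(DEFAULTS)
    if path is not None:
        with open(path) as f:
            config.update(json.load(f))  # overrides win over defaults
    return config

# With no override file present, the built-in defaults apply unchanged.
config = load_runtime_config()
print(config["buffer_size"])  # prints 4096
```

Because these values are resolved only at deployment time, their effects are visible in the deployed system's behavior but are "relatively incomprehensible" when reading the design or implementation sets alone.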

ARTIFACT EVOLUTION OVER THE LIFE CYCLE


Each state of development represents a certain amount of precision in the final system
description. Early in the life cycle, precision is low and the representation is general.
Eventually, the precision of representation is high and everything is specified in full detail. Each
phase of development focuses on a particular artifact set. At the end of each phase, the overall
system state will have progressed on all sets, as illustrated in Figure 6-3.


The inception phase focuses mainly on critical requirements, usually with a
secondary focus on an initial deployment view. During the elaboration phase, there
is much greater depth in requirements, much more breadth in the design set, and
further work on implementation and deployment issues. The main focus of the
construction phase is design and implementation. The main focus of the transition
phase is on achieving consistency and completeness of the deployment set in the
context of the other sets.

TEST ARTIFACTS
 The test artifacts must be developed concurrently with the product from inception
through deployment. Thus, testing is a full-life-cycle activity, not a late life-cycle
activity.
 The test artifacts are communicated, engineered, and developed within the
same artifact sets as the developed product.
 The test artifacts are implemented in programmable and repeatable formats
(as software programs).
 The test artifacts are documented in the same way that the product is documented.
 Developers of the test artifacts use the same tools, techniques, and training as
the software engineers developing the product.
Test artifact subsets are highly project-specific; the following example clarifies the relationship
between test artifacts and the other artifact sets. Consider a project to perform seismic data
processing for the purpose of oil exploration. This system has three fundamental subsystems: (1)
a sensor subsystem that captures raw seismic data in real time and delivers these data to (2) a
technical operations subsystem that converts raw data into an organized database and manages
queries to this database from (3) a display subsystem that allows workstation operators to
examine seismic data in human-readable form. Such a system would result in the following test
artifacts:

 Management set. The release specifications and release descriptions capture the
objectives, evaluation criteria, and results of an intermediate milestone. These
artifacts are the test plans and test results negotiated among internal project teams. The
software change orders capture test results (defects, testability changes, requirements
ambiguities, enhancements) and the closure criteria associated with making a discrete
change to a baseline.
 Requirements set. The system-level use cases capture the operational concept for the
system and the acceptance test case descriptions, including the expected behavior of
the system and its quality attributes. The entire requirement set is a test artifact
because it is the basis of all assessment activities across the life cycle.
 Design set. A test model for nondeliverable components needed to test the product
baselines is captured in the design set. These components include such design set
artifacts as a seismic event simulation for creating realistic sensor data; a "virtual
operator" that can support unattended, after-hours test cases; specific instrumentation
suites for early demonstration of resource usage; transaction rates or response times;
and use case test drivers and component stand-alone test drivers.
 Implementation set. Self-documenting source code representations for test
components and test drivers provide the equivalent of test procedures and test scripts.
These source files may also include human-readable data files representing certain
statically defined data sets that are explicit test source files. Output files from test
drivers provide the equivalent of test reports.
 Deployment set. Executable versions of test components, test drivers, and data
files are provided.

MANAGEMENT ARTIFACTS
The management set includes several artifacts that capture intermediate results and
ancillary information necessary to document the product/process legacy, maintain the
product, improve the product, and improve the process.

Business Case
The business case artifact provides all the information necessary to determine whether the
project is worth investing in. It details the expected revenue, expected cost, technical and
management plans, and backup data necessary to demonstrate the risks and realism of the
plans. The main purpose is to transform the vision into economic terms so that an
organization can make an accurate ROI assessment. The financial forecasts are
evolutionary, updated with more accurate forecasts as the life cycle progresses. Figure 6-4
provides a default outline for a business case.
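The business case's job of "transforming the vision into economic terms" can be reduced, in the simplest case, to an ROI forecast over expected cost and revenue. A toy illustration with hypothetical figures (not from the original text):

```python
def roi(expected_revenue, expected_cost):
    """Return on investment expressed as a fraction of cost."""
    return (expected_revenue - expected_cost) / expected_cost

# Hypothetical evolutionary forecast: the business case is updated with
# more accurate estimates at each major milestone of the life cycle.
inception_forecast = roi(expected_revenue=1_500_000, expected_cost=1_000_000)
print(f"Inception-phase ROI forecast: {inception_forecast:.0%}")  # prints 50%
```

The point of the artifact is not the arithmetic but that cost, schedule, and profitability assumptions are recorded explicitly, so later forecasts can be compared against earlier ones as the life cycle progresses.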

Software Development Plan


The software development plan (SDP) elaborates the process framework into a fully
detailed plan. Two indications of a useful SDP are periodic updating (it is not stagnant
shelfware) and understanding and acceptance by managers and practitioners alike. Figure 6-
5 provides a default outline for a software development plan.


Work Breakdown Structure


Work breakdown structure (WBS) is the vehicle for budgeting and collecting costs.
To monitor and control a project's financial performance, the software project
manager must have insight into project costs and how they are expended. The
structure of cost accountability is a serious project planning constraint.
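Since a WBS is a hierarchy of activities with budgets carried at the leaves and rolled up for financial tracking, it can be sketched as a simple recursive structure. Element names and figures below are hypothetical, for illustration only:

```python
# A WBS element is (name, own_budget, children); budgets are carried at
# the leaves and rolled up through the hierarchy for cost collection.
wbs = ("project", 0, [
    ("management", 50, []),
    ("engineering", 0, [
        ("design", 120, []),
        ("implementation", 200, []),
    ]),
])

def rollup(element):
    """Total cost of an element: its own budget plus all descendants'."""
    name, budget, children = element
    return budget + sum(rollup(child) for child in children)

print(rollup(wbs))  # prints 370
```

Each node in the hierarchy is a unit of cost accountability, which is why the shape of the WBS constrains how a project's finances can later be monitored.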

Software Change Order Database


Managing change is one of the fundamental primitives of an iterative development process.
With greater change freedom, a project can iterate more productively. This flexibility increases
the content, quality, and number of iterations that a project can achieve within a given schedule.
Change freedom has been achieved in practice through automation, and today's iterative
development environments carry the burden of change management. Organizational processes
that depend on manual change management techniques have encountered major inefficiencies.
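An automated change order database stores each discrete baseline change as a record whose state transitions are tracked by the tooling rather than by hand. A minimal sketch; the fields shown are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class ChangeOrder:
    """One discrete change proposed against a release baseline."""
    id: int
    description: str
    category: str            # e.g. defect, enhancement, testability change
    status: str = "open"     # open -> in_work -> closed
    history: list = field(default_factory=list)

    def transition(self, new_status):
        # Record the prior state so the change's lifecycle is auditable.
        self.history.append(self.status)
        self.status = new_status

sco = ChangeOrder(1, "Fix buffer overrun in sensor reader", "defect")
sco.transition("in_work")
sco.transition("closed")
print(sco.status)  # prints closed
```

Because every transition is captured automatically, the database can later yield the scrap, rework, and defect-elimination trends that the artifact evaluations above depend on.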

Release Specifications
The scope, plan, and objective evaluation criteria for each baseline release are
derived from the vision statement as well as many other sources (make/buy
analyses, risk management concerns, architectural considerations, shots in the
dark, implementation constraints, quality thresholds). These artifacts are intended
to evolve along with the process, achieving greater fidelity as the life cycle
progresses and requirements understanding matures. Figure 6-6 provides a default
outline for a release specification

Release Descriptions
Release description documents describe the results of each release, including performance
against each of the evaluation criteria in the corresponding release specification. Release
baselines should be accompanied by a release description document that describes the evaluation
criteria for that configuration baseline and provides substantiation (through demonstration,
testing, inspection, or analysis) that each criterion has been addressed in an acceptable manner.
Figure 6-7 provides a default outline for a release description.
Status Assessments

Status assessments provide periodic snapshots of project health and status,
including the software project manager's risk assessment, quality indicators, and
management indicators. Typical status assessments should include a review of
resources, personnel staffing, financial data (cost and revenue), top 10 risks,
technical progress (metrics snapshots), major milestone plans and results, total
project or product scope, and action items.
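The recurring evaluation question "are actual resource expenditures versus planned expenditures acceptable?" amounts to a variance check at each status assessment. A sketch with a hypothetical tolerance and figures:

```python
def expenditure_variance(actual, planned):
    """Fractional deviation of actual spend from plan (positive = overrun)."""
    return (actual - planned) / planned

def acceptable(actual, planned, tolerance=0.10):
    """True if spending is within +/- tolerance of the plan."""
    return abs(expenditure_variance(actual, planned)) <= tolerance

# Hypothetical monthly snapshot reviewed at a status assessment.
print(acceptable(actual=108_000, planned=100_000))  # within 10%: True
print(acceptable(actual=125_000, planned=100_000))  # 25% overrun: False
```

The tolerance itself is a management judgment; the value of the metric is that it is computed the same way at every snapshot, so trends are comparable across the life cycle.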

Environment
An important emphasis of a modern approach is to define the development and
maintenance environment as a first-class artifact of the process. A robust,
integrated development environment must support automation of the development
process. This environment should include requirements management, visual
modeling, document automation, host and target programming tools, automated
regression testing, continuous and integrated change management, and feature
and defect tracking.

Deployment
A deployment document can take many forms. Depending on the project, it could include several
document subsets for transitioning the product into operational status. In big contractual efforts
in which the system is delivered to a separate maintenance organization, deployment artifacts
may include computer system operations manuals, software installation manuals, plans and
procedures for cutover (from a legacy system), site surveys, and so forth. For commercial
software products, deployment artifacts may include marketing plans, sales rollout kits, and
training courses.

Management Artifact Sequences


In each phase of the life cycle, new artifacts are produced and previously developed artifacts are
updated to incorporate lessons learned and to capture further depth and breadth of the solution.
Figure 6-8 identifies a typical sequence of artifacts across the life-cycle phases.


ENGINEERING ARTIFACTS
Most of the engineering artifacts are captured in rigorous engineering notations such as UML,
programming languages, or executable machine codes. Three engineering artifacts are explicitly
intended for more general review, and they deserve further elaboration.

Vision Document
The vision document provides a complete vision for the software system under development and
supports the contract between the funding authority and the development organization. A project
vision is meant to be changeable as understanding of the requirements, architecture,
plans, and technology evolves. A good vision document should change slowly. Figure 6-9
provides a default outline for a vision document.

Architecture Description
The architecture description provides an organized view of the software architecture under
development. It is extracted largely from the design model and includes views of the design,
implementation, and deployment sets sufficient to understand how the operational concept of the
requirements set will be achieved. The breadth of the architecture description will vary from
project to project depending on many factors. Figure 6-10 provides a default outline for an
architecture description.


Software User Manual


The software user manual provides the user with the reference documentation
necessary to support the delivered software. Although content is highly variable
across application domains, the user manual should include installation procedures,
usage procedures and guidance, operational constraints, and a user interface
description, at a minimum. For software products with a user interface, this manual
should be developed early in the life cycle because it is a necessary mechanism for
communicating and stabilizing an important subset of requirements. The user
manual should be written by members of the test team, who are more likely to
understand the user's perspective than the development team.
PRAGMATIC ARTIFACTS

 People want to review information but don't understand the language of the artifact.
Many interested reviewers of a particular artifact will resist having to learn the engineering
language in which the artifact is written. It is not uncommon to find people (such as veteran
software managers, veteran quality assurance specialists, or an auditing authority from a regula-
tory agency) who react as follows: "I'm not going to learn UML, but I want to review the design
of this software, so give me a separate description such as some flowcharts and text that I can
understand."
 People want to review the information but don't have access to the tools. It is not very
common for the development organization to be fully tooled; it is extremely rare that the other
stakeholders have any capability to review the engineering artifacts on-line. Consequently,
organizations are forced to exchange paper documents. Standardized formats (such as UML,
spreadsheets, Visual Basic, C++, and Ada 95), visualization tools, and the Web are rapidly
making it economically feasible for all stakeholders to exchange information electronically.


 Human-readable engineering artifacts should use rigorous notations that are complete,
consistent, and used in a self-documenting manner. Properly spelled English words should be
used for all identifiers and descriptions. Acronyms and abbreviations should be used only where
they are well accepted jargon in the context of the component's usage. Readability should be
emphasized and the use of proper English words should be required in all engineering artifacts.
This practice enables understandable representations, browsable formats (paperless review),
more-rigorous notations, and reduced error rates.
 Useful documentation is self-defining: It is documentation that gets used.
 Paper is tangible; electronic artifacts are too easy to change. On-line and Web-based

artifacts can be changed easily and are viewed with more skepticism because of their inherent
volatility.

7. Model-based software architecture


ARCHITECTURE: A MANAGEMENT PERSPECTIVE
The most critical technical product of a software project is its architecture: the infrastructure,
control, and data interfaces that permit software components to cooperate as a system and
software designers to cooperate efficiently as a team. When the communications media include
multiple languages and intergroup literacy varies, the communications problem can become
extremely complex and even unsolvable. If a software development team is to be successful, the
interproject communications, as captured in the software architecture, must be both accurate
and precise.
From a management perspective, there are three different aspects of architecture.
1. An architecture (the intangible design concept) is the design of a software system; this
includes all engineering necessary to specify a complete bill of materials.
2. An architecture baseline (the tangible artifacts) is a slice of information across the
engineering artifact sets sufficient to satisfy all stakeholders that the vision (function
and quality) can be achieved within the parameters of the business case (cost, profit,
time, technology, and people).
3. An architecture description (a human-readable representation of an architecture,
which is one of the components of an architecture baseline) is an organized subset of
information extracted from the design set model(s). The architecture description
communicates how the intangible concept is realized in the tangible artifacts.
The number of views and the level of detail in each view can vary widely.
The importance of software architecture and its close linkage with modern
software development processes can be summarized as follows:
 Achieving a stable software architecture represents a significant project milestone at
which the critical make/buy decisions should have been resolved.
 Architecture representations provide a basis for balancing the trade-offs between the
problem space (requirements and constraints) and the solution space (the operational
product).
 The architecture and process encapsulate many of the important (high-payoff or high-
risk) communications among individuals, teams, organizations, and stakeholders.
 Poor architectures and immature processes are often given as reasons for project failures.
 A mature process, an understanding of the primary requirements, and a demonstrable
architecture are important prerequisites for predictable planning.
 Architecture development and process definition are the intellectual steps that map the
problem to a solution without violating the constraints; they require human innovation
and cannot be automated.

Model-based software architecture:

ARCHITECTURE: A TECHNICAL PERSPECTIVE


An architecture framework is defined in terms of views that are abstractions of the UML models
in the design set. The design model includes the full breadth and depth of information. An
architecture view is an abstraction of the design model; it contains only the architecturally
significant information. Most real-world systems require four views: design, process, component,
and deployment. The purposes of these views are as follows:
 Design: describes architecturally significant structures and functions of the
design model
 Process: describes concurrency and control thread relationships among the
design, component, and deployment views
 Component: describes the structure of the implementation set
 Deployment: describes the structure of the deployment set
Figure 7-1 summarizes the artifacts of the design set, including the architecture views and
architecture description.
The requirements model addresses the behavior of the system as seen by its end users, analysts,
and testers. This view is modeled statically using use case and class diagrams, and dynamically
using sequence, collaboration, state chart, and activity diagrams.
 The use case view describes how the system's critical (architecturally significant) use
cases are realized by elements of the design model. It is modeled statically using use
case diagrams, and dynamically using any of the UML behavioral diagrams.
 The design view describes the architecturally significant elements of the design model.
This view, an abstraction of the design model, addresses the basic structure and
functionality of the solution. It is modeled statically using class and object diagrams,
and dynamically using any of the UML behavioral diagrams.
 The process view addresses the run-time collaboration issues involved in executing the
architecture on a distributed deployment model, including the logical software
network topology (allocation to processes and threads of control), interprocess
communication, and state management. This view is modeled statically using
deployment diagrams, and dynamically using any of the UML behavioral diagrams.
 The component view describes the architecturally significant elements of the
implementation set. This view, an abstraction of the design model, addresses the
software source code realization of the system from the perspective of the project's
integrators and developers, especially with regard to releases and configuration
management. It is modeled statically using component diagrams, and dynamically
using any of the UML behavioral diagrams.


 The deployment view addresses the executable realization of the system, including the
allocation of logical processes in the distribution view (the logical software topology)
to physical resources of the deployment network (the physical system topology). It is
modeled statically using deployment diagrams, and dynamically using any of the
UML behavioral diagrams.
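The relationship between the full design model and an architecture view described above can be sketched in code: a view is simply a filter that keeps only the architecturally significant elements. This is an illustrative toy model under stated assumptions; the class names and tags are invented, not part of UML or any real modeling tool.

```python
# Sketch: an architecture view as an abstraction (filter) of the design model.
# All names here are illustrative, not from any real modeling tool.

from dataclasses import dataclass, field

@dataclass
class DesignElement:
    name: str
    kind: str                       # e.g. "class", "process", "component", "node"
    significant_in: set = field(default_factory=set)  # views where it matters

def architecture_view(design_model, view):
    """Return only the elements that are architecturally significant in `view`."""
    return [e for e in design_model if view in e.significant_in]

design_model = [
    DesignElement("OrderService", "class", {"design", "component"}),
    DesignElement("order_db_node", "node", {"deployment"}),
    DesignElement("billing_thread", "process", {"process"}),
    DesignElement("helper_util", "class", set()),  # not architecturally significant
]

# The design view is much smaller than the full design model.
print([e.name for e in architecture_view(design_model, "design")])  # prints ['OrderService']
```

The point of the sketch is the asymmetry: the design model carries everything, while each view carries only what matters to that perspective.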
Generally, an architecture baseline should include the following:
 Requirements: critical use cases, system-level quality objectives, and
priority relationships among features and qualities
 Design: names, attributes, structures, behaviors, groupings, and relationships
of significant classes and components
 Implementation: source component inventory and bill of materials (number,
name, purpose, cost) of all primitive components
 Deployment: executable components sufficient to demonstrate the critical use
cases and the risk associated with achieving the system qualities


UNIT III
Workflows and Checkpoints of process
Software process workflows, Iteration workflows, Major milestones, minor milestones, periodic
status assessments.
Process Planning
Work breakdown structures, Planning guidelines, cost and schedule estimating process, iteration
planning process, Pragmatic planning.

Work Flows of the process: Software process workflows, Iteration workflows.

8. Workflow of the process


SOFTWARE PROCESS WORKFLOWS
The term WORKFLOWS is used to mean a thread of cohesive and mostly sequential activities.
Workflows are mapped to product artifacts. There are seven top-level workflows:
1. Management workflow: controlling the process and ensuring win conditions for
all stakeholders
2. Environment workflow: automating the process and evolving the
maintenance environment
3. Requirements workflow: analyzing the problem space and evolving the
requirements artifacts
4. Design workflow: modeling the solution and evolving the architecture and
design artifacts
5. Implementation workflow: programming the components and evolving the
implementation and deployment artifacts
6. Assessment workflow: assessing the trends in process and product quality
7. Deployment workflow: transitioning the end products to the user
Figure 8-1 illustrates the relative levels of effort expected across the phases in each of the top-
level workflows.


Table 8-1 shows the allocation of artifacts and the emphasis of each workflow in each of the life-
cycle phases of inception, elaboration, construction, and transition.


ITERATION WORKFLOWS
An iteration consists of a loosely sequential set of activities in various proportions, depending on
where the iteration is located in the development cycle. Each iteration is defined in terms of a set
of allocated usage scenarios. An individual iteration's workflow, illustrated in Figure 8-2,
generally includes the following sequence:
 Management: iteration planning to determine the content of the release and develop the
detailed plan for the iteration; assignment of work packages, or tasks, to the
development team
 Environment: evolving the software change order database to reflect all new baselines
and changes to existing baselines for all product, test, and environment components

 Requirements: analyzing the baseline plan, the baseline architecture, and the baseline
requirements set artifacts to fully elaborate the use cases to be demonstrated at the
end of this iteration and their evaluation criteria; updating any requirements set
artifacts to reflect changes necessitated by results of this iteration's engineering
activities
 Design: evolving the baseline architecture and the baseline design set artifacts to
elaborate fully the design model and test model components necessary to demonstrate
against the evaluation criteria allocated to this iteration; updating design set artifacts


to reflect changes necessitated by the results of this iteration's engineering activities


 Implementation: developing or acquiring any new components, and enhancing or
modifying any existing components, to demonstrate the evaluation criteria allocated
to this iteration; integrating and testing all new and modified components with
existing baselines (previous versions)

 Assessment: evaluating the results of the iteration, including compliance with the
allocated evaluation criteria and the quality of the current baselines; identifying any
rework required and determining whether it should be performed before deployment
of this release or allocated to the next release; assessing results to improve the basis of
the subsequent iteration's plan
 Deployment: transitioning the release either to an external organization (such as a
user, independent verification and validation contractor, or regulatory agency) or to
internal closure by conducting a post-mortem so that lessons learned can be captured
and reflected in the next iteration
Iterations in the inception and elaboration phases focus on management, requirements, and
design activities. Iterations in the construction phase focus on design, implementation, and
assessment. Iterations in the transition phase focus on assessment and deployment. Figure 8-3
shows the emphasis on different activities across the life cycle. An iteration represents the state
of the overall architecture and the complete deliverable system. An increment represents the
current progress that will be combined with the preceding iteration to form the next iteration.
Figure 8-4, an example of a simple development life cycle, illustrates the differences between
iterations and increments.
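The iteration/increment relationship described above can be sketched with sets of components; the component names below are invented for illustration.

```python
# Sketch: iteration N+1 = iteration N combined with the increment produced since N.
# Component names are made up for illustration.

iteration_1 = {"kernel", "ui_shell", "auth_stub"}

# The increment is the new work produced this cycle, not a complete system by itself.
increment_2 = {"auth_full", "reporting"}

# The next iteration is the previous baseline plus the increment, and again
# represents the state of the complete deliverable system.
iteration_2 = iteration_1 | increment_2

print(sorted(iteration_2))
```

Every iteration is a whole (if partial-capability) system; an increment only makes sense relative to the baseline it extends.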


Checkpoints of the process: Major mile stones, Minor Milestones, Periodic status assessments.
Iterative Process Planning: Work breakdown structures, planning guidelines, cost and
schedule estimating, Iteration planning process, Pragmatic planning.


9. Checkpoints of the process


Three types of joint management reviews are conducted throughout the process:

1. Major milestones. These system wide events are held at the end of each development
phase. They provide visibility to system wide issues, synchronize the management
and engineering perspectives, and verify that the aims of the phase have been
achieved.
2. Minor milestones. These iteration-focused events are conducted to review the
content of an iteration in detail and to authorize continued work.
3. Status assessments. These periodic events provide management with frequent
and regular insight into the progress being made.

Each of the four phases (inception, elaboration, construction, and transition) consists of one or
more iterations and concludes with a major milestone, when a planned technical capability is
produced in demonstrable form. An iteration represents a cycle of activities for which there is a
well-defined intermediate result (a minor milestone), captured with two artifacts: a release
specification (the evaluation criteria and plan) and a release description (the results). Major
milestones at the end of each phase use formal, stakeholder-approved evaluation criteria and
release descriptions; minor milestones use informal, development-team-controlled versions of
these artifacts.
Figure 9-1 illustrates a typical sequence of project checkpoints for a relatively large project.


MAJOR MILESTONES
The four major milestones occur at the transition points between life-cycle phases. They can be
used in many different process models, including the conventional waterfall model. In an
iterative model, the major milestones are used to achieve concurrence among all stakeholders
on the current state of the project. Different stakeholders have very different concerns:

 Customers: schedule and budget estimates, feasibility, risk assessment,
requirements understanding, progress, product line compatibility
 Users: consistency with requirements and usage scenarios, potential
for accommodating growth, quality attributes
 Architects and systems engineers: product line compatibility, requirements changes,
trade-off analyses, completeness and consistency, balance among risk, quality, and
usability
 Developers: sufficiency of requirements detail and usage scenario descriptions,
frameworks for component selection or development, resolution of development risk,
product line compatibility, sufficiency of the development environment
 Maintainers: sufficiency of product and documentation artifacts, understandability,
interoperability with existing systems, sufficiency of maintenance environment
 Others: possibly many other perspectives by stakeholders such as regulatory agencies,
independent verification and validation contractors, venture capital investors,
subcontractors, associate contractors, and sales and marketing teams
Table 9-1 summarizes the balance of information across the major milestones.


Life-Cycle Objectives Milestone


The life-cycle objectives milestone occurs at the end of the inception phase. The goal is to
present to all stakeholders a recommendation on how to proceed with development, including a
plan, estimated cost and schedule, and expected benefits and cost savings. A successfully
completed life-cycle objectives milestone will result in authorization from all stakeholders to
proceed with the elaboration phase.


Life-Cycle Architecture Milestone


The life-cycle architecture milestone occurs at the end of the elaboration phase. The primary
goal is to demonstrate an executable architecture to all stakeholders. The baseline architecture
consists of both a human-readable representation (the architecture document) and a
configuration- controlled set of software components captured in the engineering artifacts. A
successfully completed life-cycle architecture milestone will result in authorization from the
stakeholders to proceed with the construction phase.

The technical data listed in Figure 9-2 should have been reviewed by the time of the lifecycle
architecture milestone. Figure 9-3 provides default agendas for this milestone.


Initial Operational Capability Milestone


The initial operational capability milestone occurs late in the construction phase.
The goals are to assess the readiness of the software to begin the transition into
customer/user sites and to authorize the start of acceptance testing. Acceptance
testing can be done incrementally across multiple iterations or can be completed
entirely during the transition phase. The initiation of the transition phase is not
necessarily the completion of the construction phase.
Product Release Milestone
The product release milestone occurs at the end of the transition phase. The goal is to assess the
completion of the software and its transition to the support organization, if any. The results of
acceptance testing are reviewed, and all open issues are addressed. Software quality metrics are
reviewed to determine whether quality is sufficient for transition to the support organization.

MINOR MILESTONES
For most iterations, which have a one-month to six-month duration, only two minor milestones
are needed: the iteration readiness review and the iteration assessment review.
 Iteration Readiness Review. This informal milestone is conducted at the start of each
iteration to review the detailed iteration plan and the evaluation criteria that have been
allocated to this iteration.
 Iteration Assessment Review. This informal milestone is conducted at the end of each
iteration to assess the degree to which the iteration achieved its objectives and
satisfied its evaluation criteria, to review iteration results, to review qualification test
results (if part of the iteration), to determine the amount of rework to be done, and to
review the impact of the iteration results on the plan for subsequent iterations.
The format and content of these minor milestones tend to be highly dependent on the project
and the organizational culture. Figure 9-4 identifies the various minor milestones to be
considered when a project is being planned.

PERIODIC STATUS ASSESSMENTS


Periodic status assessments are management reviews conducted at regular intervals (monthly,
quarterly) to address progress and quality indicators, ensure continuous attention to project
dynamics, and maintain open communications among all stakeholders.
Periodic status assessments serve as project snapshots. While the period may vary, the recurring
event forces the project history to be captured and documented. Status assessments provide the


following:
 A mechanism for openly addressing, communicating, and resolving
management issues, technical issues, and project risks
 Objective data derived directly from on-going activities and evolving
product configurations
 A mechanism for disseminating process, progress, quality trends, practices,
and experience information to and from all stakeholders in an open forum
Periodic status assessments are crucial for focusing continuous attention on the evolving
health of the project and its dynamic priorities. They force the software project manager to
collect and review the data periodically, force outside peer review, and encourage dissemination
of best practices to and from other stakeholders.

The default content of periodic status assessments should include the topics identified in Table
9-2.

10. Iterative process planning


A good work breakdown structure and its synchronization with the process framework are
critical factors in software project success. Development of a work breakdown structure is
dependent on the project management style, organizational culture, customer preference,
financial constraints, and several other hard-to-define, project-specific parameters.
A WBS is simply a hierarchy of elements that decomposes the project plan into the discrete
work tasks. A WBS provides the following information structure:
 A delineation of all significant work
 A clear task decomposition for assignment of responsibilities
 A framework for scheduling, budgeting, and expenditure tracking
Many parameters can drive the decomposition of work into discrete tasks: product subsystems,
components, functions, organizational units, life-cycle phases, even geographies. Most systems
have a first-level decomposition by subsystem. Subsystems are then decomposed into their
components, one of which is typically the software.

CONVENTIONAL WBS ISSUES


Conventional work breakdown structures frequently suffer from three fundamental flaws.

1. They are prematurely structured around the product design.


2. They are prematurely decomposed, planned, and budgeted in either too much or
too little detail.
3. They are project-specific, and cross-project comparisons are usually difficult
or impossible.
Conventional work breakdown structures are prematurely structured around the product
design. Figure 10-1 shows a typical conventional WBS that has been structured primarily around
the subsystems of its product architecture, then further decomposed into the components of each
subsystem. A WBS is the architecture for the financial plan.
Conventional work breakdown structures are prematurely decomposed, planned, and budgeted
in either too little or too much detail. Large software projects tend to be overplanned and small
projects tend to be underplanned. The basic problem with planning in too much detail at the
outset is that the detail does not evolve with the level of fidelity in the plan.
Conventional work breakdown structures are project-specific, and cross-project comparisons
are usually difficult or impossible. With no standard WBS structure, it is extremely difficult to
compare plans, financial data, schedule data, organizational efficiencies, cost trends, productivity
trends, or quality trends across multiple projects.


Figure 10-1 Conventional work breakdown structure, following the product hierarchy

Management
System requirement and design
Subsystem 1
    Component 11
        Requirements
        Design
        Code
        Test
        Documentation
    …(similar structures for other components)
    Component 1N
        Requirements
        Design
        Code
        Test
        Documentation
…(similar structures for other subsystems)
Subsystem M
    Component M1
        Requirements
        Design
        Code
        Test
        Documentation
    …(similar structures for other components)
    Component MN
        Requirements
        Design
        Code
        Test
        Documentation
Integration and test
    Test planning
    Test procedure preparation
    Testing
    Test reports
Other support areas
    Configuration control
    Quality assurance
    System administration
EVOLUTIONARY WORK BREAKDOWN STRUCTURES


An evolutionary WBS should organize the planning elements around the process framework
rather than the product framework. The basic recommendation for the WBS is to organize the
hierarchy as follows:

 First-level WBS elements are the workflows (management, environment,
requirements, design, implementation, assessment, and deployment).
 Second-level elements are defined for each phase of the life cycle
(inception, elaboration, construction, and transition).
 Third-level elements are defined for the focus of activities that produce the artifacts
of each phase.
A default WBS consistent with the process framework (phases, workflows, and artifacts) is
shown in Figure 10-2. This recommended structure provides one example of how the
elements of the process framework can be integrated into a plan. It provides a framework
for estimating the costs and schedules of each element, allocating them across a project
organization, and tracking expenditures.
The structure shown is intended to be merely a starting point. It needs to be tailored to
the specifics of a project in many ways.
 Scale. Larger projects will have more levels and substructures.
 Organizational structure. Projects that include subcontractors or span multiple
organizational entities may introduce constraints that necessitate different WBS
allocations.
 Degree of custom development. Depending on the character of the project, there can
be very different emphases in the requirements, design, and implementation
workflows.
 Business context. Projects developing commercial products for delivery to a broad
customer base may require much more elaborate substructures for the deployment

element.
 Precedent experience. Very few projects start with a clean slate. Most of them are
developed as new generations of a legacy system (with a mature WBS) or in the
context of existing organizational standards (with preordained WBS expectations).
The WBS decomposes the character of the project and maps it to the life cycle, the budget,
and the personnel. Reviewing a WBS provides insight into the important attributes,
priorities, and structure of the project plan.
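A nested structure following the workflow → phase → activity hierarchy recommended above can be sketched in code, with budgets rolled up from the leaves. The element codes follow the document's A/AA/AAA convention, but the dollar amounts are invented for illustration.

```python
# Sketch: an evolutionary WBS keyed workflow -> phase -> activity, with budget rollup.
# Codes follow the A/AA/AAA convention from Figure 10-2; the dollar figures are invented.

wbs = {
    "A Management": {
        "AA Inception phase management": {
            "AAA Business case development": 40_000,
            "AAD Software development plan": 25_000,
        },
        "AB Elaboration phase management": {
            "ABC Elaboration phase project control and status assessments": 60_000,
        },
    },
    "B Environment": {
        "BA Inception phase environment specification": 15_000,
    },
}

def rollup(node):
    """Sum leaf budgets beneath any WBS node (a leaf is a number)."""
    if isinstance(node, (int, float)):
        return node
    return sum(rollup(child) for child in node.values())

print(rollup(wbs))                  # total project budget
print(rollup(wbs["A Management"]))  # budget of one first-level element
```

Because every node rolls up the same way, the same structure supports estimating, allocating across the organization, and tracking expenditures at any level, which is the property the text attributes to a good WBS.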
Another important attribute of a good WBS is that the planning fidelity inherent in each element
is commensurate with the current life-cycle phase and project state. Figure 10-3 illustrates this
idea. One of the primary reasons for organizing the default WBS the way I have is to allow for
planning elements that range from planning packages (rough budgets that are maintained as an
estimate for future elaboration rather than being decomposed into detail) through fully planned
activity networks (with a well-defined budget and continuous assessment of actual versus
planned expenditures).

Figure 10-2 Default work breakdown structure

A Management
    AA Inception phase management
        AAA Business case development
        AAB Elaboration phase release specifications
        AAC Elaboration phase WBS specifications
        AAD Software development plan
        AAE Inception phase project control and status assessments
    AB Elaboration phase management
        ABA Construction phase release specifications
        ABB Construction phase WBS baselining
        ABC Elaboration phase project control and status assessments
    AC Construction phase management
        ACA Deployment phase planning
        ACB Deployment phase WBS baselining
        ACC Construction phase project control and status assessments
    AD Transition phase management
        ADA Next generation planning
        ADB Transition phase project control and status assessments
B Environment
    BA Inception phase environment specification
    BB Elaboration phase environment baselining
        BBA Development environment installation and administration
        BBB Development environment integration and custom toolsmithing
        BBC SCO database formulation
    BC Construction phase environment maintenance
        BCA Development environment installation and administration
        BCB SCO database maintenance
    BD Transition phase environment maintenance
        BDA Development environment maintenance and administration
        BDB SCO database maintenance
        BDC Maintenance environment packaging and transition
C Requirements
    CA Inception phase requirements development
        CAA Vision specification
        CAB Use case modeling
    CB Elaboration phase requirements baselining
        CBA Vision baselining
        CBB Use case model baselining
    CC Construction phase requirements maintenance
    CD Transition phase requirements maintenance
D Design
    DA Inception phase architecture prototyping
    DB Elaboration phase architecture baselining
        DBA Architecture design modeling
        DBB Design demonstration planning and conduct
        DBC Software architecture description
    DC Construction phase design modeling
        DCA Architecture design model maintenance
        DCB Component design modeling
    DD Transition phase design maintenance
E Implementation
    EA Inception phase component prototyping
    EB Elaboration phase component implementation
        EBA Critical component coding demonstration integration
    EC Construction phase component implementation
        ECA Initial release(s) component coding and stand-alone testing
        ECB Alpha release component coding and stand-alone testing
        ECC Beta release component coding and stand-alone testing
        ECD Component maintenance
F Assessment
    FA Inception phase assessment
    FB Elaboration phase assessment
        FBA Test modeling
        FBB Architecture test scenario implementation
        FBC Demonstration assessment and release descriptions
    FC Construction phase assessment
        FCA Initial release assessment and release description
        FCB Alpha release assessment and release description
        FCC Beta release assessment and release description
    FD Transition phase assessment
        FDA Product release assessment and release description
G Deployment
    GA Inception phase deployment planning
    GB Elaboration phase deployment planning
    GC Construction phase deployment
        GCA User manual baselining
    GD Transition phase deployment
        GDA Product transition to user
Figure 10-3 Evolution of planning fidelity in the WBS over the life cycle

Inception                      Elaboration
WBS Element      Fidelity      WBS Element      Fidelity
Management       High          Management       High
Environment      Moderate      Environment      High
Requirements     High          Requirements     High
Design           Moderate      Design           High
Implementation   Low           Implementation   Moderate
Assessment       Low           Assessment       Moderate
Deployment       Low           Deployment       Low

Transition                     Construction
WBS Element      Fidelity      WBS Element      Fidelity
Management       High          Management       High
Environment      High          Environment      High
Requirements     Low           Requirements     Low
Design           Low           Design           Moderate
Implementation   Moderate      Implementation   High
Assessment       High          Assessment       High
Deployment       High          Deployment       Moderate

PLANNING GUIDELINES
Software projects span a broad range of application domains. It is valuable but risky
to make specific planning recommendations independent of project context; there is the
risk that the guidelines may be adopted blindly without being adapted to specific
project circumstances. Two simple planning guidelines should be considered when a
project plan is being initiated or assessed. The first guideline, detailed in Table 10-1,
prescribes a default allocation of costs among the first-level WBS elements. The second
guideline, detailed in Table 10-2, prescribes the allocation of effort and schedule
across the life-cycle phases.


Table 10-1 WBS budgeting defaults

First-Level WBS Element    Default Budget
Management                 10%
Environment                10%
Requirements               10%
Design                     15%
Implementation             25%
Assessment                 25%
Deployment                  5%
Total                     100%

Table 10-2 Default distributions of effort and schedule by phase

Domain      Inception    Elaboration    Construction    Transition
Effort          5%           20%            65%             10%
Schedule       10%           30%            50%             10%
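Taken together, the two tables turn a single macro-level estimate into first-cut budgets. A sketch applying the default percentages from Tables 10-1 and 10-2; the 100 staff-month total is an invented example, not from the text.

```python
# Sketch: applying the default planning guidelines to a macro-level estimate.
# Percentages come from Tables 10-1 and 10-2; the total effort is a made-up example.

WBS_BUDGET = {          # Table 10-1: first-level WBS element budget defaults
    "Management": 0.10, "Environment": 0.10, "Requirements": 0.10,
    "Design": 0.15, "Implementation": 0.25, "Assessment": 0.25,
    "Deployment": 0.05,
}

PHASE_EFFORT = {        # Table 10-2: default effort by life-cycle phase
    "Inception": 0.05, "Elaboration": 0.20,
    "Construction": 0.65, "Transition": 0.10,
}

def allocate(total, fractions):
    """Split a total estimate according to a table of default fractions."""
    return {name: total * frac for name, frac in fractions.items()}

total_effort = 100      # staff-months (hypothetical)
print(allocate(total_effort, WBS_BUDGET))
print(allocate(total_effort, PHASE_EFFORT))
```

These are only starting points: the text's whole argument is that such project-independent defaults must then be tuned to the specific project.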

THE COST AND SCHEDULE ESTIMATING PROCESS


Project plans need to be derived from two perspectives. The first is a forward-looking, top-down
approach. It starts with an understanding of the general requirements and constraints, derives a
macro-level budget and schedule, then decomposes these elements into lower level budgets and
intermediate milestones. From this perspective, the following planning sequence would occur:


1. The software project manager (and others) develops a characterization of the
overall size, process, environment, people, and quality required for the project.
2. A macro-level estimate of the total effort and schedule is developed using a
software cost estimation model.
3. The software project manager partitions the estimate for the effort into a top-level
WBS using guidelines such as those in Table 10-1.
4. At this point, subproject managers are given the responsibility for decomposing each of
the WBS elements into lower levels using their top-level allocation, staffing profile,
and major milestone dates as constraints.

The second perspective is a backward-looking, bottom-up approach. We start with the end in
mind, analyze the micro-level budgets and schedules, then sum all these elements into the
higher level budgets and intermediate milestones. This approach tends to define and
populate the WBS from the lowest levels upward. From this perspective, the following
planning sequence would occur:
1. The lowest level WBS elements are elaborated into detailed tasks.
2. Estimates are combined and integrated into higher level budgets and milestones.
3. Comparisons are made with the top-down budgets and schedule milestones.
Milestone scheduling or budget allocation through top-down estimating tends to exaggerate the
project management biases and usually results in an overly optimistic plan. Bottom-up estimates
usually exaggerate the performer biases and result in an overly pessimistic plan.
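One way to picture the reconciliation of the two perspectives is a variance check: compare the top-down allocation with the sums of bottom-up task estimates and flag elements that disagree by more than a threshold. All figures below are invented, and the 15% threshold is an arbitrary illustration.

```python
# Sketch: comparing a top-down allocation against bottom-up task sums.
# Estimates are in staff-months; all figures are purely illustrative.

top_down = {"Design": 15.0, "Implementation": 25.0, "Assessment": 25.0}

bottom_up_tasks = {
    "Design": [4.0, 6.0, 3.0],             # sums to 13: close to plan
    "Implementation": [10.0, 12.0, 11.0],  # sums to 33: performer pessimism vs. plan
    "Assessment": [9.0, 8.0, 7.0],         # sums to 24: close to plan
}

def variances(top_down, bottom_up_tasks, threshold=0.15):
    """Return elements where the two estimates disagree by more than `threshold`."""
    flagged = {}
    for element, plan in top_down.items():
        actual = sum(bottom_up_tasks.get(element, []))
        if abs(actual - plan) / plan > threshold:
            flagged[element] = (plan, actual)
    return flagged

print(variances(top_down, bottom_up_tasks))  # prints {'Implementation': (25.0, 33.0)}
```

Flagged elements are exactly where the optimistic top-down plan and the pessimistic bottom-up plan need to be negotiated into balance.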
These two planning approaches should be used together, in balance, throughout the life
cycle of the project. During the engineering stage, the top-down perspective will dominate
because there is usually not enough depth of understanding nor stability in the detailed task
sequences to perform credible bottom-up planning. During the production stage, there should be
enough precedent experience and planning fidelity that the bottom-up planning perspective will
dominate. The top-down approach should be well tuned to the project-specific parameters, so it
should be used more as a global assessment technique. Figure 10-4 illustrates this life-cycle
planning balance.

Figure 10-4 Planning balance throughout the life cycle

    Engineering Stage                        Production Stage
    Inception        Elaboration             Construction       Transition
    Feasibility      Architecture            Usable             Product
    iterations       iterations              iterations         releases

Top-down, project-level planning based on macroanalysis from previous projects
dominates the engineering stage; bottom-up, task-level planning based on metrics
from previous iterations dominates the production stage.

Engineering stage planning emphasis:
 Macro-level task estimation for production-stage artifacts
 Micro-level task estimation for engineering artifacts
 Stakeholder concurrence
 Coarse-grained variance analysis of actual vs. planned expenditures
 Tuning the top-down, project-independent planning guidelines into
   project-specific planning guidelines
 WBS definition and elaboration

Production stage planning emphasis:
 Micro-level task estimation for production-stage artifacts
 Macro-level task estimation for maintenance of engineering artifacts
 Stakeholder concurrence
 Fine-grained variance analysis of actual vs. planned expenditures
THE ITERATION PLANNING PROCESS


Iteration planning is concerned with defining the actual sequence of intermediate results. An evolutionary
build plan is important because there are always adjustments in build content and schedule as
early conjecture evolves into well-understood project circumstances. Here, an iteration means a
complete synchronization across the project, with a well-orchestrated global assessment of the
entire project baseline.
 Inception iterations. The early prototyping activities integrate the foundation
components of a candidate architecture and provide an executable framework for
elaborating the critical use cases of the system. This framework includes existing
components, commercial components, and custom prototypes sufficient to demonstrate
a candidate architecture and sufficient requirements understanding to establish a
credible business case, vision, and software development plan.
 Elaboration iterations. These iterations result in an architecture, including a complete framework
and infrastructure for execution. Upon completion of the architecture iteration, a few critical
use cases should be demonstrable: (1) initializing the architecture, (2) injecting a scenario to
drive the worst-case data processing flow through the system (for example, the peak
transaction throughput or peak load scenario), and (3) injecting a scenario to drive the worst-
case control flow through the system (for example, orchestrating the fault-tolerance use
cases).
 Construction iterations. Most projects require at least two major construction iterations: an
alpha release and a beta release.
 Transition iterations. Most projects use a single iteration to transition a beta release into the
final product.
The general guideline is that most projects will use between four and nine iterations. The
typical project would have the following six-iteration profile:
 One iteration in inception: an architecture prototype
 Two iterations in elaboration: architecture prototype and architecture baseline
 Two iterations in construction: alpha and beta releases
 One iteration in transition: product release
A very large or unprecedented project with many stakeholders may require an
additional inception iteration and two additional iterations in construction, for a
total of nine iterations.
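The typical six-iteration profile described above can be captured as a small data structure (a sketch; the phase names and iteration labels follow the text, but the representation itself is an assumption):

```python
# The typical six-iteration profile: (phase, iteration result) pairs
TYPICAL_PROFILE = [
    ("inception",    "architecture prototype"),
    ("elaboration",  "architecture prototype"),
    ("elaboration",  "architecture baseline"),
    ("construction", "alpha release"),
    ("construction", "beta release"),
    ("transition",   "product release"),
]

def iterations_in(phase, profile=TYPICAL_PROFILE):
    """List the iteration results planned for a given phase."""
    return [label for p, label in profile if p == phase]

print(len(TYPICAL_PROFILE))           # 6
print(iterations_in("construction"))  # ['alpha release', 'beta release']
```

A larger project would extend this list toward the nine-iteration upper bound noted in the text.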

PRAGMATIC PLANNING
Even though good planning is more dynamic in an iterative process, doing it
accurately is far easier. While executing iteration N of any phase, the software
project manager must be monitoring and controlling against a plan that was
initiated in iteration N - 1 and must be planning iteration N
+ 1. The art of good project management is to make trade-offs in the current
iteration plan and the next iteration plan based on objective results in the current
iteration and previous iterations. Aside from bad architectures and misunderstood
requirements, inadequate planning (and subsequent bad management) is one of
the most common reasons for project failures. Conversely, the success of every
successful project can be attributed in part to good planning.
A project's plan is a definition of how the project requirements will be transformed into a
product within the business constraints. It must be realistic, it must be current, it must be a team
product, it must be understood by the stakeholders, and it must be used. Plans are not just for
managers. The more open and visible the planning process and results, the more ownership there
is among the team members who need to execute it. Bad, closely held plans cause attrition.
Good, open plans can shape cultures and encourage teamwork.


UNIT IV
Project Organizations
Line-of- business organizations, project organizations, evolution of organizations, process
automation.
Project Control and process instrumentation
The seven-core metrics, management indicators, quality indicators, life-cycle expectations,
Pragmatic software metrics, metrics automation.

Project Organizations and Responsibilities: Line-of-Business Organizations,


Project Organizations, evolution of Organizations.
Process Automation: Automation Building blocks, The Project Environment.

Project Organizations and Responsibilities:


 Organizations engaged in software Line-of-Business need to support projects with
the infrastructure necessary to use a common process.
 Project organizations need to allocate artifacts & responsibilities across project team
to ensure a balance of global (architecture) & local (component) concerns.
 The organization must evolve with the WBS & Life cycle concerns.
 Software lines of business & product teams have different motivations.
 Software lines of business are motivated by return on investment (ROI), new
business discriminators, market diversification & profitability.
 Project teams are motivated by the cost, Schedule & quality of specific deliverables
1) Line-Of-Business Organizations:
The main features of default organization are as follows:
• Responsibility for process definition & maintenance is specific to a cohesive line
of business.
• Responsibility for process automation is an organizational role & is equal
in importance to the process definition role.
• Organizational role may be fulfilled by a single individual or several
different teams.


Fig: Default roles in a software Line-of-Business Organization.


Software Engineering Process Authority (SEPA)
The SEPA facilitates the exchange of information & process guidance both to &
from project practitioners.
This role is accountable to the General Manager for maintaining a current assessment
of the organization's process maturity & its plan for future improvement.
Project Review Authority (PRA)
The PRA is the single individual responsible for ensuring that a software project
complies with all organizational & business unit software policies, practices &
standards
A software Project Manager is responsible for meeting the requirements of a contract or
some other project compliance standard
Software Engineering Environment Authority (SEEA)
The SEEA is responsible for automating the organization’s process, maintaining
the organization’s standard environment, Training projects to use the environment
& maintaining organization-wide reusable assets
The SEEA role is necessary to achieve a significant ROI for common process.

Infrastructure
An organization’s infrastructure provides human resources support, project-
independent research & development, & other capital software engineering assets.
2) Project organizations:

[Figure: Default project organization.]
• The above figure shows a default project organization and maps project-level roles
and responsibilities.
• The main features of the default organization are as follows:
• The project management team is an active participant, responsible for producing
as well as managing.
• The architecture team is responsible for real artifacts and for the integration
of components, not just for staff functions.
• The development team owns the component construction and
maintenance activities.
• The assessment team is separate from development.
• Quality is everyone’s job; it is integrated into all activities and checkpoints.
• Each team takes responsibility for a different quality perspective.

3) EVOLUTION OF ORGANIZATIONS:
[Figure: Software project team evolution over the life cycle. The team's center of gravity shifts from management (inception) to architecture (elaboration) to development (construction) to assessment (transition).]

Staff allocation by phase:

Inception:
Software management: 50%
Software architecture: 20%
Software development: 20%
Software assessment (measurement/evaluation): 10%

Elaboration:
Software management: 10%
Software architecture: 50%
Software development: 20%
Software assessment (measurement/evaluation): 20%

Construction:
Software management: 10%
Software architecture: 10%
Software development: 50%
Software assessment (measurement/evaluation): 30%

Transition:
Software management: 10%
Software architecture: 5%
Software development: 35%
Software assessment (measurement/evaluation): 50%
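The phase percentages above can be turned into concrete headcounts with a simple lookup (a sketch; the 20-person team size is a hypothetical example, and the table values are taken from the text):

```python
# phase -> (management, architecture, development, assessment) in percent
STAFF_PROFILE = {
    "inception":    (50, 20, 20, 10),
    "elaboration":  (10, 50, 20, 20),
    "construction": (10, 10, 50, 30),
    "transition":   (10,  5, 35, 50),
}

def allocate(phase, team_size):
    """Split a team of team_size people across the four default teams."""
    teams = ("management", "architecture", "development", "assessment")
    return {t: team_size * p / 100 for t, p in zip(teams, STAFF_PROFILE[phase])}

print(allocate("construction", 20))
# development gets half the team (10 of 20 people) during construction
```

The same table makes the life-cycle shift explicit: management dominates inception, architecture dominates elaboration, development dominates construction, and assessment dominates transition.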


The Process Automation:


Introductory Remarks:
The environment must be a first-class artifact of the process.
Process automation & change management are critical to an iterative
process. If change is expensive, then the development organization will
resist it.
Round-trip engineering & integrated environments promote change
freedom & effective evolution of technical artifacts.
Metrics automation is crucial to effective project control.
External stakeholders need access to environment resources to
improve interaction with the development team & add value to the
process.
There are three levels of process, each of which requires a certain degree of process
automation for the corresponding process to be carried out efficiently:
 Metaprocess (line of business): the automation support for this level is called an infrastructure.
 Macroprocess (project): the automation support for a project's process is called an environment.
 Microprocess (iteration): the automation support for generating artifacts is generally called a tool.

Tools: Automation Building blocks:

Many tools are available to automate the software development process. Most of
the core software development tools map closely to one of the process workflows:

Management: workflow automation, metrics automation
Environment: change management, document automation
Requirements: requirements management
Design: visual modeling
Implementation: editors, compilers, debuggers, linkers, runtime
Assessment: test automation, defect tracking
Deployment: defect tracking
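The workflow-to-tool mapping above can be held as a simple lookup table (the tool names are generic categories from the text, not specific products; the data structure is an assumption):

```python
# Core process workflows mapped to their automation tool categories
WORKFLOW_TOOLS = {
    "management":     ["workflow automation", "metrics automation"],
    "environment":    ["change management", "document automation"],
    "requirements":   ["requirements management"],
    "design":         ["visual modeling"],
    "implementation": ["editors", "compilers", "debuggers", "linkers", "runtime"],
    "assessment":     ["test automation", "defect tracking"],
    "deployment":     ["defect tracking"],
}

print(WORKFLOW_TOOLS["design"])  # ['visual modeling']
```

A table like this is one way an SEEA might inventory which tool categories cover which workflows when configuring a project environment.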


The Project Environment:


The project environment artifacts evolve through three discrete states: (1) the prototyping environment, (2) the development environment, and (3) the maintenance environment.

The prototyping environment includes an architecture test bed for prototyping project
architectures to evaluate trade-offs during the inception & elaboration phases of the life
cycle.
The development environment should include a full suite of development tools needed
to support the various process workflows & round-trip engineering to the maximum extent
possible.
The maintenance environment should typically coincide with a mature version of the
development environment.

There are four important environment disciplines that are critical to the management
context & the success of a modern iterative development process:
1. Round-trip engineering
2. Change management: software change orders (SCOs), configuration baselines & the configuration control board (CCB)
3. Infrastructure: organization policy & organization environment
4. Stakeholder environment

Round-Trip Engineering

Tools must be integrated to maintain consistency & traceability.
Round-trip engineering is the term used to describe this key requirement for
environments that support iterative development.
As the software industry moves into maintaining different information sets for the
engineering artifacts, more automation support is needed to ensure efficient & error-
free transition of data from one artifact to another.
Round-trip engineering is the environment support necessary to maintain
consistency among the engineering artifacts.


Change Management
Change management must be automated & enforced to manage multiple iterations
& to enable change freedom.
Change is the fundamental primitive of iterative development.
I. Software Change Orders
The atomic unit of software work that is authorized to create, modify, or obsolesce
components within a configuration baseline is called a software change order (SCO).

The basic fields of the SCO are title, description, metrics, resolution, assessment & disposition.
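An SCO record with the basic fields named above can be sketched as a small dataclass (the field types and default values are assumptions for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class SoftwareChangeOrder:
    """Minimal sketch of an SCO: the atomic unit of change against a
    configuration baseline, with the basic fields from the text."""
    title: str
    description: str
    metrics: dict = field(default_factory=dict)  # e.g. breakage, rework hours
    resolution: str = ""
    assessment: str = ""
    disposition: str = "open"                    # e.g. open / approved / closed

sco = SoftwareChangeOrder(
    title="Fix fault-tolerance timeout",
    description="Peak-load scenario exposes a missed failover",
)
print(sco.disposition)  # 'open'
```

In practice the metrics field is what feeds the change metrics a CCB and project manager track across iterations.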


II. Configuration Baseline

A configuration baseline is a named collection of software components
& supporting documentation that is subjected to change management
& is upgraded, maintained, tested, statused & obsolesced as a unit.
There are generally two classes of baselines: external product releases & internal testing releases.
Three levels of baseline releases are required for most systems:
1. Major release (N)
2. Minor release (M)
3. Interim (temporary) release (X)
A major release represents a new generation of the product or project.
A minor release represents the same basic product but with enhanced
features, performance, or quality.
Major & minor releases are intended to be external product releases
that are persistent & supported for a period of time.
An interim release corresponds to a developmental
configuration that is intended to be transient.
Once software is placed in a controlled baseline, all changes are
tracked, and a distinction must be made as to the cause of each
change. The change categories are:
Type 0: critical failures (must be fixed before release)
Type 1: a bug or defect that either does not impair the usefulness of
the system or can be worked around
Type 2: a change that is an enhancement rather than a response to a defect
Type 3: a change that is necessitated by an update to the environment
Type 4: changes that are not accommodated by the other categories
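Tallying SCOs against the Type 0-4 categories above is a natural basis for change metrics; a minimal sketch (the sample change data is hypothetical):

```python
from collections import Counter

# The five change categories from the text
CATEGORY_NAMES = {
    0: "critical failure",
    1: "noncritical defect",
    2: "enhancement",
    3: "environment update",
    4: "other",
}

def tally(change_types):
    """Count tracked changes by their Type 0-4 category."""
    return Counter(change_types)

counts = tally([0, 1, 1, 2, 2, 2, 3])  # hypothetical SCO stream
print(counts[2])  # 3 enhancements
most_common_type = counts.most_common(1)[0][0]
print(CATEGORY_NAMES[most_common_type])  # 'enhancement'
```

A CCB might review such a tally per baseline: a rising share of Type 0 changes late in construction, for instance, is a quality warning sign.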


III. Configuration Control Board (CCB)
A CCB is a team of people that functions as
the decision authority on the content of
configuration baselines.
A CCB includes:
1. Software managers
2. Software architecture managers
3. Software development managers
4. Software assessment managers
5. Other stakeholders who are integral to the maintenance of
the controlled software delivery system.
Infrastructure
The organization infrastructure provides the organization's capital assets,
including two key artifacts: policy & environment.
I. Organization Policy:
A policy captures the standards for project software development processes.
The organization policy is usually packaged as a handbook that defines
the life cycles & the process primitives, such as:

 Major milestones
 Intermediate Artifacts
 Engineering repositories
 Metrics
 Roles & Responsibilities


II. Organization Environment
The organization environment captures an inventory of tools, which are the building
blocks from which project environments can be configured efficiently &
economically.

Stakeholder Environment
Many large-scale projects include people in external organizations
that represent other stakeholders participating in the development
process. They might include:
 Procurement agency contract monitors
 End-user engineering support personnel
 Third party maintenance contractors
 Independent verification & validation contractors
 Representatives of regulatory agencies & others.


These stakeholder representatives also need access to
development resources so that they can contribute value to the
overall effort. These stakeholders are given access through an on-line
environment. An on-line environment accessible by the external stakeholders
allows them to participate in the process as follows:
Accept & use executable increments for hands-on evaluation.
Use the same on-line tools, data & reports that the development
organization uses to manage & monitor the project.
Avoid excessive travel, paper interchange delays, format translations,
paper shipping costs & other overhead costs.
