Sppm Notes Word
• Gain knowledge of software economics, phases in the life cycle of software development,
project organization, and project control and process instrumentation
• Analyze the major and minor milestones, artifacts, and metrics from management and
technical perspectives
• Design and develop software product using conventional and modern principles of software
project management
UNIT I
UNIT II
UNIT III
Workflows and Checkpoints of process
Software process workflows, Iteration workflows, Major milestones, minor milestones, periodic
status assessments.
Process Planning
Work breakdown structures, Planning guidelines, cost and schedule estimating process, iteration
planning process, Pragmatic planning.
UNIT IV
Project Organizations
Line-of-business organizations, project organizations, evolution of organizations, process
automation.
Project Control and process instrumentation
The seven core metrics, management indicators, quality indicators, life-cycle expectations,
pragmatic software metrics, metrics automation.
UNIT V
UNIT I
INTRODUCTION
Software project management:
It refers to the branch of project management dedicated to the planning, scheduling, resource
allocation, execution, tracking and delivery of software and web projects.
Project management:
It is the practice of initiating, planning, executing, controlling, and closing the work of a team to
achieve specific goals and meet specific success criteria at the specified time. This information is
usually described in project documentation, created at the beginning of the development
process.
The role and responsibility of a software project manager:
1. Planning: This means putting together the blueprint for the entire project from ideation
to fruition. It will define the scope, allocate necessary resources, propose the timeline,
delineate the plan for execution, lay out a communication strategy, and indicate the steps
necessary for testing and maintenance.
2. Leading: A software project manager will need to assemble and lead the project team,
which likely will consist of developers, analysts, testers, graphic designers, and technical
writers. This requires excellent communication, people and leadership skills.
3. Execution: The project manager will participate in and supervise the successful
execution of each stage of the project. This includes monitoring progress, frequent team
check-ins and creating status reports.
4. Budget: Like traditional project managers, software project managers are tasked with
creating a budget for a project, and then sticking to it as closely as possible, moderating
spend and re-allocating funds when necessary.
Project Management:
Project Management is the application of knowledge, skills, tools and techniques to project
activities to meet the project requirements.
Project Management Process consists of the following 4 stages:
Feasibility study
Project Planning
Project Execution
Project Termination
Software process and project management
Feasibility study:
Feasibility study explores system requirements to determine project feasibility. There are several
fields of feasibility study including economic feasibility, operational feasibility, and technical
feasibility. The goal is to determine whether the system can be implemented or not. The process
of feasibility study takes as input the requirement details as specified by the user and other
domain-specific details. The output of this process simply tells whether the project should be
undertaken or not and if yes, what would the constraints be. Additionally, all the risks and their
potential effects on the projects are also evaluated before a decision to start the project is taken.
Project Planning:
A detailed plan stating stepwise strategy to achieve the listed objectives is an integral part of any
project.
Planning consists of the following activities:
Set objectives or goals
Develop strategies
Develop project policies
Determine courses of action
Making planning decisions
Set procedures and rules for the project
Develop a software project plan
Prepare budget
Conduct risk management
Document software project plans
This step also involves the construction of a work breakdown structure (WBS). It also includes
size, effort, and schedule and cost estimation using various techniques.
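The rollup idea behind WBS-based estimation can be sketched as follows. This is an illustrative sketch only: the task names and hour figures are assumptions, not taken from the notes.

```python
# Hypothetical sketch: a work breakdown structure (WBS) as a tree,
# with effort estimates rolled up from leaf tasks to the project root.
class WBSNode:
    def __init__(self, name, effort_hours=0):
        self.name = name
        self.effort_hours = effort_hours  # leaf-level estimate
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def total_effort(self):
        # A parent's estimate is the sum of its descendants' estimates.
        if not self.children:
            return self.effort_hours
        return sum(c.total_effort() for c in self.children)

# Illustrative project decomposition (names and hours are assumptions).
project = WBSNode("Billing System")
design = project.add(WBSNode("Design"))
design.add(WBSNode("Data model", 40))
design.add(WBSNode("API design", 24))
build = project.add(WBSNode("Implementation"))
build.add(WBSNode("Backend", 120))
build.add(WBSNode("Frontend", 80))

print(project.total_effort())  # 264
```

The same decomposition also carries schedule and cost fields in practice; only effort rollup is shown here to keep the sketch minimal.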
Project Execution:
Project Termination:
There can be several reasons for the termination of a project. Though expecting a project to
terminate after successful completion is conventional, at times a project may also terminate
without completion. Projects have to be closed down when the requirements are not fulfilled
according to the given time and cost constraints.
Once the project is terminated, a post-performance analysis is done. Also, a final report is
published describing the experiences, lessons learned, recommendations for handling future
projects.
SOFTWARE PROCESS MATURITY
Process Improvement Cycle:
1.Initialize
Establish Sponsorship
Create vision and strategy
Establish improvement structure
2.For Each Maturity Level
Characterize current practice in terms of key process areas
Assessment recommendations
Revise strategy (generate action plans and prioritize key process areas)
3.For Each key Process Area
Establish process action teams
Implement tactical plan, define processes, plan and execute pilot(s), plan and
execute institutionalization
Document and analyze lessons
Revise organizational approach
1. Initial (chaotic, ad hoc, individual heroics) - the starting point for use of a new
or undocumented repeat process.
2. Repeatable - the process is at least documented sufficiently such that repeating the
same steps may be attempted.
3. Defined - the process is defined/confirmed as a standard business process
4. Managed - the process is quantitatively managed in accordance with agreed-upon metrics.
5. Optimizing - process management includes deliberate process optimization/improvement.
Maturity levels consist of a predefined set of process areas. The maturity levels are measured by
the achievement of the specific and generic goals that apply to each predefined set of process
areas. The following sections describe the characteristics of each maturity level in detail.
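The rule that a maturity level is achieved only when all goals of every process area up to that level are satisfied can be sketched in a few lines. The process-area names below are a simplified, illustrative subset, not the full model.

```python
# Hedged sketch of maturity-level determination: an organization is at
# level N only if it satisfies the process areas of every level 2..N.
# The process areas listed here are illustrative, not the full set.
process_areas = {
    2: ["requirements management", "project planning"],
    3: ["organizational process definition", "training"],
    4: ["quantitative project management"],
    5: ["causal analysis and resolution"],
}

def maturity_level(satisfied):
    level = 1  # every organization starts at the initial level
    for n in (2, 3, 4, 5):
        if all(pa in satisfied for pa in process_areas[n]):
            level = n
        else:
            break  # a gap at level n blocks all higher levels
    return level

print(maturity_level({"requirements management", "project planning"}))  # 2
```

Note the break: satisfying a level-4 area without the level-2 and level-3 foundations does not raise the rating, which mirrors the staged-representation rule described above.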
At maturity level 1, processes are usually ad hoc and chaotic. The organization usually does not
provide a stable environment. Success in these organizations depends on the competence and
heroics of the people in the organization and not on the use of proven processes.
Maturity level 1 organizations often produce products and services that work; however, they
frequently exceed the budget and schedule of their projects.
At maturity level 2, an organization has achieved all the specific and generic goals of the
maturity level 2 process areas. In other words, the projects of the organization have ensured that
requirements are managed and that processes are planned, performed, measured, and controlled.
The process discipline reflected by maturity level 2 helps to ensure that existing practices are
retained during times of stress. When these practices are in place, projects are performed and
managed according to their documented plans.
At maturity level 2, requirements, processes, work products, and services are managed. The
status of the work products and the delivery of services are visible to management at defined
points.
Commitments are established among relevant stakeholders and are revised as needed. Work
products are reviewed with stakeholders and are controlled.
The work products and services satisfy their specified requirements, standards, and objectives.
At maturity level 3, an organization has achieved all the specific and generic goals of the
process areas assigned to maturity levels 2 and 3.
At maturity level 3, processes are well characterized and understood, and are described in
standards, procedures, tools, and methods.
A critical distinction between maturity level 2 and maturity level 3 is the scope of standards,
process descriptions, and procedures. At maturity level 2, the standards, process descriptions, and
procedures may be quite different in each specific instance of the process (for example, on a
particular project). At maturity level 3, the standards, process descriptions, and procedures for a
project are tailored from the organization's set of standard processes to suit a particular project or
organizational unit. The organization's set of standard processes includes the processes addressed
at maturity level 2 and maturity level 3. As a result, the processes that are performed across the
organization are consistent except for the differences allowed by the tailoring guidelines.
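Tailoring can be pictured as deriving a project's process from the organization's standard process, with only whitelisted items overridable. This is a minimal sketch under that assumption; all names and values are illustrative.

```python
# Hypothetical sketch of level-3 tailoring: a project process is the
# organization's standard process plus only the deviations permitted
# by the tailoring guidelines. All keys and values are illustrative.
org_standard = {
    "design_review": "required",
    "code_inspection": "required",
    "unit_test_coverage": 0.80,
    "language": "Java",
}

# Tailoring guidelines: only these items may be overridden per project.
tailorable = {"unit_test_coverage", "language"}

def tailor(standard, overrides):
    illegal = set(overrides) - tailorable
    if illegal:
        raise ValueError(f"not tailorable: {illegal}")
    return {**standard, **overrides}  # standard values, then overrides

project_process = tailor(org_standard, {"language": "Python"})
print(project_process["code_inspection"])  # required (inherited)
```

The guard clause is the point: consistency across the organization is preserved because a project cannot silently drop a required practice, only vary what the guidelines allow.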
Another critical distinction is that at maturity level 3, processes are typically described in more
detail and more rigorously than at maturity level 2. At maturity level 3, processes are managed
more proactively using an understanding of the interrelationships of the process activities and
detailed measures of the process, its work products, and its services.
At maturity level 4, an organization has achieved all the specific goals of the process areas
assigned to maturity levels 2, 3, and 4 and the generic goals assigned to maturity levels 2 and 3.
At maturity level 4, subprocesses are selected that significantly contribute to overall process
performance. These selected subprocesses are controlled using statistical and other quantitative
techniques.
Quantitative objectives for quality and process performance are established and used as criteria
in managing processes. Quantitative objectives are based on the needs of the customer, end
users, organization, and process implementers. Quality and process performance are understood
in statistical terms and are managed throughout the life of the processes.
For these processes, detailed measures of process performance are collected and statistically
analyzed. Special causes of process variation are identified and, where appropriate, the sources of
special causes are corrected to prevent future occurrences.
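Detecting a special cause can be illustrated with a simple three-sigma control check against a baseline of stable observations. The data and the choice of a 3-sigma limit are illustrative assumptions, not from the notes.

```python
# Illustrative sketch: flag a special cause of process variation when
# a new observation (e.g. defects found per inspection) falls outside
# control limits computed from a stable baseline.
from statistics import mean, stdev

baseline = [4, 5, 3, 6, 4, 5, 4, 5, 4, 5]  # hypothetical stable yields
mu = mean(baseline)
sigma = stdev(baseline)
ucl = mu + 3 * sigma  # upper control limit
lcl = mu - 3 * sigma  # lower control limit

def is_special_cause(x):
    # A point outside the control limits signals a special cause worth
    # investigating; points inside reflect common-cause noise.
    return x > ucl or x < lcl

print(is_special_cause(12))  # an inspection that found 12 defects: True
print(is_special_cause(5))   # within common-cause variation: False
```

This is the level-4 posture in miniature: special causes are hunted down and corrected, while the common-cause variation inside the limits is left to level-5 process change.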
Quality and process performance measures are incorporated into the organization's measurement
repository to support fact-based decision making in the future.
A critical distinction between maturity level 3 and maturity level 4 is the predictability of process
performance. At maturity level 4, the performance of processes is controlled using statistical and
other quantitative techniques, and is quantitatively predictable. At maturity level 3, processes are
only qualitatively predictable.
At maturity level 5, an organization has achieved all the specific goals of the process areas
assigned to maturity levels 2, 3, 4, and 5 and the generic goals assigned to maturity levels 2 and
3.
Processes are continually improved based on a quantitative understanding of the common causes
of variation inherent in processes.
The effects of deployed process improvements are measured and evaluated against the
quantitative process-improvement objectives. Both the defined processes and the organization's
set of standard processes are targets of measurable improvement activities.
Optimizing processes that are agile and innovative depends on the participation of an empowered
workforce aligned with the business values and objectives of the organization. The organization's
ability to rapidly respond to changes and opportunities is enhanced by finding ways to accelerate
and share learning. Improvement of the processes is inherently part of everybody's role, resulting
in a cycle of continual improvement.
A critical distinction between maturity level 4 and maturity level 5 is the type of process
variation addressed. At maturity level 4, processes are concerned with addressing special causes
of process variation and providing statistical predictability of the results. Though processes may
produce predictable results, the results may be insufficient to achieve the established objectives.
At maturity level 5, processes are concerned with addressing common causes of process variation
and changing the process (that is, shifting the mean of the process performance) to improve
process performance (while maintaining statistical predictability) to achieve the established
quantitative process- improvement objectives.
Each maturity level provides a necessary foundation for effective implementation of processes at
the next level.
Higher level processes have less chance of success without the discipline provided
by lower levels.
The effect of innovation can be obscured in a noisy process.
Higher maturity level processes may be performed by organizations at lower maturity levels,
with the risk of not being consistently applied in a crisis.
Organizations at the Initial Process level can improve their performance by instituting
basic project controls.
The most important are:
Project management. The fundamental role of a project-management system is to ensure
effective control of commitments. This requires adequate preparation, clear responsibility,
a public declaration, and a dedication to performance.
For software, this starts with an understanding of the job's magnitude. In any but the
simplest projects, a plan must then be developed to determine the best schedule and the
resources required. In the absence of such an orderly plan, no commitment can be better
than an educated guess.
Management oversight.
A disciplined software-development organization must have senior management
oversight. This includes review and approval of all major development plans before
official commitment. Also, a quarterly review should be conducted of facility-wide
process compliance, installed-quality performance, schedule tracking, cost trends,
computing service, and quality and productivity goals by project.
The lack of such reviews typically results in uneven and generally inadequate
implementation of the process as well as in frequent overcommitments and cost
surprises.
Quality assurance.
A quality assurance group is charged with assuring management that the software-
development work is actually done the way it is supposed to be done.
To be effective, the assurance organization must have an independent reporting line to
senior management and sufficient resources to monitor performance of all key planning,
implementation, and verification activities.
This generally requires an organization of about 5 to 6 percent the size of the
development organization.
Change control. Control of changes in software development is fundamental to business
and financial control as well as to technical stability.
To develop quality software on a predictable schedule, the requirements must be
established and maintained with reasonable stability throughout the development cycle.
Changes will have to be made, but they must be managed and introduced in an orderly
way.
While occasional requirements changes are needed, historical evidence demonstrates that
many of them can be deferred and phased in later. If all changes are not controlled,
orderly design, implementation, and testing is impossible and no quality plan can be
effective.
The Repeatable Process:
The Repeatable Process has one important strength over the Initial Process: It provides
commitment control.
This is such an enormous advance over the Initial Process that the people in the
organization tend to believe they have mastered the software problem.
They do not realize that their strength stems from their prior experience at similar work.
Organizations at the Repeatable Process level thus face major risks when they are
presented with new challenges. Examples of the changes that represent the highest risk at
this level are:
New tools and methods will likely affect how the process is performed, thus destroying
the relevance of the intuitive historical base on which the organization relies.
Without a defined process framework in which to address these risks, it is even possible
for a new technology to do more harm than good. When the organization must develop a
new kind of product, it is entering new territory.
For example, a software group that has experience developing compilers will likely have
design, scheduling, and estimating problems if assigned to write a control program.
Similarly, a group that has developed small, self-contained programs will not understand
the interface and integration issues involved in large-scale projects.
These changes again destroy the relevance of the intuitive historical basis for the
organization's work. Major organization changes can be highly disruptive.
In the Repeatable Process organization, a new manager has no orderly basis for
understanding what is going on and new team members must learn the ropes through
word of mouth.
The key actions required to advance from the Repeatable Process to the Defined Process
are:
1. Establish a process group. A process group is a technical group that focuses
exclusively on improving the software development process. In most software
organizations, people are entirely devoted to product work. Until someone is given a full-
time assignment to work on the process, little orderly progress can be made in improving
it. The responsibilities of process groups include defining the development process,
identifying technology needs and opportunities, advising the projects, and conducting
quarterly management reviews of process status and performance. Typically, the process
group should be about 1 to 3 percent the size of the development organization. Because of
the need for a nucleus of skills, groups smaller than about four are unlikely to be fully
effective. Small organizations that lack the experience base to form a process group
should address these issues through specially formed committees of experienced
professionals or by retaining consultants.
2. Establish a software-development process architecture that describes the technical and
management activities required for proper execution of the development process. The
architecture is a structural decomposition of the development cycle into tasks, each of
which has entry criteria, functional descriptions, verification procedures, and exit criteria.
The decomposition continues until each defined task is performed by an individual or
single management unit.
3. If they are not already in place, introduce a family of software-engineering methods
and technologies. These include design and code inspections, formal design methods,
library- control systems, and comprehensive testing methods. Prototyping should also be
considered, along with the adoption of modern implementation languages.
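The process architecture in step 2, a decomposition into tasks with entry criteria, functional descriptions, verification procedures, and exit criteria, can be sketched as follows. All task names and state fields are hypothetical.

```python
# Illustrative sketch of a process-architecture task: it may start only
# when its entry criteria hold and is complete only when its exit
# criteria hold. Names and criteria are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    entry_criteria: list  # predicates over project state
    exit_criteria: list

    def can_start(self, state):
        return all(check(state) for check in self.entry_criteria)

    def is_done(self, state):
        return all(check(state) for check in self.exit_criteria)

code_task = Task(
    "implement module",
    entry_criteria=[lambda s: s["design_approved"]],
    exit_criteria=[lambda s: s["unit_tests_pass"] and s["code_inspected"]],
)

state = {"design_approved": True, "unit_tests_pass": False,
         "code_inspected": False}
print(code_task.can_start(state))  # True: design is approved
print(code_task.is_done(state))    # False: not yet inspected or tested
```

The decomposition continues, as the text says, until each such task belongs to a single individual or management unit, so responsibility for meeting the exit criteria is unambiguous.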
The Defined Process:
With the Defined Process, the organization has achieved the foundation for major
and continuing progress.
For example, the development group, when faced with a crisis, will likely continue
to use the Defined Process.
The foundation has now been established for examining the process and deciding how
to improve it.
As powerful as the Defined Process is, it is still only qualitative: There is little data to
indicate what is going on or how effective the process really is. There is considerable
debate about the value of software-process measurements and the best ones to use.
This uncertainty generally stems from a lack of process definition and the consequent
confusion about the specific items to be measured. With a defined process, we can focus
the measurements on specific tasks.
The process architecture is thus an essential prerequisite to effective measurement. The
key steps to advance to the Managed Process are:
1. Establish a minimum, basic set of process measurements to identify the quality and
cost parameters of each process step. The objective is to quantify the relative costs and
benefits of each major process activity, such as the cost and yield of error detection and
correction methods.
2. Establish a process database with the resources to manage and maintain it. Cost and
yield data should be maintained centrally to guard against loss, to make it available for all
projects, and to facilitate process quality and productivity analysis.
3. Provide sufficient process resources to gather and maintain this data and to advise
project members on its use. Assign skilled professionals to monitor the quality of the data
before entry in the database and to provide guidance on analysis methods and
interpretation.
4. Assess the relative quality of each product and inform management where quality
targets are not being met. An independent quality-assurance group should assess the
quality actions of each project and track its progress against its quality plan. When this
progress is compared with the historical experience on similar projects, an informed
assessment generally can be made.
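Steps 1 and 2 above can be pictured as a minimal central cost-and-yield database. The table layout, step names, and figures are illustrative assumptions, not from the notes.

```python
# Hypothetical sketch of a process database: record the cost (hours)
# and yield (defects found) of each process step centrally, then
# compute hours per defect to compare error-detection methods.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE process_step (
    project TEXT,
    step TEXT,
    hours REAL,            -- cost of the activity
    defects_found INTEGER  -- yield of the activity
)""")

rows = [  # illustrative data for one project
    ("billing", "design inspection", 40.0, 18),
    ("billing", "code inspection", 60.0, 35),
    ("billing", "system test", 300.0, 22),
]
con.executemany("INSERT INTO process_step VALUES (?,?,?,?)", rows)

# Yield analysis: hours spent per defect found, by step.
for step, cost in con.execute(
        "SELECT step, hours / defects_found FROM process_step"):
    print(f"{step}: {cost:.1f} hours/defect")
```

Keeping the data central, as step 2 requires, is what makes this comparison possible across projects and guards the historical record against loss.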
The Managed Process:
In advancing from the Initial Process via the Repeatable and Defined Processes to the
Managed Process, software organizations typically will experience substantial quality
improvements.
The greatest potential problem with the Managed Process is the cost of gathering data.
There are an enormous number of potentially valuable measures of software
development and support, but such data is expensive to gather and maintain.
Therefore, approach data gathering with care and precisely define each piece of data in
advance. Productivity data is generally meaningless unless explicitly defined. For
example, the simple measure of lines of source code per development month can vary by
100 times or more, depending on the interpretation of the parameters.
The code count could include only new and changed code or all shipped instructions. For
modified programs, this can cause a ten-times variation. Similarly, you can use
noncomment, nonblank lines, executable instructions, or equivalent assembler instructions,
with variations again of up to seven times. Management, test, documentation, and
support personnel may or may not be counted when calculating labor months expended.
Again, the variations can run at least as high as seven times. When different groups
gather data but do not use identical definitions, the results are not comparable, even if it
made sense to compare them. The tendency with such data is to use it to compare several
groups and put pressure on those with the lowest ranking. This is a misapplication of
process data.
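The definitional variation described above is easy to demonstrate: counting the same fragment under three common definitions gives three different answers. The code fragment and the definitions chosen are illustrative.

```python
# Illustrative sketch: the same source counted under different
# line-counting definitions, showing why "lines of code" means
# nothing unless the definition is stated explicitly.
source = """\
// compute factorial
int fact(int n) {

    if (n <= 1) return 1;   // base case
    return n * fact(n - 1);
}
"""

lines = source.splitlines()

physical = len(lines)                            # every physical line
nonblank = len([l for l in lines if l.strip()])  # drop blank lines
noncomment = len([l for l in lines               # drop full-line comments
                  if l.strip() and not l.strip().startswith("//")])

print(physical, nonblank, noncomment)  # 6 5 4
```

Three counts from one file; scaled up to real programs and combined with the labor-month ambiguities above, this is how the hundredfold variation arises.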
First, it is rare that two projects are comparable by any simple measures. The variations
in task complexity caused by different product types can exceed five to one. Similarly,
the cost per line of code of small modifications is often two to three times that for new
programs.
The degree of requirements change can make an enormous difference, as can the design
status of the base program in the case of enhancements. Process data must not be used to
compare projects or individuals. Its purpose is to illuminate the product being developed
and to provide an informed basis for improving the process.
When such data is used by management to evaluate individuals or teams, the reliability of
the data itself will deteriorate.
The US Constitution's Fifth Amendment, which protects against self-incrimination, is
based on sound principles:
Few people can be counted on to provide reliable data on their own performance.
The two fundamental requirements to advance from the Managed Process to the
Optimizing Process are:
1. Support automatic gathering of process data. Some data cannot be gathered by hand,
and all manually gathered data is subject to error and omission.
2. Use this data to both analyze and modify the process to prevent problems and improve
efficiency.
The Optimizing Process:
In varying degrees, process optimization goes on at all levels of process maturity. With
the step from the Managed to the Optimizing Process, however, there is a paradigm shift.
Up to this point, software development managers have largely focused on their products
and will typically only gather and analyze data that directly relates to product
improvement.
In the Optimizing Process, the data is available to actually tune the process itself. With a
little experience, management will soon see that process optimization can produce major
quality and productivity improvements.
For example, many errors can be identified and fixed far more economically by code
inspections than through testing. Unfortunately, there is little published data on the costs
of finding and fixing errors. However, I have developed a useful rule of thumb from
experience:
It takes about one to four working hours to find and fix a bug through inspections and
about 15 to 20 working hours to find and fix a bug in function or system test. It is thus
clear that testing is not a cost-effective way to find and fix most bugs.
However, some kinds of errors are either uneconomical or almost impossible to find
except by machine.
Examples are errors involving spelling and syntax, interfaces, performance, human
factors, and error recovery.
It would thus be unwise to eliminate testing completely because it provides a useful
check against human frailties.
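The rule of thumb above implies a large cost gap, which a quick calculation makes concrete. The bug count and the midpoint figures are illustrative assumptions drawn from the 1-4 and 15-20 hour ranges in the text.

```python
# Back-of-the-envelope arithmetic for the rule of thumb: roughly
# 1-4 hours to find and fix a bug by inspection versus 15-20 hours
# in function or system test. Midpoints of those ranges are used.
bugs = 100                     # hypothetical defect count
inspect_hours_per_bug = 2.5    # midpoint of 1-4 hours
test_hours_per_bug = 17.5      # midpoint of 15-20 hours

inspection_cost = bugs * inspect_hours_per_bug  # 250 hours
testing_cost = bugs * test_hours_per_bug        # 1750 hours

print(testing_cost / inspection_cost)  # 7.0
```

At these midpoints, finding the same defects in test costs seven times what inspections cost, which is why the text argues for shifting test toward symptom-finding rather than bulk bug removal.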
The data that is available with the Optimizing Process gives us a new perspective on
testing. For most projects, a little analysis shows that there are two distinct activities
involved.
The first is the removal of bugs. To reduce this cost, inspections should be emphasized
together with any other cost-effective techniques. The role of functional and system
testing should then be changed to one of finding symptoms that are further explored to
see if the bug is an isolated problem or if it indicates design problems that require more
comprehensive analysis.
In the Optimizing Process, the organization has the means to identify the weakest
elements of the process and fix them. At this point in process improvement, data is
available to justify the application of technology to various critical tasks and numerical
evidence is available on the effectiveness with which the process has been applied to any
given product.
We no longer need reams of paper to describe what is happening because simple yield
curves and statistical plots provide clear and concise indicators. It is now possible to
assure the process and hence have confidence in the quality of the resulting products.
People in the process.
Any software development process is dependent on the quality of the people who
implement it. Even with the best people, however, there is always a limit to what they can
accomplish.
When engineers are already working 50 to 60 hours a week, it is hard to see how they
could handle the vastly greater challenges of the future.
The Optimizing Process helps in several ways: It helps managers understand where help
is needed and how best to provide the people with the support they require. It lets
professionals communicate in concise, quantitative terms.
This facilitates the transfer of knowledge and minimizes the likelihood of their wasting
time on problems that have already been solved.
It provides the framework for the professionals to understand their work performance and
to see how to improve it.
This results in a highly professional environment and substantial productivity benefits,
and it avoids the enormous amount of effort that is generally expended in fixing and
patching other people’s mistakes.
The difference between a disciplined environment and a regimented one is that discipline
controls the environment and methods to specific standards while regimentation defines
the actual conduct of the work. Discipline is required in large software projects to ensure,
for example, that the people involved use the same conventions, don’t damage each
other’s products, and properly synchronize their work.
Discipline thus enables creativity by freeing the most talented software professionals
from the many crises that others have created.
The need. There are many examples of disasters caused by software problems, ranging
from expensive missile aborts to enormous financial losses.
As the computerization of our society continues, the public risks due to poor-quality code
will become untenable. Not only are our systems being used in increasingly sensitive
applications, but they are also becoming much larger and more complex. While proper
questions can be raised about the size and complexity of current systems, they are human
creations and they will, alas, continue to be produced by humans - with all their failings
and creative talents.
While many of the currently promising technologies will undoubtedly help, there is an
enormous backlog of needed functions that will inevitably translate into vast amounts of
code. More code means increased risk of error and, when coupled with more complexity,
these systems will become progressively less testable.
The risks will thus increase astronomically as we become more efficient at producing
prodigious amounts of new code. As well as being a management issue, quality is an
economic one. It is always possible to do more inspections or to run more tests, but it
costs time and money to do so.
It is only with the Optimizing Process that the data is available to understand the costs
and benefits of such work. The Optimizing Process thus provides the foundation for
significant advances in software quality and simultaneous improvements in
productivity.
There is little data on how long it takes for software organizations to advance through
these maturity levels toward the Optimizing Process. Based on my experience, the
transition from level 1 to level 2 or from level 2 to level 3 takes from one to three years,
even with a dedicated management commitment to process improvement. To date,
no complete organizations have been observed at levels 4 or 5. To meet society’s needs for
increased system functions while simultaneously addressing the problems of quality and
productivity, software managers and professionals must establish the goal of moving to the
Optimizing Process.
PROCESS REFERENCE MODELS
Capability Maturity Model (CMM):
The Capability Maturity Model (CMM) is a methodology used to develop and refine an
organization's software development process. The model describes a five-level evolutionary path
of increasingly organized and systematically more mature processes. CMM was developed and is
promoted by the Software Engineering Institute (SEI), a research and development center
sponsored by the U.S. Department of Defense (DoD). SEI was founded in 1984 to address
software engineering issues and, in a broad sense, to advance software engineering
methodologies. More specifically, SEI was established to optimize the process of developing,
acquiring, and maintaining heavily software-reliant systems for the DoD. Because the processes
involved are equally applicable to the software industry as a whole, SEI advocates industry-wide
adoption of the CMM.
The CMM is similar to ISO 9001, one of the ISO 9000 series of standards specified by the
International Organization for Standardization (ISO). The ISO 9000 standards specify an
effective quality system for manufacturing and service industries; ISO 9001 deals specifically
with software development and maintenance. The main difference between the two systems lies
in their respective purposes: ISO 9001 specifies a minimal acceptable quality level for software
processes, while the CMM establishes a framework for continuous process improvement and is
more explicit than the ISO standard in defining the means to be employed to that end.
At the initial level, processes are disorganized, even chaotic. Success is likely to depend on
individual efforts, and is not considered to be repeatable, because processes would not be
sufficiently defined and documented to allow them to be replicated.
At the repeatable level, basic project management techniques are established, and successes
can be repeated, because the requisite processes have been established, defined, and
documented.
At the defined level, an organization has developed its own standard software process
through greater attention to documentation, standardization, and integration.
At the managed level, an organization monitors and controls its own processes through data
collection and analysis.
At the optimizing level, processes are constantly being improved through monitoring
feedback from current processes and introducing innovative processes to better serve the
organization's particular needs
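The five levels lend themselves to a simple lookup. The sketch below (illustrative only, not part of any SEI tooling) summarizes each level in code:

```python
# Hypothetical sketch: the five CMM maturity levels as a simple lookup table.
CMM_LEVELS = {
    1: ("Initial", "Processes are disorganized, even chaotic; success depends on individual effort."),
    2: ("Repeatable", "Basic project management is established; successes can be repeated."),
    3: ("Defined", "A standard software process is documented, standardized, and integrated."),
    4: ("Managed", "Processes are monitored and controlled through data collection and analysis."),
    5: ("Optimizing", "Processes are continuously improved through feedback and innovation."),
}

def describe_level(level: int) -> str:
    """Return a one-line summary of a CMM maturity level."""
    name, summary = CMM_LEVELS[level]
    return f"Level {level} ({name}): {summary}"

print(describe_level(3))
```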
The CMM has been applied to many areas, such as manufacturing, people management, and
software development. CMM describes software engineering alone, whereas CMM Integration
(CMMI) describes both software and systems engineering. CMMI also incorporates Integrated
Process and Product Development and supplier sourcing.
CMMI:
What is CMMI?
The CMM Integration project was formed to sort out the problem of using multiple CMMs. The
CMMI product team's mission was to combine three source models into a single improvement
framework for organizations pursuing enterprise-wide process improvement. These three
source models are:
Capability Maturity Model for Software (SW-CMM) v2.0 Draft C
Electronic Industries Alliance Interim Standard (EIA/IS) 731, Systems Engineering
Integrated Product Development Capability Maturity Model (IPD-CMM) v0.98
CMM Integration builds an initial set of integrated models, improves best practices from the
source models based on lessons learned, and establishes a framework to enable integration of
future models.
Continuous Representation
Continuous representation is the approach used in the SECM and the IPD-CMM. This
approach allows an organization to select a specific process area and make improvements
based on it.
The continuous representation uses Capability Levels to characterize
improvement relative to an individual process area.
CMMI Continuous Representation
Allows you to select the order of improvement that best meets your organization's business
objectives and mitigates your organization's areas of risk.
Enables comparisons across and among organizations on a process-area-by-process-area basis.
Provides an easy migration from EIA 731 (and other models with a continuous representation)
to CMMI. Thus Continuous Representation provides flexibility to organizations to choose the
processes for improvement, as well as the amount of improvement required.
Maturity Level 2 - Managed
CM - Configuration Management
MA - Measurement and Analysis
PPQA - Process and Product Quality Assurance
REQM - Requirements Management
SAM - Supplier Agreement Management
SD - Service Delivery
WMC - Work Monitoring and Control
WP - Work Planning
Maturity Level 3 - Defined
Advantages of CMMI
There are numerous benefits of implementing CMMI in an IT / Software Development
Organization, some of these benefits are listed below:
A culture of maintaining quality in projects takes root among everyone from junior
programmers to senior programmers and project managers
A centralised QMS for implementation in projects ensures uniformity in documentation,
which means a shorter learning cycle for new resources and better management of
project status and health
Incorporation of Software Engineering Best Practices in the Organizations as
described in CMMI Model
Cost savings in terms of reduced effort due to fewer defects and less rework
This also results in increased productivity
On-Time Deliveries
Increased Customer Satisfaction
Overall increased Return on Investment
Decreased Costs
Improved Productivity
Disadvantages of CMMI
CMMI-DEV may not be suitable for every organization.
It may add overhead in terms of documentation.
Smaller organizations may lack the additional resources and knowledge required to
initiate CMMI-based process improvement.
It may require a considerable amount of time and effort to implement.
It requires a major shift in organizational culture and attitude.
PCMM:
The People Capability Maturity Model (People CMM®) is a maturity framework that focuses
on continuously improving the management and development of an organization's workforce.
Undertaking a gap analysis to assess the current maturity level of the organization against the
People CMM®.
This implementation support on assessing People CMM® practices covers the concepts,
principles, and framework, and helps in understanding how an organization should assess the
status of the People CMM® goals and practices being implemented. The process of
assessment is as follows:
Guidelines on how to map the assessment results to the structural components of the People
CMM®.
Based on the maturity level determined, a presentation of pros and cons is made to the
authorities, with recommendations on whether to pursue an upgrade in maturity level and
suggested timelines for such an upgrade. This facilitates easy decision making by the
authorities.
5. Develop a roadmap and action plan for upgrading to the next level, along with
specific timelines.
A roadmap and action plan for upgrading to the next level, with specific timelines and a
date-wise action plan, is prepared with the organizational authorities.
Level 2
Staffing
Communication and Coordination
Work Environment
Performance Management
Training and Development
Compensation
Level 4
Competency Integration
Empowered Workgroups
Competency-Based Assets
Quantitative Performance Management
Organizational Capability Management
Mentoring
Level 5
Continuous Capability Improvement
Organizational Performance Alignment
Continuous Workforce Innovation
PSP:
The Personal Software Process (PSP) is a structured software development process that is
designed to help software engineers better understand and improve their performance by
bringing discipline to the way they develop software and tracking their predicted and actual
development of the code. It clearly shows developers how to manage the quality of their
products, how to make a sound plan, and how to make commitments. It also offers them the data
to justify their plans. They can evaluate their work and suggest improvement direction by
analyzing and reviewing development time, defects, and size data. The PSP was created by
Watts Humphrey to apply the underlying principles of the Software Engineering Institute's
(SEI) Capability Maturity Model (CMM) to the software development practices of a single
developer.
It claims to give software engineers the process skills necessary to work on a team software
process (TSP) team.
Objectives
The PSP aims to provide software engineers with disciplined methods for improving personal
software development processes. The PSP helps software engineers to:
Improve their estimating and planning skills
Make commitments they can keep
Manage the quality of their projects
Reduce the number of defects in their work
The PSP is structured as a series of levels (PSP0 through PSP2.1) that introduce these
practices in stages of development. Engineers construct and use checklists for design and
code reviews. PSP2.1 introduces design specification and analysis techniques.
(PSP3 is a legacy level that has been superseded by TSP.)
One of the core aspects of the PSP is using historical data to analyze and improve
process performance. PSP data collection is supported by four main elements:
Scripts
Measures
Standards
Forms
The PSP scripts provide expert-level guidance for following the process steps, and they
provide a framework for applying the PSP measures. The PSP has four core measures:
Size – the size measure for a product part, such as lines of code (LOC).
Effort – the time required to complete a task, usually recorded in minutes.
Quality – the number of defects in the product.
Schedule – a measure of project progression, tracked against planned and actual
completion dates.
Applying standards to the process can ensure the data is precise and consistent. Data is
logged in forms, normally using a PSP software tool. The SEI has developed a PSP tool
and there are also open source options available, such as Process Dashboard.
The key data collected in the PSP tool are time, defect, and size data – the time spent
in each phase; when and where defects were injected, found, and fixed; and the size of
the product parts. Software developers use many other measures that are derived
from these three basic measures to understand and improve their performance.
Derived measures include:
productivity
reuse percentage
cost performance index
planned value
earned value
predicted earned value
defect density
defect density by phase
defect removal rate by phase
defect removal leverage
review rates
process yield
phase yield
failure cost of quality (COQ)
appraisal COQ
appraisal/failure COQ ratio
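Several of the derived measures above follow directly from the three basic measures. The sketch below computes three of them; the function names and sample numbers are illustrative, not the SEI PSP tool's actual schema:

```python
# Illustrative PSP derived measures computed from basic time, size, and defect
# data. Function names and sample values are assumptions, not the SEI schema.

def productivity(size_loc: int, effort_minutes: float) -> float:
    """LOC produced per hour of effort."""
    return size_loc / (effort_minutes / 60.0)

def defect_density(defects: int, size_loc: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / (size_loc / 1000.0)

def phase_yield(found_in_phase: int, present_at_entry: int) -> float:
    """Percentage of defects present at phase entry that the phase removed."""
    return 100.0 * found_in_phase / present_at_entry

print(productivity(1200, 480))  # 1200 LOC in 8 hours -> 150.0 LOC/hour
print(defect_density(9, 1200))  # -> 7.5 defects/KLOC
print(phase_yield(6, 10))       # -> 60.0 percent
```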
The PSP is a personal process that can be adapted to suit the needs of the individual
developer. It is not specific to any programming or design methodology; therefore it can
be used with different methodologies, including Agile software development.
Software engineering methods can be considered to vary from predictive through
adaptive. The PSP is a predictive methodology, and Agile is considered adaptive, but
despite their differences, the TSP/PSP and Agile share several concepts and approaches
– particularly in regard to team organization; both enable the team to plan, track, and
manage its own work.
Quality
High-quality software is the goal of the PSP, and quality is measured in terms of
defects. For the PSP, a quality process should produce low-defect software that meets
the user needs.
The PSP phase structure enables PSP developers to catch defects early. By catching
defects early, the PSP can reduce the amount of time spent in later phases, such as
Test.
The PSP theory is that it is more economical and effective to remove defects as close as
possible to where and when they were injected, so software engineers are encouraged
to conduct personal reviews for each phase of development. Therefore, the PSP phase
structure includes two review phases:
Design Review
Code Review
To do an effective review, you need to follow a structured review process. The PSP
recommends using checklists to help developers to consistently follow an orderly
procedure.
The PSP follows the premise that when people make mistakes, their errors are usually
predictable, so PSP developers can personalize their checklists to target their own
common errors. Software engineers are also expected to complete process
improvement proposals, to identify areas of weakness in their current performance that
they should target for improvement. Historical project data, which exposes where time
is spent and defects introduced, help developers to identify areas to improve.
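The idea of mining historical defect data to target personal checklists can be sketched as follows; the log format, phase names, and defect types are invented for illustration:

```python
# Sketch: tally a personal defect log by injection phase and defect type to
# decide where to add checklist items. Log format and labels are invented.
from collections import Counter

defect_log = [
    {"type": "interface", "injected": "design", "removed": "code review"},
    {"type": "logic",     "injected": "code",   "removed": "unit test"},
    {"type": "logic",     "injected": "code",   "removed": "code review"},
    {"type": "syntax",    "injected": "code",   "removed": "compile"},
]

by_phase = Counter(d["injected"] for d in defect_log)
by_type = Counter(d["type"] for d in defect_log)

# The most frequent injection phase and defect type suggest which review
# checklist items to add and which personal habits to change.
print(by_phase.most_common(1))  # [('code', 3)]
print(by_type.most_common(1))   # [('logic', 2)]
```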
PSP developers are also expected to conduct personal reviews before their work
undergoes a peer or team review.
TSP:
In combination with the Personal Software Process (PSP), the Team Software Process (TSP)
provides a defined operational process framework that is designed to help teams of managers
and engineers organize projects and produce software products that range in size from small
projects of several thousand lines of code (KLOC) to very large projects of greater than half a
million lines of code. The TSP is intended to improve the levels of quality and productivity of
a team's software development project, in order to help them better meet the cost and schedule
commitments of developing a software system.
The initial version of the TSP was developed and piloted by Watts Humphrey in the late
1990s, and the Technical Report for TSP, sponsored by the U.S. Department of Defense, was
published in November 2000. The book by Watts Humphrey, Introduction to the Team
Software Process, presents a view of the TSP intended for use in academic settings that
focuses on the process of building a software production team, establishing team goals,
distributing team roles, and other teamwork-related activities.
The primary goal of TSP is to create a team
environment for establishing and maintaining a self-directed team, and supporting disciplined
individual work based on the PSP framework. A self-directed team manages itself, plans and
tracks its own work, manages the quality of its work, and works proactively to meet team
goals. TSP has two principal components: team-building and team-working. Team-building
is a process that defines roles for each team member and sets up teamwork through the TSP
launch and periodic relaunch. Team-working is a process that deals with engineering processes
and practices utilized by the team. The TSP, in short, provides engineers and managers with
a way to establish and manage their team so as to produce high-quality software on schedule
and within budget.
How the TSP Works
Before engineers can participate in the TSP, it is required that they have already learned about
the PSP, so that the TSP can work effectively. Training is also required for other team members,
the team lead, and management. The TSP software development cycle begins with a planning
process called the launch, led by a coach who has been specially trained and is either certified
or provisional. The launch is designed to begin the team-building process, and during this
time teams and managers establish goals, define team roles, assess risks, estimate effort,
allocate tasks, and produce a team plan. During an execution phase, developers track planned
and actual effort, schedule, and defects, meeting regularly (usually weekly) to report status
and revise plans.
A development cycle ends with a Post Mortem to assess performance, revise planning
parameters, and capture lessons learned for process improvement.
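The weekly tracking of planned versus actual work during the execution phase amounts to a simple earned-value comparison. A minimal sketch, with invented task names and planned hours:

```python
# Simplified sketch of TSP-style weekly tracking: earned value is credited only
# when a task completes. Task names and planned hours are invented examples.

tasks = [
    {"name": "design login", "planned_hours": 10, "done": True},
    {"name": "code login",   "planned_hours": 15, "done": True},
    {"name": "test login",   "planned_hours": 5,  "done": False},
]

total_planned = sum(t["planned_hours"] for t in tasks)
earned = sum(t["planned_hours"] for t in tasks if t["done"])
earned_value_pct = 100.0 * earned / total_planned

print(f"Earned value: {earned_value_pct:.1f}% of plan")  # 25/30 -> 83.3%
```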
The coach role focuses on supporting the team and the individuals on the team as the process
expert, while being independent of direct project management responsibility. The team leader
role differs from the coach role in that team leaders are responsible to management for
products and project outcomes, while the coach is responsible for developing individual and
team performance.
UNIT II
Conventional software management practices are sound in theory, but practice is still tied to
archaic (outdated) technology and techniques.
Conventional software economics provides a benchmark of performance for conventional
software management principles.
The best thing about software is its flexibility: it can be programmed to do almost anything.
The worst thing about software is also its flexibility: the "almost anything"
characteristic makes software difficult to plan, monitor, and control.
Three important analyses of the state of the software engineering industry are discussed
below.
THE WATERFALL MODEL: IN THEORY
1. There are two essential steps common to the development of computer programs:
analysis and coding. Both involve creative work that directly contributes to the
usefulness of the end product.
2. In order to manage and control all of the intellectual freedom associated with software
development, one must introduce several other "overhead" steps, including system
requirements definition, software requirements definition, program design, and
testing. These steps supplement the analysis and coding steps. Below Figure illustrates
the resulting project profile and the basic steps in developing a large-scale program.
[Figure: the waterfall phases - Requirements, Design, Coding, Testing, Operation]
3. The basic framework described in the waterfall model is risky and invites failure. The
testing phase that occurs at the end of the development cycle is the first event for
which timing, storage, input/output transfers, etc., are experienced as distinguished
from analyzed. The resulting design changes are likely to be so disruptive that the
software requirements upon which the design is based are likely violated. Either the
requirements must be modified or a substantial design change is warranted.
1. Program design comes first. Insert a preliminary program design phase between the
software requirements generation phase and the analysis phase. By this technique, the
program designer assures that the software will not fail because of storage,
timing, and data flux (continuous change). As analysis proceeds in the succeeding
phase, the program designer must impose on the analyst the storage, timing, and
operational constraints in such a way that he senses the consequences. If the total
resources to be applied are insufficient or if the embryonic(in an early stage of
development) operational design is wrong, it will be recognized at this early stage and
the iteration with requirements and preliminary design can be redone before final
design, coding, and test commences. How is this program design procedure
implemented?
2. Document the design. The amount of documentation required on most software
programs is considerable. Without good documentation, development progress is
invisible, testing is ineffective, and operation and maintenance cannot be supported.
3. Do it twice. If a computer program is being developed for the first time, arrange matters
so that the version finally delivered to the customer for operational deployment is actually
the
second version insofar as critical design/operations are concerned. Note that this is simply the
entire process done in miniature, to a time scale that is relatively small with respect to the
overall effort. In the first version, the team must have a special broad competence where they
can quickly sense trouble spots in the design, model them, model alternatives, forget the
straightforward aspects of the design that aren't worth studying at this early point, and,
finally, arrive at an error-free program.
4. Plan, control, and monitor testing. Without question, the biggest user of project resources-
manpower, computer time, and/or management judgment-is the test phase. This is the phase
of greatest risk in terms of cost and schedule. It occurs at the latest point in the schedule,
when backup alternatives are least available, if at all. The previous three recommendations
were all aimed at uncovering and solving problems before entering the test phase. However,
even after doing these things, there is still a test phase and there are still important things to
be done, including: (1) employ a team of test specialists who were not responsible for the
original design; (2) employ visual inspections to spot the obvious errors like dropped minus
signs, missing factors of two, jumps to wrong addresses (do not use the computer to detect
this kind of thing, it is too expensive); (3) test every logic path; (4) employ the final checkout
on the target computer.
5. Involve the customer. It is important to involve the customer in a formal way so that he has
committed himself at earlier points before final delivery. There are three points following
requirements definition where the insight, judgment, and commitment of the customer can
bolster the development effort. These include a "preliminary software review" following the
preliminary program design step, a sequence of "critical software design reviews" during
program design, and a "final software acceptance review".
IN PRACTICE
Some software projects still practice the conventional software management approach.
It is useful to summarize the characteristics of the conventional process as it has typically
been applied, which is not necessarily as it was intended. Projects destined for trouble frequently
exhibit the following symptoms:
Early success via paper designs and thorough (often too thorough) briefings.
Commitment to code late in the life cycle.
Integration nightmares (unpleasant experience) due to unforeseen
implementation issues and interface ambiguities.
Heavy budget and schedule pressure to get the system working.
Late shoehorning of suboptimal fixes, with no time for redesign.
A very fragile, unmaintainable product delivered late.
In the conventional model, the entire system was designed on paper, then
implemented all at once, then integrated. Table 1-1 provides a typical profile of cost
expenditures across the spectrum of software activities.
A serious issue associated with the waterfall lifecycle was the lack of early risk
resolution. Figure 1.3 illustrates a typical risk profile for conventional waterfall model
projects. It includes four distinct periods of risk exposure, where risk is defined as
the probability of missing a cost, schedule, feature, or quality goal. Early in the life
cycle, as the requirements were being specified, the actual risk exposure was highly
unpredictable.
Another property of the conventional approach is that the requirements were typically
specified in a functional manner. Built into the classic waterfall process was the fundamental
assumption that the software itself was decomposed into functions; requirements were then
allocated to the resulting components. This decomposition was often very different from a
decomposition based on object-oriented design and the use of existing components. Figure 1-4
illustrates the result of requirements-driven approaches: a software structure that is organized
around the requirements specification structure.
The following sequence of events was typical for most contractual software efforts:
1. The contractor prepared a draft contract-deliverable document that captured
an intermediate artifact and delivered it to the customer for approval.
2. The customer was expected to provide comments (typically within 15 to 30 days).
3. The contractor incorporated these comments and submitted (typically within 15 to
30 days) a final version for approval.
This one-shot review process encouraged high levels of sensitivity on the part of customers
and contractors themselves. Contractors were driven to produce literally tons of paper to meet
milestones and
demonstrate progress to stakeholders, rather than spend their energy on tasks that would reduce
risk and produce quality software. Typically, presenters and the audience reviewed the simple
things that they understood rather than the complex and important issues. Most design reviews
therefore resulted in low engineering value and high cost in terms of the effort and schedule
involved in their preparation and conduct. They presented merely a facade of progress.
Table 1-2 summarizes the results of a typical design review.
Barry Boehm's "Industrial Software Metrics Top 10 List” is a good, objective characterization
of the state of software development.
1. Finding and fixing a software problem after delivery costs 100 times more than
finding and fixing the problem in early design phases.
2. You can compress software development schedules 25% of nominal, but no more.
3. For every $1 you spend on development, you will spend $2 on maintenance.
4. Software development and maintenance costs are primarily a function of the number of
source lines of code.
5. Variations among people account for the biggest differences in software productivity.
6. The overall ratio of software to hardware costs is still growing. In 1955 it was 15:85; in
1985,
85:15.
7. Only about 15% of software development effort is devoted to programming.
8. Software systems and products typically cost 3 times as much per SLOC as individual
software programs. Software-system products (i.e., system of systems) cost 9 times as
much.
9. Walkthroughs catch 60% of the errors.
10. Many software phenomena follow a Pareto distribution: 80% of the contribution
comes from 20% of the contributors.
A good software cost estimate has the following attributes:
1. It is conceived and supported by the project manager, the architecture team, the
development team, and the test team accountable for performing the work.
2. It is accepted by all stakeholders as ambitious but realizable.
3. It is based on a well-defined software cost model with a credible basis and a database of
relevant project experience that includes similar processes, similar technologies, similar
environments, similar quality requirements, and similar people.
4. It is defined in enough detail for both developers and managers to objectively assess the
probability of success and to understand key risk areas.
Although several parametric models have been developed to estimate software costs, they can
all be generally abstracted into the same basic form. One very important aspect of software
economics (as represented within today's software cost models) is that the relationship
between effort and size exhibits a diseconomy of scale. The software development
diseconomy of scale is a result of the "process" exponent in the cost equation being greater
than 1.0. In contrast to the economics for most manufacturing processes, the more software
you build, the greater the cost per unit item. It is desirable, therefore, to reduce the size and
complexity of a project whenever possible.
SOFTWARE ECONOMICS
Most software cost models can be abstracted into a function of five basic parameters: size,
process, personnel, environment, and required quality.
1. The size of the end product (in human-generated components), which is typically
quantified in terms of the number of source instructions or the number of function
points required to develop the required functionality
2. The process used to produce the end product, in particular the ability of the process to
avoid non-value-adding activities (rework, bureaucratic delays, communications
overhead)
3. The capabilities of software engineering personnel, and particularly their experience
with the computer science issues and the applications domain issues of the project
4. The environment, which is made up of the tools and techniques available to support
efficient software development and to automate the process
5. The required quality of the product, including its features, performance, reliability,
and adaptability
The relationships among these parameters and the estimated cost can be written as follows:
Effort = (Personnel)(Environment)(Quality)(Size^Process)
One important aspect of software economics (as represented within today's software cost
models) is that the relationship between effort and size exhibits a diseconomy of scale. The
diseconomy of scale of software development is a result of the process exponent being greater
than 1.0. Contrary to most manufacturing processes, the more software you build, the more
expensive it is per unit item.
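The diseconomy of scale can be made concrete with the effort/size relationship described above; the exponent and multiplier values below are invented for illustration:

```python
# Illustration of diseconomy of scale: effort grows faster than size when the
# process exponent exceeds 1.0. All parameter values below are invented.

def effort(size_kloc: float, process_exponent: float = 1.2,
           multiplier: float = 3.0) -> float:
    """Abstract cost form: effort = multiplier * size ** exponent.

    The multiplier lumps together personnel, environment, and quality factors.
    """
    return multiplier * size_kloc ** process_exponent

small = effort(10)    # a 10 KLOC project
large = effort(100)   # 10x the size...
print(large / small)  # ...costs ~15.8x the effort, not 10x
```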
Organizations are achieving better economies of scale in successive technology eras-with very
large projects (systems of systems), long-lived products, and lines of business comprising
multiple similar projects. Figure 2-2 provides an overview of how a return on investment (ROI)
profile can be achieved in subsequent efforts across life cycles of various domains.
There have been many debates among developers and vendors of software cost estimation
models and tools. Three topics of these debates are of particular interest here:
1. Which cost estimation model to use
2. Whether to measure software size in source lines of code or function points
3. What constitutes a good estimate
LANGUAGES
Universal function points (UFPs) are useful estimators for language-independent, early life-
cycle estimates. The basic units of function points are external user inputs, external outputs,
internal logical data groups, external data interfaces, and external inquiries. SLOC metrics are
useful estimators for software after a candidate solution is formulated and an implementation
language is known. Substantial data have been documented relating SLOC to function points.
Note: function point metrics provide a standardized method for measuring the various
functions of a software application.
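An unadjusted function point count is a weighted sum of the five basic units. The sketch below uses the commonly cited average-complexity weights from IFPUG practice; the weights and counts are assumptions, not values given in this text:

```python
# Sketch: unadjusted function point count as a weighted sum of the five basic
# units. Weights are the commonly cited average-complexity IFPUG values and
# are assumptions, not taken from this text.

AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts: dict) -> int:
    """Weighted sum of the counted function types."""
    return sum(AVERAGE_WEIGHTS[k] * n for k, n in counts.items())

print(unadjusted_function_points({
    "external_inputs": 3,         # 3 * 4  = 12
    "external_outputs": 2,        # 2 * 5  = 10
    "internal_logical_files": 1,  # 1 * 10 = 10
}))  # -> 32
```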
1. A ruthless focus on the development of a system that provides a well understood collection
of essential minimal characteristics.
2. The existence of a culture that is centered on results, encourages communication, and
REUSE
Reusing existing components and building reusable components have been natural software
engineering activities since the earliest improvements in programming languages. Reuse is
pursued in order to minimize development costs while achieving all the other required
attributes of performance, feature set, and quality. Reuse should be treated as a mundane part
of achieving a return on investment.
Most truly reusable components of value are transitioned to commercial products supported
by organizations with the following characteristics:
They have an economic motivation for continued support.
They take ownership of improving product quality, adding new features,
and transitioning to new technologies.
They have a sufficiently broad customer base to be profitable.
The cost of developing a reusable component is not trivial. Figure 3-1 examines the economic
trade-offs. The steep initial curve illustrates the economic obstacle to developing reusable
components.
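The economic trade-off in Figure 3-1 can be approximated numerically: a reusable component costs a multiple of a single-use solution up front, and pays off only after enough reuses. The 2.5x multiplier below is an invented example, not a figure from the text:

```python
# Sketch of the reuse break-even trade-off: a reusable component costs more up
# front and pays off only after enough reuses. The 2.5x cost multiplier is an
# invented example.
import math

def break_even_uses(single_use_cost: float,
                    reusable_multiplier: float = 2.5) -> int:
    """Number of uses at which one reusable component beats repeated
    single-use development."""
    reusable_cost = single_use_cost * reusable_multiplier
    return math.ceil(reusable_cost / single_use_cost)

print(break_even_uses(100.0))  # -> 3: pays off from the third use onward
```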
Reuse is an important discipline that has an impact on the efficiency of all workflows and the
quality of most artifacts.
COMMERCIAL COMPONENTS
A common approach being pursued today in many domains is to maximize integration of
commercial components and off-the-shelf products. While the use of commercial components is
certainly desirable as a means of reducing custom development, it has not proven to be
straightforward in practice. Table 3-3 identifies some of the advantages and disadvantages of
using commercial components.
Software project managers need many leadership qualities in order to enhance team
effectiveness. The following are some crucial attributes of successful software project managers
that deserve much more attention:
1. Hiring skills. Few decisions are as important as hiring decisions. Placing the right
person in the right job seems obvious but is surprisingly hard to achieve.
2. Customer-interface skill. Avoiding adversarial relationships among stakeholders is a
prerequisite for success.
3. Decision-making skill. The jillion books written about management have failed to
provide a clear definition of this attribute. We all know a good leader when we run
into one, and decision-making skill seems obvious despite its intangible definition.
4. Team-building skill. Teamwork requires that a manager establish trust, motivate
progress, exploit eccentric prima donnas, transition average people into top
performers, eliminate misfits, and consolidate diverse opinions into a team direction.
5. Selling skill. Successful project managers must sell all stakeholders (including
themselves) on decisions and priorities, sell candidates on job positions, sell changes
to the status quo in the face of resistance, and sell achievements against objectives. In
practice, selling requires continuous negotiation, compromise, and empathy.
Coding and unit testing activities consume about 50% of software development
effort and schedule.
Test activities can consume as much as 50% of a project's resources.
Configuration control and change management are critical activities that can
consume as much as 25% of resources on a large-scale project.
Documentation activities can consume more than 30% of project
engineering resources.
Project management, business administration, and progress assessment can
consume as much as 30% of project budgets.
Key practices that improve overall software quality include the following:
Focusing on driving requirements and critical use cases early in the life cycle, focusing
on requirements completeness and traceability late in the life cycle, and focusing
throughout the life cycle on a balance between requirements evolution, design
evolution, and plan evolution
Using metrics and indicators to measure the progress and quality of an architecture as it
evolves from a high-level prototype into a fully compliant product
Providing integrated life-cycle environments that support early and continuous
configuration control, change management, rigorous design methods, document
automation, and regression test automation
Using visual modeling and higher level languages that support architectural control,
abstraction, reliable programming, reuse, and self-documentation
Early and continuous insight into performance issues through demonstration-based
evaluations
Conventional development processes stressed early sizing and timing estimates of computer
program resource utilization. However, the typical chronology of events in performance
assessment was as follows
• Project inception. The proposed design was asserted to be low risk with adequate
performance margin.
• Initial design review. Optimistic assessments of adequate design margin were based
mostly on paper analysis or rough simulation of the critical threads. In most cases, the
actual application algorithms and database sizes were fairly well understood.
• Mid-life-cycle design review. The assessments started whittling away at the margin, as
early benchmarks and initial tests began exposing the optimism inherent in earlier
estimates.
• Integration and test. Serious performance problems were uncovered, necessitating
fundamental changes in the architecture. The underlying infrastructure was usually the
scapegoat, but the real culprit was immature use of the infrastructure, immature
architectural solutions, or poorly understood early design trade-offs.
12. Good management is more important than good technology. Good management motivates
people to do their best, but there are no universal "right" styles of management.
13. People are the key to success. Highly skilled people with appropriate experience, talent, and
training are key.
14. Follow with care. Just because everybody is doing something does not make it right for you. It
may be right, but you must carefully assess its applicability to your environment.
15. Take responsibility. When a bridge collapses we ask, "What did the engineers do wrong?"
Even when software fails, we rarely ask this. The fact is that in any engineering discipline, the
best methods can be used to produce awful designs, and the most antiquated methods to produce
elegant designs.
16. Understand the customer's priorities. It is possible the customer would tolerate 90% of the
functionality delivered late if they could have 10% of it on time.
17. The more they see, the more they need. The more functionality (or performance) you provide
a user, the more functionality (or performance) the user wants.
18. Plan to throw one away. One of the most important critical success factors is whether or not a
product is entirely new. Such brand-new applications, architectures, interfaces, or algorithms
rarely work the first time.
19. Design for change. The architectures, components, and specification techniques you use must
accommodate change.
20. Design without documentation is not design. I have often heard software engineers say, "I
have finished the design. All that is left is the documentation."
21. Use tools, but be realistic. Software tools make their users more efficient.
22. Avoid tricks. Many programmers love to create programs with tricky constructs that perform
a function correctly, but in an obscure way. Show the world how smart you are by avoiding
tricky code.
23. Encapsulate. Information-hiding is a simple, proven concept that results in software that
is easier to test and much easier to maintain.
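As a small illustration of the encapsulation principle, consider the following sketch (the Counter class is a hypothetical example, not from the text): callers depend only on the public methods, so the hidden representation can change without breaking them.

```python
# Illustration of principle 23 (information hiding): the internal
# representation is hidden behind a small public interface.
class Counter:
    def __init__(self):
        self._count = 0  # internal detail; callers never touch it

    def increment(self):
        self._count += 1

    def value(self):
        return self._count

c = Counter()
c.increment()
c.increment()
print(c.value())  # 2
```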
24. Use coupling and cohesion. Coupling and cohesion are the best ways to measure
software's inherent maintainability and adaptability.
25. Use the McCabe complexity measure. Although there are many metrics available to report
the inherent complexity of software, none is as intuitive and easy to use as Tom McCabe's.
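McCabe's measure can be computed directly from a control-flow graph as v(G) = E - N + 2P (edges, nodes, connected components), or equivalently as the number of decision points plus one. A minimal sketch, using a made-up graph for a loop containing an if/else:

```python
# Cyclomatic complexity v(G) = E - N + 2P for a control-flow graph
# with E edges, N nodes, and P connected components.
def cyclomatic_complexity(edges, nodes, components=1):
    return len(edges) - len(nodes) + 2 * components

# Hypothetical example: an if/else nested inside a loop.
nodes = ["entry", "loop", "if", "then", "else", "join", "exit"]
edges = [("entry", "loop"), ("loop", "if"), ("if", "then"),
         ("if", "else"), ("then", "join"), ("else", "join"),
         ("join", "loop"), ("loop", "exit")]

print(cyclomatic_complexity(edges, nodes))  # 8 - 7 + 2 = 3
```

The result, 3, matches the decision-point count (the loop and the if) plus one.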
26. Don't test your own software. Software developers should never be the primary testers
of their own software.
27. Analyze causes for errors. It is far more cost-effective to reduce the effect of an error by
preventing it than it is to find and fix it. One way to do this is to analyze the causes of errors as they
are detected.
28. Realize that software's entropy increases. Any software system that undergoes continuous
change will grow in complexity and will become more and more disorganized.
29. People and time are not interchangeable. Measuring a project solely by person-months makes
little sense.
30. Expect excellence. Your employees will do much better if you have high expectations for them.
Top 10 principles of modern software management are as follows. (The first five, which are the
main themes of my definition of an iterative process, are summarized in Figure 4-1.)
Base the process on an architecture-first approach. This requires that a demonstrable
balance be achieved among the driving requirements, the architecturally significant design
decisions, and the life-cycle plans before the resources are committed for full-scale
development.
Establish an iterative life-cycle process that confronts risk early. With today's sophisticated
software systems, it is not possible to define the entire problem, design the entire solution,
build the software, and then test the end product in sequence. Instead, an iterative process
that refines the problem understanding, an effective solution, and an effective plan over
several iterations encourages a balanced treatment of all stakeholder objectives. Major risks
must be addressed early to increase predictability and avoid expensive downstream scrap
and rework.
Transition design methods to emphasize component-based development. Moving from a
line-of-code mentality to a component-based mentality is necessary to reduce the amount
of human-generated source code and custom development.
Table 4-1 maps top 10 risks of the conventional process to the key attributes and principles
of a modern process.
The five process exponent parameters of COCOMO II are precedentedness, process flexibility,
architecture risk resolution, team cohesion, and software process maturity.
The following paragraphs map the process exponent parameters of COCOMO II to my top
10 principles of a modern process.
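For reference, the COCOMO II effort equation that these exponent parameters feed into is PM = A * Size^E, where E = B + 0.01 * sum(scale factors). The sketch below uses the published COCOMO II.2000 calibration constants and nominal scale-factor ratings; the figures are illustrative, not a real project estimate.

```python
# Sketch of the COCOMO II effort equation the text refers to:
#   PM = A * Size^E,  E = B + 0.01 * sum(scale_factors)
# A and B are the COCOMO II.2000 calibration constants.
A, B = 2.94, 0.91

def effort_person_months(ksloc, scale_factors):
    exponent = B + 0.01 * sum(scale_factors)
    return A * ksloc ** exponent

# The five scale factors named above at their nominal ratings:
# precedentedness, flexibility, architecture/risk resolution,
# team cohesion, and process maturity.
NOMINAL_SF = [3.72, 3.04, 4.24, 3.29, 4.68]

print(round(effort_person_months(100, NOMINAL_SF), 1))
```

Because the exponent exceeds 1.0 at nominal ratings, doubling the size more than doubles the effort; improving the five parameters lowers the exponent and yields the economies of scale discussed below.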
To achieve economies of scale and higher returns on investment, we must move toward a
software manufacturing process driven by technological improvements in process automation
and component-based development. Two stages of the life cycle are:
1. The engineering stage, driven by less predictable but smaller teams doing design
and synthesis activities
2. The production stage, driven by more predictable but larger teams
doing construction, test, and deployment activities
The transition between engineering and production is a crucial event for the
various stakeholders. The production plan has been agreed upon, and there is a
good enough understanding of the problem and the solution that all stakeholders
can make a firm commitment to go ahead with production.
The engineering stage is decomposed into two distinct phases, inception and
elaboration, and the production stage into construction and transition. These four
phases of the life-cycle process are loosely mapped to the conceptual framework of
the spiral model, as shown in Figure 5-1.
INCEPTION PHASE
The overriding goal of the inception phase is to achieve concurrence among stakeholders on
the life-cycle objectives for the project.
PRIMARY OBJECTIVES
• Establishing the project's software scope and boundary conditions, including an
operational concept, acceptance criteria, and a clear understanding of what is and is not
intended to be in the product
• Discriminating the critical use cases of the system and the primary scenarios
of operation that will drive the major design trade-offs
• Demonstrating at least one candidate architecture against some of the
primary scenarios
• Estimating the cost and schedule for the entire project (including detailed estimates
for the elaboration phase)
• Estimating potential risks (sources of unpredictability)
ESSENTIAL ACTIVITIES
Formulating the scope of the project. The information repository should be
sufficient to define the problem space and derive the acceptance criteria for the end
product.
Synthesizing the architecture. An information repository is created that is sufficient to
Dept of CSE Page 77
Teegala Krishna Reddy Engineering College
Software process and project management
demonstrate the feasibility of at least one candidate architecture and an initial baseline
of make/buy decisions so that the cost, schedule, and resource estimates can be derived.
Planning and preparing a business case. Alternatives for risk management, staffing,
iteration plans, and cost/schedule/profitability trade-offs are evaluated.
PRIMARY EVALUATION CRITERIA
• Do all stakeholders concur on the scope definition and cost and schedule estimates?
• Are requirements understood, as evidenced by the fidelity of the critical use cases?
• Are the cost and schedule estimates, priorities, risks, and development processes
credible?
• Do the depth and breadth of an architecture prototype demonstrate the preceding
criteria? (The primary value of prototyping candidate architecture is to provide a
vehicle for understanding the scope and assessing the credibility of the development
group in solving the particular technical problem.)
• Are actual resource expenditures versus planned expenditures acceptable?
ELABORATION PHASE
At the end of this phase, the "engineering" is considered complete. The elaboration phase
activities must ensure that the architecture, requirements, and plans are stable enough, and the
risks sufficiently mitigated, that the cost and schedule for the completion of the development can
be predicted within an acceptable range. During the elaboration phase, an executable architecture
prototype is built in one or more iterations, depending on the scope, size, and risk.
PRIMARY OBJECTIVES
• Baselining the architecture as rapidly as practical (establishing a configuration-
managed snapshot in which all changes are rationalized, tracked, and maintained)
• Baselining the vision
• Baselining a high-fidelity plan for the construction phase
• Demonstrating that the baseline architecture will support the vision at a reasonable cost
in a reasonable time
ESSENTIAL ACTIVITIES
• Elaborating the vision.
• Elaborating the process and infrastructure.
• Elaborating the architecture and selecting components.
PRIMARY EVALUATION CRITERIA
• Is the construction phase plan of sufficient fidelity, and is it backed up with a credible
basis of estimate?
• Do all stakeholders agree that the current vision can be met if the current plan is
executed to develop the complete system in the context of the current architecture?
• Are actual resource expenditures versus planned expenditures acceptable?
CONSTRUCTION PHASE
During the construction phase, all remaining components and application features are integrated into the
application, and all features are thoroughly tested. Newly developed software is integrated where required. The
construction phase represents a production process, in which emphasis is placed on managing resources and
controlling operations to optimize costs, schedules, and quality.
PRIMARY OBJECTIVES
• Minimizing development costs by optimizing resources and avoiding
unnecessary scrap and rework
• Achieving adequate quality as rapidly as practical
• Achieving useful versions (alpha, beta, and other test releases) as rapidly as practical
ESSENTIAL ACTIVITIES
• Resource management, control, and process optimization
• Complete component development and testing against evaluation criteria
• Assessment of product releases against acceptance criteria of the vision
TRANSITION PHASE
The transition phase is entered when a baseline is mature enough to be deployed in the end-user
domain. This typically requires that a usable subset of the system has been achieved with
acceptable quality levels and user documentation so that transition to the user will provide
positive results. This phase could include activities such as beta testing to validate the new
system against user expectations, conversion of operational databases, and training of users
and maintainers.
PRIMARY OBJECTIVES
• Achieving user self-supportability
• Achieving stakeholder concurrence that deployment baselines are complete
and consistent with the evaluation criteria of the vision
• Achieving final product baselines as rapidly and cost-effectively as practical
ESSENTIAL ACTIVITIES
• Synchronization and integration of concurrent construction increments into
consistent deployment baselines
• Deployment-specific engineering (cutover, commercial packaging and
production, sales rollout kit development, field personnel training)
• Assessment of deployment baselines against the complete vision and
acceptance criteria in the requirements set
EVALUATION CRITERIA
• Is the user satisfied?
• Are actual resource expenditures versus planned expenditures acceptable?
Requirements Set
Requirements artifacts are evaluated, assessed, and measured through a combination of the
following:
Design Set
UML notation is used to engineer the design models for the solution. The design set
contains varying levels of abstraction that represent the components of the solution space (their
identities, attributes, static relationships, dynamic interactions). The design set is evaluated,
assessed, and measured through a combination of the following:
• Analysis of the internal consistency and quality of the design model
• Analysis of consistency with the requirements models
• Translation into implementation and deployment sets and notations (for example,
traceability, source code generation, compilation, linking) to evaluate the consistency
and completeness and the semantic balance between information in the sets
• Analysis of changes between the current version of the design model and previous
versions (scrap, rework, and defect elimination trends)
• Subjective review of other dimensions of quality
Implementation set
The implementation set includes source code (programming language notations) that represents
the tangible implementations of components (their form, interface, and dependency
relationships)
Implementation sets are human-readable formats that are evaluated, assessed, and measured
through a combination of the following:
• Analysis of consistency with the design models
• Translation into deployment set notations (for example, compilation and linking)
to evaluate the consistency and completeness among artifact sets
• Assessment of component source or executable files against relevant
evaluation criteria through inspection, analysis, demonstration, or testing
• Execution of stand-alone component test cases that automatically compare
expected results with actual results
• Analysis of changes between the current version of the implementation set
and previous versions (scrap, rework, and defect elimination trends)
• Subjective review of other dimensions of quality
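The stand-alone component test cases that automatically compare expected results with actual results can be as simple as a test driver of the following shape. The median component here is a hypothetical stand-in for a real implementation-set component.

```python
# A minimal sketch of a stand-alone component test driver that
# compares expected results with actual results automatically.
# The component under test (median) is hypothetical.
def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def run_test_cases(cases):
    """Each case is (input, expected); return the list of failures."""
    failures = []
    for args, expected in cases:
        actual = median(args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures

cases = [([3, 1, 2], 2), ([4, 1, 3, 2], 2.5)]
print("PASS" if not run_test_cases(cases) else "FAIL")
```

The output of such a driver (the failures list, or a pass/fail report file) is exactly the kind of test-report artifact the deployment set captures.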
Deployment Set
The deployment set includes user deliverables and machine language notations, executable
software, and the build scripts, installation scripts, and executable target specific data necessary
to use the product in its target environment.
Deployment sets are evaluated, assessed, and measured through a combination of the following:
• Testing against the usage scenarios and quality attributes defined in the requirements
set to evaluate the consistency and completeness and the semantic balance between
information in the two sets
• Testing the partitioning, replication, and allocation strategies in mapping components
of the implementation set to physical resources of the deployment system (platform
type, number, network topology)
• Testing against the defined usage scenarios in the user manual, such as installation,
user-oriented dynamic reconfiguration, mainstream usage, and anomaly management
• Analysis of changes between the current version of the deployment set and previous
versions (defect elimination trends, performance changes)
• Subjective review of other dimensions of quality
Each artifact set is the predominant development focus of one phase of the life cycle; the other
sets take on check and balance roles. As illustrated in Figure 6-2, each phase has a predominant
focus: Requirements are the focus of the inception phase; design, the elaboration phase;
implementation, the construction phase; and deployment, the transition phase. The management
artifacts also evolve, but at a fairly constant level across the life cycle.
Most of today's software development tools map closely to one of the five artifact sets.
1. Management: scheduling, workflow, defect tracking, change
management, documentation, spreadsheet, resource management, and
presentation tools
2. Requirements: requirements management tools
3. Design: visual modeling tools
4. Implementation: compiler/debugger tools, code analysis tools, test coverage
analysis tools, and test management tools
5. Deployment: test coverage and test automation tools, network management tools,
commercial components (operating systems, GUIs, RDBMS, networks, middleware), and
installation tools.
TEST ARTIFACTS
• The test artifacts must be developed concurrently with the product from inception
through deployment. Thus, testing is a full-life-cycle activity, not a late life-cycle
activity.
• The test artifacts are communicated, engineered, and developed within the
same artifact sets as the developed product.
• The test artifacts are implemented in programmable and repeatable formats
(as software programs).
• The test artifacts are documented in the same way that the product is documented.
• Developers of the test artifacts use the same tools, techniques, and training as
the software engineers developing the product.
Test artifact subsets are highly project-specific; the following example clarifies the relationship
between test artifacts and the other artifact sets. Consider a project to perform seismic data
processing for the purpose of oil exploration. This system has three fundamental subsystems: (1)
a sensor subsystem that captures raw seismic data in real time and delivers these data to (2) a
technical operations subsystem that converts raw data into an organized database and manages
queries to this database from (3) a display subsystem that allows workstation operators to
examine seismic data in human-readable form. Such a system would result in the following test
artifacts:
Management set. The release specifications and release descriptions capture the
objectives, evaluation criteria, and results of an intermediate milestone. These
artifacts
are the test plans and test results negotiated among internal project teams. The
software change orders capture test results (defects, testability changes, requirements
ambiguities, enhancements) and the closure criteria associated with making a discrete
change to a baseline.
Requirements set. The system-level use cases capture the operational concept for the
system and the acceptance test case descriptions, including the expected behavior of
the system and its quality attributes. The entire requirement set is a test artifact
because it is the basis of all assessment activities across the life cycle.
Design set. A test model for nondeliverable components needed to test the product
baselines is captured in the design set. These components include such design set
artifacts as a seismic event simulation for creating realistic sensor data; a "virtual
operator" that can support unattended, after-hours test cases; specific instrumentation
suites for early demonstration of resource usage; transaction rates or response times;
and use case test drivers and component stand-alone test drivers.
Implementation set. Self-documenting source code representations for test
components and test drivers provide the equivalent of test procedures and test scripts.
These source files may also include human-readable data files representing certain
statically defined data sets that are explicit test source files. Output files from test
drivers provide the equivalent of test reports.
Deployment set. Executable versions of test components, test drivers, and data
files are provided.
MANAGEMENT ARTIFACTS
The management set includes several artifacts that capture intermediate results and
ancillary information necessary to document the product/process legacy, maintain the
product, improve the product, and improve the process.
Business Case
The business case artifact provides all the information necessary to determine whether the
project is worth investing in. It details the expected revenue, expected cost, technical and
management plans, and backup data necessary to demonstrate the risks and realism of the
plans. The main purpose is to transform the vision into economic terms so that an
organization can make an accurate ROI assessment. The financial forecasts are
evolutionary, updated with more accurate forecasts as the life cycle progresses. Figure 6-4
provides a default outline for a business case.
Release Specifications
The scope, plan, and objective evaluation criteria for each baseline release are
derived from the vision statement as well as many other sources (make/buy
analyses, risk management concerns, architectural considerations, shots in the
dark, implementation constraints, quality thresholds). These artifacts are intended
to evolve along with the process, achieving greater fidelity as the life cycle
progresses and requirements understanding matures. Figure 6-6 provides a default
outline for a release specification.
Release Descriptions
Release description documents describe the results of each release, including performance
against each of the evaluation criteria in the corresponding release specification. Release
baselines should be accompanied by a release description document that describes the evaluation
criteria for that configuration baseline and provides substantiation (through demonstration,
testing, inspection, or analysis) that each criterion has been addressed in an acceptable manner.
Figure 6-7 provides a default outline for a release description.
Status Assessments
Environment
An important emphasis of a modern approach is to define the development and
maintenance environment as a first-class artifact of the process. A robust,
integrated development environment must support automation of the development
process. This environment should include requirements management, visual
modeling, document automation, host and target programming tools, automated
regression testing, continuous and integrated change management, and feature
and defect tracking.
Deployment
A deployment document can take many forms. Depending on the project, it could include several
document subsets for transitioning the product into operational status. In big contractual efforts
in which the system is delivered to a separate maintenance organization, deployment artifacts
may include computer system operations manuals, software installation manuals, plans and
procedures for cutover (from a legacy system), site surveys, and so forth. For commercial
software products, deployment artifacts may include marketing plans, sales rollout kits, and
training courses.
ENGINEERING ARTIFACTS
Most of the engineering artifacts are captured in rigorous engineering notations such as UML,
programming languages, or executable machine codes. Three engineering artifacts are explicitly
intended for more general review, and they deserve further elaboration.
Vision Document
The vision document provides a complete vision for the software system under development and
supports the contract between the funding authority and the development organization. A project
vision is meant to be changeable as understanding evolves of the requirements, architecture,
plans, and technology. A good vision document should change slowly. Figure 6-9 provides a
default outline for a vision document.
Architecture Description
The architecture description provides an organized view of the software architecture under
development. It is extracted largely from the design model and includes views of the design,
implementation, and deployment sets sufficient to understand how the operational concept of the
requirements set will be achieved. The breadth of the architecture description will vary from
project to project depending on many factors. Figure 6-10 provides a default outline for an
architecture description.
People want to review information but don't understand the language of the artifact.
Many interested reviewers of a particular artifact will resist having to learn the engineering
language in which the artifact is written. It is not uncommon to find people (such as veteran
software managers, veteran quality assurance specialists, or an auditing authority from a
regulatory agency) who react as follows: "I'm not going to learn UML, but I want to review the design
of this software, so give me a separate description such as some flowcharts and text that I can
understand."
People want to review the information but don't have access to the tools. It is not very
common for the development organization to be fully tooled; it is extremely rare that the other
stakeholders have any capability to review the engineering artifacts on-line. Consequently,
organizations are forced to exchange paper documents. Standardized formats (such as UML,
spreadsheets, Visual Basic, C++, and Ada 95), visualization tools, and the Web are rapidly
changing this situation.
Human-readable engineering artifacts should use rigorous notations that are complete,
consistent, and used in a self-documenting manner. Properly spelled English words should be
used for all identifiers and descriptions. Acronyms and abbreviations should be used only where
they are well accepted jargon in the context of the component's usage. Readability should be
emphasized and the use of proper English words should be required in all engineering artifacts.
This practice enables understandable representations, browsable formats (paperless review),
more-rigorous notations, and reduced error rates.
Useful documentation is self-defining: It is documentation that gets used.
Paper is tangible; electronic artifacts are too easy to change. On-line and Web-based
artifacts can be changed easily and are viewed with more skepticism because of their inherent
volatility.
A mature process, an understanding of the primary requirements, and a demonstrable
architecture are important prerequisites for predictable planning.
Architecture development and process definition are the intellectual steps that map the
problem to a solution without violating the constraints; they require human innovation
and cannot be automated.
The deployment view addresses the executable realization of the system, including the
allocation of logical processes in the distribution view (the logical software topology)
to physical resources of the deployment network (the physical system topology). It is
modeled statically using deployment diagrams, and dynamically using any of the
UML behavioral diagrams.
Generally, an architecture baseline should include the following:
• Requirements: critical use cases, system-level quality objectives, and
priority relationships among features and qualities
• Design: names, attributes, structures, behaviors, groupings, and relationships
of significant classes and components
• Implementation: source component inventory and bill of materials (number,
name, purpose, cost) of all primitive components
• Deployment: executable components sufficient to demonstrate the critical use
cases and the risk associated with achieving the system qualities
UNIT III
Workflows and Checkpoints of process
Software process workflows, Iteration workflows, Major milestones, minor milestones, periodic
status assessments.
Process Planning
Work breakdown structures, Planning guidelines, cost and schedule estimating process, iteration
planning process, Pragmatic planning.
Table 8-1 shows the allocation of artifacts and the emphasis of each workflow in each of the life-
cycle phases of inception, elaboration, construction, and transition.
ITERATION WORKFLOWS
Iteration consists of a loosely sequential set of activities in various proportions, depending on
where the iteration is located in the development cycle. Each iteration is defined in terms of a set
of allocated usage scenarios. An individual iteration's workflow, illustrated in Figure 8-2,
generally includes the following sequence:
• Management: iteration planning to determine the content of the release and develop the
detailed plan for the iteration; assignment of work packages, or tasks, to the
development team
• Environment: evolving the software change order database to reflect all new baselines
and changes to existing baselines for all product, test, and environment components
• Requirements: analyzing the baseline plan, the baseline architecture, and the baseline
requirements set artifacts to elaborate fully the use cases to be demonstrated at the
end of this iteration and their evaluation criteria; updating any requirements set
artifacts to reflect changes necessitated by results of this iteration's engineering
activities
• Design: evolving the baseline architecture and the baseline design set artifacts to
elaborate fully the design model and test model components necessary to demonstrate
against the evaluation criteria allocated to this iteration; updating design set artifacts
• Assessment: evaluating the results of the iteration, including compliance with the
allocated evaluation criteria and the quality of the current baselines; identifying any
rework required and determining whether it should be performed before deployment
of this release or allocated to the next release; assessing results to improve the basis of
the subsequent iteration's plan
• Deployment: transitioning the release either to an external organization (such as a
user, independent verification and validation contractor, or regulatory agency) or to
internal closure by conducting a post-mortem so that lessons learned can be captured
and reflected in the next iteration
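The iteration workflow above is essentially a fixed sequence of activities, which a planning tool could carry as data. A sketch (the activity summaries paraphrase the text; the data structure itself is illustrative):

```python
# Sketch: the iteration workflow sequence captured as data, so a
# plan generator could emit a checklist for each iteration.
# Summaries paraphrase the workflow descriptions in the text.
ITERATION_WORKFLOW = [
    ("management", "plan iteration content; assign work packages"),
    ("environment", "evolve change-order database for new baselines"),
    ("requirements", "elaborate use cases and evaluation criteria"),
    ("design", "evolve design and test model components"),
    ("assessment", "evaluate results against evaluation criteria"),
    ("deployment", "transition release externally or close internally"),
]

def checklist(iteration_name):
    """Return one checklist line per workflow activity."""
    return [f"{iteration_name}: {activity} - {summary}"
            for activity, summary in ITERATION_WORKFLOW]

for line in checklist("Iteration 3"):
    print(line)
```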
Iterations in the inception and elaboration phases focus on management, requirements, and
design activities. Iterations in the construction phase focus on design, implementation, and
assessment. Iterations in the transition phase focus on assessment and deployment. Figure 8-3
shows the emphasis on different activities across the life cycle. An iteration represents the state
of the overall architecture and the complete deliverable system. An increment represents the
current progress that will be combined with the preceding iteration to form the next iteration.
Figure 8-4, an example of a simple development life cycle, illustrates the differences between
iterations and increments.
Checkpoints of the process: Major milestones, Minor milestones, Periodic status assessments.
Iterative Process Planning: Work breakdown structures, planning guidelines, cost and
schedule estimating, Iteration planning process, Pragmatic planning.
1. Major milestones. These system-wide events are held at the end of each development
phase. They provide visibility to system-wide issues, synchronize the management
and engineering perspectives, and verify that the aims of the phase have been
achieved.
2. Minor milestones. These iteration-focused events are conducted to review the
content of an iteration in detail and to authorize continued work.
3. Status assessments. These periodic events provide management with frequent
and regular insight into the progress being made.
Each of the four phases-inception, elaboration, construction, and transition consists of one or
more iterations and concludes with a major milestone when a planned technical capability is
produced in demonstrable form. An iteration represents a cycle of activities for which there is a
well-defined intermediate result (a minor milestone) captured with two artifacts: a release
specification (the evaluation criteria and plan) and a release description (the results). Major
milestones at the end of each phase use formal, stakeholder-approved evaluation criteria and
release descriptions; minor milestones use informal, development-team-controlled versions of
these artifacts.
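The phase and milestone structure described above can be summarized as a simple table. A sketch, using the standard names for the four major milestones (the phase-to-milestone pairing follows the life-cycle model in the text):

```python
# Illustrative mapping of the four phases to the major milestone
# that concludes each one, plus the two artifacts that capture
# every minor (iteration) milestone.
PHASES = {
    "inception":    "life-cycle objectives milestone",
    "elaboration":  "life-cycle architecture milestone",
    "construction": "initial operational capability milestone",
    "transition":   "product release milestone",
}

def minor_milestone_artifacts():
    # Each iteration is captured with a release specification
    # (evaluation criteria and plan) and a release description
    # (the results).
    return ("release specification", "release description")

for phase, milestone in PHASES.items():
    print(f"{phase} -> {milestone}")
```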
Figure 9-1 illustrates a typical sequence of project checkpoints for a relatively large project.
MAJOR MILESTONES
The four major milestones occur at the transition points between life-cycle phases. They can be
used in many different process models, including the conventional waterfall model. In an
iterative model, the major milestones are used to achieve concurrence among all stakeholders
on the current state of the project. Different stakeholders have very different concerns:
The technical data listed in Figure 9-2 should have been reviewed by the time of the life-cycle
architecture milestone. Figure 9-3 provides default agendas for this milestone.
The product release milestone assesses the completion of the software and its transition to the
support organization, if any. The results of acceptance testing are reviewed, and all open issues
are addressed. Software quality metrics are reviewed to determine whether quality is sufficient
for transition to the support organization.
MINOR MILESTONES
For most iterations, which have a one-month to six-month duration, only two minor milestones
are needed: the iteration readiness review and the iteration assessment review.
Iteration Readiness Review. This informal milestone is conducted at the start of each
iteration to review the detailed iteration plan and the evaluation criteria that have been
allocated to this iteration.
Iteration Assessment Review. This informal milestone is conducted at the end of each
iteration to assess the degree to which the iteration achieved its objectives and
satisfied its evaluation criteria, to review iteration results, to review qualification test
results (if part of the iteration), to determine the amount of rework to be done, and to
review the impact of the iteration results on the plan for subsequent iterations.
The format and content of these minor milestones tend to be highly dependent on the project
and the organizational culture. Figure 9-4 identifies the various minor milestones to be
considered when a project is being planned.
PERIODIC STATUS ASSESSMENTS
Periodic status assessments provide the following:
• A mechanism for openly addressing, communicating, and resolving management issues, technical issues, and project risks
• Objective data derived directly from on-going activities and evolving product configurations
• A mechanism for disseminating process, progress, quality trends, practices, and experience information to and from all stakeholders in an open forum
Periodic status assessments are crucial for focusing continuous attention on the evolving
health of the project and its dynamic priorities. They force the software project manager to
collect and review the data periodically, force outside peer review, and encourage dissemination
of best practices to and from other stakeholders.
The default content of periodic status assessments should include the topics identified in Table
9-2.
Figure 10-1 Conventional work breakdown structure, following the product hierarchy:
Management
System requirement and design
Subsystem 1
   Component 11
      Requirements
      Design
      Code
      Test
      Documentation
   (similar breakdown for the other components and subsystems)
Integration and test
   Test planning
   Test procedure preparation
   Testing
   Test reports
Other support areas
   Configuration control
   Quality assurance
   System administration
Precedent experience. Very few projects start with a clean slate; most are shaped by the
precedents of earlier systems and organizational practice.
A default WBS is organized around the workflows and life-cycle phases. An excerpt of its
element coding:
   CD Transition phase requirements maintenance
D Design
   DA Inception phase architecture prototyping
   DB Elaboration phase architecture baselining
      DBA Architecture design modeling
      DBB Design demonstration planning and conduct
      DBC Software architecture description
   DC Construction phase design modeling
      DCA Architecture design model maintenance
      DCB Component design modeling
   DD Transition phase design maintenance
E Implementation
   EA Inception phase component prototyping
   EB Elaboration phase component implementation
      EBA Critical component coding demonstration integration
   EC Construction phase component implementation
      ECA Initial release(s) component coding and stand-alone testing
      ECB Alpha release component coding and stand-alone testing
      ECC Beta release component coding and stand-alone testing
      ECD Component maintenance
F Assessment
Dept of CSE Page 123
Teegala Krishna Reddy Engineering College
Software process and project management
   FA Inception phase assessment
   FB Elaboration phase assessment
      FBA Test modeling
      FBB Architecture test scenario implementation
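The hierarchical element codes above (D, DB, DBA, ...) form a prefix-coded tree: each child code extends its parent's code. A minimal sketch of a bottom-up cost rollup over such a coding scheme, with purely illustrative leaf estimates:

```python
# Roll up leaf-level cost estimates through a prefix-coded WBS
# (e.g. DBA is a child of DB, which is a child of D).

def rollup(leaf_costs):
    """Sum leaf estimates into every ancestor WBS code."""
    totals = {}
    for code, cost in leaf_costs.items():
        # Credit the leaf itself and every prefix ancestor (DBA -> DB -> D).
        for end in range(1, len(code) + 1):
            prefix = code[:end]
            totals[prefix] = totals.get(prefix, 0) + cost
    return totals

# Illustrative leaf estimates (staff-months) for part of the design element.
leaves = {"DBA": 4, "DBB": 2, "DBC": 1, "DCA": 3, "DCB": 5}
totals = rollup(leaves)
print(totals["DB"])  # 7: the architecture baselining subtree
print(totals["D"])   # 15: the entire design element
```

The same rollup supports the bottom-up estimating step described later: summing detailed task estimates into higher-level budgets.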
M.THANMAYEE Page 115
PLANNING GUIDELINES
Software projects span a broad range of application domains, so it is valuable but risky
to make specific planning recommendations independent of project context: such
guidelines may be adopted blindly, without being adapted to specific project
circumstances. Two simple planning guidelines should be considered when a project plan
is being initiated or assessed. The first guideline, detailed in Table 10-1, prescribes a
default allocation of costs among the first-level WBS elements. The second guideline,
detailed in Table 10-2, prescribes the allocation of effort and schedule across the
life-cycle phases.
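As a sketch of how the first guideline is applied, the function below spreads a total budget across first-level WBS elements according to a default percentage table. The percentages shown are illustrative placeholders, not a reproduction of Table 10-1:

```python
# Distribute a total budget across first-level WBS elements using a
# default allocation table. The percentages here are illustrative only.

DEFAULT_ALLOCATION = {
    "Management": 10, "Environment": 10, "Requirements": 10,
    "Design": 15, "Implementation": 25, "Assessment": 25, "Deployment": 5,
}

def allocate(total_cost, allocation=DEFAULT_ALLOCATION):
    assert sum(allocation.values()) == 100, "percentages must total 100"
    return {elem: total_cost * pct / 100 for elem, pct in allocation.items()}

budget = allocate(2_000_000)  # e.g. a $2M project
print(budget["Design"])          # 300000.0
print(budget["Implementation"])  # 500000.0
```

A real plan would replace the default table with percentages tuned to the project's domain and scale; the guideline is a starting point for assessment, not a prescription.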
The second perspective is a backward-looking, bottom-up approach. We start with the end in
mind, analyze the micro-level budgets and schedules, then sum all these elements into the
higher level budgets and intermediate milestones. This approach tends to define and
populate the WBS from the lowest levels upward. From this perspective, the following
planning sequence would occur:
1. The lowest level WBS elements are elaborated into detailed tasks.
2. Estimates are combined and integrated into higher level budgets and milestones.
3. Comparisons are made with the top-down budgets and schedule milestones.
Milestone scheduling or budget allocation through top-down estimating tends to exaggerate the
project management biases and usually results in an overly optimistic plan. Bottom-up estimates
usually exaggerate the performer biases and result in an overly pessimistic plan.
These two planning approaches should be used together, in balance, throughout the life
cycle of the project. During the engineering stage, the top-down perspective dominates
because there is usually neither enough depth of understanding nor enough stability in the
detailed task sequences to perform credible bottom-up planning. During the production stage,
there should be enough precedent experience and planning fidelity that the bottom-up planning
perspective dominates. By then, the top-down approach will have been well tuned to the
project-specific parameters, so it should be used more as a global assessment technique.
Figure 10-4 illustrates this life-cycle planning balance.
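Step 3 of the bottom-up sequence, comparing the bottom-up rollup against the top-down budget, can be sketched as follows (all numbers and task names are hypothetical):

```python
# Compare a top-down milestone budget with the rollup of bottom-up task
# estimates, and report the relative gap the planners must reconcile.

def reconcile(top_down, bottom_up_tasks):
    bottom_up = sum(bottom_up_tasks.values())
    gap = (bottom_up - top_down) / top_down  # positive: bottom-up is higher
    return bottom_up, gap

# Top-down estimates tend to be optimistic; bottom-up ones pessimistic.
top_down = 100  # staff-months, from macro-level parameters
tasks = {"design model": 30, "component coding": 45, "testing": 40}
bottom_up, gap = reconcile(top_down, tasks)
print(bottom_up)      # 115
print(f"{gap:+.0%}")  # +15%
```

In practice the gap is closed iteratively: scope, schedule, or budget is adjusted until the two perspectives converge within an acceptable tolerance.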
Figure 10-4 Life-cycle planning balance across releases:
   Engineering stage: macro-level task estimation for production-stage artifacts; micro-level task estimation for engineering artifacts; stakeholder concurrence.
   Production stage: micro-level task estimation for production-stage artifacts; macro-level task estimation for maintenance of engineering artifacts; stakeholder concurrence.
PRAGMATIC PLANNING
Even though good planning is more dynamic in an iterative process, doing it
accurately is far easier. While executing iteration N of any phase, the software
project manager must be monitoring and controlling against a plan that was
initiated in iteration N - 1, and must be planning iteration N + 1. The art of good
project management is to make trade-offs in the current iteration plan and the
next iteration plan based on objective results from the current and previous
iterations. Aside from bad architectures and misunderstood requirements,
inadequate planning (and subsequent bad management) is one of the most
common causes of project failure. Conversely, the success of every successful
project can be attributed in part to good planning.
A project's plan is a definition of how the project requirements will be transformed into a
product within the business constraints. It must be realistic, it must be current, it must be a team
product, it must be understood by the stakeholders, and it must be used. Plans are not just for
managers. The more open and visible the planning process and its results, the more ownership there
is among the team members who need to execute it. Bad, closely held plans cause attrition.
Good, open plans can shape cultures and encourage teamwork.
UNIT IV
Project Organizations
Line-of-business organizations, project organizations, evolution of organizations, process
automation.
Project Control and process instrumentation
The seven core metrics, management indicators, quality indicators, life-cycle expectations,
Pragmatic software metrics, metrics automation.
Software Engineering Process Authority (SEPA)
The SEPA is responsible for maintaining a current assessment of the organization’s process
maturity & its plan for future improvement.
Project Review Authority (PRA)
The PRA is the single individual responsible for ensuring that a software project
complies with all organizational & business unit software policies, practices &
standards
A software Project Manager is responsible for meeting the requirements of a contract or
some other project compliance standard
Software Engineering Environment Authority (SEEA)
The SEEA is responsible for automating the organization’s process, maintaining
the organization’s standard environment, Training projects to use the environment
& maintaining organization-wide reusable assets
The SEEA role is necessary to achieve a significant return on investment (ROI) from a common process.
RojaRamani.Adapa Page 122
Asst.Prof
Dept of CSE
Teegala Krishna Reddy Engineering College
Software process and project management
Infrastructure
An organization’s infrastructure provides human resources support, project-
independent research & development, & other capital software engineering assets.
2) Project organizations:
(Figure: default project organization chart, mapping artifacts, activities, and administration responsibilities to teams)
• The above figure shows a default project organization and maps project-level roles
and responsibilities.
• The main features of the default organization are as follows:
• The project management team is an active participant, responsible for producing
as well as managing.
• The architecture team is responsible for real artifacts and for the integration
of components, not just for staff functions.
• The development team owns the component construction and
maintenance activities.
• The assessment team is separate from development.
• Quality is everyone’s job, integrated into all activities and checkpoints.
• Each team takes responsibility for a different quality perspective.
3) EVOLUTION OF ORGANIZATIONS:
The team balance evolves across the life cycle. The default allocation of effort by phase is:
Inception: software management 50%, software architecture 20%, software development 20%, software assessment (measurement/evaluation) 10%
Elaboration: software management 10%, software architecture 50%, software development 20%, software assessment 20%
Construction: software management 10%, software architecture 10%, software development 50%, software assessment 30%
Transition: software management 10%, software architecture 5%, software development 35%, software assessment 50%
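Using the per-phase percentages above, a minimal sketch of how a fixed-size team would be apportioned in any phase (the team size is hypothetical):

```python
# Distribute a team across disciplines using the per-phase percentages
# given above for the default organization's evolution.

PHASE_ALLOCATION = {
    "inception":    {"management": 50, "architecture": 20, "development": 20, "assessment": 10},
    "elaboration":  {"management": 10, "architecture": 50, "development": 20, "assessment": 20},
    "construction": {"management": 10, "architecture": 10, "development": 50, "assessment": 30},
    "transition":   {"management": 10, "architecture": 5,  "development": 35, "assessment": 50},
}

def staff(phase, team_size):
    """Head-count (possibly fractional) per discipline for one phase."""
    return {d: team_size * pct / 100 for d, pct in PHASE_ALLOCATION[phase].items()}

print(staff("construction", 20)["development"])  # 10.0
print(staff("transition", 20)["assessment"])     # 10.0
```

The shift is visible in the numbers: management dominates inception, architecture dominates elaboration, development dominates construction, and assessment dominates transition.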
Many tools are available to automate the software development process. Most of
the core software development tools map closely to one of the process workflows;
for example, the management workflow is supported by workflow automation and
metrics automation tools.
There are four important environment disciplines that are critical to the management
context & the success of a modern iterative development process:
1. Round-trip engineering
2. Change management (software change orders, configuration baselines, configuration control boards)
3. Infrastructure (organization policy, organization environment)
4. Stakeholder environment
Round-trip engineering is the term used to describe this key requirement for
environments that support iterative development.
As the software industry moves toward maintaining different information sets for the
engineering artifacts, more automation support is needed to ensure efficient & error-free
transfer of data from one artifact to another.
Round-trip engineering is the environment support necessary to maintain
consistency among the engineering artifacts.
Change Management
Change management must be automated & enforced to manage multiple iterations
& to enable change freedom.
Change is the fundamental primitive of iterative development.
I. Software Change Orders
The atomic unit of software work that is authorized to create, modify, or obsolesce
components within a configuration baseline is called a software change order (SCO).
The basic fields of the SCO are title, description, metrics, resolution, assessment & disposition.
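The basic SCO fields listed above can be modeled directly. This is a minimal sketch, not the schema of any real change-management tool; the field values are hypothetical:

```python
# Minimal model of a software change order (SCO) with the basic fields
# named in the text: title, description, metrics, resolution,
# assessment, and disposition.
from dataclasses import dataclass, field

@dataclass
class SoftwareChangeOrder:
    title: str
    description: str
    metrics: dict = field(default_factory=dict)  # e.g. breakage, rework hours
    resolution: str = ""            # how the change was implemented
    assessment: str = ""            # how the change was verified
    disposition: str = "proposed"   # e.g. proposed / in-progress / closed

sco = SoftwareChangeOrder(
    title="Fix timeout in release build",
    description="Type 1 defect: workaround exists, fix before beta",
    metrics={"rework_hours": 6},
)
print(sco.disposition)  # proposed
```

Capturing the metrics field per SCO is what later enables the automated quality indicators (rework and breakage trends) discussed under project control.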
Change management
II. Configuration Baseline
(Figure: release hierarchy, with external releases and internal testing releases)
Three levels of baseline releases are required for most systems:
1. Major release (N)
2. Minor release (M)
3. Interim (temporary) release (X)
A major release represents a new generation of the product or project.
A minor release represents the same basic product but with enhanced
features, performance, or quality.
Major & minor releases are intended to be external product releases
that are persistent & supported for a period of time.
An interim release corresponds to a developmental configuration
that is intended to be transient.
Once software is placed in a controlled baseline, all changes are
tracked, and a distinction is made for the cause of each change.
The change categories are:
Type 0: A critical failure (must be fixed before release)
Type 1: A bug or defect that either does not impair the usefulness of
the system or can be worked around
Type 2: A change that is an enhancement rather than a response to a defect
Type 3: A change that is necessitated by an update to the environment
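A small sketch of tracking changes against a controlled baseline by these type categories (the sample change orders are hypothetical):

```python
# Tally changes against a controlled baseline by type category:
# 0 critical failure, 1 defect with workaround, 2 enhancement,
# 3 change forced by an environment update.
from collections import Counter

def must_fix_before_release(changes):
    """Type 0 changes block the release; other types can be deferred."""
    return [c for c in changes if c["type"] == 0]

changes = [
    {"id": "SCO-1", "type": 0}, {"id": "SCO-2", "type": 1},
    {"id": "SCO-3", "type": 2}, {"id": "SCO-4", "type": 1},
]
counts = Counter(c["type"] for c in changes)
print(counts[1])                                            # 2
print([c["id"] for c in must_fix_before_release(changes)])  # ['SCO-1']
```

Distinguishing the cause of each change in this way is what lets the trends in each category be reported separately, for example defect density versus enhancement demand.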
Change Management
III. Configuration Control Board (CCB)
A CCB is a team of people that functions as the decision authority on the
content of configuration baselines.
A CCB typically includes:
1. Software managers
2. Software architecture managers
3. Software development managers
4. Software assessment managers
5. Other stakeholders who are integral to the maintenance of
the controlled software delivery system
Infrastructure
The organization infrastructure provides the organization’s capital assets
including two key artifacts - Policy & Environment
I Organization Policy:
A policy captures the standards for project software development processes.
The organization policy is usually packaged as a handbook that defines
the life cycles & the process primitives, such as:
Major milestones
Intermediate Artifacts
Engineering repositories
Metrics
Roles & Responsibilities
Infrastructure
II. Organization Environment
The environment captures an inventory of tools, which are the building
blocks from which project environments can be configured efficiently &
economically.
Stakeholder Environment
Many large-scale projects include people in external organizations
that represent other stakeholders participating in the development
process. They might include:
Procurement agency contract monitors
End-user engineering support personnel
Third-party maintenance contractors
Independent verification & validation contractors
Representatives of regulatory agencies & others