
SOFTWARE PROCESS & PROJECT MANAGEMENT

UNIT – 1 CONVENTIONAL SOFTWARE MANAGEMENT & IMPROVING SOFTWARE ECONOMICS
PART – 1 CONVENTIONAL SOFTWARE MANAGEMENT
Conventional software management practices are sound in theory, but practice is still tied
to archaic (outdated) technology and techniques.
Conventional software economics provides a benchmark of performance for
conventional software management principles.
The best thing about software is its flexibility: it can be programmed to do almost
anything.
The worst thing about software is also its flexibility: the "almost anything"
characteristic has made it difficult to plan, monitor, and control software development.
Three important analyses of the state of the software engineering industry are:
1. Software development is still highly unpredictable. Only about 10% of
software projects are delivered successfully within initial budget and schedule
estimates.
2. Management discipline is more of a discriminator in success or failure than are
technology advances.
3. The level of software scrap and rework is indicative of an immature process.
All three analyses reached the same general conclusion: the success rate for software
projects is very low. The three analyses provide a good introduction to the magnitude of the
software problem and the current norms for conventional software management performance.
THE WATERFALL MODEL
The waterfall model is a software development model used in the context of large, complex
projects, typically in the field of information technology. It is characterized by a structured,
sequential approach to project management and software development.
The waterfall model is useful in situations where the project requirements are well-defined
and the project goals are clear. It is often used for large-scale projects with long timelines,
where there is little room for error and the project stakeholders need to have a high level of
confidence in the outcome.
Features of Waterfall Model
Following are the features of the waterfall model:
1. Sequential Approach: The waterfall model involves a sequential approach to
software development, where each phase of the project is completed before moving
on to the next one.
2. Document-Driven: The waterfall model depends on documentation to ensure that
the project is well-defined and the project team is working towards a clear set of
goals.
3. Quality Control: The waterfall model places a high emphasis on quality control and
testing at each phase of the project, to ensure that the final product meets the
requirements and expectations of the stakeholders.
4. Rigorous Planning: The waterfall model involves a careful planning process, where
the project scope, timelines, and deliverables are carefully defined and monitored
throughout the project lifecycle.
Five necessary improvements for the waterfall model are:

1. Program design comes first. Insert a preliminary program design phase
between the software requirements generation phase and the analysis phase. By
this technique, the program designer assures that the software will not fail
because of storage, timing, and data flux (continuous change). As analysis
proceeds in the succeeding phase, the program designer must impose on the
analyst the storage, timing, and operational constraints in such a way that he
senses the consequences. If the total resources to be applied are insufficient or if
the embryonic(in an early stage of development) operational design is wrong, it
will be recognized at this early stage and the iteration with requirements and
preliminary design can be redone before final design, coding, and test
commences.
2. Document the design. The amount of documentation required on most software
programs is quite a lot, certainly much more than most programmers, analysts, or program
designers are willing to do if left to their own devices. Why do we need so much
documentation? (1) Each designer must communicate with interfacing designers, managers,
and possibly customers. (2) During early phases, the documentation is the design. (3) The
real monetary value of documentation is to support later modifications by a separate test
team, a separate maintenance team, and operations personnel who are not software literate.

3. Do it twice. If a computer program is being developed for the first time, arrange matters
so that the version finally delivered to the customer for operational deployment is
actually the second version insofar as critical design/operations are concerned. Note that
this is simply the entire process done in miniature, to a time scale that is relatively small
with respect to the overall effort. In the first version, the team must have a special broad
competence where they can quickly sense trouble spots in the design, model them,
model alternatives, forget the straightforward aspects of the design that aren't worth
studying at this early point, and, finally, arrive at an error-free program.

4. Plan, control, and monitor testing. Without question, the biggest user of project
resources (manpower, computer time, and/or management judgment) is the test phase.
This is the phase of greatest risk in terms of cost and schedule. It occurs at the latest
point in the schedule, when backup alternatives are least available, if at all. The previous
three recommendations were all aimed at uncovering and solving problems before
entering the test phase. However, even after doing these things, there is still a test phase
and there are still important things to be done, including: (1) employ a team of test
specialists who were not responsible for the original design; (2) employ visual
inspections to spot the obvious errors like dropped minus signs, missing factors of two,
jumps to wrong addresses (do not use the computer to detect this kind of thing, it is too
expensive); (3) test every logic path; (4) employ the final checkout on the target
computer.

5. Involve the customer. It is important to involve the customer in a formal way so that
he has committed himself at earlier points before final delivery. There are three points
following requirements definition where the insight, judgment, and commitment of the
customer can bolster the development effort. These include a "preliminary software
review" following the preliminary program design step, a sequence of "critical software
design reviews" during program design, and a "final software acceptance review".
Importance of Waterfall Model
Following are the importance of waterfall model:
1. Clarity and Simplicity: The linear form of the Waterfall Model offers a simple and
unambiguous foundation for project development.
2. Clearly Defined Phases: The Waterfall Model phases each have unique inputs and
outputs, guaranteeing a planned development with obvious checkpoints.
3. Documentation: A focus on thorough documentation helps with software
comprehension, maintenance, and future growth.
4. Stability in Requirements: Suitable for projects when the requirements are clear and
stable, reducing modifications as the project progresses.
5. Resource Optimization: It encourages effective task-focused work without
continuously changing contexts by allocating resources according to project phases.
6. Relevance for Small Projects: Economical for modest projects with simple
specifications and minimal complexity.
Phases of Waterfall Model
The Waterfall Model has six phases which are:
1. Requirements: The first phase involves gathering requirements from stakeholders
and analyzing them to understand the scope and objectives of the project.
2. Design: Once the requirements are understood, the design phase begins. This involves
creating a detailed design document that outlines the software architecture, user
interface, and system components.
3. Development: The development phase involves coding the software based on the
design specifications. This phase also includes unit testing to ensure that each
component of the software is working as expected.
4. Testing: In the testing phase, the software is tested as a whole to ensure that it meets
the requirements and is free from defects.
5. Deployment: Once the software has been tested and approved, it is deployed to the
production environment.
6. Maintenance: The final phase of the Waterfall Model is maintenance, which involves
fixing any issues that arise after the software has been deployed and ensuring that it
continues to meet the requirements over time.
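The strict phase gating described above can be sketched as a simple sequential pipeline. The phase names follow the list; `run_waterfall` and the placeholder work functions are invented for illustration, not part of any standard.

```python
# Minimal sketch of waterfall-style gating: each phase runs only after the
# previous one has completed successfully.
WATERFALL_PHASES = ["Requirements", "Design", "Development",
                    "Testing", "Deployment", "Maintenance"]

def run_waterfall(phase_work):
    """phase_work maps a phase name to a callable returning True on success.

    Phases execute strictly in order; a failed phase stops the project,
    because the model has no feedback path to an earlier phase.
    """
    completed = []
    for phase in WATERFALL_PHASES:
        if not phase_work[phase]():
            raise RuntimeError(f"{phase} failed; cannot proceed to next phase")
        completed.append(phase)
    return completed

# With every phase succeeding, the phases complete strictly in order:
done = run_waterfall({p: (lambda: True) for p in WATERFALL_PHASES})
print(done)
```

The single loop with no way back to an earlier phase mirrors the "no feedback path" property discussed in the disadvantages below.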
Advantages of Waterfall Model
The classical waterfall model is an idealistic model for software development. It is very
simple, so it can be considered the basis for other software development life cycle models.
Below are some of the major advantages of this SDLC model.
• Easy to Understand: The Classical Waterfall Model is very simple and easy to
understand.
• Individual Processing: Phases in the Classical Waterfall model are processed one at a
time.
• Properly Defined: In the classical waterfall model, each stage in the model is clearly
defined.
• Clear Milestones: The classical Waterfall model has very clear and well-understood
milestones.
• Properly Documented: Processes, actions, and results are very well documented.
• Reinforces Good Habits: The Classical Waterfall Model reinforces good habits like
define-before-design and design-before-code.
• Working: Classical Waterfall Model works well for smaller projects and projects
where requirements are well understood.
Disadvantages of Waterfall Model
The Classical Waterfall Model suffers from various shortcomings, because of which we
can’t use it in real projects; instead we use other software development lifecycle models
that are based on the classical waterfall model. Below are some major drawbacks of this model.
• No Feedback Path: In the classical waterfall model evolution of software from one
phase to another phase is like a waterfall. It assumes that no error is ever committed
by developers during any phase. Therefore, it does not incorporate any mechanism for
error correction.
• Difficult to accommodate Change Requests: This model assumes that all the
customer requirements can be completely and correctly defined at the beginning of
the project, but the customer’s requirements keep on changing with time. It is difficult
to accommodate any change requests after the requirements specification phase is
complete.
• No Overlapping of Phases: This model recommends that a new phase can start only
after the completion of the previous phase. But in real projects, this can’t be
maintained. To increase efficiency and reduce cost, phases may overlap.
• Limited Flexibility: The Waterfall Model is a rigid and linear approach to software
development, which means that it is not well-suited for projects with changing or
uncertain requirements. Once a phase has been completed, it is difficult to make
changes or go back to a previous phase.
• Limited Stakeholder Involvement: The Waterfall Model is a structured and
sequential approach, which means that stakeholders are typically involved in the early
phases of the project (requirements gathering and analysis) but may not be involved in
the later phases (implementation, testing, and deployment).
• Late Defect Detection: In the Waterfall Model, testing is typically done toward the
end of the development process. This means that defects may not be discovered until
late in the development process, which can be expensive and time-consuming to fix.
• Lengthy Development Cycle: The Waterfall Model can result in a lengthy
development cycle, as each phase must be completed before moving on to the next.
This can result in delays and increased costs if requirements change or new issues
arise.
When to Use Waterfall Model?
Here are some cases where the use of the Waterfall Model is best suited:
• Well-understood Requirements: Before beginning development, there are precise,
reliable, and thoroughly documented requirements available.
• Few Changes Expected: During development, few adjustments or expansions to
the project’s scope are anticipated.
• Small to Medium-Sized Projects: Ideal for more manageable projects with a clear
development path and little complexity.
• Predictable: Projects that are predictable, low-risk, and able to be addressed early in
the development life cycle are those that have known, controllable risks.
• Regulatory Compliance is Critical: Circumstances in which paperwork is of utmost
importance and stringent regulatory compliance is required.
• Client Prefers a Linear and Sequential Approach: This situation describes the
client’s preference for a linear and sequential approach to project development.
• Limited Resources: Projects with limited resources can benefit from a set-up
strategy, which enables targeted resource allocation.
CONVENTIONAL SOFTWARE MANAGEMENT PERFORMANCE
Conventional software management refers to traditional methods used in software
development and project management, which generally follow structured, sequential
processes. These conventional methods, like the Waterfall model, have a series of predefined
stages such as requirements gathering, design, development, testing, and deployment.
Software management performance in this context focuses on measuring how well these
stages are executed to ensure the project meets its objectives within set constraints, such as
budget, time, and scope. Here are key aspects of conventional software management
performance:
1. Predictability and Planning
• Conventional methods emphasize upfront planning, aiming for a clear, predictable
project timeline and budget. This planning helps set specific milestones for progress
tracking and allocate resources accordingly.
• Performance is measured based on how closely actual progress aligns with this initial
plan, which is typically rigid. Deviations from the plan can indicate inefficiencies or
issues in performance.
2. Quality Assurance and Testing
• Quality control is typically concentrated at the testing phase, which occurs toward the
end of the project. Conventional performance metrics here focus on defect detection
rates, code quality, and meeting design specifications.
• Since testing often happens after significant development, quality issues can be costly
to fix, and performance here hinges on how well the initial requirements were
captured and implemented.
3. Resource Allocation and Utilization
• In conventional management, resource allocation is largely fixed early in the project.
Performance is assessed based on how well team members and tools are utilized
within these parameters, aiming for high productivity without overburdening.
• Overuse of resources or unexpected costs often points to gaps in initial planning or
unforeseen project complexities.
4. Schedule Adherence
• Meeting project deadlines is a core metric in conventional software management. The
structured nature of conventional methods aims to minimize the risk of delays by
setting a strict schedule.
• Performance is measured by the team’s ability to deliver milestones on time, and
missed deadlines typically impact the entire project schedule.
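As a minimal illustration of measuring milestone delivery against plan, schedule slippage can be computed as the difference between planned and actual completion dates. The milestone names and dates below are invented sample data.

```python
from datetime import date

def schedule_slippage(planned, actual):
    """Return per-milestone slippage in days (positive means late)."""
    return {m: (actual[m] - planned[m]).days
            for m in planned if m in actual}

# Hypothetical milestone baseline vs. what actually happened:
planned = {"design complete": date(2024, 3, 1),
           "code complete": date(2024, 6, 1)}
actual = {"design complete": date(2024, 3, 8),
          "code complete": date(2024, 6, 20)}

print(schedule_slippage(planned, actual))
# {'design complete': 7, 'code complete': 19}
```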
5. Cost Management
• Budget adherence is another performance metric, as cost overruns are common
challenges. Budget control in conventional methods is based on detailed initial
estimates that factor in development, testing, and deployment costs.
• Performance issues related to costs often arise from scope changes, unexpected issues,
or underestimation of time and resources, which are all risks in sequential planning.
6. Customer Satisfaction
• Since conventional models gather requirements early and deliver at the end, customer
satisfaction depends heavily on how well initial requirements were understood and
fulfilled. Poor alignment between customer expectations and delivered software can
hurt performance evaluations.
Challenges of Conventional Software Management Performance:
• Rigidity: Limited flexibility in handling changes after project initiation can affect the
project’s success, especially in dynamic environments.
• Late Detection of Issues: Problems are often found late in the process (e.g., during
final testing), making them costly to fix.
• Limited Customer Feedback: Customer input is usually incorporated only at the
beginning and end, so mid-project changes are challenging to address, potentially
affecting performance on user satisfaction.
Conventional software management performance is thus heavily dependent on planning
accuracy, schedule adherence, resource optimization, and successful execution of sequential
tasks. However, this model often faces limitations in environments requiring adaptability,
leading many teams to adopt Agile and other iterative methods in recent years.
OVERVIEW OF PROJECT PLANNING – STEPWISE PROJECT PLANNING
A good project plan sets out the processes that everyone is expected to follow, so it avoids a
lot of headaches later. For example, if you specify that estimates are going to be worked out
by subject matter experts based on their judgement, and that’s approved, later no one can
complain that they wanted you to use a different estimating technique. They’ve known the
deal since the start.
Project plans are also really helpful for monitoring progress. You can go back to them and
check what you said you were going to do and how, comparing it to what you are actually
doing. This gives you a good reality check and enables you to change course if you need to,
bringing the project back on track.
Tools like dashboards can help you make sure that your project is proceeding according to
plan. ProjectManager has a real-time dashboard that updates automatically whenever tasks
are updated.
How to Create a Project Plan
Your project plan is essential to the success of any project. Without one, your project may be
susceptible to common project management issues such as missed deadlines, scope creep and
cost overrun. While writing a project plan is somewhat labor intensive up front, the effort will
pay dividends throughout the project life cycle.
The basic outline of any project plan can be summarized in these six steps:
1. Define your project’s stakeholders, scope, quality baseline, deliverables, milestones,
success criteria and requirements. Create a project charter, work breakdown structure
(WBS) and a statement of work (SOW).
2. Identify risks and assign deliverables to your team members, who will perform the
tasks required and monitor the risks associated with them.
3. Organize your project team (customers, stakeholders, teams, ad hoc members, and so
on), and define their roles and responsibilities.
4. List the necessary project resources, such as personnel, equipment, salaries, and
materials, then estimate their cost.
5. Develop change management procedures and forms.
6. Create a communication plan, schedule, budget and other guiding documents for the
project.
What Are the 5 Phases of the Project Life Cycle?
Any project, whether big or small, has the potential to be very complex. It’s much easier to
break down all the necessary inclusions for a project plan by viewing your project in terms of
phases. The Project Management Institute, within the Project Management Body of
Knowledge (PMBOK), has identified the following five phases of a project:
1. Initiation: The start of a project, in which goals and objectives are defined through a
business case and the practicality of the project is determined by a feasibility study.
2. Planning: During the project planning phase, the scope of the project is defined by a
work breakdown structure (WBS) and the project methodology to manage the project
is decided on. Costs, quality and resources are estimated, and a project schedule with
milestones and task dependencies is identified. The main deliverable of this phase is
your project plan.
3. Execution: The project deliverables are completed during this phase. Usually, this
phase begins with a kick-off meeting and is followed by regular team meetings
and status reports while the project is being worked on.
4. Monitoring & Controlling: This phase is performed in tandem with the project
execution phase. Progress and performance metrics are measured to keep progress on
the project aligned with the project plan.
5. Closure: The project is completed when the stakeholder receives the final deliverable.
Resources are released, contracts are signed off on and, ideally, there will be an
evaluation of the successes and failures.

PART – 2 IMPROVING SOFTWARE ECONOMICS


Five basic parameters of the software cost model are
1. Reducing the size or complexity of what needs to be developed.
2. Improving the development process.
3. Using more-skilled personnel and better teams (not necessarily the same thing).
4. Using better environments (tools to automate the process).
5. Trading off or backing off on quality thresholds.
These parameters are given in priority order for most software domains. Table 3-1
lists some of the technology developments, process improvement efforts, and
management approaches targeted at improving the economics of software
development and integration.
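The priority ordering of these parameters echoes exponential cost models such as Boehm's COCOMO, in which effort grows superlinearly with size while the other parameters act as multipliers. A minimal sketch with invented coefficients (the 2.9 constant and the 1.1 exponent are illustrative, not calibrated values):

```python
# Hypothetical COCOMO-style effort model mapping the five parameters above:
# size (KLOC) raised to a process exponent, scaled by multipliers for
# personnel, environment, and quality thresholds.
def estimate_effort(kloc, process_exponent=1.1, personnel_factor=1.0,
                    environment_factor=1.0, quality_factor=1.0):
    """Return estimated effort in person-months (illustrative coefficients)."""
    base = 2.9  # hypothetical calibration constant
    return (base * (kloc ** process_exponent)
            * personnel_factor * environment_factor * quality_factor)

# Because the exponent exceeds 1, halving size cuts effort by more than
# half, which is why size reduction tops the priority list:
print(estimate_effort(100) > 2 * estimate_effort(50))  # True
```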
REDUCING SOFTWARE PRODUCT SIZE

Reducing software product size is a strategy in software process and project management
focused on decreasing the complexity, code volume, and storage needs of a software product.
This effort aims to streamline development, improve maintainability, and optimize system
resources, which can lead to faster performance and lower operational costs. Here’s an
overview of why and how reducing product size can be beneficial:
Benefits of Reducing Software Product Size
1. Improved Performance and Efficiency
o Smaller codebases are typically easier to run and often execute faster,
especially in resource-constrained environments like embedded systems or
mobile devices.
o Reducing the software size can also reduce memory and storage requirements,
enhancing performance on systems with limited resources.
2. Lower Maintenance Costs
o A smaller codebase is easier to maintain and debug, which reduces the long-term
costs associated with fixes, updates, and enhancements.
o Simplified code often leads to fewer bugs, and smaller, more modular
components are easier to update and test.
3. Enhanced Security and Reliability
o Large, complex codebases often have more areas where vulnerabilities can
arise. By minimizing code, teams can reduce potential security flaws and
increase overall reliability.
o Less code means a smaller attack surface, making it easier to conduct
thorough security audits and testing.
4. Reduced Development Time and Costs
o Smaller products typically require less time to develop and test. By cutting out
redundant or unnecessary code, developers can speed up development cycles
and deliver more streamlined, efficient software.
o Less time spent coding and testing translates to lower development costs,
which is especially beneficial for projects with tight budgets.
5. Smoother User Experience
o Smaller, optimized applications generally lead to quicker load times, less
strain on system resources, and a smoother experience for users, especially on
devices with limited processing power or storage.
Strategies for Reducing Software Product Size
1. Code Refactoring and Optimization
o Review and refactor code regularly to eliminate redundancy, dead code, and
any functions or libraries that do not contribute meaningfully to the product’s
functionality.
o Optimizing algorithms and data structures can also help reduce the product
size and improve performance.
2. Use of Modular and Component-Based Design
o By breaking the software into smaller, reusable modules, teams can isolate
functionality, making it easier to replace or remove components without
affecting the whole product.
o Modular design encourages code reuse, which can reduce the overall size by
avoiding duplication.
3. Optimize Third-Party Libraries and Dependencies
o Third-party libraries can significantly increase software size, especially when
they include features that are not fully utilized.
o Only include essential libraries and consider lightweight alternatives that offer
the necessary functionality without excessive overhead.
4. Remove Unused Features
o Regularly conduct feature audits to identify underused or unnecessary features
that add to the software’s size. Reducing feature bloat (extra, rarely used
features) can help keep the product lightweight and focused.
o Engage with users to understand which features are essential and which can be
scaled down or removed without affecting user satisfaction.
5. Data Compression and Resource Optimization
o Compress assets such as images, audio, and video to reduce their size.
Optimizing assets can significantly reduce the overall storage requirements of
a product.
o Remove unused resources, including old configuration files, logs, or large
datasets, especially if they are not directly used by the product.
6. Efficient Use of Data Storage
o Implement efficient data storage solutions, such as binary serialization, to save
space when data storage is necessary within the application.
o Consider offloading some data storage or processing tasks to cloud services if
possible, especially for mobile or embedded devices where local storage is
limited.
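Strategies 5 and 6 can be illustrated by storing the same records three ways and comparing byte sizes. This is a sketch using only Python's standard library, with invented sample data; real savings depend on the data's shape.

```python
import gzip
import json
import struct

# The same 1,000 (int, float) records stored as plain JSON, as
# gzip-compressed JSON, and as packed binary (4-byte int + 4-byte float,
# i.e. 8 bytes per record).
records = [(i, i * 0.5) for i in range(1000)]

as_json = json.dumps(records).encode("utf-8")
as_gzip = gzip.compress(as_json)
as_binary = b"".join(struct.pack("<if", i, x) for i, x in records)

print(len(as_json), len(as_gzip), len(as_binary))
```

Here the binary form is a fixed 8,000 bytes, and both it and the compressed form are smaller than the plain JSON, which is the point of strategies 5 and 6.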
Managing Product Size Reduction in Project Management
1. Set Clear Size-Related Goals Early
o Establish size goals and constraints at the beginning of the project as part of
the requirements, so they can guide development decisions throughout the
process.
o Factor size goals into project planning, budgeting, and time estimates.
2. Continuous Monitoring of Size Metrics
o Track metrics like lines of code, binary size, and resource usage continuously
to ensure the project remains within target boundaries.
o Use automated tools to provide real-time feedback on size impact, making it
easier to manage and adjust before issues become critical.
3. Encourage a Lean Development Culture
o Encourage developers to follow best practices such as writing clean, efficient
code, and to use design patterns that promote simplicity and modularity.
o Regular code reviews focused on size and optimization can reinforce a culture
of lean software development.
4. Use Prototyping and Testing for Feedback
o Prototyping helps identify which features are essential and what can be
optimized, reduced, or removed without sacrificing user satisfaction.
o Conduct testing with a focus on performance and user experience in
environments that mimic target constraints (e.g., lower memory, limited
processing power).
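A minimal sketch of the automated size monitoring described in point 2: count non-blank source lines under a directory and check the total against an agreed budget. The file suffix and budget threshold are assumptions for illustration.

```python
import os

def count_source_lines(root, suffix=".py"):
    """Count non-blank lines in all matching source files under root."""
    total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(suffix):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8") as f:
                    total += sum(1 for line in f if line.strip())
    return total

def within_size_budget(root, budget_lines):
    """Return (ok, current_line_count) against the agreed line budget."""
    lines = count_source_lines(root)
    return lines <= budget_lines, lines
```

Wired into a build or CI step, a check like this gives the real-time feedback on size impact that the point above calls for.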
Reducing software product size ultimately contributes to a more efficient, maintainable, and
user-friendly product. It requires a proactive approach throughout the software development
lifecycle to ensure size goals are met without compromising functionality or quality.
IMPROVING SOFTWARE PROCESSES
Improving software processes in software process and project management involves refining
and optimizing the methods, practices, and workflows that teams use to design, develop, test,
and deliver software. The goal of process improvement is to make development more
efficient, reliable, and responsive to business needs, ultimately delivering high-quality
products with fewer defects, reduced costs, and shorter timelines. Here are key elements and
approaches to improving software processes:
1. Defining Process Improvement Goals
• Process improvement goals should align with the organization’s strategic objectives.
For instance, if the organization values speed to market, the goal may be to reduce
development cycles. If reliability is prioritized, the focus may be on defect reduction.
• Common improvement goals include increasing development speed, reducing defect
rates, enhancing collaboration, and improving customer satisfaction.
2. Understanding Current Process Maturity Levels
• Software process maturity frameworks, such as the Capability Maturity Model
Integration (CMMI), help assess current processes and determine areas for
improvement. These models categorize process maturity levels from initial (ad hoc) to
optimized (proactively improving).
• Assessing maturity levels gives organizations a structured understanding of their
strengths and weaknesses, identifying practices to improve efficiency, consistency,
and quality.
3. Implementing Agile Methodologies
• Agile practices like Scrum, Kanban, and Extreme Programming (XP) focus on
iterative development, adaptability, and frequent feedback. These methods improve
flexibility, allowing teams to respond quickly to changing requirements.
• Agile’s emphasis on continuous delivery, customer feedback, and short iterations can
improve overall productivity, reduce the risk of project failure, and ensure alignment
with customer needs.
4. Using DevOps for Continuous Improvement
• DevOps combines development and operations practices to streamline software
delivery and operations, automating tasks like testing, integration, deployment, and
monitoring.
• By automating repetitive processes, DevOps can improve deployment speed, reduce
errors, and ensure a continuous flow from development to production, enhancing
software quality and customer responsiveness.
5. Quality Assurance and Testing Improvements
• Enhancing quality assurance (QA) processes involves shifting testing left in the
development cycle (testing early and often) to catch defects sooner and improve code
quality.
• Automated testing, continuous integration, and test-driven development (TDD) can all
help increase testing efficiency, reduce defects, and improve the reliability of
software.
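A minimal test-driven development sketch: the tests are written first to specify the behaviour, and the function is then implemented to make them pass. The function and its discount rules are invented purely for illustration.

```python
import unittest

def defect_free_discount(defects_found):
    """Return a maintenance discount rate based on defects found in QA.

    Hypothetical rule: a 10% discount when QA finds zero defects,
    otherwise no discount. Negative counts are invalid input.
    """
    if defects_found < 0:
        raise ValueError("defect count cannot be negative")
    return 0.10 if defects_found == 0 else 0.0

class TestDefectFreeDiscount(unittest.TestCase):
    # In TDD these three tests exist (and fail) before the function body does.
    def test_zero_defects_earns_discount(self):
        self.assertEqual(defect_free_discount(0), 0.10)

    def test_any_defect_removes_discount(self):
        self.assertEqual(defect_free_discount(3), 0.0)

    def test_negative_count_rejected(self):
        with self.assertRaises(ValueError):
            defect_free_discount(-1)
```

Running `python -m unittest` against this module executes the three tests; in a continuous integration setup they would run on every commit, which is the "test early and often" shift described above.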
6. Process Standardization and Best Practices
• Establishing standardized processes across teams ensures consistent quality and
efficiency. Standard practices can include coding standards, documentation
requirements, code reviews, and testing protocols.
• Standardization minimizes errors, improves team alignment, and enhances
collaboration, especially in larger organizations with multiple teams.
7. Process Metrics and Data-Driven Decisions
• Monitoring and analyzing process metrics (such as defect density, cycle time, and
code churn) helps track performance and identify bottlenecks or inefficiencies.
• Using data-driven decisions allows teams to make informed changes, enabling them
to identify and eliminate wasteful practices and improve overall efficiency.
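Defect density, one of the metrics named above, is conventionally reported as defects per thousand lines of code (KLOC); a one-function sketch with invented sample numbers:

```python
def defect_density(defects, lines_of_code):
    """Return defects per KLOC (thousand lines of code)."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defects / (lines_of_code / 1000)

# Example: 45 defects found in a 30,000-line codebase.
print(defect_density(45, 30_000))  # 1.5 defects per KLOC
```

Tracked release over release, a falling defect density is one data-driven signal that process changes are actually improving quality.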
8. Encouraging Continuous Feedback and Retrospectives
• Frequent feedback cycles with stakeholders and retrospectives with the development
team allow for quick identification of issues and opportunities for improvement.
• Retrospectives, common in Agile practices, encourage teams to reflect on successes
and failures after each sprint or iteration, fostering a culture of continuous learning
and improvement.
9. Improving Project Management Practices
• Effective project management practices such as clear requirement gathering, scope
management, and risk mitigation improve project outcomes by reducing
misunderstandings and scope creep.
• Tools like Gantt charts, Kanban boards, and project management software (e.g., Jira,
Asana) can help teams better track progress, assign responsibilities, and manage
timelines.
10. Training and Skill Development
• Investing in training and skill development is essential for any process improvement.
As technology and best practices evolve, continuous learning helps teams stay
updated on the latest tools, techniques, and methodologies.
• Training can cover coding standards, new technologies, testing techniques, or process
improvement methods like Lean and Six Sigma.
11. Incorporating Lean Principles to Reduce Waste
• Lean principles focus on eliminating waste, improving efficiency, and delivering
value to customers. This can involve cutting down on unnecessary processes,
simplifying workflows, or automating repetitive tasks.
• Lean methodologies like Value Stream Mapping can help identify and remove
bottlenecks, streamline processes, and enhance value delivery to customers.
12. Customer and Stakeholder Involvement
• Engaging customers and stakeholders throughout the software development process
ensures that the final product meets their needs and expectations, minimizing rework
and dissatisfaction.
• Customer feedback can be integrated at regular intervals, helping shape the product to
better suit end-user requirements and improve overall customer satisfaction.
Benefits of Improving Software Processes
1. Enhanced Efficiency and Reduced Cycle Time
o Optimized processes reduce time spent on each development stage, leading to
faster delivery of software products and shorter time-to-market.
2. Higher Quality and Reliability
o Improved QA practices and continuous testing reduce defects, making the
software more reliable and robust and increasing end-user satisfaction.
3. Better Resource Utilization
o Streamlined workflows help make the best use of resources (people, time, and
technology), reducing unnecessary costs and allowing teams to focus on high-
value activities.
4. Greater Team Collaboration and Morale
o Clear processes and roles improve team communication and collaboration,
reducing friction and fostering a positive work environment.
5. Increased Customer Satisfaction
o Software that is delivered on time, with high quality, and aligned with
customer needs, contributes to greater customer satisfaction and loyalty.
6. Scalability and Adaptability
o Improved processes make it easier for organizations to scale up or adapt to
new technologies or project requirements, maintaining efficiency as they grow.
IMPROVING TEAM EFFECTIVENESS
No matter how productive a team already is, there are always ways to raise the productivity of the workplace further. Efficiency describes a level of performance in which the lowest amount of input produces the greatest amount of output. Effective teamwork delivers more than the sum of the individuals, and a team becomes vulnerable whenever it is out of balance. Some widely accepted observations about team management are given below:
• A well-managed project can succeed with a nominal (non-expert) engineering team.
• A mismanaged project will almost never succeed, even with an expert team of engineers.
• A well-architected system can be built by a nominal team of software builders.
• A poorly architected system will experience many difficulties, even with an expert team of builders.
Boehm (1981) suggested five staffing principles for examining how to staff a software project. These principles are given below:
1. The Principle of Top Talent –
Use better and fewer people. The quality of people is essential, so it is better to staff a project with a smaller number of more skilled and talented people.
2. The Principle of Job Matching –
Fit the tasks to the skills and motivation of the people available. Skill sets vary from person to person: the best programmer may not be suitable as an architect or a manager, and vice versa. Assign people jobs that match their abilities and keep them motivated.
3. The Principle of Career Progression –
An organization does best in the long run by helping its people to self-actualize. Training programs with educational value and project-based training help individuals develop their careers.
4. The Principle of Team Balance –
Select people who will complement and harmonize with one another. The psychology of the members contributes to team balance: they should be congenial and able to work closely with their teammates.
5. The Principle of Phaseout –
Keeping a misfit on the team benefits no one. A misfit is a reason either to find a better person or to live with fewer people: a misfit demotivates the other team members, does not self-actualize, and destroys team balance.
IMPROVING AUTOMATION THROUGH SOFTWARE ENVIRONMENTS
Many software tools available today help automate the development and maintenance of project artifacts, including planning tools, requirements management tools, visual modeling tools, and quality assurance tools. The development environment should also support requirements management, document automation, host/target programming tools, testing, and feature/defect tracking. All of these tools should let the developer traverse easily between the various artifacts. Because tools and environments are the primary delivery vehicles for process automation and improvement, their effect and impact are correspondingly high. Some important concepts include:
• Forward Engineering –
The traditional process of moving from an abstract design to a concrete implementation: the automation of one artifact from another, more abstract representation. Examples include compilers and linkers.
• Reverse Engineering –
The process of recovering the design, requirement specifications, and functions of a product by analyzing its code: the generation of a more abstract representation from an existing artifact. An example is creating a visual model from source code.
• Round-trip Engineering –
A capability of software development tools that keeps two or more related artifacts (such as source code and models) synchronized. The term describes the key capability of environments that support iterative development.
ACHIEVING REQUIRED QUALITY
Many software best practices are derived from the development process and technologies. Beyond improving cost efficiency, several of these practices also permit improved quality for the same cost. Some quality improvements enabled by a modern process are given below:
Quality driver – conventional process vs. modern iterative process:
• Requirements misunderstandings – Conventional: discovered late. Modern iterative: resolved early.
• Development risk – Conventional: unknown until late. Modern iterative: understood and resolved early.
• Commercial components – Conventional: mostly not available. Modern iterative: still a quality driver, but trade-offs must be resolved early in the life cycle.
• Change management – Conventional: discovered late in the life cycle. Modern iterative: resolved early in the life cycle.
• Software process rigor – Conventional: document-based. Modern iterative: managed, measured, and tool-supported.
Key elements that improve the quality of software include focusing on strong requirements and use cases as early as possible in the life cycle, driving the completeness and traceability of the requirements as late as possible in the life cycle, and using metrics and indicators to measure the progress and quality of the architecture.
PEER INSPECTION

Peer inspection, also known as peer review, is a collaborative quality assurance activity in
software process and project management where team members review each other's work to
identify defects, improve code quality, and ensure alignment with standards and
requirements. This practice involves peers reviewing code, designs, documentation, or other
deliverables, typically before they move to the next stage in development. Peer inspections
help catch issues early in the development process, which can significantly reduce the cost
and effort associated with fixing bugs later in the project lifecycle.
Key Objectives of Peer Inspection
1. Defect Detection: Identify errors, inconsistencies, and potential bugs early on,
reducing the likelihood of defects being found in later stages where they are more
costly to fix.
2. Code Quality Improvement: Promote best practices, coding standards, and
consistent design, leading to cleaner, more maintainable code.
3. Knowledge Sharing: Facilitate knowledge transfer among team members, enhancing
collective skills, understanding of the codebase, and familiarity with different parts of
the project.
4. Process Improvement: Provide feedback on the development process, highlighting
areas where methodologies or practices could be improved.
5. Alignment and Consistency: Ensure all deliverables meet project requirements and
standards, which is particularly useful when multiple developers are working on the
same codebase.
Types of Peer Inspection
1. Code Review: A systematic examination of source code by team members. This is the
most common type of peer inspection, focusing on code logic, adherence to coding
standards, and potential errors.
2. Design Review: A review of architectural or design documents to ensure they meet
requirements, follow design principles, and are feasible for implementation.
3. Documentation Review: Checking project documentation, including requirements
and test plans, for accuracy, completeness, and clarity.
4. Testing and QA Review: Reviewing test cases, test scripts, and QA processes to
ensure they adequately cover requirements and are efficient.
The Peer Inspection Process
1. Preparation: The author of the deliverable (code, document, or design) notifies the
team about the inspection and provides materials for review. Reviewers may prepare
by understanding the objectives and gathering necessary information.
2. Review Session: The review team (usually 2-5 peers) examines the deliverable and
discusses identified issues, improvements, and suggestions. This session can take
place in a meeting or via a collaborative online tool.
3. Issue Identification and Documentation: Reviewers document any defects,
questions, or improvement suggestions, often categorizing them by severity or impact.
Common tools used for documentation are issue tracking systems (like Jira) or code
review platforms (like GitHub, GitLab, or Bitbucket).
4. Feedback and Action: The author addresses the feedback by making necessary
changes. In some cases, a follow-up review session may occur to verify that feedback
has been implemented.
5. Evaluation and Follow-Up: The team may review the peer inspection process itself
to identify any areas for improvement in future reviews, encouraging continuous
improvement in both the process and team collaboration.
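The issue-identification and follow-up steps above can be sketched as a small data structure. The field names, severity levels, and gating rule here are assumptions for illustration, not a standard:

```python
# Hypothetical sketch of tracking peer-review findings. The gate below
# mirrors the "Feedback and Action" step: a deliverable moves to the
# next stage only once all major issues are resolved.
from dataclasses import dataclass, field


@dataclass
class ReviewIssue:
    description: str
    severity: str          # assumed levels: "major" or "minor"
    resolved: bool = False


@dataclass
class Inspection:
    deliverable: str
    issues: list = field(default_factory=list)

    def log(self, description, severity):
        self.issues.append(ReviewIssue(description, severity))

    def ready_to_proceed(self):
        # Gate: every major issue must be resolved; minor issues may be
        # deferred to a later iteration.
        return all(i.resolved for i in self.issues if i.severity == "major")


review = Inspection("payment-module v0.3")
review.log("Missing null check on currency field", "major")
review.log("Rename variable 'tmp' for clarity", "minor")
print(review.ready_to_proceed())   # False: a major issue is still open
review.issues[0].resolved = True
print(review.ready_to_proceed())   # True: only a minor issue remains
```

In practice this bookkeeping lives in an issue tracker or code review platform, but the gating logic is the same idea.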
Benefits of Peer Inspection
1. Early Defect Detection: By catching defects early, teams reduce the need for rework
and prevent issues from reaching later stages, where they are more time-consuming
and costly to fix.
2. Enhanced Code Quality: Reviews improve code readability, maintainability, and
performance, contributing to a more robust codebase.
3. Increased Knowledge Sharing: Peer reviews encourage collaboration, helping team
members learn from each other’s expertise and become familiar with different parts of
the codebase.
4. Higher Project Efficiency: Peer inspections reduce the load on testing and quality
assurance by catching defects early, leading to faster development cycles and fewer
delays.
5. Improved Team Cohesion and Standards: With regular feedback, teams naturally
align on coding standards, documentation practices, and best practices, building a
cohesive work culture.
Best Practices for Effective Peer Inspection
1. Set Clear Goals and Guidelines: Define the purpose, scope, and standards for the
review. Ensure reviewers know what to focus on, such as functionality, performance,
or security.
2. Keep Review Groups Small: Limit the number of reviewers to ensure productive
discussions without overwhelming feedback, typically involving two to five people.
3. Use Review Tools: Utilize version control systems, code review platforms, or issue
trackers to facilitate discussions, provide structured feedback, and document issues.
4. Focus on Objective Feedback: Encourage a collaborative and constructive
atmosphere to ensure feedback is given objectively, focusing on the deliverable, not
the author.
5. Limit Review Sessions to Manageable Time Blocks: Research shows that review
quality declines if sessions are too long. Short, focused reviews (usually 60-90
minutes) are more effective.
6. Follow Up on Action Items: Ensure that issues identified in the review are tracked
and addressed before the deliverable moves to the next stage. Re-inspect critical
issues if necessary.
Challenges in Peer Inspection
1. Time Constraints: Reviews can be time-consuming, especially if team members
have tight deadlines. To manage this, teams should balance review rigor with project
timelines.
2. Feedback Sensitivity: Not all feedback is easy to receive, and reviews may
inadvertently cause tension. To mitigate this, teams should cultivate a culture of
constructive feedback and learning.
3. Skill Disparities: If reviewers have varied skill levels, the review process might be
less effective. Training and mentoring can help bridge knowledge gaps.
4. Review Scope Creep: Overly detailed reviews can become unproductive. Teams
should ensure reviews stay focused on high-impact areas and avoid getting bogged
down in minor details.
Tools for Peer Inspection
1. Code Review Tools: GitHub, GitLab, Bitbucket, and Crucible provide platforms for
collaborative code reviews and feedback.
2. Documentation Review Tools: Tools like Confluence or Google Docs enable
documentation reviews and real-time feedback.
3. Issue Tracking Systems: Jira, Trello, and Azure DevOps help track review issues and
action items for better follow-up and accountability.
UNIT – 2 THE OLD WAY AND THE NEW WAY, LIFE CYCLE
PHASES & ARTIFACTS OF THE PROCESS
PART – 1 THE OLD WAY & THE NEW WAY
THE PRINCIPLES OF CONVENTIONAL SOFTWARE ENGINEERING
There are many descriptions of engineering software "the old way." From many years of software development experience, the industry has learned many lessons and formulated many principles. Some of these principles are given below:
1. Make Quality a Priority –
Software quality must be quantified, and mechanisms must be put in place to motivate teams toward achieving it.
2. High-Quality Software Is Possible –
Several techniques have been shown in practice to increase quality, including involving the customer, prototyping, simplifying the design, conducting inspections, and hiring the best people.
3. Give End-Products to Customers Early –
No matter how hard we try to learn users' needs during the requirements phase, the most effective way to determine their real needs is to give the product to users and let them work with it.
4. Determine the Problem Before Writing the Requirements –
When engineers face what they believe is a problem, they rush to offer a solution. Before trying to solve any problem, be sure to explore all the alternatives, and do not be blinded by the obvious solution.
5. Evaluate All Design Alternatives –
Once the requirements are agreed upon, we must examine a variety of architectures and algorithms. We should not use an architecture simply because it appeared in the requirements specification.
6. Use an Appropriate Process Model –
Every project must select a process that makes the most sense for it, based on corporate culture, willingness to take risks, application area, volatility of requirements, and the extent to which the requirements are well understood.
7. Use Different Languages for Different Phases –
Our industry's desire for simple solutions to complex problems induces many to declare that the best development method is one that uses the same notation throughout the life cycle. In reality, no single notation is optimal for all phases.
8. Minimize Intellectual Distance –
The structure of the software should be as close as possible to the structure of the real-world problem, to minimize the intellectual distance between the two.
9. Put Techniques Before Tools –
An undisciplined software engineer with a tool becomes a dangerous, undisciplined software engineer.
10. Get It Right Before You Make It Faster –
It is far easier to make a working program run fast than it is to make a fast program work.
11. Inspect Code –
Inspecting the detailed design and code is a much better way of finding errors than testing.
12. Good Management Is More Important Than Good Technology –
Good management motivates people to do their best work, but there are no universal "right" styles of management.
13. People Are the Key to Success –
People with skill, experience, talent, and training are the key to success.
14. Follow with Care –
Just because everybody is doing something does not make it right for us. It may or may not be right; we must carefully assess its applicability to our environment.
15. Take Responsibility –
When a bridge collapses, we ask, "What did the engineers do wrong?" When software fails, we rarely ask this question. In any engineering discipline, the best methods can be used to produce an awful design, and the most antiquated methods can be used to produce an elegant design.
16. Understand the Customer's Priorities –
It is possible the customer would tolerate 90 percent of the functionality delivered late if they could have 10 percent of it on time.
17. The More They See, the More They Need –
The more functionality or performance we provide to users, the more functionality or performance they will want; expectations grow over time.
18. Plan to Throw One Away –
One of the most important critical success factors is whether a product is entirely new. Brand-new applications, architectures, interfaces, or algorithms rarely work the first time.
19. Design for Change –
The architectures, components, and specification techniques we use must accommodate change.
20. Design Without Documentation Is Not Design –
Engineers often say, "I have finished the design; all that is left is the documentation." A design that is not written down is not a design.
21. Use Tools, but Be Realistic –
Software tools make their users more efficient, but they do not replace sound engineering judgment.
22. Encapsulate –
Information hiding is a simple, proven concept that results in software that is easier to test and much easier to maintain.
23. Avoid Tricks –
Some programmers love to create programs with tricky constructs that perform a function correctly but in an obscure way. Prove to the world how smart you are by avoiding tricky code.
24. Don't Test Your Own Software –
Software developers should not be the primary testers of their own software.
25. Coupling and Cohesion –
Coupling and cohesion are among the best ways to measure a software design's inherent maintainability and adaptability.
26. Expect Excellence –
Employees will perform far better if we hold high expectations for them.
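The coupling and cohesion principle above can be illustrated with a small sketch; the module and function names below are invented for the example:

```python
# Illustrative contrast between a tightly coupled design and a loosely
# coupled, cohesive one. All names here are hypothetical.

class UsTax:
    """A specific tax policy."""
    RATE = 0.07

    @staticmethod
    def apply(amount):
        return amount * (1 + UsTax.RATE)


# Tightly coupled: the invoicing code reaches directly into one specific
# tax module, so it cannot be reused or tested with different tax rules.
def invoice_total_coupled(amount):
    return UsTax.apply(amount)          # hard-wired dependency


# Loosely coupled and cohesive: the tax policy is passed in, so the
# invoicing function depends only on a narrow callable interface.
def invoice_total(amount, apply_tax):
    return apply_tax(amount)


print(round(invoice_total(100.0, UsTax.apply), 2))        # 107.0
print(round(invoice_total(100.0, lambda a: a * 1.2), 2))  # 120.0
```

The second version is more maintainable precisely because the dependency is explicit and replaceable, which is what low coupling buys.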
PRINCIPLES OF MODERN SOFTWARE MANAGEMENT
There are ten modern principles for the development of software. By following these principles, we can develop effective software that meets all the needs of the customer:
1. Architecture-first approach:
The main aim here is to build a strong architecture for the software. Ambiguities and flaws are identified during this very early phase, and the design decisions made here enhance the productivity of the rest of the project.
2. Iterative life-cycle process:
In an iterative life cycle, the process is repeated to drive out risk. Each iteration typically comprises four steps: requirements gathering, design, implementation, and testing. These steps are repeated until the risks are mitigated, which allows risk to be alleviated at an early stage.
3. Component-based approach:
A widely used and successful approach in which previously developed functions are reused as components. Component-based development optimizes the requirements and design process and is one of the important modern software principles.
4. Change management system:
Change management is the process responsible for managing all changes. Its main aim is to improve the quality of the software through controlled change: every implemented change is tested and certified.
5. Round-trip engineering:
Code generation and reverse engineering take place together in a dynamic environment, and the two are integrated so that developers can easily work on both. The key characteristic of round-trip engineering is the automatic update of artifacts.
6. Model-based evolution:
A model-based approach supports the evolution of both graphical and textual notations, an important principle of modern software development.
7. Objective quality control:
The objective of quality control is to improve the quality of the software. It involves a quality management plan, quality metrics, quality checklists, a quality baseline, and quality improvement measures.
8. Evolving levels of detail:
Plan intermediate releases in groups of usage scenarios with evolving levels of detail. Incremental releases should carry an evolving level of use cases, architecture, and detail.
9. Establish a configurable process:
No single process suits every development effort, so establish a configurable process that is economically scalable and can deal with various applications.
10. Demonstration-based approach:
Focus on demonstration. Demonstrations increase the productivity and quality of the software by presenting a clear description of the problem domain, the approaches used, and the solution.
TRANSITIONING TO AN ITERATIVE PROCESS
Nowadays, Modern Software Development has moved far away from Conventional waterfall
model, in which each and every stage or phase of the development process independent of
completion of previous stage. All benefits of economic as basic part of transitioning from
conventional waterfall model to an iterative development process are very significant but are
also very difficult and hard to quantify. As benchmark of economic impact of improvement of
process being expected, consider exponent parameters of process of COCOMO II model.
These exponent parameters generally range from 1.01 (no dis-economy of scale) to 1.26 (dis-
economy of scale). All parameters that simply control and govern value of exponent of
process are application precedentedness, flexibility of process, architecture risk resolution,
team cohesion, and also maturity of software process.
The following are in list that maps exponent parameters of process of COCOMO II :
• Application Precedentedness –
Domain experience is critical in understanding how to plan and execute a software development project. The key goal for unprecedented systems is to confront the risks and establish precedents early, even if they are incomplete or experimental. This is an essential reason the software industry has moved to an iterative life-cycle process: early iterations establish precedents from which the product, the process, and the plans can be elaborated in evolving levels of detail.
• Process Flexibility –
Modern software development is characterized by such a large solution space and so many interrelated concerns that there is a paramount need for continuous incorporation of changes. These changes may be inherent in the problem understanding, the solution space, or the plans. Project artifacts must be supported by efficient change management commensurate with project needs. A configurable process that allows a common framework to be adopted across a range of projects is necessary to achieve a software return on investment.
• Architecture Risk Resolution –
Architecture-first development is a crucial theme underlying a successful iterative development process. The project team develops and stabilizes an architecture before developing all the components that make up the entire suite of applications. An architecture-first, component-based development approach forces the infrastructure, common mechanisms, and control mechanisms to be elaborated early in the life cycle and drives all component make/buy decisions into the architecture process.
• Team Cohesion –
Successful teams are cohesive, and cohesive teams are successful. Cohesive teams share common objectives and priorities. Advances in technology such as programming languages and the Unified Modeling Language (UML) have enabled more rigorous and understandable notations for communicating software engineering information, particularly in the requirements and design artifacts that previously depended on ad hoc paper exchange. These model-based formats have also enabled the round-trip engineering needed to establish the change freedom necessary for evolving design representations.
• Software Process Maturity –
The Software Engineering Institute's Capability Maturity Model (CMM) is a well-accepted benchmark for software process assessment. A mature process is enabled through an integrated environment that provides the appropriate level of automation to instrument the process for objective quality control.
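The effort model referred to above can be sketched as follows. The coefficient A = 2.94 comes from published COCOMO II calibrations, and the 1.01–1.26 exponent range is the one quoted in this section; any real estimate must be calibrated locally:

```python
# Hedged sketch of the COCOMO II effort formula:
#   Effort (person-months) = A * Size^E
# where Size is in thousands of source lines (KSLOC) and the exponent E
# grows with the five process scale factors discussed above.

def cocomo_effort(size_ksloc: float, exponent: float, a: float = 2.94) -> float:
    """Estimated effort in person-months for a project of the given size."""
    return a * size_ksloc ** exponent


size = 100.0  # an illustrative 100-KSLOC project
best = cocomo_effort(size, 1.01)   # mature, precedented, cohesive project
worst = cocomo_effort(size, 1.26)  # immature process, unprecedented system
print(f"E=1.01: {best:.0f} person-months")
print(f"E=1.26: {worst:.0f} person-months")
```

Because the exponent sits on the size term, the diseconomy of scale it models grows with project size, which is why process improvement pays off most on large projects.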
PART – 2 LIFE CYCLE PHASES


ENGINEERING AND PRODUCTION STAGES

The project lifecycle in software development consists of organized, sequential phases that
guide a project from initial planning through final deployment and support. These phases,
designed to manage complexity and enhance productivity, are categorized into two main
segments: Engineering Phase and Production Phase. Each of these broad categories
includes specific stages that structure project progress, assess feasibility, manage risk, and
ensure the efficient, timely delivery of high-quality software.
1. Engineering Phase
The Engineering Phase focuses on laying the foundational structure for the software project.
It defines the scope, goals, and preliminary architecture and ensures that all initial
considerations are made to inform the rest of the project. This phase generally involves a
smaller team and faces higher uncertainty, as it lays the groundwork for subsequent stages. It
is divided into two sub-phases: the Inception Phase and the Elaboration Phase.
a. Inception Phase
The Inception Phase is the initial stage where project goals are established, requirements are
gathered, and preliminary analyses are performed. Key activities in this phase include:
• Goal Setting: Defining the purpose of the project, desired outcomes, and high-level
objectives.
• Requirements Gathering: Collecting functional and non-functional requirements to
understand the features and capabilities the software must deliver.
• Cost Estimation: Creating an initial budget to understand the financial requirements
and constraints of the project.
• Risk Identification: Identifying possible risks that could impact the project, such as
technical limitations, time constraints, or resource availability.
• Scope Definition: Clearly outlining what will be included in the project and what will
not, to maintain focus and avoid scope creep.
• Architecture and Feasibility Analysis: Conducting a high-level architectural
assessment and feasibility study to determine whether the project can realistically be
executed with the available resources and technology.
This phase establishes a blueprint for the project by analyzing feasibility and requirements,
providing a structured start to the development process.
b. Elaboration Phase
The Elaboration Phase expands on the Inception Phase by diving deeper into technical
aspects, refining requirements, and establishing a solid architectural foundation for the
software. Activities in this phase include:
• Architecture Evaluation: A more thorough examination of the initial architecture,
with an emphasis on making it efficient, scalable, and robust.
• Use Case Analysis: Defining specific use cases to identify how the software will be
used, which helps clarify functionality and user requirements.
• Software Diagrams and Models: Creating detailed diagrams (like UML diagrams)
that map out the software structure, component interactions, and other essential
details.
• Risk Reduction: Taking steps to mitigate the highest priority risks identified in the
Inception Phase, often by addressing potential technical or resource challenges early
on.
• Preliminary Module Creation: Developing initial versions or prototypes of core
modules to validate the architecture and establish a foundation for later development.
The Elaboration Phase solidifies the project structure, architecture, and requirements, giving
teams a reliable roadmap for the subsequent production activities.
2. Production Phase
The Production Phase encompasses the implementation, optimization, testing, and
deployment stages. During this phase, the project involves a larger team and operates with
more predictability, as many of the technical unknowns are resolved. The Production Phase is
divided into two sub-phases: the Construction Phase and the Transition Phase.
a. Construction Phase
The Construction Phase is the main development stage, where coding and testing are the
primary focus. This phase involves integrating all features and components into a working
application and refining them to meet functional and performance requirements. Key
activities in this phase include:
• Implementation: Writing code based on the designs and requirements finalized in the
Engineering Phase. This is where the actual software product is built.
• Risk Minimization: Addressing and eliminating risks as they arise during
implementation to ensure a smoother development process.
• Component Integration: Combining individual features, components, or modules
into a cohesive application.
• Testing: Performing rigorous tests, including unit testing, integration testing, and
performance testing, to verify that each component works correctly and meets quality
standards.
• Process Optimization: Identifying and implementing improvements to streamline
workflows, reduce development costs, and improve project efficiency.
The Construction Phase emphasizes the development and integration of the application,
striving to minimize cost while maximizing quality through testing and optimization.
b. Transition Phase
The Transition Phase is the final stage of the Production Phase and includes final testing,
deployment, and post-release modifications. This phase ensures that the software is user-
ready and meets all requirements. Activities include:
• Beta Testing: Conducting beta tests with a small group of end-users to validate the
software’s functionality in real-world settings and gather feedback on usability and
performance.
• Deployment: Launching the software in a live production environment, making it
accessible to all intended users.
• User Feedback Collection: Gathering user feedback post-deployment to identify any
additional requirements or usability issues.
• Post-Release Adjustments: Implementing minor fixes or updates based on user
feedback to enhance the software’s efficacy and ensure a high level of user
satisfaction.
• User Training and Support: Providing training sessions, documentation, and
ongoing support to help users adopt the software effectively and address any initial
issues they encounter.
During the Transition Phase, developers work with a “user perspective” to fine-tune the
software, ensuring it is user-friendly, stable, and ready for sustained use in a live
environment.
PART – 3 ARTIFACTS OF THE PROCESS
THE ARTIFACT SETS
An artifact is closely associated with a specific method or process of development, such as a project plan, business case, or risk assessment. Related collections of detailed information are organized into artifact sets; a set generally represents a complete aspect of the system. This partitioning keeps the development of a complete software system manageable.
The artifacts of the software life cycle are organized into two sets: the management set and the engineering set. These sets are further partitioned by the underlying language of the set.
1. Engineering Sets:
The primary mechanism for assessing the evolving quality of these artifact sets is the transition of information from one set to another. The engineering set is further divided into four distinct sets: the requirements set, the design set, the implementation set, and the deployment set.
1. Requirements Set –
This set is the primary engineering context used for evaluating the other three artifact sets of the engineering set, and it is the basis of the test cases. Artifacts of this set are evaluated, checked, and measured through a combination of the following:
• Analysis of consistency between the current vision and the requirements models.
• Analysis of consistency with the supplementary specifications of the management set.
• Analysis of consistency among the requirements models.
2. Design Set –
The tools used are visual modeling tools. UML (Unified Modeling Language) notations are used to engineer the design model. This set contains many different levels of abstraction. The design model generally includes all the structural and behavioral information needed to ascertain a bill of materials. The artifacts of this set mostly include test models, design models, and software architecture descriptions.
3. Implementation Set –
The tools used are debuggers, compilers, code analyzers, and test management tools. This set generally contains the source code that implements the components, their form, their interfaces, and the executables necessary for stand-alone testing of components.
4. Deployment Set –
The tools used are network management tools, test coverage and test automation tools, etc. This set generally contains the executable software, build scripts, and installation scripts needed to use the end product in the environment where it is supposed to run.
2. Management Set:
This set captures the artifacts associated with planning and executing the process. These artifacts generally use ad hoc notations, including text, graphics, or whatever representation is needed to capture the "contracts" among project personnel (such as project developers and project management), among stakeholders (such as the user and the project manager), and between stakeholders and project personnel.
This set includes artifacts such as the work breakdown structure, business case, software development plan, deployment documents, and environment. Artifacts of this set are evaluated, checked, and measured through a combination of the following:
• Review by the relevant stakeholders.
• Analysis of changes between the current version of an artifact and previous versions.
• Major-milestone demonstrations of the balance among all artifacts and, in particular, the accuracy of the business case and vision artifacts.
MANAGEMENT ARTIFACTS
Management artifacts capture the information needed to oversee the whole project and ensure it gets done. They include intermediate results and supporting information and data essential to document the product/process legacy, maintain the product, improve product quality, and improve the performance of the process.
Some types of Management Artifacts :
1. Business Case –
The business case provides the justification for initiating a project, task, program, or portfolio. This justification is based on the estimated cost of development and implementation, weighed against the risks and issues and the expected business benefits and savings.
It is created during the early stages of the project and explains the why, what, how, and who needed to decide whether it is worthwhile to initiate or continue the project. A good business case describes the problems and issues, identifies all the possible options for addressing them, and allows decision-makers to judge which course of action is best for the organization. The main goal of the business case is to translate the vision into economic terms so that the organization can develop an accurate ROI (return on investment) assessment.
2. Software Development Plan (SDP) –
The software development plan lays out the whole plan necessary to develop, modify, and upgrade the software system. It is a ready-made solution for managers of software development, giving the acquirer insight into, and a tool for monitoring, the processes to be followed during development.
Two indicators of a useful SDP are periodic updating, and understanding and approval by managers and practitioners alike.
3. Work Breakdown Structure (WBS) –
A work breakdown structure is a deliverable-oriented decomposition of the project into small components. The WBS is created to establish a shared understanding of the project scope.
It is a hierarchical tree structure that lays out the project and breaks it down into smaller, manageable portions or components. It is the vehicle for budgeting and for collecting costs.
4. Software Change Order Database –
In an iterative development process, the primary task is managing change. A project can iterate (perform repeatedly) more productively with greater freedom of change, and this freedom is gained largely through automation.
5. Release Specification –
Release specifications define the tests and limits against which raw materials, intermediates, and the end product are measured just before use or release.
Two important forms of requirements in release specifications are the vision statement (which captures the contract between the development group and the buyer) and the evaluation criteria (management-oriented requirements that can be represented using use cases, use case realizations, etc.).
6. Deployment –
The deployment artifact includes several subsets of documents for transitioning the product into operational status. It covers the application code as it runs in production: built, bundled, compiled, etc. Deployment is the process of putting an artifact where it is needed and performing whatever tasks are required for it to achieve its purpose. The artifact can also include computer system operations manuals, software installation manuals, and plans and procedures for cutover.
7. Environment –
Automating the development process requires support from a robust development environment, which must include the following:
• Requirements management.
• Visual modeling.
• Document automation.
• Automated regression testing.
• Host and target programming tools.
• Feature and defect tracking.
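To make the budgeting role of the work breakdown structure (item 3 above) concrete, here is a minimal, hypothetical sketch in Python: a WBS modeled as a tree whose leaf elements carry cost estimates that roll up to their parents. The element names and figures are illustrative, not part of any standard.

```python
from dataclasses import dataclass, field

@dataclass
class WBSElement:
    """One node in a work breakdown structure tree."""
    name: str
    budget: float = 0.0            # cost estimate, meaningful for leaf elements
    children: list = field(default_factory=list)

    def rollup(self) -> float:
        """Total cost: a leaf's own budget, or the sum of its children."""
        if not self.children:
            return self.budget
        return sum(child.rollup() for child in self.children)

# Hypothetical two-level WBS for a small project (figures are made up)
project = WBSElement("Project", children=[
    WBSElement("Management", children=[
        WBSElement("Planning", budget=20.0),
        WBSElement("Monitoring", budget=15.0),
    ]),
    WBSElement("Engineering", children=[
        WBSElement("Design", budget=40.0),
        WBSElement("Implementation", budget=80.0),
        WBSElement("Testing", budget=45.0),
    ]),
])

print(project.rollup())  # 200.0
```

Because costs are collected at the leaves and rolled up, the same tree serves both for budgeting (top-down allocation) and for gathering actuals (bottom-up collection), as the text describes.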
ENGINEERING ARTIFACTS
Engineering artifacts are key elements in the engineering process, captured in rigorous engineering notations such as UML (Unified Modeling Language), programming languages, or executable machine code. They come in forms such as vision documents, architecture descriptions, and software user manuals, which support the development and functionality of software systems. Artifacts also span other fields, including mechanical, electrical, chemical, biomedical, and geotechnical engineering, each contributing to its respective discipline.
Types of Engineering Artifacts
There are generally three types of engineering artifacts.
• Vision Document
• Architecture Description
• Software User Manual
1. Vision Document
A vision document provides a complete vision for the software system under development. It describes a compelling idea, project, or future state for a specific organization, product, or service, and it supports the contract between the funding authority and the development organization. A vision document is written from the user's perspective, focusing on the essential features of the system. A good vision document should include two appendixes: the first should describe the concept of operation using use cases, and the second should describe the change risks inherent in the vision statement.
2. Architecture Description
An architecture description is a collection of artifacts documenting an architecture, including an organized view of the software architecture under development. It is generally extracted from the design model and also contains views of the design, implementation, and deployment sets. In an architecture description, the architecture views are the key artifacts.
3. Software User Manual
The software user manual provides the documentation needed to support the delivered software. It is generally provided to the user and should include installation procedures, usage procedures, guidance, operational constraints, and a description of the user interface. The user manual should be written by test team members and developed early in the life cycle, because it is an essential mechanism for communicating and stabilizing an important subset of the requirements.
More common types of Engineering Artifacts are
Mechanical Artifacts
• Machines: Conveyor systems, pumps, turbines, and engines.
• Equipment & Tools: From small hand tools to large machines.
• Structures: Bridges, dams, and other civil engineering constructions.
• Mechanical Components: Gears, bearings, pistons, and other mechanical parts.
Electrical or Electronic Artifacts
• Circuits and circuit boards: Used in electronic systems and devices.
• Actuators and sensors: Used to monitor and respond to physical phenomena.
• Microchips and printed circuit boards (PCBs).
Chemical and Material Artifacts
• Chemical Compounds: Formulations and materials for various applications.
• Polymers, composites, and alloys.
• Pharmaceuticals and drugs.
• Chemical reactors and processing equipment.
Biomedical Artifacts
• Medical Devices: Such as pacemakers, prosthetics, and diagnostic equipment.
• Biotechnology products: Genetically engineered organisms, vaccines, and
biopharmaceuticals.
Geotechnical Artifacts
• Soil and rock mechanics testing equipment.
• Geosynthetic materials and geotechnical structures.
• Retaining walls, foundations, and tunnels.
PROGRAMMATIC ARTIFACTS
In Software Process and Project Management (SPPM), programmatic artifacts refer to the
tangible outputs, documents, and products created throughout the software development
lifecycle. These artifacts represent the various stages of planning, designing, coding, testing,
and deploying software, serving as essential records for tracking progress, communicating
requirements, and ensuring that the project aligns with its goals. They also help teams verify
that each phase meets its intended objectives and provide a reference for maintaining,
enhancing, or scaling the software in the future.
Types of Programmatic Artifacts
Programmatic artifacts can be broadly classified based on their roles in different stages of the
software lifecycle:
1. Requirements Artifacts
o Requirement Specifications: Define the functional and non-functional
requirements of the software, often in the form of a Software Requirements
Specification (SRS) document.
o User Stories and Use Cases: Describe specific scenarios and tasks users need
to perform, capturing what the software should achieve from the user's
perspective.
o Requirements Traceability Matrix (RTM): Maps requirements to their
corresponding design or test cases, ensuring that all requirements are covered
and can be traced back during testing.
2. Design Artifacts
o Architecture Diagrams: Illustrate the high-level structure of the software,
including its components, modules, and their interactions. Common formats
include UML diagrams.
o Detailed Design Documents: Outline the specific design of each component,
module, or subsystem, covering aspects like data structures, algorithms, and
how they will work together.
o Interface Specifications: Define how different components of the software
interact, specifying APIs, data formats, protocols, and external system
integrations.
3. Development Artifacts
o Source Code: The actual codebase for the software application. It represents
the implementation of the designs and is often organized in version-controlled
repositories for collaboration.
o Configuration Files: Define application settings and environment
configurations that may vary between development, testing, and production
environments.
o Code Documentation: Includes comments within the code and additional
documentation for describing functions, classes, modules, and logic, making it
easier for future developers to understand and maintain the software.
4. Testing Artifacts
o Test Plan: A comprehensive document that describes the testing approach,
scope, objectives, resources, and schedule for testing activities.
o Test Cases and Test Scripts: Individual scenarios or scripts used to verify that
the software behaves as expected under various conditions, covering both
functional and non-functional requirements.
o Defect Logs and Test Reports: Record identified defects, their severity, and
their status throughout the testing phase, as well as summarize testing results
and coverage.
5. Deployment Artifacts
o Deployment Scripts: Automated scripts to deploy the application in various
environments, often utilizing CI/CD pipelines.
o Release Notes: Documents the new features, bug fixes, known issues, and
improvements included in a specific release.
o User and Maintenance Documentation: Includes user manuals, maintenance
guidelines, and other resources to support end-users and operations teams
post-deployment.
6. Project Management Artifacts
o Project Plan: Outlines the project’s scope, schedule, milestones, and
resources. It may include Gantt charts or timelines to track progress.
o Risk Register: A log of identified risks, their potential impact, mitigation
strategies, and current status, helping the team manage and prioritize risks
throughout the project.
o Progress Reports and Meeting Minutes: Provide updates on the project's
current state, decisions made, and actions to be taken, ensuring
communication and alignment across stakeholders.
7. Maintenance and Support Artifacts
o Change Requests: Document requests for changes after the software has been
released, including details about the requested modification and its potential
impact.
o Issue Tracker: Tracks reported issues, bugs, or requests from users or support
teams, helping prioritize fixes and enhancements in future releases.
o Knowledge Base and FAQs: Reference materials created to assist users in
troubleshooting common issues and provide quick answers to frequently asked
questions.
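As an illustration of the requirements traceability matrix described under Requirements Artifacts above, the sketch below models an RTM as a simple mapping from requirement IDs to the test cases that verify them, and flags requirements that no test covers. All IDs are hypothetical.

```python
# Hypothetical requirements traceability matrix: requirement IDs
# mapped to the test cases that verify them.
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],          # not yet covered by any test case
}

def uncovered(matrix):
    """Return requirement IDs that have no associated test case."""
    return sorted(req for req, tests in matrix.items() if not tests)

print(uncovered(rtm))  # ['REQ-003']
```

Running such a check before a release review is one concrete way the RTM "ensures that all requirements are covered and can be traced back during testing," as stated above.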
Importance of Programmatic Artifacts
Programmatic artifacts are critical for several reasons:
• Documentation and Knowledge Transfer: They provide a documented history of
the project’s lifecycle, ensuring that future team members can understand the
software’s purpose, structure, and key decisions.
• Verification and Validation: Artifacts like test plans and requirements traceability
matrices allow teams to systematically verify that all requirements have been met and
that the product is working as intended.
• Risk Management: Through documents like risk registers and issue logs, teams can
identify and manage risks, track issues, and reduce the likelihood of project delays or
costly rework.
• Accountability and Transparency: Artifacts ensure transparency among
stakeholders, allowing them to track progress, understand project challenges, and hold
teams accountable to deliverables and timelines.
• Maintenance and Scalability: Artifacts support ongoing maintenance, providing
essential information needed to troubleshoot, upgrade, and scale the software as the
organization’s needs evolve.
UNIT – 3 WORK FLOWS OF THE PROCESS, CHECKPOINTS OF
THE PROCESS & ITERATIVE PROCESS PLANNING
PART – 1 WORK FLOWS OF THE PROCESS
SOFTWARE PROCESS WORKFLOWS
Software Process and Project Management (SPPM) workflows provide a structured
framework for managing software development processes. They break down complex tasks
into manageable stages, helping project teams to deliver high-quality software within
constraints like time, budget, and resources. Here’s an overview of the primary types of
software process workflows typically managed within SPPM:
1. Requirement Workflow
• Purpose: Define the project scope and what the software should do.
• Activities: Includes requirement gathering, analysis, and documentation. Stakeholders
and clients collaborate with project teams to outline functional and non-functional
requirements.
• Outcome: A detailed requirements document, which acts as a foundation for future
workflows.
2. Design Workflow
• Purpose: Translate requirements into an architecture and design that will meet the
outlined objectives.
• Activities: High-level architecture design, component design, and user interface
design. This phase also includes defining data structures and choosing algorithms.
• Outcome: Design documents, architecture blueprints, and sometimes initial
prototypes to validate the design approach.
3. Implementation Workflow
• Purpose: Code the software as per the design specifications.
• Activities: Coding, unit testing, and integrating different modules. Developers write,
compile, and debug code, following coding standards and guidelines.
• Outcome: Functional modules and code that are ready for integration testing.
4. Testing Workflow
• Purpose: Ensure that the software meets quality standards and requirements.
• Activities: Functional testing, system testing, integration testing, and sometimes user
acceptance testing. Testing teams execute test cases, identify bugs, and confirm fixes.
• Outcome: A tested and verified software product, along with test reports and quality
metrics.
5. Deployment Workflow
• Purpose: Release the final product to the user environment.
• Activities: Deployment planning, setting up infrastructure, data migration, and
environment configuration. This workflow may also include training users and setting
up a feedback loop.
• Outcome: The software is deployed in a live environment, ready for end-user
interaction.
6. Maintenance and Support Workflow
• Purpose: Ensure the software continues to meet user needs after deployment.
• Activities: Monitoring software performance, handling bug fixes, and implementing
minor enhancements. This stage may also address compatibility with new technology
standards.
• Outcome: Continuous software performance improvements, with updates released as
required.
7. Project Management Workflow
• Purpose: Oversee and manage the entire project lifecycle to ensure on-time, within-
budget delivery.
• Activities: Planning, tracking progress, resource allocation, and risk management.
Project managers coordinate between stakeholders, resolve issues, and ensure smooth
transitions across workflows.
• Outcome: A structured, controlled project with clear documentation and reporting to
stakeholders on project progress and any issues.
ITERATION WORKFLOWS
Iteration workflows and project workflows are two fundamental aspects of the software
development process, especially in iterative and Agile methodologies. Here’s a breakdown of
each and their differences.
Iteration Workflows
Iteration workflows refer to the sequence of steps repeated during each iteration of the
software development process. In iterative models like Agile, Scrum, and Spiral, the
workflow is typically broken into cycles or iterations. Each iteration produces a potentially
shippable increment of the software, adding value continuously.
Key Components of Iteration Workflows:
1. Planning: Define what the iteration will accomplish based on requirements, often
from a prioritized backlog.
2. Designing: Outline the architectural and technical approach for this iteration’s
features.
3. Development: Implement the functionality, often in code, using small, manageable
tasks.
4. Testing: Verify the code through various tests, including unit, integration, and
sometimes acceptance testing.
5. Review and Feedback: Evaluate the outcomes of the iteration, gather feedback from
stakeholders, and refine the product.
6. Retrospective: Discuss what went well, what didn’t, and areas for improvement to
enhance future iterations.
Iteration workflows are typically short, ranging from 1-4 weeks, ensuring the team can
respond to changes quickly.
Project Workflows
Project workflows encompass the entire lifecycle of a software project from inception to
delivery and maintenance. It’s a broader, higher-level view that includes multiple iterations as
well as the long-term goals, planning, and management of the software development project.
Key Components of Project Workflows:
1. Project Planning: Establish the scope, timeline, resources, and goals for the entire
project.
2. Requirements Analysis: Gather, analyze, and document the system requirements
from stakeholders.
3. Architecture and Design: Define the overall architecture, technology stack, and
design principles that will guide the project.
4. Development and Iterations: Execute the development phase in multiple iterations,
following the iteration workflow within each cycle.
5. Integration and Testing: Validate the integrated components across iterations to
ensure functionality, performance, and security.
6. Deployment: Release the software to the production environment or client.
7. Maintenance and Support: Address any post-release issues, apply patches, and make
updates.
Project workflows emphasize the overall structure and roadmap for the project, while
iteration workflows are more focused on delivering incremental value throughout the
development process.
Differences Between Iteration and Project Workflows
Aspect           | Iteration Workflow                                   | Project Workflow
-----------------|------------------------------------------------------|--------------------------------------------
Scope            | Small, focused on the current iteration              | Broad, encompassing the entire project
Time Frame       | 1-4 weeks                                            | Project duration (months/years)
Focus            | Delivering a potentially shippable product increment | Managing the overall project goals
Stakeholders     | Primarily the development team                       | Entire project team, stakeholders, clients
Goal             | Incremental value and quick feedback                 | Successful project completion
Iteration Cycle  | Repeated for each iteration                          | Contains multiple iterations
In sum, iteration workflows help development teams maintain a steady pace of delivery,
while project workflows ensure that these efforts align with the long-term objectives of the
software project. Together, they create a flexible yet organized approach to software
development.
PART – 2 CHECKPOINTS OF THE PROCESS
In software development, system-wide checkpoint events are held at the end of every phase of development. These checkpoints provide visibility into life-cycle milestones as well as system-wide issues and problems. They generally provide the following:
• They synchronize the management and engineering perspectives.
• They verify whether the goals of each phase have been achieved.
• They provide a basis for analysis and evaluation, to determine whether the project is proceeding as planned and to take corrective action as required.
• They identify essential risks, issues, or problems and conditions that are not tolerable.
• They perform a global assessment of the entire life cycle.
Generally, three sequences of project checkpoints are used to synchronize stakeholder expectations throughout the life cycle. These three types of joint management reviews are described below.
1. MAJOR MILESTONES
Major milestones represent significant system-wide events and are often placed at the end of
each phase in the development lifecycle. These milestones serve as checkpoints that confirm
the project is on the right track and that the objectives of each phase have been met. Here’s a
deeper look at what they accomplish:
• System-Wide Visibility: Major milestones provide visibility into high-level issues or
concerns affecting the entire system, ensuring no critical aspects are overlooked.
• Management and Engineering Synchronization: They help synchronize
perspectives between management and engineering teams, ensuring that both sides are
in alignment. For instance, while management may focus on timelines and budgets,
engineering focuses on technical and functional requirements. Major milestones bring
these viewpoints together to ensure the project remains feasible and within scope.
• Validation of Phase Goals: At each major milestone, teams assess whether the goals
of that phase have been achieved. If a major milestone is reached, it implies that the
team has successfully achieved the target for that phase, such as completing a design
phase, verifying requirements, or finalizing testing.
• Stakeholder Agreement: Major milestones are essential for gaining consensus from
stakeholders regarding the project’s current state and progress. This agreement is
crucial in large, complex projects where multiple stakeholders have different priorities
and perspectives.
• Balanced Details in Deliverables: These milestones ensure a balanced level of detail
in project deliverables, confirming that understanding of requirements, life-cycle
planning, and product specifications align well.
• Consistency Across Artifacts: Major milestones verify consistency across various
project artifacts (e.g., documents, designs, code), reducing the risk of conflicting or
redundant information.
In traditional models like Waterfall, major milestones occur less frequently due to the linear
nature of the process. In Agile and iterative models, they may still exist but could be spread
out across iterative cycles.
2. MINOR MILESTONES (MICRO MILESTONES)
Minor milestones, often referred to as micro milestones, represent smaller, more frequent
progress checks that help manage daily project activities. They play a key role in maintaining
momentum and verifying that the team is on track to reach major milestones.
• Day-to-Day Control: These minor milestones are essential for project managers to
monitor the daily activities of team members, keeping the project on course.
• Iteration-Focused Events: In iterative models, minor milestones are especially useful
because they break down the work within each iteration. For example, each iteration
might have minor milestones for requirements gathering, coding, testing, and
integration.
• Detailed Review Points: Minor milestones allow for a closer inspection of the current
iteration's content, such as code or design elements. This helps to identify and address
potential issues early, ensuring that the iteration is proceeding as planned.
• Early and Late Iteration Focus: In the initial project stages, minor milestones help
teams focus on gathering requirements and designing the system. As the project
progresses, these milestones shift to emphasize aspects like completeness,
consistency, usability, and change management, reflecting the evolving priorities as
the project matures.
• Confidence in Achieving Major Milestones: Minor milestones divide the time
between major milestones into smaller intervals, providing confidence that the project
will achieve its broader goals on schedule. By meeting these smaller goals, teams stay
aligned with the bigger objectives, ensuring smoother transitions between phases.
3. STATUS ASSESSMENTS
Status assessments are a critical part of monitoring progress, identifying risks, and
maintaining clear communication among all parties involved in a project. These assessments
are often regular, systematic evaluations performed to keep stakeholders informed and
address any emerging issues.
• Mechanism for Issue Resolution: Status assessments provide a structured way to
address and resolve potential management, technical, and project-related risks. By
having regular assessments, teams can tackle issues early before they escalate.
• Synchronization of Expectations: The primary objective of status assessments is to
ensure all stakeholders have a synchronized and consistent view of project
expectations. For example, an engineering team might expect a certain level of
functionality, while a client might prioritize usability; status assessments help align
these expectations.
• Monitoring of Progress and Quality: These assessments track progress indicators
(e.g., timeline adherence, completed features) and quality metrics (e.g., defect counts,
code coverage) to ensure that the project remains on track both in terms of delivery
and quality.
• Attention to Project Dynamics: Regular assessments help teams remain attentive to
project dynamics, such as evolving requirements, team performance, or external
changes that might impact project goals.
• Stakeholder Communication: Status assessments foster open lines of
communication between all stakeholders. Regular feedback sessions keep everyone in
the loop about changes, challenges, and successes, reducing misunderstandings and
misalignments.
• Management Insight: For management, status assessments provide frequent and
reliable insights into the project’s progress. This enables management to make
informed decisions, such as reallocating resources, adjusting timelines, or
recalibrating objectives to adapt to new developments.
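The progress and quality indicators mentioned above (timeline adherence, completed features, defect counts) can be combined into a simple status summary for a periodic assessment. The sketch below is illustrative only; the metric definitions and the "on track" thresholds are assumptions, not a standard formula.

```python
def status_summary(planned_features, completed_features,
                   defects_open, defects_closed):
    """Compute simple progress and quality indicators for a status
    assessment. Inputs are counts; thresholds are illustrative only."""
    progress = completed_features / planned_features
    # Share of all known defects that have been closed so far
    closure_rate = defects_closed / max(defects_open + defects_closed, 1)
    return {
        "progress_pct": round(progress * 100, 1),
        "defect_closure_pct": round(closure_rate * 100, 1),
        # Hypothetical health rule: at least half the features done
        # and at least 70% of defects closed
        "on_track": progress >= 0.5 and closure_rate >= 0.7,
    }

print(status_summary(40, 24, 6, 30))
# {'progress_pct': 60.0, 'defect_closure_pct': 83.3, 'on_track': True}
```

Tracking the same few numbers at every assessment gives management the "frequent and reliable insights" the text describes, and makes trends visible across reviews.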
PART – 3 ITERATIVE PROCESS PLANNING
Iteration planning is generally a process of adapting as the project unfolds by making
alterations to plans. Plans are changed based on feedback from the monitoring process,
changes in project assumptions or risks, and changes in scope, budget, or schedule. It is
essential to include the team in the planning process. Planning is generally concerned with
explaining and defining the actual sequence of intermediate results. It is an event where each
team member identifies how much of the team backlog they can commit to delivering during
an upcoming iteration.
What is Iteration Planning?
Iteration planning is the process in Agile software development where teams decide which
tasks to work on during a specific period, typically a week or two (an iteration or sprint). It
involves breaking down larger tasks into smaller, manageable ones, estimating their effort,
and assigning them to team members based on capacity and project goals. The aim is to
create a plan that ensures steady progress towards completing the project within the desired
timeframe.
Iteration planning is a key aspect of Agile project management, particularly in methodologies
like Scrum. It involves the collaborative effort of the entire team to plan and prioritize the
work to be done within a defined period, known as an iteration or sprint.
During iteration planning, the team typically:
• Reviews the product backlog: The team examines the list of prioritized tasks, user
stories, or features in the product backlog. This backlog represents all the work that
needs to be completed over the course of the project.
• Selects items for the iteration: Based on the team’s capacity and the priorities set by
the product owner, the team selects a set of tasks or user stories from the product
backlog to be completed during the iteration. These tasks should be small enough to
be completed within the iteration timeframe, typically one to four weeks.
• Breaks down tasks: The team further breaks down the selected tasks into smaller,
more manageable units of work. This decomposition helps clarify the specific steps
needed to complete each task and ensures that they are well-defined and achievable
within the iteration.
• Estimates effort: The team estimates the effort required to complete each task or user
story. This estimation is usually done using techniques like story points or time-based
estimates, allowing the team to gauge the relative complexity and size of each task.
Inputs and Outputs of Iteration Planning
Inputs and outputs of iteration planning in Agile software development typically include:
Inputs:
• Product Backlog: A prioritized list of features, user stories, or tasks that need to be
completed over the course of the project.
• Velocity: The team’s historical velocity, which represents the amount of work the
team can typically complete in one iteration. It helps the team estimate how much
work they can commit to for the upcoming iteration.
• Stakeholder Feedback: Input from stakeholders, such as product owners or
customers, regarding their priorities and preferences for the upcoming iteration.
• Team Capacity: The availability of team members and their capacity to take on work
during the iteration.
• Definition of Done: Criteria that must be met for a task to be considered complete,
ensuring that the team delivers high-quality work.
Outputs:
• Iteration Plan: A detailed plan outlining the tasks or user stories to be completed
during the iteration, along with their estimated effort, assignments, and dependencies.
• Task Breakdown: Tasks are broken down into smaller, more manageable units of
work, clarifying the specific steps needed to complete each task.
• Effort Estimates: Estimates of the effort required to complete each task, typically
measured in story points or time-based estimates.
• Task Assignments: Tasks are assigned to individual team members based on their
skills, availability, and capacity.
• Sprint Goal: A clear and concise statement of what the team aims to accomplish
during the iteration, providing focus and direction for their work.
• Iteration Backlog: The subset of items from the product backlog that the team
commits to completing during the iteration. This forms the basis for tracking progress
and measuring the team’s success during the iteration.
These inputs and outputs enable Agile teams to effectively plan and execute their work during
each iteration, ensuring that they deliver value incrementally and iteratively throughout the
project.
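The inputs above (a prioritized backlog, velocity, capacity) can be combined in a very simple way: commit prioritized items until the team's velocity is used up. The sketch below is only an illustration with made-up story names and point values, not a prescribed Agile algorithm.

```python
# Sketch: velocity-based selection of backlog items for one iteration.
# Story names and point values are hypothetical.

def plan_iteration(backlog, velocity):
    """Greedily commit prioritized backlog items until velocity is consumed."""
    committed, capacity = [], velocity
    for story, points in backlog:          # backlog is already prioritized
        if points <= capacity:
            committed.append(story)
            capacity -= points
    return committed, velocity - capacity  # committed items and points

backlog = [("login page", 5), ("search API", 8), ("audit log", 3), ("reports", 8)]
items, points = plan_iteration(backlog, velocity=16)
print(items, points)  # ['login page', 'search API', 'audit log'] 16
```

Real teams also weigh dependencies and the Definition of Done rather than filling capacity greedily, but the capacity arithmetic is the same.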
Preparation of Iteration Planning
The preparation of iteration planning in Agile software development involves several steps to
ensure that the team is ready to effectively plan and execute their work during the upcoming
iteration. Here’s how the preparation typically unfolds:
1. Review of Product Backlog: The product backlog, containing prioritized user stories
or tasks, is reviewed to understand the scope of work available for the upcoming
iteration. This review helps the team gain clarity on the requirements and priorities set
by the product owner.
2. Refinement of Product Backlog: Any user stories or tasks that are unclear,
ambiguous, or incomplete are refined or decomposed into smaller, more manageable
units of work. This refinement ensures that the backlog items are well-defined and
ready for implementation during the iteration.
3. Estimation of Backlog Items: The team estimates the effort required to complete
each backlog item, typically using techniques like story points or time-based
estimates. This estimation helps the team gauge the size and complexity of the work
and facilitates capacity planning for the iteration.
4. Identification of Dependencies: Any dependencies between backlog items or tasks
are identified to understand potential constraints or risks that may impact the team’s
ability to complete the work. This identification allows the team to proactively
address dependencies and plan accordingly during iteration planning.
5. Assessment of Team Capacity: The team assesses its capacity and availability for the
upcoming iteration, taking into account factors such as team member availability,
holidays, vacations, and any other commitments. This assessment helps the team
determine how much work they can commit to completing during the iteration.
6. Alignment with Sprint Goals: The team ensures that the work planned for the
iteration aligns with the overall goals and objectives set for the sprint. This alignment
ensures that the team is focused on delivering value that contributes to the sprint goals
and the project’s overall success.
Process of Iteration Planning
The process of iteration planning in Agile software development typically follows these steps:

• Preparation: The team prepares for the iteration planning meeting by reviewing the
product backlog, refining user stories, estimating tasks, identifying dependencies, and
assessing team capacity.
• Iteration Planning Meeting: The team holds a collaborative meeting, usually lasting
a few hours, to plan the work for the upcoming iteration. During this meeting, they:
o Review Goals: The product owner or scrum master reviews the goals and
objectives for the iteration, providing context for the planning session.
o Review Backlog: The team reviews the prioritized items in the product
backlog, discussing their requirements and acceptance criteria.
o Select Backlog Items: Based on the team’s capacity and the sprint goals, the
team collectively selects a subset of backlog items to work on during the
iteration.
o Break Down Tasks: The team breaks down selected backlog items into
smaller, more manageable tasks, clarifying the specific steps needed to
complete each one.
o Estimate Effort: The team estimates the effort required to complete each task,
using techniques like story points or time-based estimates.
o Assign Tasks: Tasks are assigned to individual team members based on their
skills, availability, and capacity, ensuring a balanced workload.
o Define Sprint Goal: The team collaboratively defines a sprint goal, a concise
statement of what they aim to achieve by the end of the iteration.
• Update Plans and Tools: After the iteration planning meeting, the team updates
project management tools, such as task boards or project tracking software, to reflect
the planned work for the iteration.
• Daily Standups: Throughout the iteration, the team holds daily standup meetings to
discuss progress, share updates, and address any impediments or obstacles that arise.
• Demo and Retrospective: At the end of the iteration, the team holds a demo to
showcase the completed work to stakeholders and a retrospective to reflect on what
went well, what could be improved, and any lessons learned for future iterations.
By following this iterative planning process, Agile teams can effectively plan, execute, and
deliver value incrementally throughout the project, adapting to changing requirements and
delivering high-quality software in a timely manner.
WORK BREAKDOWN STRUCTURES
A Work Breakdown Structure involves dividing a large and complex project into simpler,
manageable, and independent tasks. The root of this tree (structure) is labeled with the
project name itself. To construct a work breakdown structure, each node is recursively
decomposed into smaller sub-activities until, at the leaf level, the activities become
indivisible and independent. It follows a top-down approach.
Steps to construct a Work Breakdown Structure:
Step 1: Identify the major activities of the project.
Step 2: Identify the sub-activities of the major activities.
Step 3: Repeat until indivisible, simple, and independent activities are created.
Work Breakdown Structure
Construction of Work Breakdown Structure
1. Firstly, the project managers and top-level management identify the main
deliverables of the project.
2. These main deliverables are then broken down into smaller, higher-level tasks, and
this process is repeated recursively to produce much smaller independent tasks.
3. It depends on the project manager and team up to which level of detail they want
to break down their project.
4. Generally, the lowest-level tasks are the simplest and most independent ones, each
representing less than two weeks' worth of work.
5. Hence, there is no fixed rule for how deep the work breakdown structure should
go; it depends entirely on the type of project and the management of the company.
6. The efficiency and success of the whole project depend largely on the quality of
its Work Breakdown Structure, which underlines its importance.
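The recursive decomposition described above maps naturally onto a tree data structure. The sketch below (node names and effort figures are hypothetical) shows a top-down structure whose leaf tasks carry effort estimates that roll up to the project root:

```python
# Sketch: a WBS as a tree. Leaves are indivisible tasks with effort
# estimates; a parent's effort is the recursive sum of its children.

class WBSNode:
    def __init__(self, name, effort=0, children=None):
        self.name = name
        self.effort = effort            # only meaningful for leaf tasks
        self.children = children or []

    def total_effort(self):
        if not self.children:           # leaf: an indivisible task
            return self.effort
        return sum(c.total_effort() for c in self.children)

project = WBSNode("Project", children=[
    WBSNode("Design", children=[
        WBSNode("UI design", 5), WBSNode("DB schema", 3)]),
    WBSNode("Implementation", children=[
        WBSNode("Coding", 20), WBSNode("Unit tests", 7)]),
])
print(project.total_effort())  # 35
```

The same rollup works for cost or duration, which is why a good WBS directly supports the estimation uses listed next.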
Uses of Work Breakdown Structure
1. Cost estimation: It allows a precise cost estimate to be made for each activity.
2. Time estimation: It allows the time that each activity will take to be estimated
more precisely.
3. Easy project management: It allows easy management of the project.
4. Helps in project organization: It helps the top management organize the project
properly.
PLANNING GUIDELINES
Planning guidelines are generally written statements containing guidance to be consulted
before any development of a project takes place. They are often used for the purposes of
uniformity, comfort, and safe development, and should be followed by any party involved
in development. These initial planning guidelines are based on the experience of many
other people and create a convenient working environment. They are therefore considered
credible bases of estimates and build some amount of confidence in the stakeholders.
Planning is an essential part of software engineering, and there are several guidelines
that can be followed to ensure that the software development process is well-organized
and efficient. Some of these guidelines include:
1. Define clear and measurable objectives: Clearly define the goals and objectives of
the software development project, and make sure that they are measurable and
achievable.
2. Understand the requirements: Gather and analyze the requirements for the software,
and ensure that they are complete, consistent, and unambiguous.
3. Create a project plan: Develop a detailed project plan that includes the schedule,
resources, and deliverables for the software development project.
4. Identify and manage risks: Identify and evaluate the risks that may impact the
software development project, and develop a plan to mitigate or manage them.
5. Define the software architecture: Define the overall structure and organization of
the software, and ensure that it meets the requirements and is consistent with the
project plan.
6. Establish a development process: Establish a development process that includes
methodologies, tools, and standards to be used throughout the software development
project.
7. Define the testing strategy: Define the testing strategy for the software development
project, including the types of tests to be performed and the schedule for testing.
8. Monitor and control progress: Monitor and control the progress of the software
development project, and take corrective action as needed to keep the project on
schedule and within budget.
9. Communicate effectively: Ensure effective communication among all members of
the development team and stakeholders.
By following these guidelines, software engineers can ensure that the software
development process is well-organized and efficient, and that the project is delivered on
time and within budget.
It is important to note that these guidelines are not exhaustive and may vary based on the
size, complexity, and nature of the project. Planning in software engineering is an ongoing
process, and it should be flexible enough to adapt to the changes that are likely to happen
during the course of the project.
There are generally two planning guidelines that should be followed for the development
of a project. These guidelines are given below:
• Advice for the default allocation of costs among all elements of the first-level WBS.
The table below shows default allocations for the budgeted costs of all elements of
the first-level WBS. Values may vary across projects, but the allocation provides a
good benchmark for evaluating a plan, provided the rationale for deviations from
these guidelines is understood. Note that it is a cost allocation, not an effort
allocation.

First-Level WBS Element    Default Budget
Management                 10%
Environment                10%
Requirements               10%
Design                     15%
Implementation             25%
Assessment                 25%
Deployment                 5%
Total                      100%
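Applied to a concrete figure, the default allocation above turns a total budgeted cost into per-element budgets. A minimal sketch (the total budget of 500,000 is a hypothetical value):

```python
# Sketch: applying the default first-level WBS cost allocation from the
# table above to a hypothetical total budget.

DEFAULT_ALLOCATION = {            # percentage of total budgeted cost
    "Management": 10, "Environment": 10, "Requirements": 10,
    "Design": 15, "Implementation": 25, "Assessment": 25, "Deployment": 5,
}

def allocate_budget(total_cost):
    """Split a total cost across first-level WBS elements by percentage."""
    assert sum(DEFAULT_ALLOCATION.values()) == 100
    return {elem: total_cost * pct / 100
            for elem, pct in DEFAULT_ALLOCATION.items()}

budget = allocate_budget(500_000)
print(budget["Implementation"])  # 125000.0
```

A project that deviates from these defaults should be able to explain the rationale, as the text notes.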
• Advice for the allocation of effort and schedule across all phases of the life cycle.
The table below shows the allocation of effort and schedule. Values may vary across
projects, but the allocation provides an average expectation across a spectrum of
application domains.

Phase       Inception   Elaboration   Construction   Transition
Effort      5%          20%           65%            10%
Schedule    10%         30%           50%            10%
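The default effort and schedule percentages above can be turned into per-phase staff-months and calendar months once project totals are known. The totals in this sketch (100 staff-months, 20 calendar months) are hypothetical:

```python
# Sketch: converting the default phase allocations (table above) into
# per-phase effort and duration for a hypothetical project.

EFFORT_PCT = {"Inception": 5, "Elaboration": 20,
              "Construction": 65, "Transition": 10}
SCHEDULE_PCT = {"Inception": 10, "Elaboration": 30,
                "Construction": 50, "Transition": 10}

def phase_plan(total_staff_months, total_calendar_months):
    """Return (effort in staff-months, duration in months) per phase."""
    return {phase: (total_staff_months * EFFORT_PCT[phase] / 100,
                    total_calendar_months * SCHEDULE_PCT[phase] / 100)
            for phase in EFFORT_PCT}

plan = phase_plan(total_staff_months=100, total_calendar_months=20)
print(plan["Construction"])  # (65.0, 10.0)
```

Note how Construction consumes most of the effort but only half the schedule, which implies a larger team during that phase.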

All these guidelines simply translate a broad framework into fully explained principles of
development, and they play a major role in achieving high-quality development of a
project. However, project-independent planning advice also carries risk: the guidelines
may be adopted blindly, without adaptation to the circumstances of a specific project, or
they may be interpreted in the wrong way.
COST AND SCHEDULE ESTIMATION
The cost and schedule estimation process helps in determining the number of resources
needed to complete all project activities. It generally involves the approximation and
development of costing alternatives to plan, perform, and deliver the project. A good
estimate is essential for keeping a project under budget.
Two perspectives are generally required to derive project plans. These perspectives are
given below:
1. Forward-Looking:
• The forward-looking approach is also known as the top-down approach. It
generally starts by describing the various project tasks: beginning with the
project aim or end deliverable and breaking it down into smaller planning
chunks.
• Top-down budgeting also refers to a method of budgeting in which project
managers prepare a high-level budget for the organization.
• These project managers or senior management develop a characterization of
the overall size, process, environment, people, and quality that is essential
for the software project. In this approach, the duration of deliverables is
estimated.
• It generally takes less time and effort than a bottom-up estimate. With the
help of a software cost estimation model, an estimate of the overall effort
and schedule is made. The project manager then partitions the overall effort
estimate across the top level of the WBS (Work Breakdown Structure).
• The schedule is likewise divided into major milestone dates. At this stage,
sub-project managers are given responsibility for decomposing every
element of the WBS into lower levels, using the top-level allocations,
staffing profile, and major milestone dates as constraints.
• The main benefit of this approach is the use of holistic data from earlier
projects or products, along with unmitigated risks and scope creep. This also
helps reduce the risk of overlooked work activities or costs.
2. Backward-Looking:
• The backward-looking approach is also known as the bottom-up approach.
• In this approach, the project team breaks the clients' requirements down,
determining the lowest level appropriate to develop a range of estimates
covering the overall scope of the project, based on the available definition
of the tasks.
• The elements of the lowest-level WBS are elaborated into detailed tasks, for
which the WBS element manager is responsible for estimating the budget
and schedule.
• All of these estimates are then joined and integrated into higher-level WBS
budgets and milestones.
Milestone scheduling or budget allocation through the top-down approach tends to result
in an overly optimistic plan, whereas the bottom-up approach tends to result in an overly
pessimistic plan. Iteration is therefore needed, using the results of one approach to
validate and check the results of the other. Both approaches should be used together, in
balance, throughout the life cycle of the project, as shown below.
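One way to make that balancing concrete is to compare the top-down estimate from a cost model against the sum of the bottom-up WBS element estimates; a large variance signals that another iteration between the two perspectives is needed. All numbers in this sketch are hypothetical, and the 10% tolerance is an assumed policy, not a standard value.

```python
# Sketch: reconciling a top-down effort estimate with the bottom-up sum
# of WBS element estimates. Figures and tolerance are illustrative.

def reconcile(top_down_effort, bottom_up_estimates, tolerance=0.10):
    """Return (bottom-up total, relative variance, within-tolerance flag)."""
    bottom_up = sum(bottom_up_estimates.values())
    variance = (bottom_up - top_down_effort) / top_down_effort
    return bottom_up, variance, abs(variance) <= tolerance

bottom_up, variance, ok = reconcile(
    top_down_effort=100,   # staff-months, e.g. from a cost estimation model
    bottom_up_estimates={"Design": 18, "Implementation": 55, "Assessment": 40},
)
print(bottom_up, round(variance, 2), ok)  # 113 0.13 False -> iterate again
```

As the text notes, the top-down result is usually the optimistic one, so in practice the reconciliation often pulls the plan toward the bottom-up figure.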
Below is a diagram showing the planning balance through the life cycle.

Engineering stage planning emphasizes the following points:
• Macro-level task estimation for production-stage artifacts.
• Micro-level task estimation for engineering artifacts.
• Stakeholder concurrence.
• Coarse-grained variance analysis of actual vs. planned expenditures.
• Tuning the top-down project-independent planning guidelines into project-specific
planning guidelines.
• WBS definition and elaboration.
Production stage planning emphasizes the following points:
• Micro-level task estimation for production-stage artifacts.
• Macro-level task estimation for maintenance of engineering artifacts.
• Stakeholder concurrence.
• Fine-grained variance analysis of actual vs. planned expenditures.
The top-down perspective generally dominates during the engineering stage, because
there is not yet enough depth of understanding, nor stability in the detailed task
sequences, to perform bottom-up planning. On the other hand, by the production stage
there is enough prior experience and planning fidelity that the bottom-up planning
perspective dominates.
PRAGMATIC PLANNING
Pragmatic planning in software process and project management is a flexible, practical
approach to project planning that prioritizes real-world constraints and goals over strict
adherence to theoretical methodologies or rigid procedures. Pragmatic planning focuses on
achieving project objectives effectively and efficiently, often by adapting to changing
requirements, limited resources, and unforeseen challenges.
Key Elements of Pragmatic Planning
1. Goal-Oriented Focus:
o Instead of following a rigid plan, pragmatic planning centers on the key goals
and outcomes that the project aims to achieve. Teams define success criteria
and prioritize tasks based on what will most effectively contribute to these
goals.
o Pragmatic planners often use a "minimum viable product" (MVP) approach,
focusing first on delivering a workable version that can be improved over
time, ensuring value is delivered even with limited resources or time.
2. Adaptability to Change:
o Pragmatic planning allows for adjustments as new information, requirements,
or constraints arise. This approach acknowledges that not all elements of a
project can be foreseen at the outset and that a flexible plan allows teams to
address changes more smoothly.
o Agile and iterative models support pragmatic planning well, as they provide
frameworks for regular reassessment and re-prioritization.
3. Realistic Resource Management:
o Pragmatic planning emphasizes realistic budgeting of time, personnel, and
other resources. Instead of making optimistic assumptions about resource
availability or efficiency, it encourages planners to account for limitations and
potential setbacks.
o This includes planning for developer workload, recognizing dependencies, and
balancing team strengths to allocate resources where they are most impactful.
4. Incremental Delivery and Continuous Feedback:
o Pragmatic planning supports an incremental approach, where teams deliver
project components or features in smaller pieces and gather feedback
continuously. This iterative process allows teams to make quick adjustments
based on stakeholder input or testing results.
o By frequently releasing increments, teams can ensure the project remains
aligned with the client’s evolving needs and business goals.
5. Risk Management Focus:
o A pragmatic approach places high importance on identifying and managing
risks early on, allowing for contingencies to be built into the plan. Instead of
creating overly complex plans, it emphasizes addressing the biggest risks first
and preparing for potential obstacles.
o This involves both proactive risk mitigation (such as allocating extra time for
testing) and reactive strategies for handling unexpected issues.
6. Transparent Communication and Stakeholder Engagement:
o Pragmatic planning encourages regular communication with all stakeholders
to ensure they are informed of progress, issues, and changes in direction. This
ongoing dialogue helps manage expectations and ensures everyone is on the
same page about project priorities.
o Involving stakeholders at each planning stage fosters alignment and prevents
surprises, making it easier to adjust when priorities shift.
7. Simplification and Avoiding Overengineering:
o Pragmatic planning advocates for simplifying wherever possible. This can
mean using straightforward solutions that meet requirements without
overengineering, minimizing complexity to reduce the risk of delays or errors.
o By focusing on “just enough” processes or technology to accomplish goals,
pragmatic planners reduce project bloat and complexity, keeping the team
focused on what truly matters for delivery.
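The "address the biggest risks first" idea from the risk-management element above is often operationalized by ranking risks by exposure, computed as probability times impact. The risk names and values in this sketch are made up for illustration:

```python
# Sketch: prioritizing risks by exposure = probability * impact,
# so the biggest risks are addressed first. Data is hypothetical.

def rank_risks(risks):
    """Sort risks by exposure (probability * impact), largest first."""
    return sorted(risks, key=lambda r: r["prob"] * r["impact"], reverse=True)

risks = [
    {"name": "key dependency slips", "prob": 0.3, "impact": 8},
    {"name": "unstable requirements", "prob": 0.6, "impact": 5},
    {"name": "test environment late", "prob": 0.2, "impact": 4},
]
top = rank_risks(risks)[0]["name"]
print(top)  # unstable requirements (exposure 3.0 beats 2.4 and 0.8)
```

A pragmatic plan would then build contingency (extra time, fallback options) around the highest-exposure items rather than trying to hedge everything equally.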
Benefits of Pragmatic Planning
• Higher Agility and Responsiveness: Teams can respond effectively to changes or
issues, which is critical in fast-paced or evolving environments.
• Efficient Resource Use: By focusing on realistic resource allocation, pragmatic
planning helps prevent burnout, budget overruns, and inefficient time management.
• Improved Stakeholder Satisfaction: Continuous delivery and regular
communication help align the project’s output with stakeholder expectations and
needs.
• Reduced Risk of Failure: Through careful risk management and adaptable planning,
pragmatic projects are more likely to succeed in delivering their core objectives.
UNIT – 4 PROCESS AUTOMATION, PROJECT CONTROL &


PROCESS INSTRUMENTATION & TAILORING THE PROCESS
PART – 1 PROCESS AUTOMATION
AUTOMATION BUILDING BLOCKS
Process automation generally refers to the use of digital technology to perform a process
or processes, in order to accomplish a workflow or function. For an iterative process,
process automation and change management are critical: if change is too expensive, the
development organization will resist it. Various tools are available nowadays to automate
the software development process.

In the diagram given above, some of the important tools needed across the overall
software process are introduced; they correlate well with the process framework. Each of
the software development tools maps closely to one of the process workflows, and each of
these process workflows has a distinct need for automation support. Workflow automation
generally makes a complicated software process easy to manage. Here you will see the
environment that is necessary to support the process framework. Some of the concerns
associated with each workflow are given below:
1. Management: Nowadays, there are several opportunities available for automating
the project planning and control activities of the management workflow. Several
tools are useful for creating planning artifacts, such as software cost estimation
tools and Work Breakdown Structure (WBS) tools. Workflow management
software is an advanced platform that provides flexible tools to improve the way
you work efficiently. Automation support can also improve insight into metrics.
2. Environment: Automating the development process and developing an
infrastructure for supporting the different project workflows are essential activities
of the engineering stage of the life cycle. The environment that provides process
automation is a tangible artifact that is critical to the life cycle of the system being
developed; indeed, the top-level WBS recognizes the environment as a first-class
workflow. Integrating their own environment and infrastructure for software
development is one of the main tasks for most software organizations.
3. Requirements: Requirements management is a systematic approach to identifying,
documenting, organizing, and tracking the changing requirements of a system. It is
also responsible for establishing and maintaining agreement between the user or
customer and the project team on the changing requirements of the system. If the
process demands strong traceability between requirements and design, the
architecture is likely to evolve in a way that optimizes requirements traceability
rather than design integrity. This effect is even more pronounced if tools are used
for process automation. Effective requirements management must include
maintaining a clear statement of the requirements, with attributes for every type of
requirement and traceability to other requirements and other project artifacts.
4. Design: A workflow design is a visual depiction of each step involved in a
workflow from start to end. It lays out every task sequentially and provides
complete clarity into how data moves from one task to another. Workflow design
tools allow us to depict the different tasks graphically, along with the performers,
timelines, data, and other aspects crucial to execution. Visual modeling is the
primary support required for the design workflow: a visual model is used for
capturing design models, representing them in a human-readable format, and
translating them into source code.
5. Implementation: The main focus of the implementation workflow is to write and
initially test the software; it relies primarily on the programming environment
(editor, compiler, debugger, etc.). However, it should also include substantial
integration with change management tools, visual modeling tools, and test
automation tools in order to support productive iteration. Implementation is the
main focus of the construction phase: it simply means transforming a design
model into an executable one.
6. Assessment and Deployment: Workflow assessment is the initial step in
identifying outdated software processes and replacing them with more effective
ones. It generally combines domain expertise, qualitative and quantitative
information gathering, proprietary tools, and more. It requires every tool discussed
above, along with some additional capabilities to support test automation and test
management. Defect tracking is another tool that supports assessment.
More automation tools for building blocks in the software process are:
1. Version Control: Git is a popular tool for version control, an essential part of the
software development process. Git is a distributed version control system that
keeps track of changes, manages branches, and offers a history of code alterations
to enable collaborative development. Its automated features simplify the
integration of code and reduce the possibility of conflicts among team members.
2. Static analysis and code quality: SonarQube is an effective tool for ongoing code
quality inspection. It does static code analysis, finding and emphasizing errors,
security flaws and code issues. Furthermore, by spotting and resolving problems with
coding standards and guidelines, linters like Pylint and ESLint are essential to
maintain code quality.
3. Monitoring and Logging: Prometheus is quite good at gathering and storing metrics
from different systems, as well as monitoring and sending out alerts. For consolidated
logging and log analysis, the ELK Stack (Elasticsearch, Logstash, Kibana) provides a
comprehensive solution. By automating the procedures of troubleshooting and
monitoring, these technologies guarantee the dependability and efficiency of software
programmes.
4. Containerization: By encapsulating apps and their dependencies, Docker
transforms software packaging. Kubernetes enhances Docker by simplifying the
installation, scaling, and management of containerized applications. When
combined, these solutions offer a scalable and consistent environment that makes
resource utilization effective and portable across a range of deployment scenarios.
PART – 2 PROJECT CONTROL & PROCESS INSTRUMENTATION


Project Control
Project control is the practice of actively monitoring, measuring, and adjusting project
activities to ensure that the project stays on track with its goals, timelines, and budget. It
encompasses various techniques and tools for tracking progress, managing risks, and
correcting deviations when necessary.
Process Instrumentation
Process instrumentation involves using tools and metrics to monitor and analyze the software
development process itself, ensuring that the development workflows and procedures are
optimized and consistently delivering quality results. This instrumentation focuses on
collecting, tracking, and analyzing data from development activities to ensure that processes
are efficient, predictable, and producing high-quality outcomes.
VENDOR METRICS
Vendor metrics in project control and process instrumentation refer to the set of measurable
indicators used to assess the performance, quality, reliability, and efficiency of vendors or
third-party service providers involved in a software project. These metrics provide insights
into a vendor's ability to deliver on commitments, meet quality standards, adhere to
schedules, and manage costs effectively. Tracking vendor metrics is crucial for ensuring that
all parties contribute positively to the project’s success and can help in identifying areas for
improvement, enforcing accountability, and fostering better collaboration.
Need for Software Metrics:
• Software metrics are needed for calculating the cost and schedule of a software
product with great accuracy.
• Software metrics are required for making an accurate estimation of the progress.
• The metrics are also required for understanding the quality of the software product.
INDICATORS:
An indicator is a metric or a group of metrics that provides an understanding of the software process, software product, or software project. A software engineer assembles measures and produces metrics from which the indicators can be derived.
Two types of indicators are:
(i) Management indicators.
(ii) Quality indicators.
Management Indicators
The management indicators, i.e., technical progress, financial status, and staffing progress, are used to determine whether a project is on budget and on schedule. The management indicators that indicate financial status are based on an earned value system.
Quality Indicators
The quality indicators are based on the measurement of the changes occurred in software.
SEVEN CORE METRICS OF SOFTWARE PROJECT
Software metrics instrument the activities and products of the software development/integration process. Metric values provide an important perspective for managing the process. The most useful metrics are extracted directly from the evolving artifacts.
There are seven core metrics that are used in managing a modern process.
Seven core metrics related to project control:
Management indicators:
• Work and progress
• Budgeted cost and expenditures
• Staffing and team dynamics
Quality indicators:
• Change traffic and stability
• Breakage and modularity
• Rework and adaptability
• Mean time between failures (MTBF) and maturity
MANAGEMENT INDICATORS
Work and progress
This metric measures the work performed over time. Work is the effort to be accomplished to complete a certain set of tasks. The various activities of an iterative development project can be measured by defining a planned estimate of the work in an objective measure, then tracking progress (work completed over time) against that plan.
The default perspectives of this metric are:
• Software architecture team: use cases demonstrated.
• Software development team: SLOC under baseline change management, SCOs closed.
• Software assessment team: SCOs opened, test hours executed, and evaluation criteria met.
• Software management team: milestones completed.
The below figure shows expected progress for a typical project with three major releases
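The planned-versus-actual tracking described above can be sketched in a few lines of code. In this hedged illustration, every perspective name and number is invented for demonstration, not taken from a real project.

```python
# A minimal sketch of work-and-progress tracking: each team tracks its
# own objective measure of work against a planned estimate.

def percent_complete(completed: float, planned_total: float) -> float:
    """Progress as a percentage of the planned work estimate."""
    if planned_total <= 0:
        raise ValueError("planned_total must be positive")
    return 100.0 * completed / planned_total

# Illustrative default perspectives with (done, planned) counts.
perspectives = {
    "architecture (use cases demonstrated)": (18, 24),
    "development (SCOs closed)": (120, 200),
    "assessment (test hours executed)": (450, 1000),
    "management (milestones completed)": (3, 5),
}

for name, (done, total) in perspectives.items():
    print(f"{name}: {percent_complete(done, total):.0f}% complete")
```

Tracking these percentages at each status point yields the progress-over-time curve the text describes.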

Budgeted cost and expenditures


This metric measures cost incurred over time. Budgeted cost is the planned expenditure
profile over the life cycle of the project. To maintain management control, measuring cost
expenditures over the project life cycle is always necessary. Tracking financial progress takes on an organization-specific format. Financial performance can be measured by the use of an earned value system, which provides highly detailed cost and schedule insight.
The basic parameters of an earned value system, expressed in units of dollars, are as follows:
• Expenditure plan: the planned spending profile for a project over its planned schedule.
• Actual progress: the technical accomplishment relative to the planned progress underlying the spending profile.
• Actual cost: the actual spending profile for a project over its actual schedule.
• Earned value: the value that represents the planned cost of the actual progress.
• Cost variance: the difference between the actual cost and the earned value.
• Schedule variance: the difference between the planned cost and the earned value.
Of all the parameters in an earned value system, actual progress is by far the most subjective assessment. Because most managers know exactly how much cost they have incurred and how much schedule they have used, the variability in making accurate assessments is centred in the actual progress assessment. The default perspectives of this metric are cost per month, full-time staff per month, and percentage of budget expended.
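The earned-value parameters translate directly into code. This minimal sketch follows the definitions given here (cost variance = actual cost minus earned value; schedule variance = planned cost minus earned value); all dollar figures are illustrative.

```python
# Earned value is the planned cost of the actual progress; the two
# variances follow directly from the definitions in the text.

def earned_value(planned_cost_of_total_work: float,
                 actual_progress_fraction: float) -> float:
    """Planned cost of the work actually accomplished."""
    return planned_cost_of_total_work * actual_progress_fraction

def cost_variance(actual_cost: float, ev: float) -> float:
    """Actual cost minus earned value (positive = over cost)."""
    return actual_cost - ev

def schedule_variance(planned_cost_to_date: float, ev: float) -> float:
    """Planned cost minus earned value (positive = behind plan)."""
    return planned_cost_to_date - ev

ev = earned_value(1_000_000, 0.40)    # 40% of the work accomplished
print(cost_variance(450_000, ev))     # 50000.0 -> spent more than earned
print(schedule_variance(500_000, ev)) # 100000.0 -> earned less than planned
```

Note that the actual-progress fraction (0.40 here) is the subjective input the text warns about; the rest is arithmetic.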
Staffing and team dynamics
This metric measures personnel changes over time, i.e., staffing additions and reductions. An iterative development should start with a small team until the risks in the requirements and architecture have been suitably resolved. Depending on the overlap of iterations and other project-specific circumstances, staffing can vary. An increase in staff can slow overall project progress, as new people consume the productive time of existing people while coming up to speed. Low attrition of good people is a sign of success. The default perspectives of this metric are people per month added and people per month leaving. These three management indicators together track technical progress, financial status, and staffing progress.

QUALITY INDICATORS
Change traffic and stability:
This metric measures the change traffic over time. The number of software change orders
opened and closed over the life cycle is called change traffic. Stability specifies the
relationship between opened versus closed software change orders. This metric can be
collected by change type, by release, across all releases, by team, by components, by subsystems, etc.
The below figure shows stability expectation over a healthy project’s life cycle
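As a hedged illustration of change traffic and stability, the sketch below computes the backlog of open SCOs from hypothetical monthly opened/closed counts; in a healthy project the gap between opened and closed orders converges.

```python
# Change traffic from an invented SCO log: orders opened and closed
# per month. All counts are illustrative.
from itertools import accumulate

opened_per_month = [5, 12, 20, 18, 10, 6]
closed_per_month = [0, 4, 14, 19, 14, 9]

cum_open = list(accumulate(opened_per_month))
cum_closed = list(accumulate(closed_per_month))

# Stability: the gap between cumulative opened and closed SCOs should
# shrink toward zero over the life cycle.
backlog = [o - c for o, c in zip(cum_open, cum_closed)]
print(backlog)  # open-SCO backlog, month by month
```

The same computation can be repeated per change type, release, team, or subsystem to match the collection dimensions listed above.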

Breakage and modularity


This metric measures the average breakage per change over time. Breakage is defined as the average extent of change, i.e., the amount of the software baseline that needs rework, measured in source lines of code, function points, components, subsystems, files, or other units. Modularity is the average breakage trend over time. This metric can be collected as reworked SLOC per change, by change type, by release, by components, and by subsystems.
Rework and adaptability:
This metric measures the average rework per change over time. Rework is defined as the
average cost of change which is the effort to analyse, resolve and retest all changes to
software baselines. Adaptability is defined as the rework trend over time. This metric
provides insight into rework measurement. All changes are not created equal: some changes can be made in a staff-hour, while others take staff-weeks. This metric can be collected as average hours per change, by change type, by release, by components, and by subsystems.
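Breakage and rework are both simple averages over the change log. The sketch below computes both from a hypothetical list of SCOs; every value is invented for illustration.

```python
# Each entry is one software change order:
# (reworked_sloc, effort_hours). Illustrative data only.
scos = [
    (120, 6.0), (40, 1.5), (300, 20.0), (15, 0.5), (75, 4.0),
]

# Breakage: average reworked SLOC per change.
breakage = sum(sloc for sloc, _ in scos) / len(scos)
# Rework: average effort (hours) per change.
rework = sum(hours for _, hours in scos) / len(scos)
print(f"breakage: {breakage:.1f} SLOC/change, rework: {rework:.1f} h/change")
```

Plotting these averages per release gives the modularity and adaptability trends the text defines.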
MTBF and Maturity:
This metric measures defect rate over time. MTBF (Mean Time Between Failures) is the average usage time between software faults. It is computed by dividing the test hours by the number of type 0 and type 1 SCOs. Maturity is defined as the MTBF trend over time.
Software errors can be categorized into two types: deterministic and nondeterministic. Deterministic errors are also known as Bohr-bugs, and nondeterministic errors are called Heisen-bugs. Bohr-bugs are a class of errors caused when the software is stimulated in a certain way, such as coding errors. Heisen-bugs are software faults that are coincidental with a
certain probabilistic occurrence of a given situation, such as design errors. This metric can be
collected by failure counts, test hours until failure, by release, by components and by
subsystems. These four quality indicators are based primarily on the measurement of
software change across evolving baselines of engineering data.
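The MTBF computation described above is direct: test hours divided by the count of type 0 and type 1 SCOs. The sketch below implements it with illustrative inputs.

```python
# MTBF per the definition in the text. All inputs are invented.

def mtbf(test_hours: float, type0_scos: int, type1_scos: int) -> float:
    """Average usage time between critical (type 0/1) faults."""
    critical = type0_scos + type1_scos
    if critical == 0:
        return float("inf")  # no critical faults observed yet
    return test_hours / critical

print(mtbf(1200.0, 3, 9))  # 100.0 test hours per critical fault
```

Maturity is then simply this value tracked across releases: a rising MTBF trend indicates a maturing product.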
LIFE -CYCLE EXPECTATIONS
There is no mathematical or formal derivation for using the seven core metrics properly. However, there were specific reasons for selecting them:
• The quality indicators are derived from the evolving product rather than from the artifacts.
• They provide insight into the waste generated by the process. Scrap and rework metrics are a standard measurement perspective of most manufacturing processes.
• They recognize the inherently dynamic nature of an iterative development process. Rather than focus on the value, they explicitly concentrate on the trends or changes with respect to time.
• The combination of insight from the current value and the current trend provides tangible indicators for management action.
Table 13-3. The default pattern of life-cycle evolution

Metric           Inception    Elaboration   Construction   Transition
Progress         5%           25%           90%            100%
  Architecture   30%          90%           100%           100%
  Applications   <5%          20%           85%            100%
Expenditures     Low          Moderate      High           High
Effort           5%           25%           90%            100%
Schedule         10%          40%           90%            100%
Staffing         Small team   Ramp up       Steady         Varying
Stability        Volatile     Moderate      Moderate       Stable
  Architecture   Volatile     Moderate      Stable         Stable
  Applications   Volatile     Volatile      Moderate       Stable
Modularity       50%-100%     25%-50%       <25%           5%-10%
  Architecture   >50%         >50%          <15%           <5%
  Applications   >80%         >80%          <25%           <10%
Adaptability     Varying      Varying       Benign         Benign
  Architecture   Varying      Moderate      Benign         Benign
  Applications   Varying      Varying       Moderate       Benign
Maturity         Prototype    Fragile       Usable         Robust
  Architecture   Prototype    Usable        Robust         Robust
  Applications   Prototype    Fragile       Usable         Robust
METRICS AUTOMATION:
Many opportunities are available to automate the project control activities of a software
project. A Software Project Control Panel (SPCP) is essential for managing against a plan.
This panel integrates data from multiple sources to show the current status of some aspect of
the project. The panel can support standard features and provide extensive capability for
detailed situation analysis. SPCP is one example of a metrics automation approach that collects, organizes, and reports values and trends extracted directly from the evolving engineering artifacts.
SPCP:
To implement a complete SPCP, the following are necessary.
• Metrics primitives - trends, comparisons and progressions
• A graphical user interface.
• Metrics collection agents
• Metrics data management server
• Metrics definitions - actual metrics presentations for requirements progress,
implementation progress, assessment progress, design progress and other progress
dimensions.
• Actors - monitor and administrator.
Monitor defines panel layouts, graphical objects and linkages to project data. Specific
monitors called roles include software project managers, software development team leads,
software architects and customers. Administrator installs the system, defines new
mechanisms, graphical objects and linkages. The whole display is called a panel. Within a
panel are graphical objects, which are types of layouts such as dials and bar charts for
information. Each graphical object displays a metric. A panel contains a number of graphical
objects positioned in a particular geometric layout. A metric shown in a graphical object is labelled with the metric type, summary level, and instance name (e.g., lines of code, subsystem, server1). Metrics can be displayed in two modes: value, referring to a given point in time, and graph, referring to multiple and consecutive points in time. Metrics can be displayed with
or without control values. A control value is an existing expectation either absolute or relative
that is used for comparison with a dynamically changing metric. Thresholds are examples of
control values.

The format and content of any project panel are configurable to the software project
manager's preference for tracking metrics of top-level interest. The basic operation of an
SPCP can be described by the following top - level use case.
i. Start the SPCP
ii. Select a panel preference
iii. Select a value or graph metric
iv. Select to superimpose controls
v. Drill down to trend
vi. Drill down to point in time.
vii. Drill down to lower levels of information
viii. Drill down to lower level of indicators.
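The panel concepts above (graphical objects, metric type, summary level, instance name, display modes, control values) can be sketched as simple data structures. All class and field names here are assumptions for illustration, not an actual SPCP API.

```python
# A data-structure sketch of SPCP-style panel configuration.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GraphicalObject:
    metric_type: str             # e.g. "progress", "stability"
    summary_level: str           # e.g. "project", "subsystem"
    instance_name: str           # e.g. "server1"
    mode: str = "value"          # "value" (point in time) or "graph" (trend)
    control_value: Optional[float] = None  # expectation to compare against

@dataclass
class Panel:
    name: str                    # panel layout chosen by the monitor role
    objects: list = field(default_factory=list)

panel = Panel("manager-default", [
    GraphicalObject("progress", "project", "all", mode="graph"),
    GraphicalObject("stability", "subsystem", "server1", control_value=0.9),
])
print(len(panel.objects))  # two graphical objects in this layout
```

A monitor role would define such panels and their linkages to project data; an administrator role would install new graphical object types.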
PART – 3 TAILORING THE PROCESS
PROCESS DISCRIMINANTS
Tailoring the process in software process and project management refers to adapting a
standard or base process to fit the unique needs, constraints, and characteristics of a specific
project. No two software projects are identical; each project has distinct requirements, risk
levels, team compositions, and customer expectations. Tailoring ensures that the chosen
process is flexible, efficient, and aligned with the specific goals of the project, leading to
better outcomes.
Process discriminants are the factors or characteristics that help guide the tailoring of a
process. They help to "discriminate" or identify which aspects of the standard process need
adjustment and how to adjust them to suit the project's needs.
Key Concepts in Tailoring the Process and Process Discriminants
1. Understanding Process Discriminants:
o Process discriminants are essentially the criteria used to decide how to tailor a
software process.
o They help in identifying which elements of a project management approach
can stay rigid and which ones need to be flexible.
o Common process discriminants include project size, complexity, duration,
risk, regulatory requirements, team experience, and customer expectations.
2. Common Process Discriminants:
o Project Size: Smaller projects may not need as much formal documentation or
rigorous processes compared to larger, complex projects. Smaller teams can
often benefit from lightweight processes, while larger projects may need
structured processes to handle coordination and complexity.
o Complexity: Projects with highly complex systems, integrations, or
technologies may require detailed planning, more rigorous testing, and
advanced risk management.
o Risk Level: High-risk projects, such as those involving security or mission-
critical systems, often require a more structured approach with enhanced
quality control, risk management, and compliance measures. Lower-risk
projects may allow for more agility and fewer checkpoints.
o Customer Requirements: Projects with strict customer requirements or
specifications may demand more documentation, reviews, and approvals. In
contrast, projects where customers prioritize rapid delivery and flexibility may
allow more agile, iterative approaches.
o Regulatory Compliance: Projects in regulated industries (e.g., healthcare,
finance) often require adherence to specific standards, resulting in tailored
processes with extra steps for validation, documentation, and regulatory
audits.
o Team Size and Experience: Projects with larger, less experienced teams
might need more structured processes, defined roles, and oversight. Smaller,
experienced teams can often work effectively with a leaner approach.
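As a hedged sketch, a few of the discriminants above could drive a simple tailoring recommendation. The discriminant values and decision rules below are illustrative assumptions, not a standard algorithm.

```python
# Mapping process discriminants to a tailoring recommendation.

def recommend_process(size: str, risk: str, regulated: bool) -> str:
    """size: 'small' or 'large'; risk: 'low' or 'high'."""
    if regulated:
        # Regulatory compliance dominates: add validation and audits.
        return "structured process with compliance checkpoints"
    if size == "large" or risk == "high":
        # Size or risk calls for coordination and formal reviews.
        return "structured process with formal reviews"
    # Small, low-risk, unregulated: keep the process lean.
    return "lightweight agile process"

print(recommend_process("small", "low", False))
```

In practice more discriminants (duration, team experience, customer expectations) would feed the decision, and the output would be a set of tailored process elements rather than a single label.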
3. Process Tailoring Steps:
o Assess Project Characteristics: Evaluate the unique aspects of the project,
such as risk level, complexity, duration, team composition, and customer
requirements.
o Select and Adapt Process Elements: Based on the assessment, select which
aspects of the standard process need adjusting. This might involve
customizing documentation requirements, changing the frequency of reviews,
or adjusting timelines for iterative releases.
o Incorporate Best Practices: Use organizational best practices to guide the
tailoring process. This ensures consistency with previous projects and allows
for continuous improvement.
o Document Tailoring Decisions: Record any adjustments made to the process
for transparency and future reference. This documentation is crucial for
understanding process impacts on project outcomes, helping teams improve
tailoring decisions over time.
o Review and Adjust as Needed: Process tailoring is an ongoing activity. As
the project progresses, team members should periodically review the tailored
process to ensure it remains effective, making further adjustments as the
project needs change.
4. Examples of Tailoring Based on Discriminants:
o For Small, Low-Risk Projects: A small, low-risk project with a short timeline
might follow a streamlined agile approach with minimal documentation,
lightweight sprint planning, and reduced formal testing steps.
o For Large, High-Complexity Projects: A large-scale project involving
multiple integrations might need a detailed work breakdown structure (WBS),
comprehensive testing strategies, multiple approval checkpoints, and clear
documentation to support coordination.
o For Regulated Projects: A project with strict regulatory compliance might
involve rigorous documentation, traceability matrices, compliance checks, and
detailed review processes to meet audit requirements.
5. Benefits of Tailoring:
o Efficiency: Tailoring reduces unnecessary overhead, ensuring that only
relevant processes are followed, which saves time and resources.
o Flexibility: Tailored processes are more adaptable to the needs of the project,
allowing for agility where needed and rigor where required.
o Better Risk Management: Tailoring allows teams to adjust their approach to
address project-specific risks, such as adding steps to mitigate high-risk areas.
o Higher Quality and Customer Satisfaction: By aligning processes with
customer requirements and expectations, tailoring can lead to higher quality
deliverables that better meet customer needs.
In essence, tailoring the process with process discriminants enables software teams to
create a customized framework that balances flexibility and control. It optimizes resource
usage, mitigates risk, and aligns development practices with the unique goals and constraints
of each project.

UNIT – 5 PROJECT ORGANIZATIONS & RESPONSIBILITIES, FUTURE SOFTWARE PROJECT MANAGEMENT & CASE STUDY
PART – 1 PROJECT ORGANIZATIONS &RESPONSIBILITIES
Project organization is a structure that facilitates and motivates the coordination and implementation of project activities.
Its main purpose is to create an environment that encourages interaction among team members with a minimum of disruptions, overlaps, and conflicts. The most important decision of a project management team is the form of organization structure required for the project. The organization should evolve with the Work Breakdown Structure (WBS) and life-cycle concerns.
LINE-OF-BUSINESS ORGANIZATIONS
The diagram below shows the roles and responsibilities of a default line-of-business organization. Line-of-business organizations need to support projects with the infrastructure necessary to make use of a common process. Line of business is a general term for the products and services offered by a business or manufacturer. Software lines of business are generally motivated by Return on Investment (ROI), new business discriminators, market diversification, and profitability.
Responsibilities of the organization:
• Definition of the process and maintenance of the project process.
• Process automation. This is an organizational role equally important to that of process definition.
• The roles of process definition and process automation may be filled by a single individual or by separate teams.
Various authorities of the organization:
1. Software Engineering Process Authority (SEPA) –
   The SEPA is the team responsible for exchanging process information and guidance both to and from project practitioners, who perform the work and are usually responsible for one or more process activities. The SEPA is an essential role in any organization.
2. Project Review Authority (PRA) –
   A project review is a scheduled status meeting held on a regular basis, covering project progress, issues, and risks. The PRA is responsible for project review, checking both conformance to contractual obligations and conformance to organizational policy obligations.
3. Software Engineering Environment Authority (SEEA) –
   The SEEA is an important role, needed to achieve an ROI for a common process. It is responsible for supporting and managing a standard environment, so that tools, techniques, and training can be effectively amortized across all projects.
4. Infrastructure –
   Organizational infrastructure consists of the systems, protocols, and processes that give structure to an organization, support human resources, and support the organization in carrying out its vision, mission, goals, and values. It can range from trivial to heavily entrenched bureaucracy. Components of organizational infrastructure include project administration, engineering skill centers, and professional development.
The diagram below shows the roles and responsibilities of a default project organization. Project organizations need to allocate artifacts and responsibilities across the project team to ensure a balance of global (architecture) and local (component) concerns.
Teams of the organization:
• Project Management Team –
An active participant, responsible for producing, developing, and managing the project.
• Architecture Team –
Responsible for the real artifacts and for the integration of components. They also identify risks of product misalignment with stakeholder requirements and ensure that the solution fits the defined purpose.
• Development Team –
Responsible for all the work necessary to produce working and validated assets.
• Assessment Team –
Responsible for assessing the quality of deliverables.
UNDERSTANDING BEHAVIOUR – ORGANIZATIONAL BEHAVIOUR
Organizational behavior, as the name states, is the process of understanding and managing human behavior within an organization. An organization runs not only on profits, work, and schedules but also on human values. Organizations have found that they run well when employees are treated and understood well, since the entire organization depends upon its human resources.

Organizational behavior examines and gathers insights on employee behavior, such as how to motivate employees properly by understanding them a little better. Organizational behavior should start with the role of managers and how well they instill morale and support down the hierarchy. Managerialism is not just about gaining profits and executing control but about creating a safe space for the interaction of different opinions, so that people can work as a group and achieve organizational goals. As they say, there is no I in Team: the organization that works together grows together.
It all comes down to the question of what role the manager should play, keeping in mind what is expected of him or her with respect to organizational behavior.
Role of Managers :
1. Interpersonal Role :
• Figure Head –
In this role, the manager performs duties of ceremonial nature, such as, attending an
employee’s wedding, taking the customer to lunch, greeting the tourist dignitaries and
so on.
• Leader Role –
In this role, the manager is a leader, guiding the employees in the right path, with the
proper motivation and encouragement.
• Liaison Role –
In this role, the manager cultivates contacts outside the vertical chain of command to
collect useful information for the organization.
2. Informational Role :
• Monitor Role –
In this role, the manager acts as a monitor, perpetually scanning the environment for information, keeping an eye on liaison contacts and subordinates, and receiving unsolicited information.
• Disseminator Role –
In this role, manager acts as a disseminator by passing down privileged information to
the subordinates who would otherwise have no access to it.
• Spokesperson Role –
In this role, the manager acts as a spokesperson, representing the organization before various outside groups that have a stake in the organization. These stakeholders can be government officials, labour unions, financial institutions, suppliers, customers, etc. They have wide influence over the organization, so the manager should win their support by effectively managing the social impact of the organization.
3. Decisional Role :
• Entrepreneurial role –
In this role, the manager acts as an entrepreneur, always thirsty for new knowledge
and innovation to improve the organization. Nowadays, it doesn’t matter if the
organization is bigger or better, but it is necessary that it grows consistently.
Innovation is creating new ideas which may either result in the development of new
products or services or improving upon the old ones. This makes innovation an
important function for a manager.
• Disturbance handler role –
In this role, the manager acts as a disturbance handler, working reactively like a firefighter. The manager should come up with solutions to any problem that arises and handle it in an orderly way.
• Resource allocator role –
In this role, the manager acts as a resource allocator, dividing work and delegating authority among subordinates. The manager should plan which subordinate gets which task based on their abilities and suitability.
• Negotiator –
In this role, the manager acts as a negotiator; managers at all levels spend considerable time in negotiations. The president of a company may negotiate with union leaders about a strike issue, or a foreman may negotiate with workers about a grievance problem, etc.

PART – 2 FUTURE SOFTWARE PROJECT MANAGEMENT


MODERN PROJECT PROFILES
Modern project profiles in software process and project management encompass various
methodologies, practices, and tools designed to enhance the efficiency, flexibility, and quality
of software development. Here are some key profiles:
1. Agile Project Management
• Profile: Agile focuses on iterative development, allowing for incremental delivery
and continuous improvement. It values adaptability, collaboration, and customer
feedback.
• Methodologies: Scrum, Kanban, Lean.
• Tools: Jira, Trello, Asana.
• Applications: Ideal for projects with evolving requirements and a need for frequent
user feedback.
2. DevOps Project Management
• Profile: DevOps combines development and operations to automate and improve
software delivery and infrastructure changes. It emphasizes collaboration and
automation throughout the lifecycle.
• Practices: Continuous Integration (CI), Continuous Delivery (CD), Infrastructure as
Code (IaC).
• Tools: Jenkins, Docker, Kubernetes, Ansible.
• Applications: Used for projects that prioritize speed, reliability, and frequent releases.
3. Waterfall Project Management
• Profile: Waterfall is a sequential, linear approach where each phase must be
completed before moving on to the next. It is well-suited for well-defined projects
with stable requirements.
• Practices: Requirement Analysis, Design, Implementation, Testing, Deployment.
• Tools: Microsoft Project, Gantt Charts.
• Applications: Works best in highly regulated industries or projects with fixed scopes.
4. Hybrid Project Management
• Profile: A combination of Agile and Waterfall methodologies, this approach blends
structured planning with iterative development.
• Methodologies: Agile-Waterfall Hybrid, Agile at Scale.
• Tools: Smartsheet, Microsoft Azure DevOps.
• Applications: Useful for projects that require both strict requirements and flexibility.
5. Lean Project Management
• Profile: Lean focuses on optimizing efficiency by reducing waste and enhancing
value delivery. It emphasizes continuous improvement and efficiency.
• Principles: Value Stream Mapping, Just-In-Time, Kaizen.
• Tools: Kanban boards, Value Stream Mapping.
• Applications: Often applied in projects where minimizing costs and waste is critical.
6. Scaled Agile Framework (SAFe)
• Profile: SAFe scales Agile across large organizations by providing structure for team
alignment, planning, and coordination.
• Elements: Program Increment Planning, Agile Release Train.
• Tools: SAFe Accelerator, Jira Align.
• Applications: Used in enterprises looking to scale Agile practices across multiple
teams.
7. Extreme Programming (XP)
• Profile: XP focuses on technical excellence and frequent releases to improve software
quality and responsiveness to customer needs.
• Practices: Pair Programming, Test-Driven Development, Refactoring.
• Tools: Eclipse, Visual Studio, TestRail.
• Applications: Suitable for projects requiring high technical standards and frequent
releases.
8. Product-Centric Project Management
• Profile: This approach organizes projects around the product lifecycle, focusing on
customer needs and product value.
• Practices: Product Roadmapping, Backlog Management, User Story Mapping.
• Tools: ProductPlan, Productboard.
• Applications: Typically applied in companies where a product’s long-term
development is the priority.
9. Continuous Delivery/Continuous Deployment (CD) Management
• Profile: Continuous delivery/deployment focuses on automating the release of software so new features,
improvements, and bug fixes are delivered frequently.
• Practices: Automated Testing, Deployment Pipelines, Infrastructure Monitoring.
• Tools: GitLab CI, AWS CodePipeline, CircleCI.
• Applications: Useful for projects requiring rapid deployment and minimal downtime.
10. Risk Management and Compliance-Centric Project Management
• Profile: This profile focuses on identifying, assessing, and mitigating risks, especially
for projects requiring high regulatory compliance.
• Practices: Risk Assessment, Compliance Checks, Quality Assurance.
• Tools: Riskwatch, LogicGate.
• Applications: Common in regulated industries, including healthcare, finance, and
government projects.
Each of these profiles serves a unique purpose and can be adapted to fit the needs of modern
software projects based on factors like team size, project scope, regulatory requirements, and
market demands.
NEXT GENERATION SOFTWARE ECONOMICS
Next-generation software economics focuses on optimizing the financial aspects of software
development by leveraging modern methodologies, automation, and intelligent technologies.
This approach seeks to lower costs, reduce time-to-market, and increase the value of software
by using next-generation tools and techniques. Here’s an overview of its key elements:
1. Value-Driven Development
• Focus on Outcomes: Traditional software projects often measure success by
timelines or budget compliance. Next-generation software economics shifts focus to
the end value delivered to users, aligning development closely with business
outcomes.
• User-Centric Prioritization: Features and functionalities are prioritized based on
user needs and value, not just technical specifications. This results in greater ROI by
focusing on high-impact features.
• Examples: Prioritizing feature releases based on user feedback or revenue impact,
continuous monitoring of product performance and usage data.
2. Agile and Lean Methodologies
• Reduced Waste and Improved Efficiency: Agile and Lean focus on delivering
small, incremental changes and eliminating waste, which helps in lowering project
costs and reducing the risk of large-scale project failure.
• Iterative Value Realization: By delivering in iterations, Agile and Lean improve
economics by reducing development cycles, allowing for continuous value
assessment, and adjusting based on user needs.
• Examples: Adopting Scrum, Kanban, and Lean practices to streamline processes and
reduce costs.
3. Automation and DevOps
• Cost Reduction Through Automation: Automation in testing, deployment, and
monitoring cuts down on manual labor and reduces the likelihood of errors,
improving productivity and reducing costs.
• Rapid Time-to-Market: Automated CI/CD pipelines enable faster releases, which
means that businesses can start realizing returns on their software investments sooner.
• Examples: Using CI/CD tools like Jenkins, GitLab CI, and CircleCI for automated
build and release pipelines, and automating quality assurance with tools like Selenium
or TestRail.
4. Cloud Computing and Infrastructure-as-Code (IaC)
• Scalable Resource Management: Cloud platforms allow software projects to scale
resources up or down based on demand, avoiding the cost of over-provisioning and
enabling pay-as-you-go billing.
• Automated Infrastructure Provisioning: Infrastructure-as-Code automates the setup
and configuration of infrastructure, reducing manual effort, improving consistency,
and reducing downtime.
• Examples: Utilizing cloud providers like AWS, Azure, or Google Cloud, and IaC
tools like Terraform and AWS CloudFormation to manage infrastructure.
5. Artificial Intelligence (AI) and Machine Learning (ML) Integration
• Enhanced Efficiency and Cost Optimization: AI and ML help automate processes
like bug detection, code review, and project management tasks, reducing time and
resources needed for these activities.
• Predictive Analytics: AI-driven insights provide accurate predictions about project
timelines, cost overruns, and customer behavior, allowing better financial planning
and decision-making.
• Examples: Using AI for predictive analytics, project risk management, and automated
testing with tools like DeepCode or CodeGuru.
6. Open Source and Reusable Code
• Lowered Development Costs: Leveraging open-source libraries and frameworks
reduces development costs and speeds up the delivery by providing pre-built
solutions.
• Increased Collaboration and Innovation: Open-source projects foster innovation
through collaboration, often accelerating advancements and reducing the need to
develop software from scratch.
• Examples: Adopting open-source libraries such as React or TensorFlow, or using
reusable microservices to accelerate development.
7. Microservices and Modular Architectures
• Scalability and Flexibility: Modular and microservices architectures enable scaling
of individual components, reducing costs associated with scaling the entire system.
• Faster and Cost-Effective Upgrades: Modular design allows for faster and cheaper
updates or feature additions without disrupting the whole system.
• Examples: Using platforms like Kubernetes for container orchestration, enabling
microservices scalability, and using modular code patterns to speed up development.
8. Economic Models Based on Usage and Subscription
• Shift from Perpetual Licensing to Subscription Models: SaaS and cloud-based
models allow companies to use software on a subscription basis, which helps to
maintain a steady cash flow and reduces upfront costs.
• Pay-Per-Use Pricing: This model enables organizations to pay only for the resources
and services they use, which can be particularly cost-effective in highly variable
demand environments.
• Examples: Using SaaS solutions for CRM (e.g., Salesforce) or cloud computing (e.g.,
AWS), which are billed based on usage or subscription.
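The subscription versus pay-per-use trade-off above can be sketched with simple arithmetic. All prices and usage figures below are hypothetical.

```python
# Illustrative cost comparison: flat subscription vs pay-per-use billing.
# The fee and per-unit rate are made-up numbers, not real vendor pricing.

FLAT_MONTHLY = 500.0     # fixed subscription fee per month
RATE_PER_UNIT = 0.25     # pay-per-use price per compute unit

def cheaper_model(monthly_usage_units: float) -> str:
    """Return which billing model costs less for the given monthly usage."""
    pay_per_use = monthly_usage_units * RATE_PER_UNIT
    return "pay-per-use" if pay_per_use < FLAT_MONTHLY else "subscription"

# Break-even point: FLAT_MONTHLY / RATE_PER_UNIT = 2000 units per month
for units in (500, 2000, 5000):
    print(units, cheaper_model(units))
```

Below the break-even usage, pay-per-use wins; above it, the flat subscription does, which is why pay-per-use suits the highly variable demand environments mentioned above.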
9. Data-Driven Decision-Making and Continuous Feedback Loops
• Optimized Resource Allocation: Data from continuous feedback loops, telemetry,
and user behavior analysis helps optimize resource allocation and focus investment on
the most valuable areas.
• Rapid Response to Market Needs: Continuous feedback reduces wasted investment
by allowing companies to pivot quickly based on actual user data, rather than assumed
needs.
• Examples: Using analytics tools like Google Analytics, Mixpanel, or custom
telemetry systems to track software performance and user engagement.
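As a minimal sketch of using usage data to focus investment, the snippet below ranks features by observed event counts. The event names and data are made up for illustration.

```python
# Hypothetical feature-usage ranking from a telemetry event stream.
from collections import Counter

# Each entry represents one logged use of a feature (illustrative data).
events = [
    "export_pdf", "search", "search", "dashboard",
    "search", "export_pdf", "dashboard", "search",
]

usage = Counter(events)
# Most-used features first: candidates for further investment.
priorities = [name for name, _ in usage.most_common()]
print(priorities)
```

Even this toy ranking captures the idea in the bullet above: resource allocation driven by what users actually do rather than by assumed needs.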
10. Enhanced Security and Compliance Economies
• Risk Reduction: Automation and compliance tools reduce security risks and
regulatory fines, which can otherwise be a large cost to enterprises.
• Cost Efficiency in Compliance: Integrating security and compliance early in the
software development lifecycle reduces costly compliance retrofitting and fines.
• Examples: Security automation tools like Snyk or compliance automation solutions
that integrate with CI/CD pipelines to ensure secure and compliant software releases.
MODERN PROCESS TRANSITIONS
Modern process transitions in software process and project management involve the shift
from traditional, rigid methodologies to more adaptive, flexible, and technology-enabled
approaches. These transitions focus on improving speed, agility, collaboration, and quality,
while aligning closely with rapidly changing market demands and technology advancements.
Here are the key elements:
1. Transition from Waterfall to Agile and Lean Approaches
• Traditional Approach: Waterfall was a widely used linear model, where each phase
is completed before moving to the next. This approach worked well for stable projects
but struggled with changing requirements.
• Modern Transition: Agile and Lean emphasize iterative development, collaboration,
and continuous feedback, enabling teams to adapt to changes. Agile methods like
Scrum and Kanban facilitate faster delivery with more frequent checkpoints.
• Benefits: Increased flexibility, higher-quality outcomes, and better alignment with
customer needs.
2. From Project-Centric to Product-Centric Mindset
• Traditional Approach: The focus was on completing individual projects with
specific budgets and deadlines, often with limited long-term consideration for product
evolution.
• Modern Transition: Product-centric management places value on the lifecycle of a
product, ensuring continuous improvement, customer feedback integration, and long-
term vision.
• Benefits: Creates sustainable value by focusing on the long-term growth and
adaptability of the product rather than short-term project constraints.
3. Moving from Manual Processes to Automation and DevOps
• Traditional Approach: Manual testing, deployment, and operational processes were
time-consuming and error-prone.
• Modern Transition: DevOps integrates development and operations to create a
seamless flow from coding to deployment. Automation in Continuous Integration
(CI), Continuous Delivery (CD), and Continuous Deployment significantly reduces
manual effort, speeds up delivery, and minimizes human error.
• Benefits: Faster releases, improved reliability, and efficient resource allocation.
4. Adopting Cloud-Based and Infrastructure-as-Code (IaC) Solutions
• Traditional Approach: In-house servers and manual infrastructure management
limited scalability and were costly.
• Modern Transition: Cloud computing and Infrastructure-as-Code allow for dynamic
scaling and automated infrastructure setup, improving cost efficiency, scalability, and
deployment speed.
• Benefits: Lower infrastructure costs, improved scalability, and reduced manual
infrastructure setup time.
5. Shift from Siloed Teams to Cross-Functional and Collaborative Teams
• Traditional Approach: Teams were often siloed (development, QA, operations),
which led to slower communication and misaligned objectives.
• Modern Transition: Cross-functional teams combine developers, QA engineers,
designers, and operations within the same team to increase collaboration and ensure
alignment of goals.
• Benefits: Faster decision-making, higher-quality products, and improved
collaboration across disciplines.
6. Transition from Fixed Requirements to Adaptive Requirement Management
• Traditional Approach: Fixed requirements were defined at the start of the project,
which limited flexibility in adapting to new insights or changes.
• Modern Transition: Agile approaches emphasize adaptive requirement management
with backlogs that evolve based on customer feedback and market changes.
Requirements are continuously reprioritized in sprints.
• Benefits: Greater responsiveness to changing needs and improved alignment with
market and user expectations.
7. From Monolithic Architectures to Microservices and Modular Architectures
• Traditional Approach: Monolithic architectures were common, where all functions
were tightly integrated, making changes complex and time-consuming.
• Modern Transition: Microservices and modular architectures break down
applications into loosely coupled services that can be developed, deployed, and scaled
independently.
• Benefits: Easier maintenance, faster development cycles, and better scalability.
8. Incorporating AI and Machine Learning in Project Management
• Traditional Approach: Decision-making relied on manual data analysis, which was
slower and less accurate.
• Modern Transition: AI and machine learning can predict project risks, optimize
resource allocation, and automate repetitive tasks like code review and bug detection.
• Benefits: Better decision-making, proactive risk management, and reduced workload
for project managers and team members.
9. Enhanced Focus on User Feedback and Continuous Improvement
• Traditional Approach: Feedback was often collected only at the end of a project or
during specific testing phases.
• Modern Transition: Modern processes implement continuous feedback loops that
capture user input at every stage. Methods like A/B testing and user analytics provide
real-time feedback for ongoing improvements.
• Benefits: Improved user satisfaction, faster identification of issues, and greater
adaptability to user needs.
10. Adopting Data-Driven Decision Making
• Traditional Approach: Decisions were frequently based on assumptions, experience,
or incomplete data, which often led to misalignment with market needs.
• Modern Transition: Modern project management leverages real-time data and
analytics to inform decisions, allowing for evidence-based prioritization and resource
allocation.
• Benefits: More accurate project planning, improved resource management, and better
alignment with organizational objectives.
PART – 3 CASE STUDY
THE COMMAND CENTER PROCESSING AND DISPLAY SYSTEM-
REPLACEMENT (CCPDS-R)
COCOMO MODEL
The best-known and most transparent cost model, COCOMO (COnstructive COst MOdel), was developed by Boehm and was derived from the analysis of 63 software projects. Boehm proposed three levels of the model: basic, intermediate and detailed. The discussion here focuses mainly upon the intermediate model.
The COCOMO model is based on the relationships between:

Equation 1: Development effort is related to system size

MM = a(KDSI)^b

Equation 2: Development time is related to effort

TDEV = c(MM)^d

where MM is the effort in man-months,

KDSI is the number of thousands of delivered source instructions, and

TDEV is the development time in months.
The coefficients a, b, c and d are dependent upon the 'mode of development', which Boehm
classified into 3 distinct modes:
1. Organic - Projects involve small teams working in familiar and stable environments.
Eg: - Payroll systems.
2. Semi - Detached - Mixture of experience within project teams. This lies in between organic
and embedded modes.
Eg: Interactive banking system.
3. Embedded - Projects developed under tight constraints; they are innovative, complex, and
have volatile requirements.
Eg: - nuclear reactor control systems.

Development mode    a      b      c      d
Organic             3.2    1.05   2.5    0.38
Semi-detached       3.0    1.12   2.5    0.35
Embedded            2.8    1.20   2.5    0.32
In the intermediate model it is possible to adjust the nominal effort obtained from the model by
the influence of 15 cost drivers. These drivers adjust the nominal figures where a particular
project differs from the average project. For example, if the required reliability of the software
is very high, a rating of 1.4 can be assigned to that driver. Once ratings for all 15 drivers have
been chosen, they are multiplied together to arrive at an Effort Adjustment Factor (EAF).
The actual steps in producing an estimate using the intermediate COCOMO model are:
1. Identify the 'mode' of development for the new project.
2. Estimate the size of the project in KDSI to derive a nominal effort prediction.
3. Adjust the 15 cost drivers to reflect your project, producing an Effort Adjustment
Factor (EAF).
4. Calculate the predicted project effort using equation 1 and the effort adjustment
factor.
5. Calculate the project duration using equation 2.
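The five steps above can be sketched in code. The mode coefficients come from the table in the text; the cost-driver ratings passed in, and the 32-KDSI example project, are illustrative.

```python
# A small sketch of the intermediate COCOMO calculation described above.
# Coefficients (a, b, c, d) per development mode, from the table in the text.
MODES = {
    "organic":       (3.2, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (2.8, 1.20, 2.5, 0.32),
}

def cocomo_estimate(kdsi: float, mode: str, driver_ratings=()):
    """Return (effort in man-months, development time in months)."""
    a, b, c, d = MODES[mode]
    eaf = 1.0
    for rating in driver_ratings:   # multiply the chosen cost-driver ratings
        eaf *= rating               # (15 of them in the full model)
    mm = a * kdsi ** b * eaf        # Equation 1, scaled by the EAF
    tdev = c * mm ** d              # Equation 2
    return mm, tdev

# Example: a 32-KDSI organic project with one very-high-reliability
# driver rated 1.4, as in the text's example.
mm, tdev = cocomo_estimate(32, "organic", [1.4])
print(f"effort = {mm:.1f} MM, duration = {tdev:.1f} months")
```

Note that the EAF multiplies only the effort equation; the duration in Equation 2 is then derived from the adjusted effort.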

Drawbacks:
1. It is hard to accurately estimate KDSI early on in the project, when most effort
estimates are required.
2. Extremely vulnerable to mis-classification of the development mode.
3. Success depends largely on tuning the model to the needs of the organization, using
historical data which is not always available.
Advantages:
1. COCOMO is transparent; it can be seen how it works.
2. The cost drivers are particularly helpful in understanding the impact of the different
factors that affect project costs.
