SOFTWARE PROCESS & PROJECT MANAGEMENT
3. Do it twice. If a computer program is being developed for the first time, arrange matters
so that the version finally delivered to the customer for operational deployment is
actually the second version insofar as critical design/operations are concerned. Note that
this is simply the entire process done in miniature, to a time scale that is relatively small
with respect to the overall effort. In the first version, the team must have a special broad
competence where they can quickly sense trouble spots in the design, model them,
model alternatives, forget the straightforward aspects of the design that aren't worth
studying at this early point, and, finally, arrive at an error-free program.
4. Plan, control, and monitor testing. Without question, the biggest user of project
resources (manpower, computer time, and management judgment) is the test phase.
This is the phase of greatest risk in terms of cost and schedule. It occurs at the latest
point in the schedule, when backup alternatives are least available, if at all. The previous
three recommendations were all aimed at uncovering and solving problems before
entering the test phase. However, even after doing these things, there is still a test phase
and there are still important things to be done, including: (1) employ a team of test
specialists who were not responsible for the original design; (2) employ visual
inspections to spot the obvious errors like dropped minus signs, missing factors of two,
jumps to wrong addresses (do not use the computer to detect this kind of thing, it is too
expensive); (3) test every logic path; (4) employ the final checkout on the target
computer.
5. Involve the customer. It is important to involve the customer in a formal way so that
the customer is committed at earlier points before final delivery. There are three points
following requirements definition where the insight, judgment, and commitment of the
customer can bolster the development effort. These include a "preliminary software
review" following the preliminary program design step, a sequence of "critical software
design reviews" during program design, and a "final software acceptance review".
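The "test every logic path" advice in recommendation 4 corresponds to what is now called branch coverage. A minimal sketch in Python (the function and its inputs are invented for illustration):

```python
def classify(value, threshold):
    """Toy function with exactly two logic paths: above threshold, or not."""
    if value > threshold:
        return "high"
    return "low"

# One check per logic path, so both branches of the `if` are exercised --
# "test every logic path" in miniature.
assert classify(10, 5) == "high"   # exercises the `value > threshold` branch
assert classify(5, 5) == "low"     # exercises the fall-through branch
print("both logic paths covered")
```

For real functions the number of paths grows quickly with each condition, which is why this recommendation is expensive to follow literally and is usually approximated with coverage tools.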
Importance of Waterfall Model
The Waterfall Model is important for the following reasons:
1. Clarity and Simplicity: The linear form of the Waterfall Model offers a simple and
unambiguous foundation for project development.
2. Clearly Defined Phases: The Waterfall Model phases each have unique inputs and
outputs, guaranteeing a planned development with obvious checkpoints.
3. Documentation: A focus on thorough documentation helps with software
comprehension, maintenance, and future growth.
4. Stability in Requirements: Suitable for projects where the requirements are clear and
stable, reducing modifications as the project progresses.
5. Resource Optimization: It encourages effective task-focused work without
continuously changing contexts by allocating resources according to project phases.
6. Relevance for Small Projects: Economical for modest projects with simple
specifications and minimal complexity.
Phases of Waterfall Model
The Waterfall Model has six phases which are:
1. Requirements: The first phase involves gathering requirements from stakeholders
and analyzing them to understand the scope and objectives of the project.
2. Design: Once the requirements are understood, the design phase begins. This involves
creating a detailed design document that outlines the software architecture, user
interface, and system components.
3. Development: In the development phase, the software is coded based on the design
specifications. This phase also includes unit testing to
ensure that each component of the software is working as expected.
4. Testing: In the testing phase, the software is tested as a whole to ensure that it meets
the requirements and is free from defects.
5. Deployment: Once the software has been tested and approved, it is deployed to the
production environment.
6. Maintenance: The final phase of the Waterfall Model is maintenance, which involves
fixing any issues that arise after the software has been deployed and ensuring that it
continues to meet the requirements over time.
Advantages of Waterfall Model
The classical waterfall model is an idealistic model for software development. It is very
simple, so it can be considered the basis for other software development life cycle models.
Below are some of the major advantages of this SDLC model.
• Easy to Understand: The Classical Waterfall Model is very simple and easy to
understand.
• Individual Processing: Phases in the Classical Waterfall model are processed one at a
time.
• Properly Defined: In the classical waterfall model, each stage in the model is clearly
defined.
• Clear Milestones: The classical Waterfall model has very clear and well-understood
milestones.
• Properly Documented: Processes, actions, and results are very well documented.
• Reinforces Good Habits: The Classical Waterfall Model reinforces good habits like
define-before-design and design-before-code.
• Working: Classical Waterfall Model works well for smaller projects and projects
where requirements are well understood.
Disadvantages of Waterfall Model
The Classical Waterfall Model suffers from various shortcomings, so it is rarely used as-is
in real projects; instead, other software development life cycle models based on the
classical waterfall model are used. Below are some major drawbacks of this model.
• No Feedback Path: In the classical waterfall model evolution of software from one
phase to another phase is like a waterfall. It assumes that no error is ever committed
by developers during any phase. Therefore, it does not incorporate any mechanism for
error correction.
• Difficult to accommodate Change Requests: This model assumes that all the
customer requirements can be completely and correctly defined at the beginning of
the project, but the customer’s requirements keep on changing with time. It is difficult
to accommodate any change requests after the requirements specification phase is
complete.
• No Overlapping of Phases: This model recommends that a new phase can start only
after the completion of the previous phase. But in real projects, this can’t be
maintained. To increase efficiency and reduce cost, phases may overlap.
• Limited Flexibility: The Waterfall Model is a rigid and linear approach to software
development, which means that it is not well-suited for projects with changing or
uncertain requirements. Once a phase has been completed, it is difficult to make
changes or go back to a previous phase.
• Limited Stakeholder Involvement: The Waterfall Model is a structured and
sequential approach, which means that stakeholders are typically involved in the early
phases of the project (requirements gathering and analysis) but may not be involved in
the later phases (implementation, testing, and deployment).
• Late Defect Detection: In the Waterfall Model, testing is typically done toward the
end of the development process. This means that defects may not be discovered until
late in the development process, which can be expensive and time-consuming to fix.
• Lengthy Development Cycle: The Waterfall Model can result in a lengthy
development cycle, as each phase must be completed before moving on to the next.
This can result in delays and increased costs if requirements change or new issues
arise.
When to Use Waterfall Model?
Here are some cases where the use of the Waterfall Model is best suited:
• Well-understood Requirements: Before beginning development, there are precise,
reliable, and thoroughly documented requirements available.
• Few Changes Expected: During development, few adjustments or expansions to the
project’s scope are anticipated.
• Small to Medium-Sized Projects: Ideal for more manageable projects with a clear
development path and little complexity.
• Predictable: Projects that are predictable, low-risk, and able to be addressed early in
the development life cycle are those that have known, controllable risks.
• Regulatory Compliance is Critical: Circumstances in which paperwork is of utmost
importance and stringent regulatory compliance is required.
• Client Prefers a Linear and Sequential Approach: This situation describes the
client’s preference for a linear and sequential approach to project development.
• Limited Resources: Projects with limited resources can benefit from a structured
strategy, which enables targeted resource allocation.
CONVENTIONAL SOFTWARE MANAGEMENT PERFORMANCE
Conventional software management refers to traditional methods used in software
development and project management, which generally follow structured, sequential
processes. These conventional methods, like the Waterfall model, have a series of predefined
stages such as requirements gathering, design, development, testing, and deployment.
Software management performance in this context focuses on measuring how well these
stages are executed to ensure the project meets its objectives within set constraints, such as
budget, time, and scope. Here are key aspects of conventional software management
performance:
1. Predictability and Planning: Conventional methods emphasize upfront planning, aiming
for a clear, predictable project timeline and budget. This planning helps set specific
milestones for progress tracking and allocate resources accordingly.
• Performance is measured based on how closely actual progress aligns with this initial
plan, which is typically rigid. Deviations from the plan can indicate inefficiencies or
issues in performance.
2. Quality Assurance and Testing
• Quality control is typically concentrated at the testing phase, which occurs toward the
end of the project. Conventional performance metrics here focus on defect detection
rates, code quality, and meeting design specifications.
• Since testing often happens after significant development, quality issues can be costly
to fix, and performance here hinges on how well the initial requirements were
captured and implemented.
3. Resource Allocation and Utilization
• In conventional management, resource allocation is largely fixed early in the project.
Performance is assessed based on how well team members and tools are utilized
within these parameters, aiming for high productivity without overburdening.
• Overuse of resources or unexpected costs often points to gaps in initial planning or
unforeseen project complexities.
4. Schedule Adherence
• Meeting project deadlines is a core metric in conventional software management. The
structured nature of conventional methods aims to minimize the risk of delays by
setting a strict schedule.
• Performance is measured by the team’s ability to deliver milestones on time, and
missed deadlines typically impact the entire project schedule.
5. Cost Management
• Budget adherence is another performance metric, as cost overruns are common
challenges. Budget control in conventional methods is based on detailed initial
estimates that factor in development, testing, and deployment costs.
• Performance issues related to costs often arise from scope changes, unexpected issues,
or underestimation of time and resources, which are all risks in sequential planning.
6. Customer Satisfaction
• Since conventional models gather requirements early and deliver at the end, customer
satisfaction depends heavily on how well initial requirements were understood and
fulfilled. Poor alignment between customer expectations and delivered software can
hurt performance evaluations.
Challenges of Conventional Software Management Performance:
• Rigidity: Limited flexibility in handling changes after project initiation can affect the
project’s success, especially in dynamic environments.
• Late Detection of Issues: Problems are often found late in the process (e.g., during
final testing), making them costly to fix.
• Limited Customer Feedback: Customer input is usually incorporated only at the
beginning and end, so mid-project changes are challenging to address, potentially
affecting performance on user satisfaction.
Conventional software management performance is thus heavily dependent on planning
accuracy, schedule adherence, resource optimization, and successful execution of sequential
tasks. However, this model often faces limitations in environments requiring adaptability,
leading many teams to adopt Agile and other iterative methods in recent years.
OVERVIEW OF PROJECT PLANNING – STEPWISE PROJECT PLANNING
A good project plan sets out the processes that everyone is expected to follow, so it avoids a
lot of headaches later. For example, if you specify that estimates are going to be worked out
by subject matter experts based on their judgement, and that’s approved, later no one can
complain that they wanted you to use a different estimating technique. They’ve known the
deal since the start.
Project plans are also really helpful for monitoring progress. You can go back to them and
check what you said you were going to do and how, comparing it to what you are actually
doing. This gives you a good reality check and enables you to change course if you need to,
bringing the project back on track.
Tools like dashboards can help you make sure that your project is proceeding according to
plan. ProjectManager has a real-time dashboard that updates automatically whenever tasks
are updated.
How to Create a Project Plan
Your project plan is essential to the success of any project. Without one, your project may be
susceptible to common project management issues such as missed deadlines, scope creep and
cost overrun. While writing a project plan is somewhat labor intensive up front, the effort will
pay dividends throughout the project life cycle.
The basic outline of any project plan can be summarized in these six steps:
1. Define your project’s stakeholders, scope, quality baseline, deliverables, milestones,
success criteria and requirements. Create a project charter, work breakdown structure
(WBS) and a statement of work (SOW).
2. Identify risks and assign deliverables to your team members, who will perform the
tasks required and monitor the risks associated with them.
3. Organize your project team (customers, stakeholders, teams, ad hoc members, and so
on), and define their roles and responsibilities.
4. List the necessary project resources, such as personnel, equipment, salaries, and
materials, then estimate their cost.
5. Develop change management procedures and forms.
6. Create a communication plan, schedule, budget and other guiding documents for the
project.
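As a rough illustration of step 1, a work breakdown structure (WBS) can be modeled as a tree in which leaf work packages carry cost estimates and parent costs roll up from their children. The task names and figures below are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class WBSNode:
    name: str
    estimate: float = 0.0          # cost estimate for a leaf work package
    children: list = field(default_factory=list)

    def total_cost(self) -> float:
        """Leaf nodes return their own estimate; parents roll up children."""
        if not self.children:
            return self.estimate
        return sum(child.total_cost() for child in self.children)

project = WBSNode("Website redesign", children=[
    WBSNode("Requirements", estimate=4000),
    WBSNode("Design", children=[
        WBSNode("UI mockups", estimate=3000),
        WBSNode("Architecture", estimate=5000),
    ]),
    WBSNode("Development", estimate=12000),
])

print(project.total_cost())  # 24000
```

The same tree shape supports rolling up duration or effort instead of cost, which is how WBS-based estimates feed the schedule and budget in steps 4 and 6.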
What Are the 5 Phases of the Project Life Cycle?
Any project, whether big or small, has the potential to be very complex. It’s much easier to
break down all the necessary inclusions for a project plan by viewing your project in terms of
phases. The Project Management Institute, within the Project Management Body of
Knowledge (PMBOK), has identified the following 5 phases of a project:
1. Initiation: The start of a project, in which goals and objectives are defined through a
business case and the practicality of the project is determined by a feasibility study.
2. Planning: During the project planning phase, the scope of the project is defined by a
work breakdown structure (WBS) and the project methodology to manage the project
is decided on. Costs, quality and resources are estimated, and a project schedule with
milestones and task dependencies is identified. The main deliverable of this phase is
your project plan.
3. Execution: The project deliverables are completed during this phase. Usually, this
phase begins with a kick-off meeting and is followed by regular team meetings
and status reports while the project is being worked on.
4. Monitoring & Controlling: This phase is performed in tandem with the project
execution phase. Progress and performance metrics are measured to keep progress on
the project aligned with the project plan.
5. Closure: The project is completed when the stakeholder receives the final deliverable.
Resources are released, contracts are signed off on and, ideally, there will be an
evaluation of the successes and failures.
REDUCING SOFTWARE PRODUCT SIZE
Reducing software product size is a strategy in software process and project management
focused on decreasing the complexity, code volume, and storage needs of a software product.
This effort aims to streamline development, improve maintainability, and optimize system
resources, which can lead to faster performance and lower operational costs. Here’s an
overview of why and how reducing product size can be beneficial:
Benefits of Reducing Software Product Size
1. Improved Performance and Efficiency
o Smaller codebases are typically easier to run and often execute faster,
especially in resource-constrained environments like embedded systems or
mobile devices.
o Reducing the software size can also reduce memory and storage requirements,
enhancing performance on systems with limited resources.
2. Lower Maintenance Costs
o A smaller codebase is easier to maintain and debug, which reduces the long-
term costs associated with fixes, updates, and enhancements.
o Simplified code often leads to fewer bugs, and smaller, more modular
components are easier to update and test.
3. Enhanced Security and Reliability
o Large, complex codebases often have more areas where vulnerabilities can
arise. By minimizing code, teams can reduce potential security flaws and
increase overall reliability.
o Less code means a smaller attack surface, making it easier to conduct
thorough security audits and testing.
4. Reduced Development Time and Costs
o Smaller products typically require less time to develop and test. By cutting out
redundant or unnecessary code, developers can speed up development cycles
and deliver more streamlined, efficient software.
o Less time spent coding and testing translates to lower development costs,
which is especially beneficial for projects with tight budgets.
5. Smoother User Experience
o Smaller, optimized applications generally lead to quicker load times, less
strain on system resources, and a smoother experience for users, especially on
devices with limited processing power or storage.
Strategies for Reducing Software Product Size
1. Code Refactoring and Optimization
o Review and refactor code regularly to eliminate redundancy, dead code, and
any functions or libraries that do not contribute meaningfully to the product’s
functionality.
o Optimizing algorithms and data structures can also help reduce the product
size and improve performance.
2. Use of Modular and Component-Based Design
o By breaking the software into smaller, reusable modules, teams can isolate
functionality, making it easier to replace or remove components without
affecting the whole product.
o Modular design encourages code reuse, which can reduce the overall size by
avoiding duplication.
3. Optimize Third-Party Libraries and Dependencies
o Third-party libraries can significantly increase software size, especially when
they include features that are not fully utilized.
o Only include essential libraries and consider lightweight alternatives that offer
the necessary functionality without excessive overhead.
4. Remove Unused Features
o Regularly conduct feature audits to identify underused or unnecessary features
that add to the software’s size. Reducing feature bloat (extra, rarely used
features) can help keep the product lightweight and focused.
o Engage with users to understand which features are essential and which can be
scaled down or removed without affecting user satisfaction.
5. Data Compression and Resource Optimization
o Compress assets such as images, audio, and video to reduce their size.
Optimizing assets can significantly reduce the overall storage requirements of
a product.
o Remove unused resources, including old configuration files, logs, or large
datasets, especially if they are not directly used by the product.
6. Efficient Use of Data Storage
o Implement efficient data storage solutions, such as binary serialization, to save
space when data storage is necessary within the application.
o Consider offloading some data storage or processing tasks to cloud services if
possible, especially for mobile or embedded devices where local storage is
limited.
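Strategy 5 (data compression) can be as simple as applying lossless compression to bundled text assets. A minimal sketch using Python's standard gzip module; the asset content here is invented to be repetitive, which is the case where compression pays off most:

```python
import gzip

# A repetitive text asset, e.g. a bundled log template or configuration.
asset = ("timestamp,level,component,message\n" * 500).encode("utf-8")

compressed = gzip.compress(asset)

print(f"original:   {len(asset)} bytes")
print(f"compressed: {len(compressed)} bytes")
assert len(compressed) < len(asset)  # repetitive data compresses well
```

For already-compressed media (JPEG images, MP4 video), general-purpose compression gains little, so format-specific optimization is the better route for those assets.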
Managing Product Size Reduction in Project Management
1. Set Clear Size-Related Goals Early
o Establish size goals and constraints at the beginning of the project as part of
the requirements, so they can guide development decisions throughout the
process.
o Factor size goals into project planning, budgeting, and time estimates.
2. Continuous Monitoring of Size Metrics
o Track metrics like lines of code, binary size, and resource usage continuously
to ensure the project remains within target boundaries.
o Use automated tools to provide real-time feedback on size impact, making it
easier to manage and adjust before issues become critical.
3. Encourage a Lean Development Culture
o Encourage developers to follow best practices such as writing clean, efficient
code, and to use design patterns that promote simplicity and modularity.
o Regular code reviews focused on size and optimization can reinforce a culture
of lean software development.
4. Use Prototyping and Testing for Feedback
o Prototyping helps identify which features are essential and what can be
optimized, reduced, or removed without sacrificing user satisfaction.
o Conduct testing with a focus on performance and user experience in
environments that mimic target constraints (e.g., lower memory, limited
processing power).
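The continuous monitoring of size metrics described in point 2 can be automated with a small script that measures source size and flags a build when it exceeds a target. The directory name and budget below are assumptions for the example:

```python
import os

SIZE_BUDGET_LOC = 50_000  # project-specific budget, assumed for illustration

def count_source_lines(root: str, suffix: str = ".py") -> int:
    """Count non-blank lines across all source files under `root`."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(suffix):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    total += sum(1 for line in f if line.strip())
    return total

if __name__ == "__main__":
    loc = count_source_lines("src")
    print(f"{loc} lines of code (budget {SIZE_BUDGET_LOC})")
    if loc > SIZE_BUDGET_LOC:
        raise SystemExit("size budget exceeded -- review recent changes")
```

Run as part of continuous integration, a check like this gives the real-time feedback on size impact that the text recommends, before growth becomes hard to reverse.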
Reducing software product size ultimately contributes to a more efficient, maintainable, and
user-friendly product. It requires a proactive approach throughout the software development
lifecycle to ensure size goals are met without compromising functionality or quality.
IMPROVING SOFTWARE PROCESSES
Improving software processes in software process and project management involves refining
and optimizing the methods, practices, and workflows that teams use to design, develop, test,
and deliver software. The goal of process improvement is to make development more
efficient, reliable, and responsive to business needs, ultimately delivering high-quality
products with fewer defects, reduced costs, and shorter timelines. Here are key elements and
approaches to improving software processes:
1. Defining Process Improvement Goals
• Process improvement goals should align with the organization’s strategic objectives.
For instance, if the organization values speed to market, the goal may be to reduce
development cycles. If reliability is prioritized, the focus may be on defect reduction.
• Common improvement goals include increasing development speed, reducing defect
rates, enhancing collaboration, and improving customer satisfaction.
2. Understanding Current Process Maturity Levels
• Software process maturity frameworks, such as the Capability Maturity Model
Integration (CMMI), help assess current processes and determine areas for
improvement. These models categorize process maturity levels from initial (ad hoc) to
optimized (proactively improving).
• Assessing maturity levels gives organizations a structured understanding of their
strengths and weaknesses, identifying practices to improve efficiency, consistency,
and quality.
3. Implementing Agile Methodologies
• Agile practices like Scrum, Kanban, and Extreme Programming (XP) focus on
iterative development, adaptability, and frequent feedback. These methods improve
flexibility, allowing teams to respond quickly to changing requirements.
• Agile’s emphasis on continuous delivery, customer feedback, and short iterations can
improve overall productivity, reduce the risk of project failure, and ensure alignment
with customer needs.
4. Using DevOps for Continuous Improvement
• DevOps combines development and operations practices to streamline software
delivery and operations, automating tasks like testing, integration, deployment, and
monitoring.
• By automating repetitive processes, DevOps can improve deployment speed, reduce
errors, and ensure a continuous flow from development to production, enhancing
software quality and customer responsiveness.
5. Quality Assurance and Testing Improvements
• Enhancing quality assurance (QA) processes involves shifting testing left in the
development cycle (testing early and often) to catch defects sooner and improve code
quality.
• Automated testing, continuous integration, and test-driven development (TDD) can all
help increase testing efficiency, reduce defects, and improve the reliability of
software.
6. Process Standardization and Best Practices
• Establishing standardized processes across teams ensures consistent quality and
efficiency. Standard practices can include coding standards, documentation
requirements, code reviews, and testing protocols.
• Standardization minimizes errors, improves team alignment, and enhances
collaboration, especially in larger organizations with multiple teams.
7. Process Metrics and Data-Driven Decisions
• Monitoring and analyzing process metrics (such as defect density, cycle time, and
code churn) helps track performance and identify bottlenecks or inefficiencies.
• Using data-driven decisions allows teams to make informed changes, enabling them
to identify and eliminate wasteful practices and improve overall efficiency.
8. Encouraging Continuous Feedback and Retrospectives
• Frequent feedback cycles with stakeholders and retrospectives with the development
team allow for quick identification of issues and opportunities for improvement.
• Retrospectives, common in Agile practices, encourage teams to reflect on successes
and failures after each sprint or iteration, fostering a culture of continuous learning
and improvement.
9. Improving Project Management Practices
• Effective project management practices such as clear requirement gathering, scope
management, and risk mitigation improve project outcomes by reducing
misunderstandings and scope creep.
• Tools like Gantt charts, Kanban boards, and project management software (e.g., Jira,
Asana) can help teams better track progress, assign responsibilities, and manage
timelines.
10. Training and Skill Development
• Investing in training and skill development is essential for any process improvement.
As technology and best practices evolve, continuous learning helps teams stay
updated on the latest tools, techniques, and methodologies.
• Training can cover coding standards, new technologies, testing techniques, or process
improvement methods like Lean and Six Sigma.
11. Incorporating Lean Principles to Reduce Waste
• Lean principles focus on eliminating waste, improving efficiency, and delivering
value to customers. This can involve cutting down on unnecessary processes,
simplifying workflows, or automating repetitive tasks.
• Lean methodologies like Value Stream Mapping can help identify and remove
bottlenecks, streamline processes, and enhance value delivery to customers.
12. Customer and Stakeholder Involvement
• Engaging customers and stakeholders throughout the software development process
ensures that the final product meets their needs and expectations, minimizing rework
and dissatisfaction.
• Customer feedback can be integrated at regular intervals, helping shape the product to
better suit end-user requirements and improve overall customer satisfaction.
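The process metrics named in point 7 are straightforward to compute once the raw data is tracked. A minimal sketch of two of them, defect density and cycle time (the figures are invented):

```python
from datetime import date

def defect_density(defects_found: int, kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / kloc

def cycle_time_days(started: date, delivered: date) -> int:
    """Elapsed days from work starting on an item to its delivery."""
    return (delivered - started).days

print(defect_density(18, 12.0))                              # 1.5 defects/KLOC
print(cycle_time_days(date(2024, 3, 4), date(2024, 3, 18)))  # 14 days
```

Trends in these numbers across releases matter more than any single value; a rising defect density or lengthening cycle time is the signal to investigate.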
Benefits of Improving Software Processes
1. Enhanced Efficiency and Reduced Cycle Time
o Optimized processes reduce time spent on each development stage, leading to
faster delivery of software products and shorter time-to-market.
2. Higher Quality and Reliability
o Improved QA practices and continuous testing reduce defects, making the
software more reliable and robust and increasing end-user satisfaction.
3. Better Resource Utilization
o Streamlined workflows help make the best use of resources (people, time, and
technology), reducing unnecessary costs and allowing teams to focus on high-
value activities.
4. Greater Team Collaboration and Morale
o Clear processes and roles improve team communication and collaboration,
reducing friction and fostering a positive work environment.
5. Increased Customer Satisfaction
o Software that is delivered on time, with high quality, and aligned with
customer needs, contributes to greater customer satisfaction and loyalty.
6. Scalability and Adaptability
o Improved processes make it easier for organizations to scale up or adapt to
new technologies or project requirements, maintaining efficiency as they grow.
IMPROVING TEAM EFFECTIVENESS
No matter how productive a team already is, there are always ways to raise its
effectiveness further. Efficiency describes a level of performance in which the lowest
amount of input produces the greatest amount of output. An effective team is more than
the sum of its individuals, and a team is vulnerable whenever it is out of balance. Some
generally true statements about team management are given below:
• A well and carefully managed project can succeed even with a nominal (non-expert)
engineering team.
• A carelessly managed project can fail even with a highly expert team of engineers.
• A well-architected system can be built by a nominal team of software builders.
• A poorly architected system will run into many difficulties even with an expert team
of software builders.
Boehm (1981) offered five staffing principles for software projects: top talent (use better
and fewer people), job matching (fit the tasks to the skills and motivation of the people
available), career progression (help people self-actualize), team balance (select people
who complement and harmonize with one another), and phase-out (keeping a misfit on
the team benefits no one). Closely related to team and tool effectiveness are the
following engineering concepts:
• Forward Engineering –
It is a process usually applied in the software engineering principles, concepts, and
methods to simply recreate an existing application. Forward Engineering is the
automation of one artifact from another more abstract representation. Examples
include compilers, linkers, etc.
• Reverse Engineering –
It is generally a process of recovering the design, requirement specifications, and
functions of a product from its code analysis. Reverse Engineering is the generation of
a more abstract representation from an existing artifact. Examples include creating or
developing a visual model from source code.
• Round-trip Engineering –
It is actually a functionality of software development tools that simply synchronizes
two or more software artifacts that are related such as source code, models, etc.
Round-trip Engineering term is used to describe and explain the key capability of
environments that usually support iterative development.
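Reverse engineering, producing a more abstract representation from an existing artifact, can be illustrated with Python's standard ast module, which recovers an outline of functions and their parameters from source code (the parsed source below is a made-up snippet):

```python
import ast

source = """
def connect(host, port):
    pass

def send(payload):
    pass
"""

# Parse the source into an abstract syntax tree, then walk it to recover
# a higher-level outline: the functions and their parameter names.
tree = ast.parse(source)
outline = [
    (node.name, [arg.arg for arg in node.args.args])
    for node in ast.walk(tree)
    if isinstance(node, ast.FunctionDef)
]
print(outline)  # [('connect', ['host', 'port']), ('send', ['payload'])]
```

Visual-modeling tools that generate diagrams from code perform essentially this step at larger scale, and round-trip engineering keeps such a model and the source synchronized in both directions.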
ACHIEVING REQUIRED QUALITY
Many software best practices are derived from the development process and from
supporting technologies. These practices primarily improve cost efficiency, and some of
them also permit an improvement in quality for the same cost. Some of the quality
improvements with a modern process are given below:
Quality Driver                   Conventional Process       Modern Iterative Processes
Requirements misunderstandings   Usually discovered late    Usually resolved early
PEER INSPECTION
Peer inspection, also known as peer review, is a collaborative quality assurance activity in
software process and project management where team members review each other's work to
identify defects, improve code quality, and ensure alignment with standards and
requirements. This practice involves peers reviewing code, designs, documentation, or other
deliverables, typically before they move to the next stage in development. Peer inspections
help catch issues early in the development process, which can significantly reduce the cost
and effort associated with fixing bugs later in the project lifecycle.
Key Objectives of Peer Inspection
1. Defect Detection: Identify errors, inconsistencies, and potential bugs early on,
reducing the likelihood of defects being found in later stages where they are more
costly to fix.
2. Code Quality Improvement: Promote best practices, coding standards, and
consistent design, leading to cleaner, more maintainable code.
3. Knowledge Sharing: Facilitate knowledge transfer among team members, enhancing
collective skills, understanding of the codebase, and familiarity with different parts of
the project.
4. Process Improvement: Provide feedback on the development process, highlighting
areas where methodologies or practices could be improved.
5. Alignment and Consistency: Ensure all deliverables meet project requirements and
standards, which is particularly useful when multiple developers are working on the
same codebase.
Types of Peer Inspection
1. Code Review: A systematic examination of source code by team members. This is the
most common type of peer inspection, focusing on code logic, adherence to coding
standards, and potential errors.
2. Design Review: A review of architectural or design documents to ensure they meet
requirements, follow design principles, and are feasible for implementation.
3. Documentation Review: Checking project documentation, including requirements
and test plans, for accuracy, completeness, and clarity.
4. Testing and QA Review: Reviewing test cases, test scripts, and QA processes to
ensure they adequately cover requirements and are efficient.
The Peer Inspection Process
1. Preparation: The author of the deliverable (code, document, or design) notifies the
team about the inspection and provides materials for review. Reviewers may prepare
by understanding the objectives and gathering necessary information.
2. Review Session: The review team (usually 2-5 peers) examines the deliverable and
discusses identified issues, improvements, and suggestions. This session can take
place in a meeting or via a collaborative online tool.
3. Issue Identification and Documentation: Reviewers document any defects,
questions, or improvement suggestions, often categorizing them by severity or impact.
Common tools used for documentation are issue tracking systems (like Jira) or code
review platforms (like GitHub, GitLab, or Bitbucket).
4. Feedback and Action: The author addresses the feedback by making necessary
changes. In some cases, a follow-up review session may occur to verify that feedback
has been implemented.
5. Evaluation and Follow-Up: The team may review the peer inspection process itself
to identify any areas for improvement in future reviews, encouraging continuous
improvement in both the process and team collaboration.
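Step 3 of the process above (issue identification and documentation) can be sketched with a small, hypothetical record type; the severity levels, file names, and helper function below are illustrative and not part of any named review tool.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = 1
    MAJOR = 2
    MINOR = 3

@dataclass
class ReviewIssue:
    artifact: str          # file or document under review
    description: str
    severity: Severity
    resolved: bool = False

def open_issues(issues):
    """Return the unresolved issues, most severe first, so the
    author knows what to address before the next review gate."""
    return sorted((i for i in issues if not i.resolved),
                  key=lambda i: i.severity.value)

issues = [
    ReviewIssue("parser.py", "missing null check", Severity.MAJOR),
    ReviewIssue("parser.py", "typo in comment", Severity.MINOR, resolved=True),
    ReviewIssue("api.md", "endpoint undocumented", Severity.CRITICAL),
]
print([i.description for i in open_issues(issues)])
# ['endpoint undocumented', 'missing null check']
```

Real teams would track this in Jira or a code-review platform, but the essential data — what, where, how severe, and whether it is resolved — is the same.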
Benefits of Peer Inspection
1. Early Defect Detection: By catching defects early, teams reduce the need for rework
and prevent issues from reaching later stages, where they are more time-consuming
and costly to fix.
2. Enhanced Code Quality: Reviews improve code readability, maintainability, and
performance, contributing to a more robust codebase.
3. Increased Knowledge Sharing: Peer reviews encourage collaboration, helping team
members learn from each other’s expertise and become familiar with different parts of
the codebase.
4. Higher Project Efficiency: Peer inspections reduce the load on testing and quality
assurance by catching defects early, leading to faster development cycles and fewer
delays.
5. Improved Team Cohesion and Standards: With regular feedback, teams naturally
align on coding standards, documentation practices, and best practices, building a
cohesive work culture.
Best Practices for Effective Peer Inspection
1. Set Clear Goals and Guidelines: Define the purpose, scope, and standards for the
review. Ensure reviewers know what to focus on, such as functionality, performance,
or security.
2. Keep Review Groups Small: Limit the number of reviewers to ensure productive
discussions without overwhelming feedback, typically involving two to five people.
3. Use Review Tools: Utilize version control systems, code review platforms, or issue
trackers to facilitate discussions, provide structured feedback, and document issues.
4. Focus on Objective Feedback: Encourage a collaborative and constructive
atmosphere to ensure feedback is given objectively, focusing on the deliverable, not
the author.
5. Limit Review Sessions to Manageable Time Blocks: Research shows that review
quality declines if sessions are too long. Short, focused reviews (usually 60-90
minutes) are more effective.
6. Follow Up on Action Items: Ensure that issues identified in the review are tracked
and addressed before the deliverable moves to the next stage. Re-inspect critical
issues if necessary.
Challenges in Peer Inspection
1. Time Constraints: Reviews can be time-consuming, especially if team members
have tight deadlines. To manage this, teams should balance review rigor with project
timelines.
2. Feedback Sensitivity: Not all feedback is easy to receive, and reviews may
inadvertently cause tension. To mitigate this, teams should cultivate a culture of
constructive feedback and learning.
3. Skill Disparities: If reviewers have varied skill levels, the review process might be
less effective. Training and mentoring can help bridge knowledge gaps.
4. Review Scope Creep: Overly detailed reviews can become unproductive. Teams
should ensure reviews stay focused on high-impact areas and avoid getting bogged
down in minor details.
Tools for Peer Inspection
1. Code Review Tools: GitHub, GitLab, Bitbucket, and Crucible provide platforms for
collaborative code reviews and feedback.
2. Documentation Review Tools: Tools like Confluence or Google Docs enable
documentation reviews and real-time feedback.
3. Issue Tracking Systems: Jira, Trello, and Azure DevOps help track review issues and
action items for better follow-up and accountability.
UNIT – 2 THE OLD WAY AND THE NEW WAY, LIFE CYCLE
PHASES & ARTIFACTS OF THE PROCESS
PART – 1 THE OLD WAY & THE NEW WAY
THE PRINCIPLES OF CONVENTIONAL SOFTWARE ENGINEERING
There are many descriptions and explanations of engineering software "the old way". After many years of software development experience, the industry has learned many lessons and formulated many principles. Some of these principles are given below:
1. Make Quality #1 –
Software quality must be quantified, and mechanisms must be put in place to motivate achieving it.
2. High-quality software is possible –
Several techniques have been demonstrated in practice to increase quality, including involving the customer, prototyping, simplifying the design, conducting inspections, and hiring the best people.
3. Give products to customers early –
No matter how hard we try to learn users' needs during the requirements phase, the most effective way to determine their real needs is to give users a product and let them play with it.
4. Determine the problem before writing the requirements –
When engineers face what they believe is a problem, they rush to offer a solution. Before trying to solve a problem, be sure to explore all the alternatives and don't be blinded by the obvious solution.
5. Evaluate design alternatives –
After the requirements are agreed upon, we must examine a variety of architectures and algorithms rather than simply adopting an "architecture" because it appeared in the requirements specification.
6. Use an appropriate process model –
Each project must select a process that makes the most sense for that project, based on corporate culture, willingness to take risks, application area, volatility of requirements, and the extent to which the requirements are well understood.
7. Use different languages for different phases –
Our industry's desire for simple solutions to complex problems leads many to declare that the best development method is one that uses the same notation throughout the life cycle; in reality, different phases are often best served by different notations.
8. Minimize intellectual distance –
The structure of the software must be as close as possible to the structure of the real-world problem, to minimize intellectual distance.
9. Put techniques before tools –
An undisciplined software engineer with a tool becomes a dangerous software engineer.
10. Get it right before you make it faster –
It is far easier to make a working program run faster than it is to make a fast program work correctly.
11. Inspect code –
Inspecting the detailed design and code is a much better way of finding errors than testing.
12. Good management is more important than good technology –
Good management motivates people to do their best work, but there are no universal "right" styles of management.
13. People are the key to success –
Highly skilled people with the right experience, talent, and training are the key to success.
14. Follow with care –
Just because everybody is doing something does not make it right for you. It may or may not be right, so you must carefully assess its applicability to your environment.
15. Take responsibility –
When a bridge collapses, we ask, "What did the engineers do wrong?" When software fails, we rarely ask this question. The reason is that in any engineering discipline, the best methods can be used to produce an awful design, and the most antiquated methods can be used to produce an elegant design.
16. Understand the customer's priorities –
It is possible that the customer would tolerate 90 percent of the functionality delivered late if they could have 10 percent of it on time.
17. The more they see, the more they need –
The more functionality (or performance) you provide a user, the more functionality (or performance) the user will want. Expectations grow over time.
18. Plan to throw one away –
One of the most critical success factors is whether a product is entirely new. Brand-new applications, architectures, interfaces, or algorithms rarely work the first time.
19. Design for change –
The architectures, components, and specification techniques we use must accommodate change.
20. Design without documentation is not design –
Engineers often say, "I have finished the design; all that is left is the documentation."
21. Use tools, but be realistic –
Software tools make their users more efficient, but be realistic about what a tool can actually deliver.
22. Encapsulate –
Information hiding is a simple, proven concept that results in software that is easier to test and much easier to maintain.
23. Avoid tricks –
Some programmers love to write programs with tricks: constructs that perform a function correctly but in an obscure way. Prove to the world how smart you are by avoiding tricky code.
24. Don't test your own software –
Software developers should never be the primary testers of their own software.
25. Use coupling and cohesion –
Coupling and cohesion are among the best ways to measure a software system's inherent maintainability and adaptability.
26. Expect excellence –
Your employees will do much better work if you have high expectations for them.
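The encapsulation principle (22) can be illustrated with a minimal, hypothetical sketch: the counter's state is hidden behind a small public interface, so callers cannot corrupt it and tests only need to exercise that interface.

```python
class Counter:
    """Encapsulation: the count is hidden behind a narrow interface,
    so callers cannot set it to an invalid value directly."""

    def __init__(self):
        self._count = 0  # leading underscore: internal by convention

    def increment(self):
        self._count += 1

    @property
    def value(self):
        # Read-only access; there is deliberately no public setter.
        return self._count

c = Counter()
c.increment()
c.increment()
print(c.value)  # 2
```

Because the state can only change through `increment()`, the class is trivially testable and its internal representation can later change without breaking callers.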
PRINCIPLES OF MODERN SOFTWARE MANAGEMENT
There are several modern principles for the development of software. By following these principles, we can develop effective software that meets all the needs of the customer. Modern software management also restructures the life cycle itself, organizing it into the phases described below.
The project lifecycle in software development consists of organized, sequential phases that
guide a project from initial planning through final deployment and support. These phases,
designed to manage complexity and enhance productivity, are categorized into two main
segments: Engineering Phase and Production Phase. Each of these broad categories
includes specific stages that structure project progress, assess feasibility, manage risk, and
ensure the efficient, timely delivery of high-quality software.
1. Engineering Phase
The Engineering Phase focuses on laying the foundational structure for the software project.
It defines the scope, goals, and preliminary architecture and ensures that all initial
considerations are made to inform the rest of the project. This phase generally involves a
smaller team and faces higher uncertainty, as it lays the groundwork for subsequent stages. It
is divided into two sub-phases: the Inception Phase and the Elaboration Phase.
a. Inception Phase
The Inception Phase is the initial stage where project goals are established, requirements are
gathered, and preliminary analyses are performed. Key activities in this phase include:
• Goal Setting: Defining the purpose of the project, desired outcomes, and high-level
objectives.
• Requirements Gathering: Collecting functional and non-functional requirements to
understand the features and capabilities the software must deliver.
• Cost Estimation: Creating an initial budget to understand the financial requirements
and constraints of the project.
• Risk Identification: Identifying possible risks that could impact the project, such as
technical limitations, time constraints, or resource availability.
• Scope Definition: Clearly outlining what will be included in the project and what will
not, to maintain focus and avoid scope creep.
• Architecture and Feasibility Analysis: Conducting a high-level architectural
assessment and feasibility study to determine whether the project can realistically be
executed with the available resources and technology.
This phase establishes a blueprint for the project by analyzing feasibility and requirements,
providing a structured start to the development process.
b. Elaboration Phase
The Elaboration Phase expands on the Inception Phase by diving deeper into technical
aspects, refining requirements, and establishing a solid architectural foundation for the
software. Activities in this phase include:
• Architecture Evaluation: A more thorough examination of the initial architecture,
with an emphasis on making it efficient, scalable, and robust.
• Use Case Analysis: Defining specific use cases to identify how the software will be
used, which helps clarify functionality and user requirements.
• Software Diagrams and Models: Creating detailed diagrams (like UML diagrams)
that map out the software structure, component interactions, and other essential
details.
• Risk Reduction: Taking steps to mitigate the highest priority risks identified in the
Inception Phase, often by addressing potential technical or resource challenges early
on.
• Preliminary Module Creation: Developing initial versions or prototypes of core
modules to validate the architecture and establish a foundation for later development.
The Elaboration Phase solidifies the project structure, architecture, and requirements, giving
teams a reliable roadmap for the subsequent production activities.
2. Production Phase
The Production Phase encompasses the implementation, optimization, testing, and
deployment stages. During this phase, the project involves a larger team and operates with
more predictability, as many of the technical unknowns are resolved. The Production Phase is
divided into two sub-phases: the Construction Phase and the Transition Phase.
a. Construction Phase
The Construction Phase is the main development stage, where coding and testing are the
primary focus. This phase involves integrating all features and components into a working
application and refining them to meet functional and performance requirements. Key
activities in this phase include:
• Implementation: Writing code based on the designs and requirements finalized in the
Engineering Phase. This is where the actual software product is built.
• Risk Minimization: Addressing and eliminating risks as they arise during
implementation to ensure a smoother development process.
• Component Integration: Combining individual features, components, or modules
into a cohesive application.
• Testing: Performing rigorous tests, including unit testing, integration testing, and
performance testing, to verify that each component works correctly and meets quality
standards.
• Process Optimization: Identifying and implementing improvements to streamline
workflows, reduce development costs, and improve project efficiency.
The Construction Phase emphasizes the development and integration of the application,
striving to minimize cost while maximizing quality through testing and optimization.
b. Transition Phase
The Transition Phase is the final stage of the Production Phase and includes final testing,
deployment, and post-release modifications. This phase ensures that the software is user-
ready and meets all requirements. Activities include:
• Beta Testing: Conducting beta tests with a small group of end-users to validate the
software’s functionality in real-world settings and gather feedback on usability and
performance.
• Deployment: Launching the software in a live production environment, making it
accessible to all intended users.
• User Feedback Collection: Gathering user feedback post-deployment to identify any
additional requirements or usability issues.
• Post-Release Adjustments: Implementing minor fixes or updates based on user
feedback to enhance the software’s efficacy and ensure a high level of user
satisfaction.
• User Training and Support: Providing training sessions, documentation, and
ongoing support to help users adopt the software effectively and address any initial
issues they encounter.
During the Transition Phase, developers work with a “user perspective” to fine-tune the
software, ensuring it is user-friendly, stable, and ready for sustained use in a live
environment.
PART – 3 ARTIFACTS OF THE PROCESS
THE ARTIFACT SETS
An artifact is closely associated with a specific method or process of development; examples include project plans, business cases, and risk assessments. Distinct collections of detailed information are organized into artifact sets, and each set represents a complete aspect of the system. This is done to keep the development of a complete software system manageable. The artifacts of the software life cycle are organized into two sets, the management set and the engineering set, which are further partitioned by the underlying language of each set. These artifact sets are described below:
1. Engineering Sets :
In this set, the primary mechanism for forming an idea of the evolving quality of the artifact sets is the transition of information from one set to another. The engineering set is further divided into four distinct sets: the requirements set, the design set, the implementation set, and the deployment set.
1. Requirements Set –
This set is the primary engineering context used for evaluating the other three artifact sets of the engineering set, and it is the basis of the test cases. Artifacts of this set are evaluated, checked, and measured through a combination of the following:
• Analysis of consistency between the current vision and the requirements models.
• Analysis of consistency with the supplementary specifications of the management set.
• Analysis of consistency among the requirements models themselves.
2. Design Set –
The tools used are visual modeling tools, and UML (Unified Modeling Language) notations are used to engineer the design model. This set contains several different levels of abstraction. The design model includes enough structural and behavioral information to ascertain a bill of materials. The artifacts of this set include design models, test models, and software architecture descriptions.
3. Implementation Set –
The tools used are compilers, debuggers, code analyzers, and test management tools. This set contains the source code that implements the components, their forms and interfaces, and the executables necessary for stand-alone testing of the components.
4. Deployment Set –
The tools used are network management tools, test coverage and test automation tools, etc. So that the end product can be used in the environment where it is supposed to run, this set contains the executable software, build scripts, UML notations, and installation scripts.
2. Management Set :
This set captures the artifacts associated with planning and executing the process. These artifacts use ad hoc notations, including text, graphics, or whatever representation is required to capture the "contracts" among project personnel (project developers, project management, etc.), among stakeholders (users, the project manager, etc.), and between stakeholders and project personnel.
This set includes artifacts such as the work breakdown structure, the business case, the software development plan, deployment documents, and the environment. Artifacts of this set are evaluated, checked, and measured through a combination of the following:
• Review by the relevant stakeholders.
• Analysis of the changes between the current version of an artifact and previous versions.
• Major-milestone demonstrations of the balance among all the artifacts and, in particular, of the accuracy of the business case and vision artifacts.
MANAGEMENT ARTIFACTS
Management artifacts oversee the whole situation to confirm and ensure that the project gets done. They capture intermediate results and the supporting information needed to document the product/process legacy, maintain the end product, increase the quality of the product, and improve the performance of the process.
Some types of Management Artifacts :
1. Business Case –
The business case provides the justification for initiating a project, task, program, or portfolio. This justification is based on the estimated cost of development and implementation, weighed against the risks and issues and the evaluated business benefits and savings to be gained.
It is created during the early stages of the project and explains the why, what, how, and who necessary to decide whether it is worthwhile to initiate or continue the project. A good business case describes the problems and issues, determines all the possible options to address them, and allows the decision-makers to decide which course of action will be best for the organization. The main goal of the business case is to express the vision in economic terms so that the organization can develop an accurate ROI (return on investment) assessment.
2. Software Development Plan (SDP) –
The software development plan lays out the complete plan necessary to develop, modify, and upgrade the software system. It is a ready-made solution for managers of software development, and it gives the acquirer insight into, and a tool for monitoring, the processes to be followed during development.
It indicates two things: periodic updating, and understanding and approval by managers and practitioners alike.
3. Work Breakdown Structure (WBS) –
The work breakdown structure is a deliverable-oriented decomposition of the project into small components. A WBS is created to establish a common understanding of the project's scope.
It is a hierarchical tree structure that lays out the project and breaks it down into smaller, manageable portions or components. It is the vehicle for budgeting and collecting costs.
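As a minimal, hypothetical sketch of the WBS as a budgeting vehicle, the snippet below represents the hierarchical tree as nested tuples and rolls leaf-task costs up to the project root; the project and task names (and costs) are invented for illustration.

```python
def wbs_cost(node):
    """Roll leaf-task costs up the WBS tree.
    A node is (name, cost) for a leaf task, or
    (name, [children]) for an internal element."""
    name, payload = node
    if isinstance(payload, list):
        # Internal node: cost is the sum of its children's costs.
        return sum(wbs_cost(child) for child in payload)
    return payload  # leaf task: cost is given directly

project = ("Payroll System", [
    ("Management", 20),
    ("Requirements", [("Elicitation", 15), ("Specification", 10)]),
    ("Design", [("Architecture", 25), ("Detailed design", 30)]),
])
print(wbs_cost(project))  # 100
```

This mirrors how a WBS collects costs in practice: estimates are attached to the lowest-level work packages, and budgets for higher-level elements are derived by aggregation.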
4. Software Change Order Database –
For an iterative development process, the primary task is to manage change. A project can iterate more productively when it has greater freedom to change, and this change freedom is achieved through automation.
5. Release Specification –
Release specifications define the tests and limits against which raw materials, intermediate products, and the end product are measured before use or release.
The two important forms of requirements in release specifications are the vision statement (which captures the contract between the development group and the buyer) and the evaluation criteria (management-oriented requirements that can be represented using use cases, use case realizations, etc.).
6. Deployment –
The deployment artifact includes numerous subsets of documents for transitioning the product into operational status. It is the application code as it runs in production: built, bundled, compiled, etc. Deployment is the process of putting an artifact where it is needed and performing whatever tasks are required for it to achieve its purpose. It can also include computer system operations manuals, software installation manuals, and plans and procedures for cutover.
7. Environment –
Automating the development process requires the support of a robust development environment, which must include the following:
• Requirements management.
• Visual modeling.
• Document automation.
• Automated regression testing.
• Host and target programming tools.
• Tracking of features and defects.
ENGINEERING ARTIFACTS
Engineering artifacts are key elements in the engineering process, captured using rigorous
notations like UML, programming languages, or machine code. They come in various forms
such as vision documents, architecture descriptions, and software user manuals, which
support the development and functionality of software systems. Additionally, artifacts span
multiple fields, including mechanical, electrical, chemical, biomedical, and geotechnical
engineering, each contributing to their respective disciplines.
Engineering Artifacts are generally captured in rigorous engineering notations. These
notations can be unified modeling languages (UML), programming languages, or executable
machine codes.
Types of Engineering Artifacts
There are generally three types of engineering artifacts.
• Vision Document
• Architecture Description
• Software User Manual
1. Vision Document
A vision document provides the complete vision for the software system under development. It describes a compelling idea, project, or future state for a specific organization, product, or service, and it supports the contract between the funding authority and the development organization. A vision document is written from the user's perspective, focusing on the essential features of the system. A good vision document should include two appendixes: the first should describe the concept of operation using use cases, and the second should describe the change risks inherent in the vision statement.
2. Architecture Description
An architecture description is a collection of artifacts that documents an architecture, including an organized view of the software architecture under development. It is extracted from the design model and also contains views of the design, implementation, and deployment sets. Architecture views are the key artifacts of an architecture description.
3. Software User Manual
A software user manual provides the user with the reference documentation necessary to support the delivered software. It typically includes installation procedures, usage procedures and guidance, operational constraints, and a description of the user interface. It is best drafted early, often by members of the test team, so that it evolves along with the system.
Iteration workflows and project workflows are two fundamental aspects of the software
development process, especially in iterative and Agile methodologies. Here’s a breakdown of
each and their differences.
Iteration Workflows
Iteration workflows refer to the sequence of steps repeated during each iteration of the
software development process. In iterative models like Agile, Scrum, and Spiral, the
workflow is typically broken into cycles or iterations. Each iteration produces a potentially
shippable increment of the software, adding value continuously.
Key Components of Iteration Workflows:
1. Planning: Define what the iteration will accomplish based on requirements, often
from a prioritized backlog.
2. Designing: Outline the architectural and technical approach for this iteration’s
features.
3. Development: Implement the functionality, often in code, using small, manageable
tasks.
4. Testing: Verify the code through various tests, including unit, integration, and
sometimes acceptance testing.
5. Review and Feedback: Evaluate the outcomes of the iteration, gather feedback from
stakeholders, and refine the product.
6. Retrospective: Discuss what went well, what didn’t, and areas for improvement to
enhance future iterations.
Iteration workflows are typically short, ranging from 1-4 weeks, ensuring the team can
respond to changes quickly.
Project Workflows
Project workflows encompass the entire lifecycle of a software project from inception to
delivery and maintenance. It’s a broader, higher-level view that includes multiple iterations as
well as the long-term goals, planning, and management of the software development project.
Key Components of Project Workflows:
1. Project Planning: Establish the scope, timeline, resources, and goals for the entire
project.
2. Requirements Analysis: Gather, analyze, and document the system requirements
from stakeholders.
3. Architecture and Design: Define the overall architecture, technology stack, and
design principles that will guide the project.
4. Development and Iterations: Execute the development phase in multiple iterations,
following the iteration workflow within each cycle.
5. Integration and Testing: Validate the integrated components across iterations to
ensure functionality, performance, and security.
6. Deployment: Release the software to the production environment or client.
7. Maintenance and Support: Address any post-release issues, apply patches, and make
updates.
Project workflows emphasize the overall structure and roadmap for the project, while
iteration workflows are more focused on delivering incremental value throughout the
development process.
Differences Between Iteration and Project Workflows

Aspect | Iteration Workflows | Project Workflows
Cycle | Repeated for each iteration | Contains multiple iterations
In sum, iteration workflows help development teams maintain a steady pace of delivery,
while project workflows ensure that these efforts align with the long-term objectives of the
software project. Together, they create a flexible yet organized approach to software
development.
PART – 2 CHECKPOINTS OF THE PROCESS
In software development, system-wide checkpoint events are held at the end of every phase of development. These checkpoints provide visibility into life-cycle milestones and into system-wide issues and problems. Checkpoints generally provide the following:
• They synchronize the management and engineering perspectives.
• They verify whether the goals of each phase have been achieved.
• They provide a basis for analysis and evaluation, to determine whether the project is proceeding as planned and to take corrective action as required.
• They identify risks, issues, and problems that are essential, and conditions that are intolerable.
• They perform a global assessment for the entire life cycle.
Generally, three sequences of project checkpoints are used to synchronize stakeholder expectations throughout the life cycle: major milestones, minor milestones, and periodic status assessments. At the iteration level, checkpoints are typically organized around the following planning activities:
• Preparation: The team prepares for the iteration planning meeting by reviewing the
product backlog, refining user stories, estimating tasks, identifying dependencies, and
assessing team capacity.
• Iteration Planning Meeting: The team holds a collaborative meeting, usually lasting
a few hours, to plan the work for the upcoming iteration. During this meeting, they:
o Review Goals: The product owner or scrum master reviews the goals and
objectives for the iteration, providing context for the planning session.
o Review Backlog: The team reviews the prioritized items in the product
backlog, discussing their requirements and acceptance criteria.
o Select Backlog Items: Based on the team’s capacity and the sprint goals, the
team collectively selects a subset of backlog items to work on during the
iteration.
o Break Down Tasks: The team breaks down selected backlog items into
smaller, more manageable tasks, clarifying the specific steps needed to
complete each one.
o Estimate Effort: The team estimates the effort required to complete each task,
using techniques like story points or time-based estimates.
o Assign Tasks: Tasks are assigned to individual team members based on their
skills, availability, and capacity, ensuring a balanced workload.
o Define Sprint Goal: The team collaboratively defines a sprint goal, a concise
statement of what they aim to achieve by the end of the iteration.
• Update Plans and Tools: After the iteration planning meeting, the team updates
project management tools, such as task boards or project tracking software, to reflect
the planned work for the iteration.
• Daily Standups: Throughout the iteration, the team holds daily standup meetings to
discuss progress, share updates, and address any impediments or obstacles that arise.
• Demo and Retrospective: At the end of the iteration, the team holds a demo to
showcase the completed work to stakeholders and a retrospective to reflect on what
went well, what could be improved, and any lessons learned for future iterations.
By following this iterative planning process, Agile teams can effectively plan, execute, and
deliver value incrementally throughout the project, adapting to changing requirements and
delivering high-quality software in a timely manner.
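As an illustration of the "Select Backlog Items" step described above, capacity-based selection can be sketched as a greedy pass over a prioritized backlog. The item names, story-point values, and capacity figure below are made up for the example; real teams would also weigh dependencies and the sprint goal.

```python
# Greedy sprint-backlog selection: walk the backlog in priority order and
# pull items until the team's capacity (in story points) is exhausted.
# All item names and estimates are illustrative.

def select_sprint_backlog(backlog, capacity):
    """backlog: list of (name, story_points), highest priority first."""
    selected, remaining = [], capacity
    for name, points in backlog:
        if points <= remaining:
            selected.append(name)
            remaining -= points
    return selected, capacity - remaining

backlog = [
    ("user login", 5),
    ("password reset", 3),
    ("audit logging", 8),
    ("profile page", 5),
]
chosen, committed = select_sprint_backlog(backlog, capacity=13)
print(chosen, committed)  # ['user login', 'password reset', 'profile page'] 13
```

In practice a team would not skip high-priority items this mechanically; the sketch only shows how capacity bounds the commitment.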
WORK BREAKDOWN STRUCTURES
A Work Breakdown Structure includes dividing a large and complex project into simpler,
manageable, and independent tasks. The root of this tree (structure) is labeled by the Project
name itself. For constructing a work breakdown structure, each node is recursively
decomposed into smaller sub-activities, until at the leaf level, the activities become
undividable and independent. It follows a Top-Down approach.
Steps to Construct a Work Breakdown Structure:
Step 1: Identify the major activities of the project.
Step 2: Identify the sub-activities of the major activities.
Step 3: Repeat till undividable, simple, and independent activities are created.
Work Breakdown Structure
Construction of Work Breakdown Structure
1. First, the project managers and top-level management identify the main
deliverables of the project.
2. After this important step, the main deliverables are broken down into smaller
higher-level tasks, and this process is repeated recursively to produce much smaller
independent tasks.
3. The level of detail to which the project is broken down is decided by the project
manager and the team.
4. Generally, the lowest-level tasks are the simplest, most independent tasks and take
less than two weeks' worth of work.
5. Hence, there is no fixed rule for how deep the work breakdown structure should go;
it depends entirely on the type of project being worked on and the management of
the company.
6. The efficiency and success of the whole project depend largely on the quality of its
Work Breakdown Structure, which underlines its importance.
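The recursive decomposition described above can be modeled as a simple tree, where each node is either broken into sub-activities or is an undividable leaf task. The activity names in this sketch are illustrative, not taken from any real project.

```python
# Minimal WBS sketch: the root is the project, interior nodes are
# decomposed activities, and leaves are undividable tasks.
# All activity names are illustrative.

class WBSNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def leaf_tasks(self):
        """Collect the undividable leaf-level activities (depth-first)."""
        if not self.children:
            return [self.name]
        tasks = []
        for child in self.children:
            tasks.extend(child.leaf_tasks())
        return tasks

project = WBSNode("Payroll System", [
    WBSNode("Design", [WBSNode("DB schema"), WBSNode("UI mockups")]),
    WBSNode("Implementation", [WBSNode("Payroll engine")]),
])
print(project.leaf_tasks())  # ['DB schema', 'UI mockups', 'Payroll engine']
```

The leaf list is exactly the set of tasks a manager would estimate and assign, which is why the quality of the decomposition drives the quality of the plan.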
Uses of Work Breakdown Structure
1. Cost estimation: It allows a precise cost estimate to be made for each activity.
2. Time estimation: It allows the time each activity will take to be estimated more
precisely.
3. Easy project management: It allows easy management of the project.
4. Helps in project organization: It helps the top management organize the project
properly.
PLANNING GUIDELINES
Planning guidelines are generally written statements containing guidance to be consulted
before any development of a project is established. They are often used for the sake of
uniformity, convenience, and safe development, and should be followed by any party
involved in development. These initial planning guidelines are based on the experience of
many other people and projects. They are therefore considered credible bases for estimates
and build a degree of confidence among the stakeholders.
Planning is an essential part of software engineering, and there are several guidelines
that can be followed to ensure that the software development process is well-organized
and efficient. Some of these guidelines include:
1. Define clear and measurable objectives: Clearly define the goals and objectives of
the software development project, and make sure that they are measurable and
achievable.
2. Understand the requirements: Gather and analyze the requirements for the software,
and ensure that they are complete, consistent, and unambiguous.
3. Create a project plan: Develop a detailed project plan that includes the schedule,
resources, and deliverables for the software development project.
4. Identify and manage risks: Identify and evaluate the risks that may impact the
software development project, and develop a plan to mitigate or manage them.
5. Define the software architecture: Define the overall structure and organization of
the software, and ensure that it meets the requirements and is consistent with the
project plan.
6. Establish a development process: Establish a development process that includes
methodologies, tools, and standards to be used throughout the software development
project.
7. Define the testing strategy: Define the testing strategy for the software development
project, including the types of tests to be performed and the schedule for testing.
8. Monitor and control progress: Monitor and control the progress of the software
development project, and take corrective action as needed to keep the project on
schedule and within budget.
9. Communicate effectively: Ensure effective communication among all members of
the development team and stakeholders.
By following these guidelines, software engineers can ensure that the software
development process is well organized and efficient, and that the project is delivered on
time and within budget.
It is important to note that these guidelines are not exhaustive and may vary based on the
size, complexity, and nature of the project. Planning in software engineering is an ongoing
process and should be flexible enough to adapt to the changes that are likely to happen
during the course of the project.
There are generally two planning guidelines that should be followed during development
of a project. These guidelines are given below:
• Advice for the default allocation of costs among all elements of the first-level WBS.
The table below shows default allocations for the budgeted costs of all first-level
WBS elements. Values may vary across projects, but the allocation provides a good
benchmark for evaluating a plan, provided the rationale for any deviations from
these guidelines is understood. Note that it is a cost allocation, not an effort
allocation.
First-Level WBS Element    Default Budget
Management                 10 %
Environment                10 %
Requirements               10 %
Design                     15 %
Implementation             25 %
Assessment                 25 %
Deployment                  5 %
Total                     100 %
• Advice for the allocation of effort and schedule across all phases of the life cycle.
The table below shows the default allocation of effort and schedule across the
life-cycle phases. Values may vary across projects, but the allocation provides an
average expectation across a spectrum of application domains.

            Inception   Elaboration   Construction   Transition
Effort         5 %         20 %          65 %          10 %
Schedule      10 %         30 %          50 %          10 %
These guidelines translate the broad framework into well-defined principles of
development, and they play a major role in achieving high-quality development of a
project. Project-independent planning advice carries risks, however: the guidelines may be
adopted blindly, without any adaptation to the circumstances of a specific project, or they
may be interpreted in the wrong way.
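As a concrete illustration, the default percentages in the first table above can be applied mechanically to a total budget. The $100,000 total is a made-up figure; the 10/10/10/15/25/25/5 split is taken directly from the table.

```python
# Apply the default first-level WBS cost allocation to a total budget.
# The split comes from the table above; the $100,000 total is illustrative.

WBS_ALLOCATION = {
    "Management": 0.10, "Environment": 0.10, "Requirements": 0.10,
    "Design": 0.15, "Implementation": 0.25, "Assessment": 0.25,
    "Deployment": 0.05,
}

def allocate_budget(total):
    """Return a per-element budget from the default cost allocation."""
    return {element: total * share for element, share in WBS_ALLOCATION.items()}

budget = allocate_budget(100_000)
print(budget["Implementation"])  # 25000.0
```

A real plan would then justify each deviation from these defaults rather than accept them as-is.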
COST AND SCHEDULE ESTIMATION
The cost and schedule estimation process helps in determining the resources needed to
complete all project activities. It generally involves approximating and developing costing
alternatives to plan, perform, and deliver the project. A good estimate is essential for
keeping a project under budget.
Two perspectives are generally required to derive project plans. These perspectives are given
below :
1. Forward-Looking:
• The forward-looking approach is also known as the top-down approach. It
generally starts by describing the project tasks: beginning with the project aim
or end deliverable and breaking it down into smaller planning chunks.
• Top-down budgeting also refers to a method of budgeting in which project
managers prepare a high-level budget for the organization.
• The project managers or senior management develop a characterization of the
overall size, process, environment, people, and quality that is essential for the
software project. In this approach, the duration of deliverables is estimated.
• It generally takes less time and effort than a bottom-up estimate. With the help
of a software cost estimation model, an estimate of the overall effort and
schedule is made. The project manager then divides the overall effort estimate
across the top level of the WBS (Work Breakdown Structure).
• The schedule is likewise divided into major milestone dates. At this stage,
subproject managers are given responsibility for decomposing each WBS
element into lower levels, using the top-level allocations, staffing profile, and
major milestone dates as constraints.
• The main benefit of this approach is the use of holistic data from earlier
projects or products, along with unmitigated risks and scope creep. This also
helps in reducing the risk of overlooked work activities or costs.
2. Backward-Looking:
• The backward-looking approach is also known as the bottom-up approach.
• In this approach, the project team breaks the client's requirements down to the
lowest level appropriate for developing a range of estimates, covering the
overall scope of the project based on the available task definitions.
• The elements of the lowest-level WBS are elaborated into detailed tasks, for
which each WBS element manager is responsible for estimating budget and
schedule.
• All of these estimates are then joined and integrated into higher-level WBS
budgets and milestones.
Milestone scheduling or budget allocation through the top-down approach tends to result
in a highly optimistic plan, whereas the bottom-up approach tends to result in a highly
pessimistic plan. Iteration is therefore essential, using the results of one approach to
validate and check the results of the other. Both approaches should be used together, in
balance, throughout the life cycle of the project.
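The iteration between the two perspectives can be sketched as a simple reconciliation pass: compare the (typically optimistic) top-down figures with the (typically pessimistic) bottom-up roll-up, and flag WBS elements whose gap exceeds a tolerance for renegotiation. All the effort figures and the 20% tolerance below are illustrative.

```python
# Reconcile a top-down allocation with a bottom-up roll-up of estimates,
# flagging WBS elements whose figures disagree by more than a tolerance.
# Effort values (staff-months) and the tolerance are illustrative.

def reconcile(top_down, bottom_up, tolerance=0.20):
    """Both arguments map WBS element -> effort (staff-months)."""
    flagged = {}
    for element, td in top_down.items():
        bu = bottom_up.get(element, 0.0)
        gap = abs(td - bu) / td       # relative disagreement
        if gap > tolerance:
            flagged[element] = (td, bu)
    return flagged

top_down = {"Design": 10.0, "Implementation": 25.0, "Assessment": 20.0}
bottom_up = {"Design": 11.0, "Implementation": 34.0, "Assessment": 21.0}
print(reconcile(top_down, bottom_up))  # {'Implementation': (25.0, 34.0)}
```

Flagged elements are exactly where the next planning iteration should focus: either the top-down allocation is too lean or the bottom-up tasks include avoidable work.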
Below is a diagram showing the planning balance throughout the life cycle.
The diagram introduces some of the important tools that are needed across the overall
software process and that correlate closely with the process framework. Each software
development tool maps closely to one of the process workflows, and each of these
workflows has a distinct need for automation support. Workflow automation generally
makes a complicated software process easier to manage. The environment necessary to
support the process framework is described here. Some of the concerns associated with
each workflow are given below:
1. Management: Nowadays, there are several opportunities available for automating
the project planning and control activities of the management workflow. Several
tools are useful for creating planning artifacts, such as software cost estimation tools
and Work Breakdown Structure (WBS) tools. Workflow management software is an
advanced platform that provides flexible tools to improve the way you work
efficiently. Automation support can also improve insight into metrics.
2. Environment: Automating the development process and developing an
infrastructure to support the different project workflows are essential activities of
the engineering stage of the life cycle. The environment that provides process
automation is a tangible artifact that is critical to the life cycle of the system being
developed. The top-level WBS recognizes the environment as a first-class
workflow. Integrating their own environment and infrastructure for software
development is one of the main tasks for most software organizations.
3. Requirements: Requirements management is a systematic approach to identifying,
documenting, organizing, and tracking the changing requirements of a system. It is
also responsible for establishing and maintaining agreement between the user or
customer and the project team on those changing requirements. If the process
demands strong traceability between requirements and design, the architecture is
likely to evolve in a way that optimizes requirements traceability rather than design
integrity. This effect is even more pronounced if tools are used for process
automation. Effective requirements management must include maintaining a clear
statement of the requirements, with attributes for each type of requirement and
traceability to other requirements and other project artifacts.
4. Design: A workflow design is a visual depiction of each step involved in a
workflow from start to end. It lays out every task sequentially and provides
complete clarity into how data moves from one task to the next. Workflow design
tools allow the different tasks to be depicted graphically, along with the performers,
timelines, data, and other aspects that are crucial to execution. Visual modeling is
the primary support required for the design workflow. A visual model is used for
capturing design models, representing them in a human-readable format, and
translating them into source code.
5. Implementation: The main purpose of the implementation workflow is to write and
initially test the software; it relies primarily on the programming environment
(editor, compiler, debugger, etc.). However, it should also include substantial
integration with change management tools, visual modeling tools, and test
automation tools, which is required to support productive iteration. Implementation
is the main focus of the Construction phase, and it simply means transforming a
design model into an executable one.
6. Assessment and Deployment: Workflow assessment is the initial step in identifying
outdated software processes and replacing them with more effective ones. It
generally combines domain expertise, qualitative and quantitative information
gathering, proprietary tools, and much more. It requires every tool discussed so far,
along with some additional capabilities to support test automation and test
management. Defect tracking is another tool that supports assessment.
More automation tools for building blocks in the software process are:
1. Version Control: Git is a popular tool for version control, an essential part of the
software development process. Git is a distributed version control system that
keeps track of changes, manages branches, and offers a history of code alterations
to enable collaborative development. Its automated features simplify the integration
of code and reduce the possibility of conflicts among team members.
2. Static analysis and code quality: SonarQube is an effective tool for ongoing code
quality inspection. It performs static code analysis, finding and highlighting errors,
security flaws, and code issues. Furthermore, by spotting and resolving violations
of coding standards and guidelines, linters like Pylint and ESLint are essential to
maintaining code quality.
3. Monitoring and Logging: Prometheus is quite good at gathering and storing metrics
from different systems, as well as monitoring and sending out alerts. For
consolidated logging and log analysis, the ELK Stack (Elasticsearch, Logstash,
Kibana) provides a comprehensive solution. By automating the procedures of
troubleshooting and monitoring, these technologies help ensure the dependability
and efficiency of software programs.
4. Containerization: By encapsulating applications and their dependencies, Docker
transforms software packaging. By simplifying the installation, scaling, and
management of containerized applications, Kubernetes complements Docker. When
combined, these solutions offer a scalable and consistent environment that makes
resource utilization effective and portable across a range of deployment scenarios.
QUALITY INDICATORS
Change traffic and stability:
The number of software change orders opened and closed over the life cycle is called
change traffic, and this metric measures that traffic over time. Stability specifies the
relationship between opened versus closed software change orders. The metric can be
collected by change type, by release, across all releases, by team, by component, by
subsystem, etc.
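Change traffic and stability can be computed directly from per-period counts of opened and closed software change orders (SCOs); a healthy project shows the two cumulative curves converging late in the life cycle. The counts in this sketch are illustrative.

```python
# Stability sketch: track cumulative opened vs. closed software change
# orders (SCOs) per reporting period. The per-period counts below are
# illustrative figures, not real project data.

def stability(opened_per_period, closed_per_period):
    """Return cumulative (opened, closed, open_backlog) per period."""
    series, total_opened, total_closed = [], 0, 0
    for o, c in zip(opened_per_period, closed_per_period):
        total_opened += o
        total_closed += c
        series.append((total_opened, total_closed, total_opened - total_closed))
    return series

opened = [5, 12, 20, 8, 3]
closed = [1, 6, 18, 14, 9]
print(stability(opened, closed)[-1])  # (48, 48, 0): all change traffic resolved
```

The third value of each tuple is the open backlog; a backlog that grows late in the schedule is the instability signal the figure above depicts.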
The below figure shows stability expectation over a healthy project’s life cycle
The format and content of any project panel are configurable to the software project
manager's preference for tracking metrics of top-level interest. The basic operation of a
Software Project Control Panel (SPCP) can be described by the following top-level use
case:
i. Start the SPCP
ii. Select a panel preference
iii. Select a value or graph metric
iv. Select to superimpose controls
v. Drill down to trend
vi. Drill down to point in time.
vii. Drill down to lower levels of information
viii. Drill down to lower levels of indicators.
PART – 3 TAILORING THE PROCESS
PROCESS DISCRIMINANTS
Tailoring the process in software process and project management refers to adapting a
standard or base process to fit the unique needs, constraints, and characteristics of a specific
project. No two software projects are identical; each project has distinct requirements, risk
levels, team compositions, and customer expectations. Tailoring ensures that the chosen
process is flexible, efficient, and aligned with the specific goals of the project, leading to
better outcomes.
Process discriminants are the factors or characteristics that help guide the tailoring of a
process. They help to "discriminate" or identify which aspects of the standard process need
adjustment and how to adjust them to suit the project's needs.
Key Concepts in Tailoring the Process and Process Discriminants
1. Understanding Process Discriminants:
o Process discriminants are essentially the criteria used to decide how to tailor a
software process.
o They help in identifying which elements of a project management approach
can stay rigid and which ones need to be flexible.
o Common process discriminants include project size, complexity, duration,
risk, regulatory requirements, team experience, and customer expectations.
2. Common Process Discriminants:
o Project Size: Smaller projects may not need as much formal documentation or
rigorous processes compared to larger, complex projects. Smaller teams can
often benefit from lightweight processes, while larger projects may need
structured processes to handle coordination and complexity.
o Complexity: Projects with highly complex systems, integrations, or
technologies may require detailed planning, more rigorous testing, and
advanced risk management.
o Risk Level: High-risk projects, such as those involving security or mission-
critical systems, often require a more structured approach with enhanced
quality control, risk management, and compliance measures. Lower-risk
projects may allow for more agility and fewer checkpoints.
o Customer Requirements: Projects with strict customer requirements or
specifications may demand more documentation, reviews, and approvals. In
contrast, projects where customers prioritize rapid delivery and flexibility may
allow more agile, iterative approaches.
o Regulatory Compliance: Projects in regulated industries (e.g., healthcare,
finance) often require adherence to specific standards, resulting in tailored
processes with extra steps for validation, documentation, and regulatory
audits.
o Team Size and Experience: Projects with larger, less experienced teams
might need more structured processes, defined roles, and oversight. Smaller,
experienced teams can often work effectively with a leaner approach.
3. Process Tailoring Steps:
o Assess Project Characteristics: Evaluate the unique aspects of the project,
such as risk level, complexity, duration, team composition, and customer
requirements.
o Select and Adapt Process Elements: Based on the assessment, select which
aspects of the standard process need adjusting. This might involve
customizing documentation requirements, changing the frequency of reviews,
or adjusting timelines for iterative releases.
o Incorporate Best Practices: Use organizational best practices to guide the
tailoring process. This ensures consistency with previous projects and allows
for continuous improvement.
o Document Tailoring Decisions: Record any adjustments made to the process
for transparency and future reference. This documentation is crucial for
understanding process impacts on project outcomes, helping teams improve
tailoring decisions over time.
o Review and Adjust as Needed: Process tailoring is an ongoing activity. As
the project progresses, team members should periodically review the tailored
process to ensure it remains effective, making further adjustments as the
project needs change.
4. Examples of Tailoring Based on Discriminants:
o For Small, Low-Risk Projects: A small, low-risk project with a short timeline
might follow a streamlined agile approach with minimal documentation,
lightweight sprint planning, and reduced formal testing steps.
o For Large, High-Complexity Projects: A large-scale project involving
multiple integrations might need a detailed work breakdown structure (WBS),
comprehensive testing strategies, multiple approval checkpoints, and clear
documentation to support coordination.
o For Regulated Projects: A project with strict regulatory compliance might
involve rigorous documentation, traceability matrices, compliance checks, and
detailed review processes to meet audit requirements.
5. Benefits of Tailoring:
o Efficiency: Tailoring reduces unnecessary overhead, ensuring that only
relevant processes are followed, which saves time and resources.
o Flexibility: Tailored processes are more adaptable to the needs of the project,
allowing for agility where needed and rigor where required.
o Better Risk Management: Tailoring allows teams to adjust their approach to
address project-specific risks, such as adding steps to mitigate high-risk areas.
o Higher Quality and Customer Satisfaction: By aligning processes with
customer requirements and expectations, tailoring can lead to higher quality
deliverables that better meet customer needs.
In essence, tailoring the process with process discriminants enables software teams to
create a customized framework that balances flexibility and control. It optimizes resource
usage, mitigates risk, and aligns development practices with the unique goals and constraints
of each project.
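The assessment and tailoring steps described above can be sketched as a rule table mapping discriminant values to process decisions. The thresholds and recommended adjustments below are illustrative judgment calls for the example, not a standard; a real organization would codify its own rules from its best practices.

```python
# Map project discriminants (size, risk, regulation, team experience)
# to tailoring decisions. Rules and wording are illustrative only.

def tailor_process(size, risk, regulated, team_experience):
    decisions = []
    if size == "small" and risk == "low":
        decisions.append("lightweight agile process, minimal documentation")
    else:
        decisions.append("structured process with formal checkpoints")
    if risk == "high":
        decisions.append("add risk-management reviews each iteration")
    if regulated:
        decisions.append("add traceability matrices and audit documentation")
    if team_experience == "low":
        decisions.append("add defined roles and closer oversight")
    return decisions

print(tailor_process(size="large", risk="high", regulated=True,
                     team_experience="high"))
```

Recording the inputs alongside the output of such a rule table also satisfies the "document tailoring decisions" step, since each decision is traceable to a discriminant.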
Organizational behavior examines and gathers insights on employee behavior, such as
how to motivate employees properly by understanding them a little better. Organizational
behavior should start with the role of the managers and how well they pass morale and
support down the hierarchy. Management is not just about gaining profits and exercising
control, but about creating a safe space for the interaction of different opinions, so that
people can work as a group and achieve organizational goals. As they say, there is no I in
Team: the organization that works together grows together.
It all comes down to the question of what role the manager should play, keeping in mind
what should be expected of him or her with respect to organizational behavior.
Role of Managers :
1. Interpersonal Role :
• Figure Head –
In this role, the manager performs duties of a ceremonial nature, such as attending an
employee's wedding, taking a customer to lunch, greeting touring dignitaries, and so on.
• Leader Role –
In this role, the manager is a leader, guiding the employees in the right path, with the
proper motivation and encouragement.
• Liaison Role –
In this role, the manager cultivates contacts outside the vertical chain of command to
collect useful information for the organization.
2. Informational Role :
• Monitor Role –
In this role, the manager acts as a monitor, perpetually scanning the environment for
information, keeping an eye on liaison contacts and subordinates, and receiving
unsolicited information.
• Disseminator Role –
In this role, manager acts as a disseminator by passing down privileged information to
the subordinates who would otherwise have no access to it.
• Spokesperson Role –
In this role, the manager acts as a spokesperson by representing the organization before
various outside groups that have some stake in the organization. These stakeholders can
be government officials, labour unions, financial institutions, suppliers, customers, etc.
They have a wide influence over the organization, so the manager should win their
support by effectively managing the social impact of the organization.
3. Decisional Role :
• Entrepreneurial role –
In this role, the manager acts as an entrepreneur, always thirsty for new knowledge
and innovation to improve the organization. Nowadays, it doesn’t matter if the
organization is bigger or better, but it is necessary that it grows consistently.
Innovation is creating new ideas which may either result in the development of new
products or services or improving upon the old ones. This makes innovation an
important function for a manager.
• Disturbance handler role –
In this role, the manager acts as a disturbance handler, working reactively like a
firefighter. The manager should come up with solutions to any problem that arises and
handle it in an orderly way.
• Resource allocator role –
In this role, the manager acts as a resource allocator, dividing work and delegating
authority among his or her subordinates. The manager should plan which subordinate
gets what, based on their abilities and on who is better suited to a particular task.
• Negotiator –
In this role, the manager acts as a negotiator where the manager at all levels has to
spend considerable time in negotiations. The president of a company may negotiate
with the union leaders about a new strike issue or the foreman may negotiate with the
workers about a grievance problem, etc.
Drawbacks of COCOMO:
1. It is hard to accurately estimate KDSI (thousands of delivered source instructions)
early in the project, when most effort estimates are required.
2. It is extremely vulnerable to mis-classification of the development mode.
3. Success depends largely on tuning the model to the needs of the organization, using
historical data, which is not always available.
Advantages:
1. COCOMO is transparent; it can be seen how it works.
2. Cost drivers are particularly helpful in understanding the impact of the different
factors that affect project costs.
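The drawbacks and advantages above refer to the COCOMO cost model. As a worked sketch, the Basic COCOMO equations are effort = a * KDSI^b person-months and development time = c * effort^d months, with constants depending on the development mode. The constants below are the published Basic COCOMO values; the 32-KDSI input is an illustrative figure.

```python
# Basic COCOMO: effort (person-months) = a * KDSI**b and development
# time (months) = c * effort**d, with constants per development mode.
# Constants (a, b, c, d) are the standard Basic COCOMO values.

MODES = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kdsi, mode="organic"):
    a, b, c, d = MODES[mode]
    effort = a * kdsi ** b          # person-months
    time = c * effort ** d          # months
    return effort, time

effort, time = basic_cocomo(32, mode="organic")
print(round(effort, 1), round(time, 1))
```

Note how the mode choice changes both constants, which is exactly why the model is "extremely vulnerable to mis-classification of the development mode" as listed in the drawbacks.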