Unit 2 - Software Project Planning
• What is a Project?
A project is a group of tasks that need to be completed to reach a clear result. A project
can also be defined as a set of inputs and outputs required to achieve a goal. Projects can
vary from simple to complex and can be operated by one person or a hundred.
Projects are usually described and approved by a project manager or team executive. For
good project development, some teams split the project into specific tasks so they can
manage responsibility and utilize team strengths.
In software project management, the client and the developers need to know
the scope, duration and cost of the project.
There are three needs for software project management. These are:
1. Time
2. Cost
3. Quality
It is an essential part of a software organization to deliver a quality product, keep the
cost within the client's budget and deliver the project as per schedule. There are various
factors, both external and internal, which may impact this triple constraint. Any one of
these three factors can severely affect the other two.
• Project Manager
A project manager is a person who has the overall responsibility for the
planning, design, execution, monitoring, controlling and closure of a project. A project
manager plays an essential role in the success of a project.
1. Leader:
A project manager must lead his team and provide them direction to make them
understand what is expected of them.
2. Medium:
The project manager is a medium between his clients and his team. He must coordinate and
transfer all the appropriate information from the clients to his team and report to senior
management.
3. Mentor:
He should be there to guide his team at each step and make sure the team stays
cohesive. He provides recommendations to his team and points them in the right direction.
Project Planning
Software project planning is a task performed before the
production of software actually starts. It exists to support software production but
involves no concrete activity that has any direct connection with software
production; rather, it is a set of multiple processes that facilitate software
production. Project planning may include the following:
✓ Scope Management
It defines the scope of the project; this includes all the activities and processes
that need to be done in order to make a deliverable software product. Scope management
is essential because it creates the boundaries of the project by clearly defining what would
be done in the project and what would not be done. This confines the project to
limited and quantifiable tasks, which can easily be documented, and in turn avoids cost
and time overruns.
✓ Project Estimation
For effective management, accurate estimation of various measures
is a must. With correct estimation, managers can manage and control the project more
efficiently and effectively.
✓ Project Estimation Techniques
The project manager can estimate the listed factors using two broadly recognized techniques:
o Decomposition Technique
• Line of Code Estimation is done on the basis of the number of lines of code in the software
product.
• Function Points Estimation is done on the basis of the number of function points in the
software product.
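To make the function-point idea concrete, below is a minimal Python sketch of an FP calculation. The component weights are the standard average-complexity weights; the counts and ratings in the example are invented for illustration.

# Unadjusted function points (UFP) from weighted component counts,
# adjusted by the value adjustment factor (VAF) computed from the
# fourteen general system characteristic (GSC) ratings (0..5 each).
AVERAGE_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def function_points(counts: dict[str, int], gsc_ratings: list[int]) -> float:
    ufp = sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())
    vaf = 0.65 + 0.01 * sum(gsc_ratings)  # standard VAF formula
    return ufp * vaf

# Illustrative counts: 10 inputs, 4 outputs, 5 inquiries, 3 internal
# files, 2 external interface files, all 14 GSCs rated 3 ("average").
print(function_points({"EI": 10, "EO": 4, "EQ": 5, "ILF": 3, "EIF": 2}, [3] * 14))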
o Empirical Estimation Technique
This technique uses empirically derived formulae to make estimations. These formulae are
based on LOC or FPs.
• Putnam Model
• COCOMO
➢ The COCOMO Model is a procedural cost estimation model for software projects and
is often used as a process of reliably predicting the various parameters associated
with making a project, such as size, effort, cost, time, and quality.
➢ It was proposed by Barry Boehm in 1981 and is based on the study of 63 projects,
which makes it one of the best-documented models.
➢ The key parameters that define the quality of any software product, which are also
an outcome of COCOMO, are primarily effort and schedule.
➢ In the COCOMO model, software projects are categorized into three types based on
their complexity, size, and the development environment. These types are:
1. Organic: A software project is said to be an organic type if the team size required is
adequately small, the problem is well understood and has been solved in the past, and
the team members have nominal experience with the problem.
2. Semi-detached: A software project is said to be a semi-detached type if the vital
characteristics such as team size, experience and knowledge of the various programming
environments lie in between organic and embedded. Semi-detached projects are
comparatively less familiar and more difficult to develop than organic ones.
3. Embedded: A software project requiring the highest level of complexity, creativity, and
experience falls under this category. Such software requires a larger team
size than the other two models, and the developers need to be sufficiently
experienced and creative to develop such complex models.
The general form of the Basic COCOMO equations is:

Effort: E = a(KLOC)^b person-months
Development time: D = c(E)^d months

For example, for a 400 KLOC product the effort equations for the three modes are:

Organic: E = 2.4(400)^1.05
Semi-detached: E = 3.0(400)^1.12
Embedded: E = 3.6(400)^1.20
• The above formulas are used for the cost estimation of the Basic COCOMO model and
are also used in the subsequent models.
• The constant values a, b, c, and d for the Basic Model for the different categories of
software projects are:

Software Projects     a     b      c     d
Organic               2.4   1.05   2.5   0.38
Semi-detached         3.0   1.12   2.5   0.35
Embedded              3.6   1.20   2.5   0.32
1. The effort is measured in person-months and, as evident from the formula, is dependent
on kilo-lines of code (KLOC). The development time is measured in months.
2. These formulas are used as such in the Basic Model calculations; since not much
consideration of different factors such as reliability and expertise is taken into account,
the estimate is rough.
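The following is a minimal Python sketch of the Basic COCOMO calculation using the constants from the table above; the 400 KLOC figure mirrors the worked example.

# Basic COCOMO: effort E = a * (KLOC)^b in person-months (PM),
# development time D = c * (E)^d in months.
CONSTANTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str) -> tuple[float, float]:
    """Return (effort in PM, development time in months)."""
    a, b, c, d = CONSTANTS[mode]
    effort = a * kloc ** b
    time = c * effort ** d
    return effort, time

for mode in CONSTANTS:
    effort, time = basic_cocomo(400, mode)
    print(f"{mode:13s}: E = {effort:8.1f} PM, D = {time:5.1f} months")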
The Intermediate COCOMO model refines the basic estimate using a set of 15 cost drivers
(attributes), grouped into four categories:
o Product attributes
• Required software reliability
• Size of the application database
• Complexity of the product
o Hardware attributes
• Run-time performance constraints
• Memory constraints
• The volatility of the virtual machine environment
• Required turnaround time
o Personnel attributes
• Analyst capability
• Software engineering capability
• Application experience
• Virtual machine experience
• Programming language experience
o Project attributes
• Use of software tools
• Application of software engineering methods
• Required development schedule
The Effort Adjustment Factor (EAF) is determined by multiplying the effort multipliers
associated with each of the 15 attributes.
The Effort Adjustment Factor (EAF) is employed to refine the estimate generated by the
basic COCOMO model using the following expression:

E = a(KLOC)^b × EAF person-months
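As a sketch of how the EAF enters the calculation, the snippet below multiplies the nominal effort by the product of the cost-driver multipliers. The multiplier values here are invented for illustration and are not the official COCOMO rating tables; for Intermediate COCOMO the a and b coefficients also differ slightly from the Basic Model.

from math import prod

# Intermediate COCOMO: E = a * (KLOC)^b * EAF, where EAF is the
# product of the effort multipliers for the 15 cost drivers.
def adjusted_effort(a: float, b: float, kloc: float,
                    multipliers: list[float]) -> float:
    eaf = prod(multipliers)  # Effort Adjustment Factor
    return a * kloc ** b * eaf

# Example: three drivers rated off-nominal (assumed values); drivers
# left at their nominal rating contribute a multiplier of 1.0.
print(adjusted_effort(3.2, 1.05, 50, [1.15, 0.88, 1.07]))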
Importance of the COCOMO Model:
1. Cost Estimation: To help with resource planning and project budgeting, COCOMO
offers a methodical approach to software development cost estimation.
2. Resource Management: By taking team experience, project size, and complexity into
account, the model helps with efficient resource allocation.
3. Project Planning: COCOMO assists in developing practical project plans that include
attainable objectives, due dates, and benchmarks.
4. Support for Decisions: During project planning, the model provides a quantitative
foundation for choices about scope, priorities, and resource allocation.
5. Resource Optimization: The model helps to maximize the use of resources, which
raises productivity and lowers costs.
Advantages of the COCOMO Model:
1. Systematic cost estimation: Provides a systematic way to estimate the cost and effort
of a software project.
2. Helps to estimate cost and effort: This can be used to estimate the cost and effort of a
software project at different stages of the development process.
3. Helps in high-impact factors: Helps in identifying the factors that have the greatest
impact on the cost and effort of a software project.
4. Helps to evaluate the feasibility of a project: This can be used to evaluate the
feasibility of a software project by estimating the cost and effort required to complete it.
Disadvantages of the COCOMO Model:
1. Assumes project size as the main factor: Assumes that the size of the software is the
main factor that determines the cost and effort of a software project, which may not
always be the case.
2. Does not count development team-specific characteristics: Does not take into
account the specific characteristics of the development team, which can have a
significant impact on the cost and effort of a software project.
3. Imprecise cost and effort estimates: It does not provide a precise estimate of the cost
and effort of a software project, as it is based on assumptions and averages.
Putnam Model
The Lawrence Putnam model describes the time and effort required to finish a
software project of a specified size. Putnam makes use of the so-called Norden/Rayleigh
curve to estimate project effort, schedule and defect rate.
Putnam noticed that software staffing profiles followed the well-known Rayleigh distribution.
Putnam used his observation about productivity levels to derive the software equation:

L = Ck (K)^(1/3) (td)^(4/3)
The various terms of this expression are as follows:
K is the total effort expended (in PM) in product development, and L is the product size
estimate in KLOC.
td corresponds to the time of system and integration testing; therefore, td can reasonably
be considered the time required to develop the product.
Ck is the state-of-technology constant and reflects constraints that impede the development
of the program.
The exact value of Ck for a specific task can be computed from the historical data of the
organization developing it.
Putnam proposed that the staff build-up on a project should follow the Rayleigh curve.
Only a small number of engineers are required at the beginning of a project to carry out
planning and specification tasks. As the project progresses and more detailed work becomes
necessary, the number of engineers reaches a peak. After implementation and unit testing,
the number of project staff falls.
Rearranging the software equation gives the effort implied by a given size and schedule:

K = L^3 / (Ck^3 td^4)

where K is the total effort expended (in PM) in the product development, and Ck is the
state-of-technology constant and reflects constraints that impede the progress of the
program.
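A minimal Python sketch of the rearranged software equation follows. The size, technology constant, and schedule values below are invented for illustration; in practice Ck must be calibrated from the organization's historical data, and the units follow that calibration.

# Putnam software equation, solved for effort:
#   K = L^3 / (Ck^3 * td^4)
def putnam_effort(size_kloc: float, ck: float, td: float) -> float:
    """Effort K implied by product size L (KLOC), technology
    constant Ck, and development time td."""
    return size_kloc ** 3 / (ck ** 3 * td ** 4)

# Illustrative numbers only.
print(putnam_effort(size_kloc=100, ck=2.0, td=3.0))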
Risk Management
Risk management involves identifying, analyzing, and responding to risks throughout the
life of a project. Its main activities are as follows:
1. Risk Identification:
Risk identification involves brainstorming activities as well as the
preparation of a risk list. Brainstorming is a group discussion technique where all the
stakeholders meet together. This technique produces new ideas and promotes creative
thinking. Preparation of a risk list involves identifying risks that have occurred
repeatedly in previous software projects.
• Calculate the risk exposure factor, which is the product of the values from Step 2 (the
probability of occurrence) and Step 3 (the impact of the risk)
• Prepare a table consisting of all the values and order the risks based on the risk
exposure factor
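A small Python sketch of this prioritization step is shown below; the risk names, probabilities, and impact values are invented for illustration.

# Risk exposure = probability of occurrence * impact (loss if it occurs).
risks = [
    ("High staff turnover",       0.30, 50_000),
    ("Late requirements changes", 0.20, 80_000),
    ("Key tool unavailable",      0.10, 20_000),
]

# Build the exposure table and order risks by exposure, highest first.
table = [(name, p, impact, p * impact) for name, p, impact in risks]
for name, p, impact, exposure in sorted(table, key=lambda row: row[3],
                                        reverse=True):
    print(f"{name:26s} P={p:.2f}  impact={impact:7,}  exposure={exposure:9,.0f}")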
4. Risk Monitoring:
In this technique, the risk is monitored continuously by re-evaluating the risks, the
impact of the risks, and the probability of occurrence of the risks. This ensures that the
risk list and the planned responses remain up to date as the project evolves.
RMMM
A risk management strategy is usually included in the software project plan. It
can be divided into the Risk Mitigation, Monitoring, and Management Plan (RMMM). In this
plan, all work performed as part of risk analysis is documented. The project manager
generally uses this RMMM plan as part of the overall project plan.
In some software teams, risk is documented with the help of a Risk Information Sheet
(RIS). The RIS is maintained using a database system for easier management of
information, i.e., creation, priority ordering, searching, and other analysis. After the
RMMM is documented and the project begins, the risk mitigation and monitoring steps start.
Risk Mitigation:
It is an activity used to avoid problems (risk avoidance) by reducing either the probability
of a risk or its impact before it occurs.
Risk Monitoring:
It is an activity used for project tracking. Its primary objectives are to assess whether
predicted risks do, in fact, occur; to ensure that the risk aversion steps defined for a risk
are being properly applied; and to collect information that can be used for future risk
analysis.
Risk Management and Planning:
It specifies the response that will be taken for each risk by a manager.
The main output of the risk management plan is the risk register, which describes and
focuses on the predicted threats to a software project.
✓ Example:
Let us understand RMMM with the help of an example of high staff turnover.
Risk Mitigation:
To mitigate this risk, project management must develop a strategy for reducing turnover. The
possible steps to be taken are:
• Meet the current staff to determine causes for turnover (e.g., poor working conditions,
low pay, competitive job market).
• Mitigate those causes that are under our control before the project starts.
• Once the project commences, assume turnover will occur and develop techniques to
ensure continuity when people leave.
• Organize project teams so that information about each development activity is widely
dispersed.
• Define documentation standards and establish mechanisms to ensure that documents
are developed in a timely manner.
• Assign a backup staff member for every critical technologist.
Risk Monitoring:
As the project proceeds, risk monitoring activities commence. The project manager monitors
factors that may provide an indication of whether the risk is becoming more or less likely. In
the case of high staff turnover, the following factors can be monitored:
• The general attitude of team members based on project pressures.
• Interpersonal relationships among team members.
• Potential problems with compensation and benefits.
• The availability of jobs within the company and outside it.
Risk Management:
Risk management and contingency planning assumes that mitigation efforts have failed and
that the risk has become a reality. Continuing the example, the project is well underway, and
a number of people announce that they will be leaving. If the mitigation strategy has been
followed, backup is available, information is documented, and knowledge has been dispersed
across the team. In addition, the project manager may temporarily refocus resources (and
readjust the project schedule) to those functions that are fully staffed, enabling newcomers
who must be added to the team to “get up to speed”.
Drawbacks of RMMM:

✓ Advantages of Project Scheduling Tools
• Time management: Project scheduling tools keep projects running the way they are
planned, ensuring proper time management and better scheduling of tasks.
• Team collaboration: The project scheduling tool improves team collaboration and
communication. It helps to make it easy to comment and chat within the platform
without relying on external software.
• User-friendly interface: Good project scheduling tools are designed to be more user-
friendly to enable teams to complete projects in a better and more efficient way.
• Defines work tasks: The project scheduling tool defines the work tasks of a project.
• Time and resource management: It helps to keep the project on track with respect to
the time and plan.
• Cost management: It helps in determining the cost of the project.
• Improved productivity: It enables greater productivity in teams as it helps in smarter
planning, better scheduling, and better task delegation.
• Increased efficiency: The project scheduling tool increases speed and efficiency in
project development.
✓ Criteria for Selecting Project Scheduling Tools
• Capability to handle multiple projects: The scheduling tool must handle multiple
projects at a time.
• User-friendly: It should be easy to use and must have a user-friendly interface.
• Budget friendly: The tool should be of low cost and should be within the
development budget.
• Security features: The tool must be secure and protected against vulnerabilities and
threats.
✓ Project Tracking
• Project tracking can be done manually or with software tools. Manual tracking
involves keeping track of tasks, deadlines, and other details in a spreadsheet or
document. Software tools provide more detailed tracking capabilities, such as task
management, resource allocation, and reporting.
✓ Module Coupling
A good design is one that has low coupling. Coupling is measured by the number of
relations between modules. That is, coupling increases as the number of calls between
modules increases or as the amount of shared data grows. Thus, it can be said that a design
with high coupling will have more errors.
Types of Module Coupling
1. No Direct Coupling: In this case, the modules are subordinate to different modules.
Therefore, there is no direct coupling.
2. Data Coupling: When data of one module is passed to another module, this is called data
coupling.
3. Stamp Coupling: Two modules are stamp coupled if they communicate using composite
data items such as structures, objects, etc. When a module passes a non-global data structure
or an entire structure to another module, they are said to be stamp coupled. For example,
passing a structure variable in C or an object in C++ to a module.
4. Control Coupling: Control coupling exists between two modules if data from one module
is used to direct the order of instruction execution in another.
5. External Coupling: External Coupling arises when two modules share an externally
imposed data format, communication protocols, or device interface. This is related to
communication to external tools and devices.
6. Common Coupling: Two modules are common coupled if they share information through
some global data items.
7. Content Coupling: Content coupling exists between two modules if they share code, e.g.,
a branch from one module into another module.
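To make a few of these types concrete, here are small illustrative Python sketches; the function and class names are invented for the example.

# Data coupling: only elementary data items are passed between modules.
def rectangle_area(width: float, height: float) -> float:
    return width * height

# Stamp coupling: a composite structure is passed, even though the
# callee only needs one of its fields.
from dataclasses import dataclass

@dataclass
class Order:
    order_id: int
    items: list
    shipping_address: str

def print_shipping_label(order: Order) -> None:
    print(order.shipping_address)  # uses just one field of the structure

# Control coupling: a flag from the caller directs the callee's logic.
def render_report(text: str, as_html: bool) -> str:
    return f"<p>{text}</p>" if as_html else text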
✓ Module Cohesion
In computer programming, cohesion refers to the degree to which the elements of a module
belong together. Thus, cohesion measures the strength of the relationships between pieces of
functionality within a given module. For example, in highly cohesive systems, functionality
is strongly related.
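As an illustration of the difference, compare a cohesive statistics module with a grab-bag utility class; both sketches use invented names.

# High cohesion: every function serves one purpose (basic statistics).
def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

def variance(xs: list[float]) -> float:
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Low cohesion: unrelated responsibilities lumped into one module.
class MiscUtils:
    def parse_date(self, text: str): ...
    def send_email(self, recipient: str, body: str): ...
    def compute_tax(self, amount: float): ...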
Coupling vs. Cohesion
• Coupling measures the interdependence between modules, whereas cohesion measures how
strongly the elements within a single module belong together.
• Coupling is an inter-module concept; cohesion is an intra-module concept.
• A good design aims for low coupling and high cohesion.
1) Separation of Concerns
Separation of concerns means dividing the software into distinct sections so that each
section addresses a separate concern, allowing modules to be designed, understood, and
changed independently.
2) Design Heuristics
Design heuristics are principles or guidelines that help inform design
decisions, fostering creativity and problem-solving in software design. Here’s a list of key
design heuristics to consider:
1. User-Centric Focus
• Prioritize User Needs: Understand the end-user's requirements and design with their
experience in mind.
2. Iterative Design
• Embrace Feedback: Continuously refine your design through user testing and
feedback loops.
3. Simplicity
• Reduce Complexity: Strive for clarity by removing unnecessary features and focusing
on essential functions.
4. Consistency
• Maintain Uniformity: Use consistent layouts, terminology, and behavior so users can
transfer what they learn across the application.
5. Feedback
• Provide Immediate Feedback: Ensure users receive prompt responses to their actions,
reinforcing understanding and confidence.
6. Error Prevention and Recovery
• Design for Mistakes: Anticipate user errors and provide helpful guidance for recovery
or prevention.
7. Accessibility
• Inclusive Design: Consider diverse user needs, ensuring your design is usable by
people with various abilities and disabilities.
8. Scalability
• Design for Growth: Ensure your system can handle increased load or functionality
without major redesign.
9. Flexibility and Customization
• Allow Personalization: Enable users to tailor the interface or features to suit their
preferences.
10. Visual Hierarchy
• Guide Attention: Use layout, color, and size to draw attention to the most important
elements.
11. Modularity
• Build Independent Parts: Structure the design into components that can be developed,
tested, and reused independently.
12. Minimalism
• Keep It Simple: Don’t add unnecessary complexity that doesn’t solve a specific
problem.
13. Affordance and Signifiers
• Make Actions Obvious: Use design elements that suggest their function, making it
clear how to interact with them.
14. Progressive Disclosure
• Show Information Gradually: Present information in layers, revealing details as
needed to prevent overwhelming the user.
15. Alignment with Business Goals
• Support Organizational Objectives: Ensure that the design aligns with the broader
business strategy and goals.
3) Design Documentation (SRS)
1. Introduction
1.1 Purpose
Describe the purpose of the SRS and its intended audience.
1.2 Scope
Outline the software product, its goals, and what it will and will not do.
1.3 Definitions, Acronyms, and Abbreviations
Define the terms, acronyms, and abbreviations used throughout the document.
1.4 References
List any related documents, such as regulatory standards or other SRS documents.
1.5 Overview
Summarize the structure of the document.
2. Overall Description
3. Specific Requirements
4. Use Cases
Provide detailed use case descriptions that demonstrate how users will interact with the
system. Each use case should include:
• The actors involved
• Preconditions
• The main flow of events
• Alternative flows and exceptions
• Postconditions
5. Acceptance Criteria
Outline the criteria that must be met for the software to be accepted by stakeholders. This
should align with both functional and non-functional requirements.
6. Appendices
Include any additional information that supports the SRS, such as:
• Diagrams
• Additional use cases
• User stories
• Related project documentation
Design Methods in Software Engineering
In software engineering, design methods help structure the development process and ensure
that software products meet user needs and technical requirements. Here are some key design
methods commonly used in this field:
1. Agile Methodology
• Iterate Frequently: Regularly review and refine designs based on user feedback and
testing.
• Collaborate: Foster communication between developers, designers, and stakeholders.
• Emphasize Documentation: Keep clear documentation of decisions, designs, and
processes.
1) Data Design
Data design is a crucial aspect of software engineering that focuses on how data is structured,
stored, and managed within a system. It involves creating a blueprint for data management
that ensures data integrity, efficiency, and usability. Here’s a detailed overview of data
design:
Key Concepts in Data Design:
1. Data Modeling
o Definition: The process of creating a visual representation of data and its
relationships.
o Types:
▪ Conceptual Data Model: High-level view of data entities and
relationships, often represented in an Entity-Relationship (ER)
diagram.
▪ Logical Data Model: More detailed view that defines data attributes,
types, and relationships without considering physical storage.
▪ Physical Data Model: Specific implementation details, including data
storage methods, indexing, and partitioning.
2. Normalization
o Purpose: To eliminate data redundancy and ensure data integrity by organizing
data into related tables.
o Forms:
▪ First Normal Form (1NF): Ensures atomicity of data.
▪ Second Normal Form (2NF): Eliminates partial dependencies.
▪ Third Normal Form (3NF): Removes transitive dependencies.
3. Denormalization
o Purpose: The process of combining tables to improve read performance at the
expense of some redundancy.
o When to Use: In systems where read operations are more frequent than write
operations.
4. Data Structures
o Definition: Ways to organize and store data efficiently.
o Common Types:
▪ Arrays and Lists: For sequential data.
▪ Trees: For hierarchical data (e.g., binary trees).
▪ Graphs: For representing networks and relationships.
▪ Hash Tables: For fast data retrieval based on keys.
5. Data Integrity
o Definition: Ensuring accuracy and consistency of data over its lifecycle.
o Types:
▪ Entity Integrity: Each row must have a unique identifier (primary key).
▪ Referential Integrity: Foreign keys must reference valid primary keys.
6. Data Access Methods
o Purpose: Define how data can be retrieved, updated, or deleted.
o Methods:
▪ SQL Queries: For relational databases.
▪ APIs: For accessing data in microservices or web applications.
▪ NoSQL Queries: For unstructured or semi-structured data storage.
7. Data Security
o Focus: Protecting data from unauthorized access and breaches.
o Techniques:
▪ Encryption: Securing data in transit and at rest.
▪ Access Control: Defining user roles and permissions.
8. Data Warehousing
o Definition: A system used for reporting and data analysis, integrating data
from multiple sources.
o Key Components: ETL (Extract, Transform, Load) processes, data lakes, and
OLAP (Online Analytical Processing).
9. Big Data Considerations
o Focus: Handling large volumes of diverse data efficiently.
o Technologies: Hadoop, Spark, NoSQL databases (e.g., MongoDB, Cassandra).
Best Practices for Data Design

2) Architectural Design
Architectural design defines the overall structure of the system. Its key elements include:
1. Architecture Patterns
o Layered Architecture: Organizes the system into layers (e.g., presentation,
business logic, data access) to separate concerns.
o Microservices Architecture: Breaks the application into small, independent
services that communicate over APIs, allowing for flexibility and scalability.
o Event-Driven Architecture: Uses events to trigger and communicate between
decoupled services, ideal for real-time applications.
o Client-Server Architecture: Divides the system into client (frontend) and
server (backend) components, often used in web applications.
2. Architectural Styles
o Monolithic Architecture: A single, unified application where all components
are interconnected.
o Service-Oriented Architecture (SOA): Similar to microservices but
emphasizes reusability and interoperability of services.
o Peer-to-Peer Architecture: Each node can act as both client and server,
promoting decentralized communication.
3. Design Principles
o Separation of Concerns: Dividing the system into distinct sections, each
addressing a specific concern.
o Single Responsibility Principle: Each module or class should have one
responsibility or reason to change.
o Modularity: Breaking down the system into smaller, manageable parts or
modules.
o Loose Coupling: Reducing dependencies between components to enhance
flexibility and maintainability.
4. Quality Attributes
o Performance: The system’s responsiveness and resource utilization.
o Scalability: The ability to handle increased loads by adding resources.
o Reliability: The system’s ability to perform its intended function consistently.
o Maintainability: How easily the system can be modified to correct faults,
improve performance, or adapt to changes.
5. Architectural Documentation
o Architecture Diagrams: Visual representations of the system's architecture,
including component interactions and data flow.
o Technical Specifications: Detailed documentation outlining the architecture’s
design decisions, rationale, and component descriptions.
Architectural Design Process
1. Requirements Analysis
o Gather functional and non-functional requirements from stakeholders to
understand the system's goals.
2. Define Architecture Goals
o Establish clear goals based on quality attributes (e.g., scalability, security) and
business needs.
3. Choose Architectural Style
o Select the most suitable architectural pattern or style based on requirements
and constraints.
4. Component Identification
o Identify major components or services, their responsibilities, and interactions.
5. Interface Design
o Define how components will communicate, including APIs, data formats, and
protocols.
6. Evaluate Trade-offs
o Assess the implications of design decisions on quality attributes, cost, and
complexity.
7. Documentation
o Create diagrams and documents to communicate the architecture to
stakeholders and development teams.
3) Interface Design
Interface design in software engineering is crucial for creating user-friendly and efficient
applications. Here are some key principles and considerations:
1. User-Centered Design
• Know Your Users: Design around the needs, goals, and context of the end users.
2. Consistency
• Visual Consistency: Use similar colors, fonts, and layouts across the application.
• Functional Consistency: Ensure similar actions have similar outcomes, aiding
predictability.
3. Usability
• Ease of Use: Make common tasks simple to learn and quick to perform.
4. Accessibility
• Inclusive Design: Ensure the interface is usable for people with disabilities (e.g.,
keyboard navigation, screen reader support).
• Color Contrast: Use high contrast to ensure readability.
5. Simplicity
• Reduce Clutter: Show only the elements the user needs for the task at hand.
6. Responsiveness
• Adaptive Layouts: Ensure the interface works well on various devices and screen
sizes.
• Performance: Optimize loading times and responsiveness.
7. Prototyping and Testing
• Continuous Feedback: Regularly gather user feedback to refine the interface over
time.
• Agile Methodology: Incorporate design iterations in the development process.
4) Procedural Design
1. Definition
• Procedural Design: This approach emphasizes the use of procedures (also called
routines, subroutines, or functions) to structure the logic of a program. It breaks down
complex tasks into smaller, manageable procedures.
2. Key Characteristics
• Top-Down Decomposition: The overall task is broken down step by step into smaller
procedures that call one another, with data passed explicitly between them.
3. Design Process
• Define the Problem: Clearly understand and articulate the problem that needs solving.
• Identify Procedures: Break down the solution into distinct procedures that handle
specific tasks.
• Determine Data Flow: Outline how data will be passed between procedures.
• Design Interface: Specify how procedures interact, including input and output
parameters.
• Implement and Test: Write code for each procedure and test them individually, then
integrate and test the whole system.
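A minimal Python sketch of this process is shown below; the file name and procedure names are invented for the example.

# Procedural decomposition of the task "report the average score":
# each procedure handles one step, with data passed explicitly.
def read_scores(path: str) -> list[float]:
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

def average(scores: list[float]) -> float:
    return sum(scores) / len(scores)

def report(avg: float) -> None:
    print(f"Average score: {avg:.2f}")

def main(path: str) -> None:
    scores = read_scores(path)   # data flows from one procedure...
    report(average(scores))      # ...to the next

if __name__ == "__main__":
    main("scores.txt")           # hypothetical input file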
4. Advantages
• Simplicity: Procedures map directly onto the steps of a task, making programs easier
to follow, test, and debug.
6. Best Practices
• Clear Naming Conventions: Use descriptive names for procedures to convey their
purpose.
• Limit Side Effects: Procedures should avoid altering global state unless necessary,
promoting predictability.
• Documentation: Comment procedures to explain their functionality and usage.
7. Comparison with Other Paradigms