Software Engineering
An extract from https://www.geeksforgeeks.org/software-engineering/#sdlc
Contents
Software Engineering
Software Engineering | Introduction to Software Engineering
Software Engineering | Classification of Software
Software Engineering | Classical Waterfall Model
Software Engineering | Iterative Waterfall Model
Software Engineering | Spiral Model
Software Engineering | Incremental process model
Software Engineering | Rapid application development model (RAD)
Software Engineering | RAD Model vs Traditional SDLC
Software Engineering | Agile Development Models
Software Engineering | Agile Software Development
Software Engineering | Extreme Programming (XP)
Software Engineering | SDLC V-Model
Software Engineering | Comparison of different life cycle models
Software Engineering | User Interface Design
Software Engineering | Coupling and Cohesion
Software Engineering | Information System Life Cycle
Software Engineering | Database application system life cycle
Software Engineering | Pham-Nordmann-Zhang Model (PNZ model)
Software Engineering | Schick-Wolverton software reliability model
Software Project Management (SPM):
Software Engineering | Project Management Process
Software Engineering | Project size estimation techniques
Software Engineering | System configuration management
Software Engineering | COCOMO Model
Software Engineering | Capability maturity model (CMM)
Software Engineering | Integrating Risk Management in SDLC | Set 1
Software Engineering | Integrating Risk Management in SDLC | Set 2
Software Engineering | Integrating Risk Management in SDLC | Set 3
Software Engineering | Role and Responsibilities of a Software Project Manager
Software Engineering | Software Project Management Complexities
Software Engineering | Quasi renewal processes
Software Engineering | Reliability Growth Models
Software Engineering | Jelinski Moranda software reliability model
Software Engineering | Goel-Okumoto Model
Mills' Error Seeding Model
Software Engineering | Basic fault tolerant software techniques
Software Engineering | Software Maintenance
Software Requirements
Software Engineering | Requirements Engineering Process
Software Engineering | Classification of Software Requirements
How to write a good SRS for your Project
Software Engineering | Quality Characteristics of a good SRS
Software Engineering | Requirements Elicitation
Software Engineering | Challenges in eliciting requirements
Software Testing and Debugging:
Software Engineering | Seven Principles of software testing
Software Engineering | Testing Guidelines
Software Engineering | Types of Software Testing
   Introduction
   Principles of Testing
   Types of Testing
Software Engineering | Black box testing
Software Engineering | White box Testing
Software Engineering | Debugging
Software Engineering | Selenium: An Automation tool
Software Engineering | Integration Testing
Software Engineering | Introduction to Software Engineering
Software is a program or set of programs containing instructions that provide desired functionality, and engineering is the process of designing and building something that serves a particular purpose and finds a cost-effective solution to problems.
Characteristics of a good software product:
1. Maintainability –
It should be feasible for the software to evolve to meet changing requirements.
2. Correctness –
A software product is correct, if the different requirements as specified in the SRS document
have been correctly implemented.
3. Reusability –
A software product has good reusability, if the different modules of the product can easily be
reused to develop new products.
4. Testability –
The software should facilitate both the establishment of test criteria and the evaluation of the software with respect to those criteria.
5. Reliability –
It is an attribute of software quality: the extent to which a program can be expected to perform its desired function over a given period of time.
6. Portability –
In this case, software can be transferred from one computer system or environment to
another.
7. Adaptability –
In this case, software allows differing system constraints and user needs to be satisfied by
making changes to the software.
Program vs Software Product:
1. A program is a set of related instructions, whereas a software product is a collection of programs designed for a specific task.
2. Programs are usually small in size, whereas software products are usually large.
3. Programs are developed by individuals, whereas software products are developed by large teams.
4. A program has no documentation, or lacks proper documentation. A software product comes with proper documentation, including a user manual.
5. Development of a program is unplanned and unsystematic, whereas development of a software product follows a systematic, organised and planned approach.
6. Programs provide limited functionality and fewer features, whereas software products provide more functionality, options and features, as they are larger in size (lines of code).
Software Engineering | Classification of Software
1. System Software –
System Software is necessary to manage the computer resources and support the execution of
application programs. Software like operating systems, compilers, editors and drivers etc.,
come under this category. A computer cannot function without the presence of these.
Operating systems are needed to link the machine dependent needs of a program with the
capabilities of the machine on which it runs. Compilers translate programs from high-level
language to machine language.
2. Networking and Web Applications Software –
Networking Software provides the required support necessary for computers to interact with
each other and with data storage facilities. The networking software is also used when
software is running on a network of computers (such as World Wide Web). It includes all
network management software, server software, security and encryption software and
software to develop web-based applications like HTML, PHP, XML, etc.
3. Embedded Software –
This type of software is embedded into the hardware normally in the Read Only Memory
(ROM) as a part of a large system and is used to support certain functionality under the
control conditions. Examples are software used in instrumentation and control applications, such as washing machines, satellites, microwaves, etc.
4. Reservation Software –
A Reservation system is primarily used to store and retrieve information and perform
transactions related to air travel, car rental, hotels, or other activities. They also provide
access to bus and railway reservations, although these are not always integrated with the main
system. These are also used to relay computerized information for users in the hotel industry,
making a reservation and ensuring that the hotel is not overbooked.
5. Business Software –
This category of software is used to support the business applications and is the most widely
used category of software. Examples are software for inventory management, accounts,
banking, hospitals, schools, stock markets, etc.
6. Entertainment Software –
Education and entertainment software provides a powerful tool for educational agencies,
especially those that deal with educating young children. There is a wide range of
entertainment software such as computer games, educational games, translation software,
mapping software, etc.
7. Artificial Intelligence Software –
Software like expert systems, decision support systems, pattern recognition software,
artificial neural networks, etc. come under this category. They address complex problems that cannot be solved by straightforward computation and typically rely on non-numerical algorithms.
8. Scientific Software –
Scientific and engineering software satisfies the needs of a scientific or engineering user to
perform enterprise specific tasks. Such software is written for specific applications using
principles, techniques and formulae specific to that field. Examples are software like
MATLAB, AUTOCAD, PSPICE, ORCAD, etc.
9. Utilities Software –
The programs coming under this category perform specific tasks and are different from other
software in terms of size, cost and complexity. Examples are anti-virus software, voice
recognition software, compression programs, etc.
10. Document Management Software –
A Document Management Software is used to track, manage and store documents in order to
reduce the paperwork. Such systems are capable of keeping a record of the various versions
created and modified by different users (history tracking). They commonly provide storage,
versioning, metadata, security, as well as indexing and retrieval capabilities.
Software products can also be classified on the basis of copyright and licensing:
1. Commercial –
It represents the majority of software which we purchase from software companies,
commercial computer stores, etc. In this case, when a user buys a software, they acquire a
license key to use it. Users are not allowed to make the copies of the software. The copyright
of the program is owned by the company.
2. Shareware –
Shareware software is also covered under copyright but the purchasers are allowed to make
and distribute copies with the condition that after testing the software, if the purchaser adopts
it for use, then they must pay for it.
In both of the above types of software, changes to software are not allowed.
3. Freeware –
In general, according to freeware software licenses, copies of the software can be made both
for archival and distribution purposes but here, distribution cannot be for making a profit.
Derivative works and modifications to the software are allowed and encouraged.
Decompiling of the program code is also allowed without the explicit permission of the
copyright holder.
4. Public Domain –
In case of public domain software, the original copyright holder explicitly relinquishes all
rights to the software. Hence software copies can be made both for archival and distribution
purposes with no restrictions on distribution. Modifications to the software and reverse
engineering are also allowed.
Software Engineering | Classical Waterfall Model
1. Feasibility Study: The main goal of this phase is to determine whether it would be
financially and technically feasible to develop the software.
The feasibility study involves understanding the problem and then determining the various possible strategies to solve it. These identified solutions are analyzed based on their benefits and drawbacks; the best solution is chosen, and all the other phases are carried out as per this solution strategy.
2. Requirements analysis and specification: The aim of the requirement analysis and
specification phase is to understand the exact requirements of the customer and document
them properly. This phase consists of two different activities.
o Requirement gathering and analysis: Firstly all the requirements regarding the
software are gathered from the customer and then the gathered requirements are
analyzed. The goal of the analysis part is to remove incompleteness (an incomplete
requirement is one in which some parts of the actual requirements have been omitted)
and inconsistencies (inconsistent requirement is one in which some part of the
requirement contradicts with some other part).
o Requirement specification: These analyzed requirements are documented in a
software requirement specification (SRS) document. The SRS document serves as a
contract between the development team and the customer. Any future dispute between the
customer and the developers can be settled by examining the SRS document.
3. Design: The aim of the design phase is to transform the requirements specified in the SRS
document into a structure that is suitable for implementation in some programming language.
4. Coding and Unit testing: In the coding phase, the software design is translated into source code using a suitable programming language; thus, each designed module is coded. The aim of the unit testing phase is to check whether each module is working properly or not (a minimal unit-test sketch in Python appears after this list).
5. Integration and System testing: Integration of the different modules is undertaken soon after they have been coded and unit tested. Integration of the various modules is carried out incrementally over a number of steps. During each integration step, previously planned modules are added to the partially integrated system, and the resultant system is tested. Finally, after all the modules have been successfully integrated and tested, the full working system is obtained, and system testing is carried out on it.
System testing consists of three different kinds of testing activities, as described below:
o Alpha testing: Alpha testing is the system testing performed by the development
team.
o Beta testing: Beta testing is the system testing performed by a friendly set of
customers.
o Acceptance testing: After the software has been delivered, the customer performs
acceptance testing to determine whether to accept the delivered software or to
reject it.
6. Maintenance: Maintenance is the most important phase of the software life cycle. The effort spent on maintenance is around 60% of the total effort spent to develop the full software. There are basically three types of maintenance:
o Corrective Maintenance: This type of maintenance is carried out to correct errors
that were not discovered during the product development phase.
o Perfective Maintenance: This type of maintenance is carried out to enhance the
functionalities of the system based on the customer’s request.
o Adaptive Maintenance: Adaptive maintenance is usually required for porting the
software to work in a new environment such as work on a new computer platform or
with a new operating system.
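To make the coding and unit-testing phase above concrete, here is a minimal hypothetical sketch (not from the original article) of one designed module and its unit test, using Python's standard unittest module; the function name and behaviour are invented for illustration.

import unittest

# A hypothetical module under test: one designed unit of the system.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class TestApplyDiscount(unittest.TestCase):
    # Unit tests check that this one module works properly in isolation.
    def test_typical_discount(self):
        self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertAlmostEqual(apply_discount(99.0, 0), 99.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()

In this scheme, each module produced in the design phase gets its own test class, and a module is not handed over to integration until its tests pass.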
The classical waterfall model is an idealistic model for software development. It is very simple, so it can be considered the basis for other software development life cycle models. However, the classical waterfall model suffers from various shortcomings; we can hardly use it in real projects, although the other software development life cycle models that we do use are based on it. Below are some major drawbacks of this model:
No feedback path: In the classical waterfall model, the evolution of software from one phase to another is like a waterfall. It assumes that no error is ever committed by developers during any phase; therefore, it does not incorporate any mechanism for error correction.
Difficult to accommodate change requests: This model assumes that all the customer requirements can be completely and correctly defined at the beginning of the project, but actually customers' requirements keep changing with time. It is difficult to accommodate any change requests after the requirements specification phase is complete.
No overlapping of phases: This model recommends that a new phase start only after the completion of the previous phase. But in real projects this can't be maintained; to increase efficiency and reduce cost, phases may overlap.
Software Engineering | Iterative Waterfall Model
The iterative waterfall model provides feedback paths from every phase to its preceding phases, which is the main difference from the classical waterfall model.
The feedback paths introduced by the iterative waterfall model are shown in the figure below.
When errors are detected at some later phase, these feedback paths allow correcting errors committed during some earlier phase, and these changes are reflected in the later phases. The feedback paths allow the phase in which the errors were committed to be reworked. However, there is no feedback path to the feasibility study stage, because once a project has been taken up, the organization does not give it up easily.
It is good to detect errors in the same phase in which they are committed, as this reduces the effort and time required to correct them.
Phase Containment of Errors: The principle of detecting errors as close to their points of
commitment as possible is known as Phase containment of errors.
Feedback Path: In the classical waterfall model there are no feedback paths, so there is no mechanism for error correction. But in the iterative waterfall model, the feedback path from each phase to its preceding phase allows correcting the errors that are committed, and these changes are reflected in the later phases.
Simple: The iterative waterfall model is very simple to understand and use. That's why it is one of the most widely used software development models.
Difficult to incorporate change requests: The major drawback of the iterative waterfall model is that all the requirements must be clearly stated before the development phase starts. The customer may change the requirements after some time, but this model leaves no scope to incorporate change requests made after the development phase starts.
Incremental delivery not supported: In the iterative waterfall model, the full software is completely developed and tested before delivery to the customer. There is no scope for any intermediate delivery, so customers have to wait a long time to get the software.
Overlapping of phases not supported: The iterative waterfall model assumes that one phase starts only after completion of the previous phase, but in real projects phases may overlap to reduce the effort and time needed to complete the project.
Risk handling not supported: Projects may suffer from various types of risks, but the iterative waterfall model has no mechanism for risk handling.
Limited customer interactions: Customer interaction occurs only at the start of the project, at the time of requirement gathering, and at project completion, at the time of software delivery. These few interactions with the customers may lead to many problems, as the finally developed software may differ from the customers' actual requirements.
Software Engineering | Spiral Model
The radius of the spiral at any point represents the expenses (cost) of the project so far, and the angular dimension represents the progress made so far in the current phase.
Each phase of the Spiral Model is divided into four quadrants: determining objectives and identifying alternative solutions, identifying and resolving risks, developing and testing the next version of the product, and reviewing the results and planning the next phase.
Risk Handling in Spiral Model
A risk is any adverse situation that might affect the successful completion of a software project. The most important feature of the spiral model is handling these unknown risks after the project has started. Such risk resolutions are easier done by developing a prototype, and the spiral model supports coping with risks by providing the scope to build a prototype at every phase of the software development.
The Prototyping Model also supports risk handling, but there the risks must be identified completely before the start of the development work of the project. In real life, however, project risks may occur after the development work starts, and in that case the Prototyping Model cannot be used. In each phase of the Spiral Model, the features of the product are elaborated and analyzed, and the risks at that point in time are identified and resolved through prototyping. Thus, this model is much more flexible compared to other SDLC models.
The Spiral model is called a Meta-Model because it subsumes all the other SDLC models. For
example, a single loop spiral actually represents the Iterative Waterfall Model. The spiral model
incorporates the stepwise approach of the Classical Waterfall Model. The spiral model uses the
approach of Prototyping Model by building a prototype at the start of each phase as a risk handling
technique. Also, the spiral model can be considered to support the evolutionary model – the
iterations along the spiral can be considered as evolutionary levels through which the complete
system is built.
Advantages of Spiral Model: Below are some of the advantages of the Spiral Model.
Risk Handling: For projects with many unknown risks that surface as the development proceeds, the Spiral Model is the best development model to follow, due to the risk analysis and risk handling done at every phase.
Good for large projects: It is recommended to use the Spiral Model in large and complex projects.
Flexibility in Requirements: Change requests in the requirements at a later phase can be incorporated accurately by using this model.
Customer Satisfaction: Customers can see the development of the product at the early phases of the software development and thus become habituated to the system by using it before completion of the total product.
Disadvantages of Spiral Model: Below are some of the main disadvantages of the spiral model.
Complex: The Spiral Model is much more complex than other SDLC models.
Expensive: Spiral Model is not suitable for small projects as it is expensive.
Too much dependence on Risk Analysis: The successful completion of the project depends heavily on risk analysis. Without highly experienced experts, a project developed using this model is likely to fail.
Difficulty in time management: As the number of phases is unknown at the start of the project, time estimation is very difficult.
Software Engineering | Incremental process model
The incremental process model is also known as the Successive Versions model.
First, a simple working system implementing only a few basic features is built and delivered to the customer. Thereafter, many successive iterations/versions are implemented and delivered to the customer until the desired system is realized.
A, B and C are modules of the software product that are incrementally developed and delivered.
Once the core features are fully developed, they are refined to increase the level of capability by adding new functions in successive versions. Each incremental version is usually developed using an iterative waterfall model of development.
As each successive version of the software is constructed and delivered, the feedback of the customer is taken, and it is incorporated into the next version. Each version of the software has additional features over the previous ones.
After requirements gathering and specification, the requirements are split into several versions. Starting with version 1, in each successive increment the next version is constructed and deployed at the customer site. After the last version (version n), the complete software is deployed at the client site.
1. Staged Delivery Model – Construction of only one part of the project at a time.
2. Parallel Development Model – Different subsystems are developed at the same time. This can decrease the calendar time needed for the development, i.e. TTM (Time to Market), if enough resources are available.
When to use this –
1. When there is a funding schedule, risk, program complexity, or a need for early realization of benefits.
2. When requirements are known up-front.
3. For projects having lengthy development schedules.
4. For projects with new technology.
Advantages –
o Error Reduction (core modules are used by the customer from the beginning of the
phase and then these are tested thoroughly)
o Uses divide and conquer for breakdown of tasks.
o Lowers initial delivery cost.
o Incremental Resource Deployment.
Software Engineering | Rapid application development model (RAD)
A software project can be implemented using this model if the project can be broken down into small modules, wherein each module can be assigned independently to separate teams. These modules can finally be combined to form the final product.
Development of each module involves the various basic steps as in the waterfall model, i.e. analyzing, designing, coding and then testing, etc., as shown in the figure.
Another striking feature of this model is its short time span, i.e. the time frame for delivery (time-box) is generally 60-90 days.
The use of powerful developer tools such as JAVA, C++, Visual BASIC, XML, etc. is also an
integral part of the projects.
1. Requirements Planning –
It involves the use of various techniques used in requirements elicitation like brainstorming,
task analysis, form analysis, user scenarios, FAST (Facilitated Application Specification Technique), etc. It also consists of the entire structured plan describing the critical data, the methods to obtain it, and the processing of it to form the final refined model.
2. User Description –
This phase consists of taking user feedback and building the prototype using developer tools.
In other words, it includes re-examination and validation of the data collected in the first
phase. The dataset attributes are also identified and elucidated in this phase.
3. Construction –
In this phase, refinement of the prototype and delivery takes place. It includes the actual use
of powerful automated tools to transform process and data models into the final working
product. All the required modifications and enhancements are also done in this phase.
4. Cutover –
All the interfaces between the independent modules developed by separate teams have to be
tested properly. The use of powerfully automated tools and subparts makes testing easier.
This is followed by acceptance testing by the user.
The process involves building a rapid prototype, delivering it to the customer, and then taking feedback. After validation by the customer, the SRS document is developed and the design is finalised.
Advantages –
Use of reusable components helps to reduce the cycle time of the project.
Feedback from the customer is available at initial stages.
Reduced costs as fewer developers are required.
Use of powerful development tools results in better quality products in comparatively shorter
time spans.
The progress and development of the project can be measured through the various stages.
It is easier to accommodate changing requirements due to the short iteration time spans.
Disadvantages –
The use of powerful and efficient tools requires highly skilled professionals.
The absence of reusable components can lead to failure of the project.
The team leader must work closely with the developers and customers to close the project in
time.
The systems which cannot be modularized suitably cannot use this model.
Customer involvement is required throughout the life cycle.
It is not meant for small scale projects as for such cases, the cost of using automated tools and
techniques may exceed the entire budget of the project.
Applications –
1. This model should be used for a system with known requirements and requiring short
development time.
2. It is also suitable for projects where requirements can be modularized and reusable
components are also available for development.
3. The model can also be used when already existing system components can be used in
developing a new system with minimum changes.
4. This model can only be used if the teams consist of domain experts. This is because relevant
knowledge and ability to use powerful techniques is a necessity.
5. The model should be chosen when the budget permits the use of automated tools and
techniques required.
Software Engineering | RAD Model vs Traditional SDLC
RAD Model: Different stages of application development can be reviewed and repeated, as the approach is iterative.
Traditional SDLC: Follows a predictive, inflexible and rigid approach to application development.

RAD Model: Generally preferred for projects with shorter time durations and budgets large enough to afford the use of automated tools and techniques.
Traditional SDLC: Used for projects with longer development schedules and where budgets do not allow the use of expensive and powerful tools.

RAD Model: Use of reusable components helps to reduce the cycle time of the project.
Traditional SDLC: The use of powerful and efficient tools requires highly skilled professionals.
Software Engineering | Agile Development Models
In earlier days, the Iterative Waterfall model was very popular for completing a project. But nowadays, developers face various problems while using it to develop software. The main difficulties include handling change requests from customers during project development and the high cost and time required to incorporate these changes. To overcome these drawbacks of the Waterfall model, the Agile Software Development model was proposed in the mid-1990s.
The Agile model was primarily designed to help a project to adapt to change requests quickly. So,
the main aim of the Agile model is to facilitate quick project completion. To accomplish this task
agility is required. Agility is achieved by fitting the process to the project, removing activities that
may not be essential for a specific project. Also, anything that is a waste of time and effort is avoided.
Actually, the Agile model refers to a group of development processes. These processes share some basic characteristics but have certain subtle differences among themselves. A few Agile SDLC models are given below:
Crystal
Atern
Feature-driven development
Scrum
Extreme programming (XP)
Lean development
Unified process
In the Agile model, the requirements are decomposed into many small parts that can be
incrementally developed. The Agile model adopts Iterative development. Each incremental part is
developed over an iteration. Each iteration is intended to be small and easily manageable, so that it can be completed within a couple of weeks. One iteration at a time is planned, developed and deployed to the customers. Long-term plans are not made.
The Agile model is a combination of iterative and incremental process models. The steps involved in Agile SDLC models are:
Requirement gathering
Requirement Analysis
Design
Coding
Unit testing
Acceptance testing
The time taken to complete an iteration is known as a Time Box. A time-box refers to the maximum amount of time needed to deliver an iteration to customers, so the end date of an iteration does not change. The development team can, however, decide to reduce the delivered functionality during a time-box if necessary, in order to deliver on time. The central principle of the Agile model is the delivery of an increment to the customer after each time-box; the sketch below illustrates this scope-reduction idea.
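As an illustration of the scope-reduction idea, the sketch below (a hypothetical Python example, not part of the original article; the function and backlog are invented) greedily fits the highest-priority features into a fixed time-box and defers the rest to the next iteration.

def plan_time_box(features, capacity_days):
    """Select features for a fixed-length iteration.

    features: list of (name, priority, estimate_days) tuples, where a
    lower priority number means more important. Returns (selected,
    deferred). The end date never moves; only the delivered
    functionality shrinks when the time-box is too small.
    """
    selected, deferred = [], []
    remaining = capacity_days
    for name, priority, estimate in sorted(features, key=lambda f: f[1]):
        if estimate <= remaining:
            selected.append(name)
            remaining -= estimate
        else:
            deferred.append(name)
    return selected, deferred

# A hypothetical backlog for a two-week (10 working day) time-box.
backlog = [("login", 1, 4), ("search", 2, 5), ("reports", 3, 6), ("export", 4, 2)]
ship_now, ship_later = plan_time_box(backlog, capacity_days=10)
print(ship_now)    # ['login', 'search']
print(ship_later)  # ['reports', 'export']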
Principles of Agile model:
To establish close contact with the customer during development and to gain a clear understanding of the various requirements, each Agile project usually includes a customer representative on the team. At the end of each iteration, stakeholders and the customer representative review the progress made and re-evaluate the requirements.
Agile model relies on working software deployment rather than comprehensive
documentation.
Frequent delivery of incremental versions of the software to the customer representative in intervals of a few weeks.
Requirement change requests from the customer are encouraged and efficiently incorporated.
It emphasizes having efficient team members, and enhancing communication among them is given more importance. It is realized that enhanced communication among the development team members can be achieved through face-to-face communication rather than through the exchange of formal documents.
It is recommended that the development team size be kept small (5 to 9 people) to help the team members meaningfully engage in face-to-face communication and have a collaborative work environment.
Agile development processes usually deploy Pair Programming. In pair programming, two programmers work together at one workstation. One does the coding while the other reviews the code as it is typed in. The two programmers switch their roles every hour or so.
Advantages:
Working in pair programming produces well-written, compact programs which have fewer errors compared to programmers working alone.
It reduces the total development time of the whole project.
The customer representative gets an idea of the updated software product after each iteration. So, it is easy for him to change any requirement if needed.
Disadvantages:
Due to the lack of formal documents, confusion can arise, and important decisions taken during different phases can be misinterpreted at any time by different team members.
Due to the absence of proper documentation, when the project is completed and the developers are assigned to another project, maintenance of the developed project can become a problem.
Software Engineering | Agile Software Development
Why Agile?
Technology in this current era is progressing faster than ever, forcing global software companies to work in a fast-paced, changing environment. Because these businesses are operating in
an ever-changing environment, it is impossible to gather a complete and exhaustive set of software
requirements. Without these requirements, it becomes practically hard for any conventional software
model to work.
Conventional software models such as the Waterfall Model, which depend on completely specifying the requirements, designing, and testing the system, are not geared towards rapid software development. As a consequence, a conventional software development model often fails to deliver the required product.
This is where agile software development comes to the rescue. It was specially designed to cater to the needs of a rapidly changing environment by embracing the idea of incremental development and developing the actual final product incrementally.
Let's now read about the principles on which Agile has laid its foundation:
Principles:
1. Highest priority is to satisfy the customer through early and continuous delivery of valuable
software.
2. It welcomes changing requirements, even late in development.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with a
preference to the shortest timescale.
4. Build projects around motivated individuals. Give them the environment and the support they
need, and trust them to get the job done.
5. Working software is the primary measure of progress.
6. Simplicity, the art of maximizing the amount of work not done, is essential.
7. The most efficient and effective method of conveying information to and within a
development team is face-to-face conversation.
Development in Agile: Let’s see a brief overview of how development occurs in Agile philosophy.
In Agile development, Design and Implementation are considered to be the central activities
in the software process.
The design and implementation phases also incorporate other activities, such as requirements elicitation and testing.
In an agile approach, iteration occurs across activities; therefore, the requirements and the design are developed together rather than separately.
The allocation of requirements and the planning, design and development are executed as a series of increments. In contrast with the conventional model, where requirements gathering needs to be completed in order to proceed to the design and development phase, this gives Agile development an extra level of flexibility.
An agile process focuses more on code development rather than documentation.
Example: Let's go through an example to understand clearly how agile actually works.
A software company named ABC wants to make a new web browser for the latest release of its operating system. The deadline for the task is 10 months. The company's head assigned two teams, Team A and Team B, to this task. In order to motivate the teams, the company head announced that the first team to develop the browser would be given a salary hike and a one-week fully sponsored travel plan. With dreams of their wild travel fantasies, the two teams set out on the journey of the web browser. Team A decided to play by the book and chose the Waterfall model for
the development. Team B, after a heavy discussion, decided to take a leap of faith and chose Agile as their development model.
Since this was Agile, the project was broken up into several iterations.
The iterations are all of the same time duration.
At the end of each iteration, a working product with a new feature has to be delivered.
Instead of spending 1.5 months on requirements gathering, they decide on the core features that are required in the product and which of these features can be developed in the first iteration.
Any remaining features that cannot be delivered in the first iteration will be delivered in the subsequent iterations, based on priority.
At the end of the first iteration, the team will deliver working software with the core basic features.
Both teams have put their best efforts into getting the product to a complete stage. But then, out of the blue, due to the rapidly changing environment, the company's head comes up with an entirely new set of features, wants them implemented as quickly as possible, and wants a working model pushed out in 2 days. Team A was now in a fix: they were still in their design phase and had not yet started coding, so they had no working model to display. Moreover, it was practically impossible for them to implement the new features, since in the waterfall model there is no reverting back to an old phase once you proceed to the next stage; they would have to start from square one again. That would incur heavy cost and a lot of overtime. Team B was ahead of Team A in a lot of aspects, all thanks to Agile development. They had a working product with most of the core requirements since the first increment, and it was a piece of cake for them to add the new requirements. All they had to do was schedule these requirements for the next increment and then implement them.
Advantages:
Deployment of software is quicker and thus helps in increasing the trust of the customer.
Can better adapt to rapidly changing requirements and respond faster.
Helps in getting immediate feedback which can be used to improve the software in the next
increment.
People – Not Process. People and interactions are given a higher priority rather than process
and tools.
Continuous attention to technical excellence and good design.
Disadvantages:
In the case of large software projects, it is difficult to assess the effort required at the initial stages of the software development life cycle.
Agile development is more code-focused and produces less documentation.
Agile development is heavily dependent on the inputs of the customer. If the customer has ambiguity in his vision of the final outcome, it is highly likely for the project to get off track.
Face-to-face communication is harder in large-scale organizations.
Only senior programmers are capable of making the kinds of decisions required during the development process. Hence, it is a difficult environment for new programmers to adapt to.
Agile is a framework which defines how software development needs to be carried out. Agile is not a single method; it represents a collection of methods and practices that follow the value statements provided in the manifesto. Agile methods and practices do not promise to solve every problem in the software industry (no software model ever can), but they surely help to establish a culture and environment where solutions emerge.
Software Engineering | Extreme Programming (XP)
Good practices that need to be practiced in extreme programming: Some of the good practices that have been recognized in the extreme programming model, and whose use is suggested to be maximized, are given below:
Code Review: Code review detects and corrects errors efficiently. XP suggests pair programming, in which coding and reviewing of the written code are carried out by a pair of programmers who switch their roles every hour.
Testing: Testing code helps to remove errors and improves its reliability. XP suggests test-driven development (TDD), in which test cases are continually written and executed; in the TDD approach, test cases are written even before any code is written (see the small sketch after this list).
Incremental development: Incremental development is very good because customer feedback is gained, and based on it the development team comes up with new increments every few days after each iteration.
Simplicity: Simplicity makes it easier to develop good quality code as well as to test and
debug it.
Design: Good quality design is important to develop a good quality software. So, everybody
should design daily.
Integration testing: It helps to identify bugs at the interfaces of different functionalities.
Extreme programming suggests that the developers should achieve continuous integration by
building and performing integration testing several times a day.
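The test-driven development practice mentioned in the Testing item above can be sketched as follows. This is a minimal hypothetical Python example (the slugify function is invented for illustration): the test is written first and fails, then just enough code is written to make it pass.

import unittest

# Step 1 (written first): a failing test that pins down the wanted behaviour.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_replaces_spaces(self):
        self.assertEqual(slugify("Extreme Programming"), "extreme-programming")

# Step 2: just enough implementation to make the test pass.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Step 3: run the tests again; once they pass, refactor with confidence,
# re-running the suite after every change.
if __name__ == "__main__":
    unittest.main()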
Basic principles of Extreme programming: XP is based on the frequent iteration through which
the developers implement User Stories. User stories are simple and informal statements of the
customer about the functionalities needed. A User story is a conventional description by the user
about a feature of the required system. It does not mention finer details such as the different
scenarios that can occur. On the basis of User stories, the project team proposes Metaphors.
Metaphors are a common vision of how the system would work. The development team may decide
to build a Spike for some feature. A Spike is a very simple program that is constructed to explore the
suitability of a solution being proposed. It can be considered similar to a prototype. Some of the
basic activities that are followed during software development by using XP model are given below:
Coding: The concept of coding which is used in XP model is slightly different from
traditional coding. Here, coding activity includes drawing diagrams (modeling) that will be
transformed into code, scripting a web-based system and choosing among several alternative
solutions.
Testing: The XP model places high importance on testing and considers it to be the primary factor in developing fault-free software.
Listening: The developers need to listen carefully to the customers if they have to develop good quality software. Sometimes programmers may not have in-depth knowledge of the system to be developed, so it is desirable for the programmers to properly understand the functionality of the system, and for this they have to listen to the customers.
Designing: Without a proper design, a system implementation becomes too complex, the solution becomes very difficult to understand, and maintenance becomes expensive. A good design results in the elimination of complex dependencies within a system. So, effective use of suitable design is emphasized.
Feedback: One of the most important aspects of the XP model is to gain feedback to
understand the exact customer needs. Frequent contact with the customer makes the
development effective.
Simplicity: The main principle of the XP model is to develop a simple system that will work efficiently at present, rather than trying to build something that would take time and may never be used. It focuses on the specific features that are immediately needed, rather than spending time and effort on speculation about future requirements.
Applications of Extreme Programming (XP): Some of the projects that are suitable to develop
using XP model are given below:
Small projects: The XP model is very useful for small projects consisting of small teams, as face-to-face meetings are easier to achieve.
Projects involving new technology or research projects: These types of projects face rapidly changing requirements and technical problems, so the XP model is used to complete them.
Software Engineering | SDLC V-Model
Verification: It involves static analysis techniques (reviews) done without executing code. It is the process of evaluating the phases of product development to find out whether the specified requirements are met. The V-Model contains Verification phases on one side and Validation phases on the other side; the Verification and Validation phases are joined by the coding phase in a V shape, and thus it is called the V-Model.
Design Phase:
Requirement Analysis: This phase contains detailed communication with the customer to
understand their requirements and expectations. This stage is known as Requirement
Gathering.
System Design: This phase contains the system design and the complete hardware and communication setup for developing the product.
Architectural Design: System design is broken down further into modules taking up
different functionalities. The data transfer and communication between the internal modules
and with the outside world (other systems) is clearly understood.
Module Design: In this phase, the system is broken down into small modules. The detailed design of the modules is specified; this is also known as Low-Level Design (LLD).
Testing Phases:
Unit Testing: Unit Test Plans are developed during the module design phase. These Unit Test Plans are executed to eliminate bugs at the code or unit level.
Integration testing: After completion of unit testing, integration testing is performed. In integration testing, the modules are integrated and the system is tested. Integration testing corresponds to the architectural design phase, and this test verifies the communication of modules among themselves (a small sketch follows this list).
System Testing: System testing tests the complete application, including its functionality, interdependency, and communication. It tests the functional and non-functional requirements of the developed application.
User Acceptance Testing (UAT): UAT is performed in a user environment that resembles the production environment. UAT verifies that the delivered system meets the user's requirements and that the system is ready for use in the real world.
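To make the difference between the unit and integration levels concrete, here is a small hypothetical Python sketch (not from the original article; both modules are invented): the test exercises the interface between two modules rather than each module in isolation.

import unittest

# Two hypothetical modules produced in the module-design phase.
def parse_order(line):
    """Module A: parse an 'item,quantity' line into a (str, int) pair."""
    item, qty = line.split(",")
    return item.strip(), int(qty)

def total_quantity(lines):
    """Module B: consumes Module A's output to aggregate quantities."""
    return sum(parse_order(line)[1] for line in lines)

class TestOrderIntegration(unittest.TestCase):
    # An integration test: it verifies the communication between
    # Module A and Module B, not each unit by itself.
    def test_modules_communicate_correctly(self):
        self.assertEqual(total_quantity(["pen, 2", "ink, 3"]), 5)

if __name__ == "__main__":
    unittest.main()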
Industrial Challenge: As the industry has evolved, the technologies have become more complex and increasingly faster, and are forever changing; however, there remains a set of basic principles and concepts that are as applicable today as when IT was in its infancy.
Why preferred?
It is easy to manage due to the rigidity of the model. Each phase of V-Model has specific
deliverables and a review process.
Proactive defect tracking – that is, defects are found at an early stage.
Advantages:
This is a highly disciplined model and Phases are completed one at a time.
V-Model is used for small projects where project requirements are clear.
Simple and easy to understand and use.
This model focuses on verification and validation activities early in the life cycle thereby
enhancing the probability of building an error-free and good quality product.
It enables project management to track progress accurately.
Software Engineering | Comparison of different life cycle models
Iterative Waterfall Model: The Iterative Waterfall model is probably the most widely used software development model. This model is simple to use and understand, but it is suitable only for well-understood problems and is not suitable for the development of very large projects or of projects that suffer from a large number of risks.
Prototyping Model: The Prototyping model is suitable for projects in which either the customer requirements or the technical solutions are not well understood. These risks must be identified before the project starts. This model is especially popular for the development of the user interface part of a project.
Evolutionary Model: The Evolutionary model is suitable for large projects which can be
decomposed into a set of modules for incremental development and delivery. This model is widely
used in object-oriented development projects. This model is only used if incremental delivery of the
system is acceptable to the customer.
Spiral Model: The Spiral model is considered a meta-model, as it includes all other life cycle models. Flexibility and risk handling are the main characteristics of this model. The Spiral model is suitable for the development of technically challenging, large software that is prone to various risks that are difficult to anticipate at the start of the project. But this model is much more complex than the other models.
Agile Model: The Agile model was designed to incorporate change requests quickly. In this model,
requirements are decomposed into small parts that can be incrementally developed. But the main
principle of the Agile model is to deliver an increment to the customer after each Time-box. The end
date of an iteration is fixed, it can’t be extended. This agility is achieved by removing unnecessary
activities that waste time and effort.
Selection of an appropriate life cycle model for a project: Selection of a proper life cycle model to complete a project is the most important task. It can be done by keeping the advantages and disadvantages of the various models in mind. The different issues that are analyzed before selecting a suitable life cycle model are given below:
Characteristics of the software to be developed: The choice of the life cycle model largely
depends on the type of the software that is being developed. For small services projects, the
agile model is favored. On the other hand, for product and embedded development, the
Iterative Waterfall model can be preferred. The evolutionary model is suitable to develop an
object-oriented project. User interface part of the project is mainly developed through
prototyping model.
Characteristics of the development team: The team members' skill level is an important factor in deciding the life cycle model to use. If the development team is experienced in developing
similar software, then even an embedded software can be developed using the Iterative
Waterfall model. If the development team is entirely novice, then even a simple data
processing application may require a prototyping model.
Risk associated with the project: If the risks are few and can be anticipated at the start of the project, then the prototyping model is useful. If the risks are difficult to determine at the beginning of the project but are likely to increase as the development proceeds, then the spiral model is the best model to use.
Characteristics of the customer: If the customer is not quite familiar with computers, then
the requirements are likely to change frequently as it would be difficult to form complete,
consistent and unambiguous requirements. Thus, a prototyping model may be necessary to
reduce later change requests from the customers. Initially, the customer's confidence in the development team is high. During a lengthy development process, customer confidence normally drops off, as no working software is yet visible. So, the evolutionary model is useful, as the customer can experience partially working software much earlier than the complete software. Another advantage of the evolutionary model is that it reduces the customer's trauma of getting used to an entirely new system.
Software Engineering | User Interface Design
The user interface is the front-end application view with which the user interacts in order to use the software. Software becomes more popular if its user interface is:
Attractive
Simple to use
Responsive in short time
Clear to understand
Consistent on all interface screens
1. Command Line Interface: A command line interface provides a command prompt, where the user types a command and feeds it to the system. The user needs to remember the syntax of each command and its use (see the sketch after this list).
2. Graphical User Interface: A graphical user interface provides a simple, interactive interface to interact with the system. A GUI can be a combination of both hardware and software. Using a GUI, the user interacts with the software.
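To make the command-line style concrete, here is a minimal hypothetical sketch using Python's standard argparse module; the command name and flags are invented for illustration.

import argparse

# A minimal command-line interface: the user must remember the command
# syntax (positional arguments and flags) instead of clicking in a GUI.
parser = argparse.ArgumentParser(prog="report",
                                 description="Generate a usage report.")
parser.add_argument("input_file", help="path of the data file to summarise")
parser.add_argument("--format", choices=["text", "csv"], default="text",
                    help="output format (default: text)")
args = parser.parse_args()

print(f"Reading {args.input_file}, writing a {args.format} report")

Invoked from a prompt, this would be run as, for example: report data.log --format csv.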
The analysis and design process of a user interface is iterative and can be represented by a spiral model. It consists of four framework activities.
1. User, task, environmental analysis, and modeling: Initially, the focus is on the profiles of the users who will interact with the system, i.e. their understanding, skill, knowledge, type of user, etc. Based on these profiles, users are grouped into categories, and from each category requirements are gathered. Based on the requirements, the developer understands how to develop the interface. Once all the requirements are gathered, a detailed analysis is conducted.
In the analysis part, the tasks that the user performs to establish the goals of the system are
identified, described and elaborated. The analysis of the user environment focuses on the
physical work environment. Among the questions to be asked are:
o Where will the interface be located physically?
o Will the user be sitting, standing, or performing other tasks unrelated to the interface?
o Does the interface hardware accommodate space, light, or noise constraints?
o Are there special human factors considerations driven by environmental factors?
2. Interface Design: The goal of this phase is to define the set of interface objects and actions,
i.e. the control mechanisms that enable the user to perform desired tasks; to indicate how these
control mechanisms affect the system; to specify the action sequence of tasks and subtasks, also
called a user scenario; and to indicate the state of the system when the user performs a particular
task. Always follow the golden rules stated by Theo Mandel. Design issues such as
response time, command and action structure, error handling, and help facilities are
considered as the design model is refined. This phase serves as the foundation for the
implementation phase.
3. Interface construction and implementation: The implementation activity begins with the
creation of a prototype (model) that enables usage scenarios to be evaluated. As the iterative design
process continues, a user interface toolkit that allows the creation of windows, menus, device
interaction, error messages, commands, and many other elements of an interactive
environment can be used to complete the construction of the interface.
4. Interface Validation: This phase focuses on testing the interface. The interface should be
able to perform tasks correctly, handle a variety of tasks, and achieve all of the user's
requirements. It should be easy to use and easy to learn, and users should accept it as a
useful tool in their work.
Golden Rules:
The following are the golden rules stated by Theo Mandel that must be followed during the design of
the interface.
Define the interaction modes in such a way that does not force the user into unnecessary or
undesired actions: The user should be able to easily enter and exit the mode with little or no
effort.
Provide for flexible interaction: Different people will use different interaction mechanisms;
some might use keyboard commands, some might use the mouse, some might use a touch screen,
etc. Hence, all interaction mechanisms should be provided.
Allow user interaction to be interruptible and undoable: When a user is performing a sequence of
actions, the user must be able to interrupt the sequence to do some other work without losing
the work that has already been done. The user should also be able to undo operations.
Streamline interaction as skill level advances and allow the interaction to be customized:
Advanced or highly skilled users should be given the chance to customize the interface as
they wish and to choose among different interaction mechanisms, so that they do not get
bored using the same interaction mechanism.
Hide technical internals from casual users: The user should not be aware of the internal
technical details of the system, and should interact with the interface simply to get the work done.
Design for direct interaction with objects that appear on screen: The user should be able to
use and manipulate the objects present on the screen to perform a necessary task. This gives
the user a sense of direct control over the screen.
Reduce demand on short-term memory: When users are involved in complex tasks, the
demand on short-term memory is significant. The interface should therefore be designed to
reduce the need to remember previous actions, inputs and results.
Establish meaningful defaults: An initial set of defaults should always be provided for the
average user; if a user needs some new feature, they should be able to add the required
feature.
Define shortcuts that are intuitive: Mnemonics, i.e. keyboard shortcuts that perform some
action on the screen, should be provided and should be intuitive to the user.
The visual layout of the interface should be based on a real-world metaphor: If what is
represented on screen is a metaphor for a real-world entity, users will understand it
easily.
Disclose information in a progressive fashion: The interface should be organized
hierarchically, i.e. on the main screen the information about a task, an object or some
behavior should first be presented at a high level of abstraction. More detail should be
presented after the user indicates interest with a mouse pick.
Allow the user to put the current task into a meaningful context: Many interfaces have dozens
of screens, so it is important to provide indicators consistently so that the user knows the
context of the work being done. The user should also know from which page they navigated to
the current page and where they can navigate from the current page.
Maintain consistency across a family of applications: A set of related applications should all
follow and implement the same design rules so that consistency is maintained among the
applications.
If past interactive models have created user expectations, do not make changes unless there is
a compelling reason to do so.
Basically, design is a two-part iterative process. The first part is the conceptual design, which tells the
customer what the system will do. The second is the technical design, which allows the system builders to
understand the actual hardware and software needed to solve the customer's problem.
[Figure: conceptual design of a system]

Software Engineering | Coupling and Cohesion
Coupling: Coupling is the measure of the degree of interdependence between modules. Good
software will have low coupling.
Types of Coupling:
Data Coupling: If the dependency between the modules is based on the fact that they
communicate by passing only data, then the modules are said to be data coupled. In data
coupling, the components are independent of each other and communicate through data.
Module communication does not contain tramp data (data passed through modules that do not
use it). Example: a customer billing system.
Stamp Coupling: In stamp coupling, a complete data structure is passed from one module
to another. Therefore, it involves tramp data. It may be necessary due to efficiency
factors; this choice is made by the insightful designer, not the lazy programmer.
Control Coupling: If the modules communicate by passing control information, then they
are said to be control coupled. It can be bad if the parameters indicate completely different
behavior, and good if the parameters allow factoring and reuse of functionality. Example: a sort
function that takes a comparison function as an argument.
External Coupling: In external coupling, the modules depend on other modules external to
the software being developed, or on a particular type of hardware. Examples: communication
protocols, external files, device formats, etc.
Common Coupling: The modules have shared data, such as global data structures. A change
to the global data means tracing back to all modules that access that data to evaluate
the effect of the change. This brings disadvantages such as difficulty in reusing modules,
reduced ability to control data access, and reduced maintainability.
Content Coupling: In a content coupling, one module can modify the data of another
module or control flow is passed from one module to the other module. This is the worst form
of coupling and should be avoided.
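A minimal Python sketch may make these levels concrete; all module and function names here are hypothetical, invented purely for illustration:

    # Data coupling: modules share only the elementary data items they need.
    def compute_tax(amount, rate):
        return amount * rate

    # Stamp coupling: the whole customer record travels with the call,
    # even though only two fields are used (tramp data).
    def print_balance(customer):
        print(customer["name"], customer["balance"])

    # Control coupling (the good kind): the caller passes a comparison
    # function that steers the callee, enabling reuse of a generic sort.
    def sort_invoices(invoices, key):
        return sorted(invoices, key=key)

    # Common coupling: both functions depend on a shared global, so any
    # change to it must be traced through every module that touches it.
    DISCOUNT_RATE = 0.05

    def apply_discount(amount):
        return amount * (1 - DISCOUNT_RATE)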
Cohesion: Cohesion is a measure of the degree to which the elements of the module are functionally
related. It is the degree to which all elements directed towards performing a single task are contained
in the component. Basically, cohesion is the internal glue that keeps the module together. A good
software design will have high cohesion.
Types of Cohesion:
Functional Cohesion: Every essential element for a single computation is contained in the
component. A functionally cohesive component performs one and only one task. This is the
ideal situation.
Sequential Cohesion: One element outputs some data that becomes the input for another
element, i.e., data flows between the parts. This occurs naturally in functional programming
languages.
Communicational Cohesion: Two elements operate on the same input data or contribute
towards the same output data. Example: update a record in the database and send it to the
printer.
Procedural Cohesion: Elements of procedural cohesion ensure the order of execution; the
actions are only weakly connected and unlikely to be reusable. Example: calculate student GPA,
print student record, calculate cumulative GPA, print cumulative GPA.
Temporal Cohesion: The elements are related by timing: in a module with temporal cohesion,
all the tasks must be executed in the same time span. Such a module typically contains the
code for initializing all the parts of the system; many different activities occur, all at
initialization time.
Logical Cohesion: The elements are logically related, not functionally. Example: a component
reads inputs from tape, disk, and network, and all the code for these functions is in the same
component. The operations are related, but the functions are significantly different.
Coincidental Cohesion: The elements are unrelated; they have no conceptual relationship
other than their location in the source code. It is accidental and the worst form of cohesion.
Example: printing the next line and reversing the characters of a string in a single component.
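A short Python sketch (again with purely illustrative, hypothetical functions) contrasts the best and worst ends of this scale:

    # Functional cohesion: every statement serves one computation.
    def gpa(grades):
        return sum(grades) / len(grades)

    # Sequential cohesion: the output of one step feeds the next.
    def normalized_gpa(grades):
        raw = gpa(grades)           # step 1 produces data
        return round(raw / 10, 2)   # step 2 consumes it

    # Coincidental cohesion: unrelated actions grouped in one component
    # only because they happen to live in the same source file.
    def misc(line, s):
        print(line)                 # print next line
        return s[::-1]              # reverse a string; no conceptual link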
Software Engineering | Information System Life Cycle
In a large organisation, the database system is typically part of the information system, which
includes all the resources that are involved in the collection, management, use and dissemination of
the information resources of the organisation. In today's world, these resources include the data
itself, the DBMS software, the computer system software and storage media, the people who use and
manage the data, and the application programmers who develop these applications. Thus the database
system is part of a much larger organizational information system.
In this article we will discuss the typical life cycle of an information system and how the database
fits into this life cycle. The information system life cycle is also known as the macro life cycle.
1. Feasibility Analysis –
This phase is basically concerned with the following points:
o (a) Analyzing potential application areas.
o (b) Identifying the economics of information gathering.
o (c) Performing preliminary cost-benefit studies.
o (d) Determining the complexity of data and processes.
o (e) Setting up priorities among applications.
2. Requirements Collection and Analysis –
In this phase we basically do the following:
o (a) Detailed requirements are collected by interacting with potential users and groups
to identify their particular problems and needs.
o (b) Inter-application dependencies are identified.
o (c) Communication and reporting procedures are identified.
3. Design –
This phase has the following two aspects:
o (a) Design of the database.
o (b) Design of the application systems that use and process the database.
4. Implementation –
In this phase the following steps are carried out:
o (a) The information system is implemented.
o (b) The database is loaded.
o (c) The database transactions are implemented and tested.
5. Validation and Acceptance Testing –
The acceptability of the system in meeting users' requirements and performance criteria is
validated. The system is tested against the performance criteria and behavior specifications.
6. Deployment, operation and maintenance –
This may be preceded by conversion of users from the older system as well as by user training.
The operational phase starts when all system functions are operational and have been
validated. As new requirements or applications crop up, they pass through all the previous
phases until they are validated and incorporated into the system. Monitoring and system
maintenance are important activities during the operational phase.
Software Engineering | Database application system life cycle
Database application development is basically the process of carrying out the following:
1. Obtaining the real-world requirements
2. Analyzing the real-world requirements
3. Designing the data and functions of the system
4. Implementing the operations in the system
Activities related to the database application system (micro) life cycle include the following:
1. System Definition –
The scope of the database system, its users, and its applications are defined. The interfaces for the
various categories of users, the response time constraints, and the storage and processing needs are
identified.
2. Database design –
At the end of this phase, a complete logical and physical design of the database system on the chosen
DBMS is ready.
3. Database implementation –
This comprises the process of specifying the conceptual, external, and internal database definitions,
creating empty database files, and implementing the software applications.
4. Loading or data conversion –
The database is populated either by loading the data directly or by converting existing files into the
database system format.
5. Application conversion –
Any software applications from a previous system are converted to the new system.
6. Testing and validation –
The new system is tested and validated against its requirements.
7. Operation –
The database system and its applications are put into operation. Usually the old and the new systems
are operated in parallel for some time.
Activities 2, 3 and 4 are part of the design and implementation phase of the larger information system
life cycle. Most databases in organizations undergo all of the preceding life cycle activities. The
conversion activities (4 and 5) are not applicable when both the database and the applications are new.
When an organization moves from an established system to a new one, activities 4 and 5 tend to be the
most time-consuming, and the effort to accomplish them is often underestimated. In general there is
often feedback among the various steps, because new requirements frequently arise at every stage.
Software Engineering | Pham-Nordmann-Zhang Model (PNZ model)
Our goal is to produce a reliability prediction tool using PNZ models, based on reliability predictions
and careful analysis of the sensitivity of the various models. The PNZ model therefore enables us to
analyse how much the reliability of a software system can be improved by using fault-tolerance
techniques, which are discussed later in this section.
Theorem:
Assume that the time-dependent fault content function and error detection rate are, respectively,

a(t) = a(1 + αt) and b(t) = b / (1 + β e^(-bt))

where a = a(0) is the parameter for the total number of initial faults that exist in the software before
testing, and b is the initial per-fault visibility or failure intensity. The mean value function of the
equation is then given by

m(t) = [a / (1 + β e^(-bt))] [(1 - e^(-bt))(1 - α/b) + αt]
This model is known as the PNZ model. In other words, the PNZ model incorporates the imperfect
debugging phenomenon by assuming that faults can be introduced during the debugging phase at a
constant rate of α faults per detected fault.
Therefore, the fault content rate function, a(t), is a linear function of the testing time. The model also
assumes that the fault detection rate function, b(t), is a nondecreasing S-shaped curve, which may
capture the "learning" process of the software testers.
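As a sanity check on the reconstructed formulas above, the mean value function can be evaluated directly. The following Python sketch uses invented, uncalibrated parameter values purely for illustration:

    import math

    # m(t) = a/(1 + beta*e^(-b*t)) * [(1 - e^(-b*t))(1 - alpha/b) + alpha*t]
    def pnz_mean_value(t, a, b, alpha, beta):
        e = math.exp(-b * t)
        return a / (1 + beta * e) * ((1 - e) * (1 - alpha / b) + alpha * t)

    # Illustrative (not calibrated) parameters: 120 initial faults,
    # detection rate 0.1, fault introduction rate 0.01, shape parameter 2.0.
    for t in (0, 10, 50, 100):
        print(t, round(pnz_mean_value(t, a=120, b=0.1, alpha=0.01, beta=2.0), 1))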
Software Engineering | Schick-Wolverton software reliability model
The Schick-Wolverton (S-W) model is a modification of the J-M model. It assumes that the failure rate
during the i-th time interval increases with the test time elapsed since the last failure:

λ(t_i) = φ[N - (i - 1)] t_i

where φ and N are the same as defined in the J-M model and t_i is the test time since the (i-1)th
failure.
We now wish to estimate N and φ. Using the MLE method, the log likelihood function is given by

ln L(N, φ) = Σ_{i=1..n} [ ln φ + ln(N - i + 1) + ln t_i - φ(N - i + 1) t_i² / 2 ]
Taking the first derivatives with respect to N and φ and setting them to zero, we have

Σ_{i=1..n} 1/(N - i + 1) - (φ/2) Σ_{i=1..n} t_i² = 0

and

n/φ - (1/2) Σ_{i=1..n} (N - i + 1) t_i² = 0
Therefore, the MLEs of N and φ can be found by solving the two equations simultaneously as
follows:

φ = 2n / Σ_{i=1..n} (N - i + 1) t_i²

Σ_{i=1..n} 1/(N - i + 1) = (φ/2) Σ_{i=1..n} t_i²

where n is the number of failures observed and t_i is the time between the (i-1)th and the i-th
failures.
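Since these two reconstructed equations have no closed-form solution in N, they are usually solved numerically. The Python sketch below scans candidate integer values of N and picks the one that best satisfies both equations; the inter-failure times are invented purely for illustration:

    def sw_mle(times):
        # times[i] holds t_(i+1), the test time since the previous failure
        n = len(times)
        sum_t2 = sum(t * t for t in times)
        best = None
        for N in range(n, 100 * n):
            # phi from the second equation, given this candidate N
            phi = 2 * n / sum((N - i) * t * t for i, t in enumerate(times))
            # residual of the first equation; smaller is better
            gap = abs(sum(1 / (N - i) for i in range(n)) - phi / 2 * sum_t2)
            if best is None or gap < best[2]:
                best = (N, phi, gap)
        return best[0], best[1]

    # Invented inter-failure times (hours):
    print(sw_mle([2.0, 3.1, 4.5, 5.2, 7.0, 9.8]))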
Software Project Management (SPM):
Software Engineering | Project Management Process
The project management process consists of the following four phases:
Feasibility study
Project Planning
Project Execution
Project Termination
Feasibility Study:
A feasibility study explores the system requirements to determine project feasibility. There are several
types of feasibility study, including economic feasibility, operational feasibility and technical feasibility.
The goal is to determine whether the system can be implemented or not. The process of feasibility
study takes as input the requirement details as specified by the user and other domain-specific
details. The output of this process simply tells whether the project should be undertaken or not and,
if yes, what the constraints would be. Additionally, all the risks and their potential effects on the
project are evaluated before a decision to start the project is taken.
Project Planning:
A detailed plan stating the stepwise strategy to achieve the listed objectives is an integral part of any
project.
Planning involves the construction of a work breakdown structure (WBS), as well as size, effort,
schedule and cost estimation using various techniques.
Project Execution:
A project is executed by choosing an appropriate software development life cycle model (SDLC). It
includes a number of steps, such as requirements analysis, design, coding, testing,
implementation, delivery and maintenance. There are a number of factors that need to be
considered while doing so, including the size of the system, the nature of the project, time and budget
constraints, domain requirements, etc. An inappropriate SDLC can lead to failure of the project.
Project Termination:
There can be several reasons for the termination of a project. Though a project is conventionally
expected to terminate after successful completion, at times a project may also terminate without
completion. Projects have to be closed down when the requirements cannot be fulfilled within the
given time and cost constraints.
Once the project is terminated, a post-performance analysis is done, and a final report is published
describing the experiences, lessons learned and recommendations for handling future projects.
Software Engineering | Project size estimation techniques
Estimation of the size of software is an essential part of Software Project Management. It helps the
project manager to further predict the effort and time which will be needed to build the project.
Various measures are used in project size estimation. Some of these are:
Lines of Code
Number of entities in ER diagram
Total number of processes in detailed data flow diagram
Function points
1. Lines of Code (LOC): As the name suggests, LOC counts the total number of lines of source code
in a project. LOC is typically measured in units such as KLOC (thousands of lines of code).
The size is estimated by comparison with existing systems of the same kind: experts use them to
predict the required size of the various components of the software and then add those estimates to
get the total size.
Advantages:
Disadvantages:
2. Number of entities in ER diagram: The ER model provides a static view of the project. It describes
the entities and their relationships. The number of entities in the ER model can be used as a measure
of the size of the project, because more entities need more classes/structures, leading to more code.
Advantages:
Disadvantages:
No fixed standards exist. Some entities contribute more to project size than others.
Just like FPA, it is rarely used directly in cost estimation models; hence, it must be converted to LOC.
3. Total number of processes in detailed data flow diagram: The Data Flow Diagram (DFD) represents
the functional view of a software system. The model depicts the main processes/functions involved in
the software and the flow of data between them. The number of processes in the DFD is used to
predict software size: already existing processes of a similar type are studied and used to estimate the
size of each process, and the sum of the estimated sizes of all processes gives the final estimated size.
Advantages:
Disadvantages:
Studying similar kinds of processes to estimate size takes additional time and effort.
Not all software projects require the construction of a DFD.
4. Function Point Analysis: In this method, the number and types of functions supported by the
software are utilized to find the FPC (function point count). The steps in function point analysis are:
Count the number of functions of each proposed type: Find the number of functions
belonging to the following types:
o External Inputs: Functions related to data entering the system.
o External Outputs: Functions related to data exiting the system.
o External Inquiries: They lead to data retrieval from the system but don't change the
system.
o Internal Files: Logical files maintained within the system. Log files are not included
here.
o External interface Files: These are logical files for other applications which are used
by our system.
Compute the Unadjusted Function Points (UFP): Categorise each of the five function types
as simple, average or complex based on its complexity. Multiply the count of each function
type by its weighting factor and find the weighted sum. The weighting factors for each type,
based on complexity, are as follows:
Function type              Simple   Average   Complex
External Inputs               3        4         6
External Outputs              4        5         7
External Inquiries            3        4         6
Internal Files                7       10        15
External Interface Files      5        7        10
Find the Total Degree of Influence (TDI): Use the '14 general characteristics' of a system to find the
degree of influence of each of them. The sum of all 14 degrees of influence gives the
TDI; its range is 0 to 70. The 14 general characteristics are: Data Communications,
Distributed Data Processing, Performance, Heavily Used Configuration, Transaction Rate,
On-Line Data Entry, End-user Efficiency, Online Update, Complex Processing, Reusability,
Installation Ease, Operational Ease, Multiple Sites, and Facilitate Change.
Each of the above characteristics is evaluated on a scale of 0-5.
Compute the Value Adjustment Factor (VAF): Use the following formula to calculate the VAF:
VAF = (TDI * 0.01) + 0.65
Find the Function Point Count: Use the following formula to calculate FPC
FPC = UFP * VAF
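Putting the steps together, a small Python sketch can compute the FPC end to end; the project counts below are hypothetical, and the weights are the standard FPA values from the table above:

    # (simple, average, complex) weights per function type
    WEIGHTS = {
        "EI":  (3, 4, 6),    # External Inputs
        "EO":  (4, 5, 7),    # External Outputs
        "EQ":  (3, 4, 6),    # External Inquiries
        "ILF": (7, 10, 15),  # Internal Logical Files
        "EIF": (5, 7, 10),   # External Interface Files
    }

    def function_point_count(counts, degrees_of_influence):
        # counts: {type: (n_simple, n_average, n_complex)}
        # degrees_of_influence: the 14 ratings, each 0-5
        ufp = sum(n * w for ftype, ns in counts.items()
                  for n, w in zip(ns, WEIGHTS[ftype]))
        tdi = sum(degrees_of_influence)   # 0..70
        vaf = tdi * 0.01 + 0.65           # 0.65..1.35
        return ufp * vaf                  # FPC = UFP * VAF

    # Hypothetical project: 6 simple and 2 average external inputs, etc.
    counts = {"EI": (6, 2, 0), "EO": (4, 1, 1), "EQ": (3, 0, 0),
              "ILF": (2, 1, 0), "EIF": (1, 0, 0)}
    print(function_point_count(counts, [3] * 14))  # UFP 92 * VAF 1.07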
Advantages:
Disadvantages:
Software Engineering | System configuration management
Whenever software is built, there is always scope for improvement, and those improvements bring
changes into the picture. Changes may be required to modify or update an existing solution or to create a
new solution for a problem. Requirements keep changing on a daily basis, so we need to keep
upgrading our systems based on the current requirements and needs to meet the desired outputs.
Changes should be analyzed before they are made to the existing system, recorded before they are
implemented, reported so that there are details of before and after, and controlled in a manner that will
improve quality and reduce error. This is where the need for System Configuration Management
arises.
1. Identification and Establishment – Identifying the configuration items from products that
compose baselines at given points in time (a baseline is a set of mutually consistent
configuration items which has been formally reviewed and agreed upon, and which serves as the
basis for further development), establishing relationships among the items, and creating a mechanism
to manage multiple levels of control and a procedure for the change management system.
2. Version control – Creating versions/specifications of the existing product in order to build new
products with the help of the SCM system. A description of versioning is given below:
Suppose that after some changes the version of a configuration object changes from 1.0 to 1.1.
Minor corrections and changes result in versions 1.1.1 and 1.1.2, followed by a
major update that becomes object 1.2. The development of object 1.0 continues through 1.3 and 1.4,
but finally a noteworthy change to the object results in a new evolutionary path, version 2.0.
Both versions are currently supported.
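The ordering of such dotted version identifiers is easy to express in code. A minimal Python sketch, using the hypothetical version history from the paragraph above:

    # Dotted version strings order correctly when compared as integer tuples.
    def version_key(v):
        return tuple(int(part) for part in v.split("."))

    history = ["1.0", "1.1", "1.1.1", "1.1.2", "1.2", "1.3", "1.4", "2.0"]
    assert sorted(history, key=version_key) == history  # evolution path holds
    assert version_key("1.1.2") < version_key("1.2") < version_key("2.0")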
3. Change control – Controlling changes to configuration items (CIs). The change control
process works as follows:
A change request (CR) is submitted and evaluated to assess its technical merit, potential side
effects, overall impact on other configuration objects and system functions, and the projected
cost of the change. The results of the evaluation are presented as a change report, which is
used by a change control board (CCB), a person or group who makes the final decision on the
status and priority of the change. An Engineering Change Request (ECR) is generated for each
approved change; the CCB also notifies the developer, with proper reasons, if the change is
rejected. The ECR describes the change to be made, the constraints that must be respected,
and the criteria for review and audit. The object to be changed is "checked out" of the project
database, the change is made, and then the object is tested again. The object is then "checked
in" to the database, and appropriate version control mechanisms are used to create the next
version of the software.
SCM Tools –
Many different SCM tools are available in the market, such as CFEngine, Bcfg2 server, Vagrant,
SmartFrog, CLEARCASE (CC), SaltStack, CLEARQUEST, Puppet, SVN (Subversion), Perforce,
TortoiseSVN, IBM Rational Team Concert, IBM Configuration Management Version Management,
Razor, Ansible, and many more.
It is recommended that, before selecting any configuration management tool, you gain a proper
understanding of its features, are clear about the benefits and drawbacks of each candidate, and
select the tool that best suits your project's needs.
Software Engineering | COCOMO Model
The key parameters that define the quality of any software product, which are also an outcome of
COCOMO, are primarily effort and schedule:
Effort: The amount of labor that will be required to complete a task. It is measured in
person-month units.
Schedule: Simply the amount of time required for the completion of the job, which is,
of course, proportional to the effort put in. It is measured in units of time such as weeks or
months.
Different models of COCOMO have been proposed to predict the cost estimate at different levels,
based on the amount of accuracy and correctness required. All of these models can be applied to a
variety of projects, whose characteristics determine the values of the constants to be used in the
subsequent calculations. The characteristics pertaining to the different system types are mentioned below.
1. Organic – A software project is said to be of the organic type if the required team size is
adequately small, the problem is well understood and has been solved in the past, and the
team members have nominal experience with the problem.
2. Semi-detached – A software project is said to be of the semi-detached type if vital
characteristics such as team size, experience and knowledge of the various programming
environments lie between those of organic and embedded projects. Projects classified as
semi-detached are comparatively less familiar and more difficult to develop than organic
ones, and require more experience, better guidance and more creativity. E.g., compilers or
various embedded systems can be considered of the semi-detached type.
3. Embedded – A software project requiring the highest levels of complexity, creativity
and experience falls under this category. Such software requires a larger team size
than the other two types, and the developers need to be sufficiently experienced and
creative to develop such complex models.
All the above system types utilize different values of the constants used in Effort
Calculations.
The first level, Basic COCOMO, can be used for quick and slightly rough calculations of
software costs. Its accuracy is somewhat restricted due to the absence of sufficient factor
considerations.
Intermediate COCOMO takes these cost drivers into account, and Detailed COCOMO
additionally accounts for the influence of the individual project phases, i.e. in the detailed case
it accounts for these cost drivers and the calculations are also performed phase-wise,
producing a more accurate result. These two models are discussed further below.
1. Basic Model –

E = a * (KLOC)^b

The above formula is used for the effort estimation of the basic COCOMO model
(E is the effort in person-months and KLOC the estimated size in thousands of lines
of code), and it is also used in the subsequent models. The constant values a and b
for the Basic Model for the different categories of system are:

Software Projects      a      b
Organic               2.4    1.05
Semi-detached         3.0    1.12
Embedded              3.6    1.20
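A short Python sketch shows the basic model in use; the development-time relation D = c * E^d, with the standard published constants (c = 2.5 for all three modes), is included for completeness:

    # Basic COCOMO constants: (a, b) for effort, (c, d) for development time
    BASIC = {
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, mode):
        a, b, c, d = BASIC[mode]
        effort = a * kloc ** b    # person-months
        time = c * effort ** d    # months
        return effort, time

    effort, time = basic_cocomo(32, "organic")  # a hypothetical 32 KLOC project
    print(round(effort, 1), round(time, 1))     # ~91.3 PM, ~13.9 months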
2. Intermediate Model –
The basic COCOMO model assumes that effort is a function only of the number of
lines of code and some constants evaluated according to the type of software system.
In reality, however, no system's effort and schedule can be calculated solely on the
basis of lines of code; various other factors such as reliability, experience and
capability must be considered. These factors are known as cost drivers, and the
Intermediate Model utilizes 15 such drivers for cost estimation. The five
personnel-attribute drivers, for example, are:
Analyst capability
Software engineering capability
Applications experience
Virtual machine experience
Programming language experience
Cost Drivers                                   Very Low   Low    Nominal   High   Very High
Product Attributes
Required Software Reliability                    0.75     0.88    1.00     1.15     1.40
Size of Application Database                              0.94    1.00     1.08     1.16
Complexity of The Product                        0.70     0.85    1.00     1.15     1.30
Hardware Attributes
Runtime Performance Constraints                                   1.00     1.11     1.30
Personnel Attributes
Software engineer capability                     1.42     1.17    1.00     0.86     0.70
Virtual machine experience                       1.21     1.10    1.00     0.90
Programming language experience                  1.14     1.07    1.00     0.95
Project Attributes
Application of software engineering methods      1.24     1.10    1.00     0.91     0.82
Required development schedule                    1.23     1.08    1.00     1.04     1.10
The project manager rates these 15 different parameters for a particular project on
a scale ranging from very low to very high. Depending on these ratings, the
appropriate cost driver values are taken from the above table. These 15 values are
then multiplied together to calculate the EAF (Effort Adjustment Factor). The
Intermediate COCOMO formula now takes the form:

E = a * (KLOC)^b * EAF

The constant values a and b for the Intermediate Model are:

Software Projects      a      b
Organic               3.2    1.05
Semi-detached         3.0    1.12
Embedded              2.8    1.20
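Continuing the earlier sketch, the intermediate model simply scales the basic effort equation by the EAF; the driver ratings chosen below are hypothetical:

    # Intermediate COCOMO constants (a, b); E = a * KLOC^b * EAF
    INTERMEDIATE = {
        "organic":       (3.2, 1.05),
        "semi-detached": (3.0, 1.12),
        "embedded":      (2.8, 1.20),
    }

    def intermediate_effort(kloc, mode, driver_multipliers):
        a, b = INTERMEDIATE[mode]
        eaf = 1.0
        for m in driver_multipliers:  # one table value per rated cost driver
            eaf *= m
        return a * kloc ** b * eaf    # person-months

    # E.g. high required reliability (1.15), very high software engineer
    # capability (0.70), all other drivers nominal (1.00):
    print(round(intermediate_effort(32, "organic", [1.15, 0.70]), 1))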
3. Detailed Model –
Detailed COCOMO incorporates all characteristics of the intermediate version,
together with an assessment of the cost drivers' impact on each step of the software
engineering process. The detailed model uses different effort multipliers for each
cost driver attribute. In detailed COCOMO, the whole software product is divided
into different modules; COCOMO is applied to each module to estimate its effort,
and the module efforts are then summed.
COCOMO stands for Constructive Cost Model. The effort is calculated as a function
of program size, and a set of cost drivers is given according to each phase of the
software life cycle.
Software Engineering | Capability Maturity Model (CMM)
CMM is not a software process model. It is a framework used to analyse the approach
and techniques followed by an organization to develop software products.
It also provides guidelines for further enhancing the maturity of the process used to
develop those software products.
It is based on profound feedback and the development practices adopted by the most successful
organizations worldwide.
This model describes a strategy to be followed by moving through five different levels.
Each level of maturity indicates a process capability level. All the levels except level 1 are
further described by Key Process Areas (KPAs).
Conceptually, key process areas form the basis for management control of the software project and
establish a context in which technical methods are applied, work products like models, documents,
data, reports, etc. are produced, milestones are established, quality is ensured and change is properly
managed.
The 5 levels of CMM are as follows:
Level-1: Initial –
No KPAs defined.
Processes followed are ad hoc, immature and not well defined.
Unstable environment for software development.
No basis for predicting product quality, time for completion, etc.
Level-2: Repeatable –
KPA’s:
Project Planning- It includes defining resources required, goals, constraints, etc. for the
project. It presents a detailed plan to be followed systematically for successful completion of
a good quality software.
Configuration Management- The focus is on maintaining the performance of the software
product, including all its components, for the entire lifecycle.
Requirements Management- It includes the management of customer reviews and feedback
which result in some changes in the requirement set. It also consists of accommodation of
those modified requirements.
Subcontract Management- It focuses on the effective management of qualified software
contractors i.e. it manages the parts of the software which are developed by third parties.
Software Quality Assurance- It guarantees a good quality software product by following
certain rules and quality standard guidelines during development.
Level-3: Defined –
At this level, documentation of the standard guidelines and procedures takes place.
It is a well-defined, integrated set of project-specific software engineering and management
processes.
KPA’s:
Peer Reviews- In this method, defects are removed by using a number of review methods like
walkthroughs, inspections, buddy checks, etc.
Intergroup Coordination- It consists of planned interactions between different development
teams to ensure efficient and proper fulfilment of customer needs.
Organization Process Definition- Its key focus is on the development and maintenance of the
standard development processes.
Organization Process Focus- It includes activities and practices that should be followed to
improve the process capabilities of an organization.
Training Programs- It focuses on the enhancement of knowledge and skills of the team
members including the developers and ensuring an increase in work efficiency.
Level-4: Managed –
At this stage, quantitative quality goals are set for the organization for software products as
well as software processes.
The measurements made help the organization to predict the product and process quality
within some limits defined quantitatively.
KPA’s:
Level-5: Optimizing –
This is the highest level of process maturity in CMM and focuses on continuous process
improvement in the organization using quantitative feedback.
New tools and techniques are used, and software processes are evaluated, to prevent the
recurrence of known defects.
KPA's: Defect Prevention, Technology Change Management and Process Change Management.

Software Engineering | Integrating Risk Management in SDLC | Set 1
The software development life cycle (SDLC) consists of the following steps:
1. Preliminary Analysis
2. System analysis and Requirement definition
3. System Design
4. Development
5. Integration and System Testing
6. Installation, Operation and Acceptance Testing
7. Maintenance
8. Disposal
We will discuss these steps in brief, along with how risk assessment and management are incorporated
into these steps to reduce the risk in the software being developed.
1. Preliminary analysis:
In this step you need to find out:
1. The organization's objectives
2. The nature and scope of the problem under study
3. Propose alternative solutions and proposals after gaining a deep understanding of the problem and
of what competitors are doing
4. Describe costs and benefits.
Support from Risk Management Activities –
1. Establish a process and responsibilities for risk management
2. Document initial known risks
3. The project manager should prioritize the risks
2. System analysis and requirement definition:
In this step:
1. End user requirements are obtained through documentation, client interviews, observation and
questionnaires.
2. The pros and cons of the current system are identified, so as to avoid the cons and carry the pros
forward into the new system.
3. Any specific user proposals are used to prepare the specifications, and solutions are found for the
shortcomings discovered in step two.
Support from Risk Management Activities –
1. Identify the assets that need to be protected, and assign their criticality in terms of
confidentiality, integrity and availability
2. Identify the threats and the resulting risk to those assets
3. Determine the existing security controls that reduce those risks
In short, we can divide this phase into five sub-phases: feasibility study, requirement
elicitation, requirement analysis, requirement validation and requirement documentation.
We will discuss these sub-phases in some detail, along with the risk factors involved in
each.
4. Feasibility Study – This is the first and most important phase. In big projects this phase is often
conducted as a standalone phase, rather than as a sub-phase under the requirement definition phase.
This phase allows the team to get an estimate of the major risk factors, cost and time for a given
project. Why is this so important? The feasibility study helps us decide whether it is
worthwhile to construct the system at all, and it helps to identify the main risk
factors.
Risk Factors –
Following is the list of risk factors for the feasibility study phase:
Project managers often make mistakes in estimating the cost, time, resources and scope
of the project. An unrealistic budget or schedule, inadequate resources and an unclear scope
often lead to project failure.
Unrealistic Budget: As discussed above, inaccurate budget estimation may lead to the
project running out of funds early in the SDLC. Accurate budget estimation is directly
related to correct knowledge of the time, effort and resources required.
Unrealistic Schedule: Incorrect time estimation leads project managers to put pressure on
developers to deliver the project on time, compromising the overall quality
of the project and thus making the system less secure and more vulnerable.
Insufficient resources: In some cases the available technology and tools are not up to
date enough to meet the project requirements, or the available resources (people, tools,
technology) are not enough to complete the project. In either case the project will get
delayed, or in the worst case it may lead to project failure.
Unclear project scope: A clear understanding of what the project is supposed to do, which
functionalities are important, which functionalities are mandatory and which
functionalities can be considered extras is very important for project managers.
Insufficient knowledge of the system may lead to project failure.
5. Requirement Elicitation – This starts with an analysis of the application domain. This phase requires
the participation of different stakeholders to ensure efficient, correct and complete
gathering of the system services, performance requirements and constraints. The gathered data set is
then reviewed and articulated to make it ready for the next phase.
Risk Factors –
Incomplete requirements: In 60% of cases, users are unable to state all
requirements at the beginning; requirements therefore have the most dynamic
nature in the complete SDLC process. If any of the user needs, constraints or other
functional/non-functional requirements are not covered, the requirement set is
said to be incomplete.
Inaccurate requirements: If the requirement set does not reflect real user needs, the
requirements are said to be inaccurate.
Unclear requirements: In the SDLC process there often exists a communication
gap between users and developers, which ultimately affects the requirement set. If
the requirements stated by users are not understandable to the analysts and developers,
these requirements are said to be unclear.
Ignoring non-functional requirements: Sometimes developers and analysts ignore the
fact that non-functional requirements hold equal importance to functional
requirements. In this confusion they focus on what the system should do
rather than on what the system should be like, e.g., its scalability, maintainability,
testability, etc.
Conflicting user requirements: Multiple users in a system may have different
requirements. If not listed and analysed carefully, this may lead to inconsistency in
the requirements.
Gold plating: It is very important to list all requirements at the beginning; adding
requirements later during development may introduce threats into the system. Gold
plating is nothing but adding extra functionality to the system that was not
considered earlier, thus inviting threats and making the system vulnerable.
Unclear description of the real operating environment: Insufficient knowledge of the real
operating environment leads to missed vulnerabilities, so threats remain
undetected until later stages of the software development life cycle.
6. Requirement Analysis Activity – In this step, the requirements gathered by interviewing
users, by brainstorming or by other means are first analysed, then classified and
organised into groups such as functional and non-functional requirements, and then
prioritized to establish which requirements are of high priority and definitely need to
be present in the system. After all these steps, the requirements are negotiated.
Risk Factors –
The risk factors to consider in this step are as follows:
Non-verifiable requirements: If no finite, cost-effective process (such as testing or
inspection) is available to check whether the software meets a requirement, that
requirement is said to be non-verifiable.
Infeasible requirement: If sufficient resources are not available to successfully
implement a requirement, it is said to be an infeasible requirement.
Inconsistent requirement: If a requirement contradicts any other requirement,
the requirement is said to be inconsistent.
Non-traceable requirement: It is very important for every requirement to have an
origin source. During documentation it is necessary to record the origin source of each
requirement so that it can be traced back in the future when required.
Unrealistic requirement: A requirement is realistic enough to be documented and
implemented only if it meets all the above criteria, i.e. it is complete, accurate,
consistent, traceable and verifiable; otherwise it is unrealistic.
7. Requirement Validation Activity – This involves validating the requirements gathered and
analysed so far, to check whether they actually define what the users want from
the system.
Risk Factors –
Misunderstood domain-specific terminology: Developers and application specialists
often use domain-specific terminology, i.e. technical terms that are not
understandable to the majority of end users, creating misunderstanding
between end users and developers.
Using natural language to express requirements: Natural language is not always the
best way to express requirements, as different users may have different signs and
conventions. It is therefore advisable to use a formal language for expressing and
documenting requirements.
8. Requirement Documentation Activity – This step involves creating a Requirements
Document (RD) by writing down all the agreed-upon requirements in a formal language. The RD
serves as a means of communication between the different stakeholders.
Risk Factors –
Inconsistent requirements data and RD: Sometimes, due to glitches in the gathering and
documentation process, the actual requirements may differ from the documented ones.
Non-modifiable RD: If maintainability is not considered when structuring the RD, it
becomes difficult to edit the document in the course of change without rewriting it.
Refer for other phases of SDLC – Integrating Risk Management in SDLC | Set 2
Software Engineering | Integrating Risk Management in SDLC
| Set 2
Prerequisite – Integrating Risk Management in SDLC | Set 1
In this article we will discuss the system design and development phases of the SDLC.
3. System Design:
This is the second phase of the SDLC, wherein the system architecture is established and all
documented requirements need to be addressed.
In this phase the system (operations and features) is described in detail using screen layouts,
pseudocode, business rules, process diagrams, etc.
Support from Risk Management Activities –
Accurate classification of asset criticality
Planned controls accurately identified
1. Examine requirement document – It is quite important for the developers to take part in examining
the requirement document, to ensure that they understand the requirements listed in it.
Risk Factors –
RD is not clear to developers: It is necessary for the developers to be involved in the
requirements definition and analysis phase; otherwise they will not have a good
understanding of the system to be developed and will be unable to base the design on a solid
understanding of the system's requirements. Hence they may end up creating a design for a
system other than the intended one.
2. Choosing the architectural design method activity – This is the method used to decompose the
system into components; it is thus a way to define the software system's components. Many methods
exist for architectural design, such as structured design, object-oriented design, Jackson System
Development and formal methods, but there is no single standard architectural design method.
Risk Factors –
Improper architectural design method: As discussed above, there is no standard architectural
design method; one chooses the most suitable method depending on the project's needs, and
it is important to choose with the utmost care. If chosen incorrectly, the method may cause
problems in system implementation and integration. Even if implementation and integration
succeed, the architectural design may still not work successfully on the target machine. The
choice of programming language also depends upon the architectural model chosen.
3. Choosing the programming language activity – The programming language should be chosen side
by side with the architectural method, as the programming language must be compatible with the
chosen architectural method.
Risk Factors –
Improper choice of programming language: An incorrectly chosen programming language may
not support the chosen architectural method, and may thus reduce the maintainability and
portability of the system.
4. Constructing physical model activity – The physical model, consisting of symbols, is a simplified
description of the hierarchically organized system.
Risk Factors –
o Complex system: If the system to be developed is very large and complex, it creates
problems for developers, who may get confused and be unable to work out where to
start and how to decompose such a large and complex system into components.
o Complicated design: For a large, complex system, confusion and a lack of sufficient
skill may lead developers to create a complicated design that is difficult to implement.
o Large size components: Large components that are further decomposable into
sub-components may be difficult to implement, and assigning functions to these
components is also difficult.
o Unavailability of expertise for reusability: The lack of proper expertise to determine which
components can be reused poses a serious risk to the project, since developing components
from scratch takes a lot of time compared with reusing them, thus delaying
project completion.
o Less reusable components: Incorrect estimation of reusable components during the analysis
phase leads to two serious risks to the project: delay in project completion and budget
overrun. Developers may be surprised to find that a percentage of the code that was
considered ready needs to be rewritten from scratch, which will eventually make the project
run over budget.
5. Verifying design activity – Verifying the design means ensuring that the design is the correct
solution for the system under construction and that it meets all user requirements.
Risk Factors –
o Difficulties in verifying the design against requirements: It is sometimes quite difficult for
the developer to check whether the proposed design meets all user requirements. To
make sure the design is a correct solution for the system, it must meet all
requirements.
o Many feasible solutions: When verifying the design, the developer may come across many
alternative solutions to the same problem. Choosing the best possible design that meets all
requirements is therefore difficult; the choice depends upon the system and its nature.
o Incorrect design: While verifying the design, it may turn out that the proposed design
matches only a few requirements, or none at all, and may even be a completely
different design.
6. Specifying design activity – This activity involves the following main tasks:
1. Identify the components and define the data flow between them.
2. For each identified component, state its function, data input, data output and resource
utilization.
Risk Factors –
o Difficulty in allocating functions to components: Developers may face difficulty in allocating
functions to components in two cases: first, when the system is not decomposed correctly,
and second, when the requirement documentation is not done properly; in the latter case
developers find it difficult to identify functions for the components, as the functional
requirements constitute the functions of the components.
o Extensive specification: Extensive specification of module processing should be avoided, to
keep the design document as small as possible.
o Omitting data processing functions: Data processing functions like create and read are the
operations that components perform on data. Accidental omission of these functions should
be avoided.
7. Documenting design activity – In this phase the design document (DD) is prepared. This helps to
control and coordinate the project during implementation and other phases.
Risk Factors –
o Incomplete DD: The design document should be detailed enough to explain each component,
sub-component and sub-sub-component in full detail, so that developers can work
independently on different modules. If the DD lacks these features, programmers cannot
work independently.
o Inconsistent DD: If the same function is carried out by more than one component, the result
is redundancy in the design document, which eventually leads to an inconsistent
document.
o Unclear DD: If the design document does not clearly define the components and is written in
uncommon natural language, it may be difficult for the developers to
understand the proposed design.
o Large DD: The design document should be detailed enough to list all components with full
details of their functions, inputs, outputs, required resources, etc., but it should not contain
unnecessary information; an overly large design document is difficult for programmers to
understand.
4. Development:
This stage involves the actual coding of the software as per the agreed-upon requirements between
the developer and the client.
Support from Risk Management Activities –
All designed controls are implemented during this stage.
1. Coding Activity – This step involves writing the source code for the system to be developed, and the
user interfaces are developed in the same step. Each developed module is then tested in the unit
testing step.
Risk Factors –
o Unclear design document: If the design document is large and unclear, it is difficult for
the programmer to understand it and to work out where to start coding.
o Lack of independent working environment: Due to an unclear and incomplete design document,
it is difficult to assign independent modules to the team of developers.
o Wrong user interface and user functions developed: Incomplete, inconsistent and unclear
design documents lead to a wrongly implemented user interface and functions. A poor user
interface reduces the acceptability of the system among the customers in the real
environment.
o Programming language incompatible with architectural design: The choice of architectural
method drives the choice of programming language, and they must be chosen in sequence;
otherwise, if incompatible, the programming language may not work with the chosen method.
o Repetitive code: In large projects the need arises to write the same piece of code again
and again, which consumes a lot of time and also increases the number of lines of code.
o Modules developed by different programmers: In large projects, modules are divided
among the programmers, but different programmers have different styles and ways of
thinking, which leads to inconsistent, complex and ambiguous code.
2. Unit Testing Activity – Each module is tested individually to check whether it meets the specified
requirements and performs the functions it is intended to perform.
Risk Factors –
o Lack of fully automated testing tools: Even today, unit testing is not fully automated, which
makes the testing process boring and monotonous; testers may then not bother to generate
all possible test cases.
o Code not understandable by reviewers: During unit testing, developers need to review and
make changes to the code. If the code is not understandable, it is very difficult to update it.
o Coding drivers and stubs: During unit testing, a module may need data from other modules
or need to pass data to other modules, since no module is completely independent in itself.
A stub is a piece of code that replaces a called module, i.e. one that accepts data from the
module being tested; a driver is a piece of code that replaces a calling module, i.e. one that
passes data to the module being tested (see the sketch after this list). Coding drivers and
stubs consumes a lot of time and effort, and since they are not delivered with the final
system, they are considered extras.
o Poor documentation of test cases: Test cases need to be documented properly so that they
can be reused in the future.
o Testing team not experienced: The testing team may not be experienced enough to handle
automated tools or to write short, concise code for drivers and stubs.
o Poor regression testing: Regression testing means rerunning previously successful test cases
whenever a change is made. A selective rerun saves time and effort, but the rerun can be
very time-consuming if all test cases are selected.
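A minimal Python sketch of a driver and a stub; the billing function and the canned rate are hypothetical:

    # Module under test: computes a bill using a rate supplied by a
    # pricing module that has not been written yet.
    def compute_bill(units, get_rate):
        return units * get_rate()

    # Stub: stands in for the called (not-yet-written) pricing module,
    # returning a canned value to the module being tested.
    def rate_stub():
        return 4.5

    # Driver: stands in for the calling module, invoking the module
    # under test with test data and checking the result.
    def driver():
        assert compute_bill(10, rate_stub) == 45.0
        print("unit test passed")

    driver()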
Refer for other four phases – Integrating Risk Management in SDLC | Set 3
Software Engineering | Integrating Risk Management in SDLC
| Set 3
Prerequisite – Integrating Risk Management in SDLC | Set 1, and Set 2
In this article we will discuss the remaining four steps: Integration and System Testing;
Installation, Operation and Acceptance Testing; Maintenance; and Disposal.
5. Integration and System Testing:
This phase includes three activities: the integration activity, the integration testing activity and the
system testing activity. We will discuss these activities in some detail, along with the risk factors in
each activity.
1. Integration Activity – In this phase individual units are combined into one working system.
Risk Factors –
o Difficulty in combining components: Integration should be done incrementally, otherwise it
is very difficult to locate errors and bugs. A wrong sequence of integration will eventually
hamper the functionality for which the system was designed.
o Integrating wrong versions of components: Developing a system involves writing multiple
versions of the same component. If an incorrect version of a component is selected for
integration, it may not produce the desired functionality.
o Omissions: Integration of components should be done carefully; a single missed component
may result in errors and bugs that are difficult to locate.
2. Integration Testing Activity – After integrating the components, the next step is to test whether the
components interface correctly and to evaluate their integration. This process is known as
integration testing.
Risk Factors –
o Bugs during integration: If wrong versions of components are integrated, or components are
accidentally omitted, the resultant system will contain bugs and errors.
o Data loss through an interface: Wrong integration leads to data loss between components,
for example where the number of parameters in the calling component does not match the
number of parameters in the called component.
o Desired functionality not achieved: Errors and bugs introduced during integration result in a
system that fails to deliver the desired functionality.
o Difficulty in locating and repairing errors: If integration is not done incrementally, it results in
errors and bugs that are hard to locate. Even when the bugs are located they need to be
fixed, and fixing an error in one component may introduce errors into other components.
Thus it becomes quite cumbersome to locate and repair errors.
3. System Testing Activity – In this step the integrated system is tested to ensure that it meets all the
system requirements gathered from the users.
Risk Factors –
o Unqualified testing team: The lack of a good testing team is a major setback for good software,
as unqualified testers may misuse the available resources and testing tools.
o Limited testing resources: If time, budget and tools are unavailable or not used properly,
project delivery may be delayed.
o Not possible to test in the real environment: Sometimes it is not possible to test the system
in the real environment, due to lack of budget, time constraints, etc.
o Testing cannot cope with requirements change: User requirements often change throughout
the software development life cycle, so test cases should be designed to handle such
changes; if not designed properly, they will not be able to cope with change.
o System being tested is not testable enough: If the requirements are not verifiable, it
becomes quite difficult to test such a system.
6. Installation, Operation and Acceptance Testing:
This is the last and longest phase in the SDLC. In this phase the system is delivered, installed,
deployed and tested for user acceptance.
1. Installation Activity – The software system is delivered and installed at the customer site.
Risk Factors –
o Problems in installation: If the deployers are not experienced enough, or if the system is
complex and distributed, it becomes difficult to install the software system.
o Change in environment: Sometimes the installed software system does not work correctly in
the real environment, in some cases due to hardware advancement.
2. Operation Activity – Here end users are given training on how to use the software system and its services.
Risk Factors –
o New requirements emerge: While using the system, users sometimes feel the need to add new requirements.
o Difficulty in using the system: It is always difficult for people to accept a change, such as a new system, in the beginning. But this should not go on for long, otherwise it becomes a serious threat to the acceptability of the system.
3. Acceptance Testing Activity – The delivered system is put through acceptance testing to check whether it meets all user requirements or not.
Risk Factors –
o User resistance to change: It is human behavior to resist any new change in the surroundings. But for the success of a newly delivered system, it is very important that the end users accept the system and start using it.
o Too many software faults: Software faults should be discovered early, before the system operation phase, as their discovery in later phases leads to high costs in handling them.
o Insufficient data handling: The new system should be developed keeping in mind the load of user data it will have to handle in the real environment.
o Missing requirements: While using the system, the end users might discover that some requirements and capabilities are missing.
Maintenance:
In this stage, the system is assessed to ensure it does not become obsolete. This phase also involves continuous evaluation of the system in terms of performance, and changes are made from time to time to the initial software to keep it up to date.
Errors and faults discovered during acceptance testing are fixed in this phase. This step involves making improvements to the system, fixing errors, enhancing services and upgrading software.
Risk Factors –
Budget overrun: Finding errors and fixing them involves repeating a few steps of the SDLC, thus exceeding the budget.
Problems in upgrading: Constraints from the end users, or an inflexible system architecture, can make the system difficult to maintain.
Disposal:
In this phase, plans are developed for discarding system information, hardware and software to make the transition to a new system. The purpose is to prevent any possibility of unauthorized disclosure of sensitive data due to improper disposal of information. All of this should be done in accordance with the organization’s security requirements.
Risk Factors –
Lack of knowledge for proper disposal: Proper disposal of information requires an experienced team that has a plan for how to handle the residual data.
Lack of proper procedures: Sometimes, in a hurry to launch a new system, an organization sidelines the task of disposal. The procedures used to handle residual data should be properly documented, so that they can be used in the future.
Software project management involves two main activities:
1. Project planning
2. Project monitoring and control
Project planning
Project planning is undertaken immediately after the feasibility study phase and before the start of the requirement analysis and specification phase. Once a project has been found to be feasible, software project managers start project planning. Project planning is completed before any development phase starts. Project planning involves estimating several characteristics of a project and then planning the project activities based on these estimations. Project planning must be done with utmost care and attention, since a wrong estimation can result in schedule slippage. Schedule delay can cause customer dissatisfaction, which may lead to project failure. For effective project planning, in addition to a very good knowledge of various estimation techniques, past experience is also very important. During project planning, the project manager performs the following activities:
1. Project Estimation: Project size estimation is the most important parameter, based on which all other estimations such as cost, duration and effort are made (see the sketch after this list).
o Cost Estimation: The total expense of developing the software product is estimated.
o Time Estimation: The total time required to complete the project is estimated.
o Effort Estimation: The effort needed to complete the project is estimated.
The effectiveness of all later planning activities depends on the accuracy of these three estimations.
2. Scheduling: After the estimation of all the project parameters is completed, the scheduling of manpower and other resources is done.
3. Staffing: Team structure and staffing plans are made.
4. Risk Management: The project manager should identify the risks that may occur during project development, analyze the damage these risks might cause, and prepare risk-reduction plans to cope with them.
5. Miscellaneous plans: This includes making several other plans such as quality assurance
plan, configuration management plan, etc.
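As a rough illustration of how size estimation drives the cost, effort and time estimates, the following Python sketch applies the basic COCOMO formulas (this example is not part of the original text; the coefficients are the standard published organic-mode values, and the function name is illustrative):

def basic_cocomo(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    # Basic COCOMO, organic mode: effort in person-months, duration in months.
    effort = a * (kloc ** b)        # effort estimation from size
    duration = c * (effort ** d)    # time estimation from effort
    staff = effort / duration       # average team size
    return effort, duration, staff

effort, duration, staff = basic_cocomo(32.0)  # a hypothetical 32-KLOC product
print(f"Effort: {effort:.1f} PM, Duration: {duration:.1f} months, Staff: {staff:.1f}")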
The planning activities are undertaken in roughly the order listed above.
Project monitoring and control
Project monitoring and control activities are undertaken once the development activities start. The main focus of project monitoring and control is to ensure that software development proceeds as per plan. This includes checking whether the project is going according to plan; if any problem arises, the project manager must take the necessary action to solve it.
Role of a software project manager: There are many roles of a project manager in the
development of software.
o Lead the team: The project manager must be a good leader who builds a team of members with various skills and ensures that each can complete their individual tasks.
o Motivate the team members: One of the key roles of a software project manager is to encourage team members to work properly for the successful completion of the project.
o Tracking the progress: The project manager should keep an eye on the progress of the project and track whether it is going as per plan or not. If any problem arises, the necessary action should be taken to solve it. Moreover, the project manager should check whether the product is developed maintaining correct coding standards.
o Liaison: The project manager is the link between the development team and the customer. The project manager analyzes the customer requirements, conveys them to the development team, and keeps the customer informed about the progress of the project. Moreover, the project manager checks whether the project is fulfilling the customer requirements or not.
o Documenting project reports: The project manager prepares the documentation of the project for future reference. The reports contain detailed features of the product and the various techniques used. These reports help to maintain and enhance the quality of the project in the future.
Skills such as leadership, communication and a sound knowledge of estimation techniques are among the most important for becoming a successful project manager.
Types of Complexity:
Time Management Complexity: The complexity of estimating the duration of the project. It also includes the complexity of scheduling the different activities and completing the project on time.
Cost Management Complexity: Estimating the total cost of the project is a very difficult task, and it is also necessary to keep an eye on the project so that it does not overrun the budget.
Quality Management Complexity: The quality of the project must satisfy the customer requirements; it must be assured that the requirements of the customer are fulfilled.
Risk Management Complexity: Risks are the unanticipated events that may occur during any phase of the project. Various difficulties may arise in identifying these risks and preparing contingency plans to reduce their effects.
Human Resources Management Complexity: This includes all the difficulties regarding organizing, managing and leading the project team.
Communication Management Complexity: All the members must interact with each other, and there must be good communication with the customer.
Procurement Management Complexity: Projects need many services from third parties to complete their tasks, and acquiring these services may increase the complexity of the project.
Integration Management Complexity: The difficulty of coordinating processes and developing a proper project plan. Many changes may occur during project development, and they may hamper project completion, which increases the complexity.
o Invisibility: Until the development of a software project is complete, software remains invisible; anything that is invisible is difficult to manage and control. A software project manager cannot view the progress of the project due to the invisibility of the software until it is completely developed. The project manager can monitor the modules of the software that have been completed by the development team and the documents that have been prepared, which are a rough indicator of the progress achieved. Thus invisibility causes a major problem for the complexity of managing a software project.
o Changeability: The requirements of a software product undergo various changes. Most of these change requests come from the customer during software development. Sometimes these change requests result in redoing some work, which may cause various risks and increase the expenses. Thus frequent changes to the requirements play a major role in making software project management complex.
o Interaction: Even a moderate-sized software product has millions of parts (functions) that interact with each other in many ways, such as data coupling, serial and concurrent runs, state transitions, control dependency, file sharing, etc. Due to the inherent complexity of the functioning of a software product in terms of the basic parts making up the software, many types of risks are associated with its development. This makes managing software projects much more difficult than many other kinds of projects.
o Uniqueness: Every software project is usually associated with many unique features or situations, which makes every software project quite different from others. This is unlike projects in other domains, such as building construction or bridge construction, where the projects are more predictable. Due to this uniqueness, during software development a project manager faces many unknown problems that are quite dissimilar to those of other software projects encountered in the past. As a result, a software project manager has to confront many unanticipated issues in almost every project that he manages.
o Exactness of the Solution: A small error can create a huge problem in a software project. The solution must be exact according to its design; for example, the parameters of a function call in a program must conform exactly to the function definition. This requirement of exact conformity of the parameters of a function introduces additional risks and increases the complexity of managing software projects.
o Team-oriented and intellect-intensive work: Software development projects are team-oriented and intellect-intensive. Software cannot be developed without interaction between developers. In a software development project, the life cycle activities are not only intellect-intensive; each member also has to interact with, review the work of, and interface with several other team members, which creates various complexities in managing software projects.
o Huge task of estimation: One of the most important aspects of software project management is estimation. During project planning, a project manager has to estimate the cost of the project, the probable duration to complete it, and how much effort is needed to complete it, based on size estimation. This estimation is a very complex task, which increases the complexity of software project management.
Software Engineering | Quasi renewal processes
Let {N(t), t > 0} be a counting process and let Xn be the time between the (n − 1)th and the nth event of this process, n ≥ 1.
Definition:
If the sequence of non-negative random variables {X1, X2, …} is independent and
Xi = α·X(i−1) for i ≥ 2,
where α > 0 is a constant, then the counting process {N(t), t ≥ 0} is said to be a quasi-renewal process with parameter α and first inter-arrival time X1.
When α = 1, this process becomes the ordinary renewal process. This quasi-renewal process can be used to model reliability growth processes in software testing phases and hardware burn-in stages for α > 1, and hardware maintenance processes when α ≤ 1.
4. The failure rate of Xn for n = 1, 2, 3, … is
rn(t) = (1/α^(n−1))·r1(t/α^(n−1)),
where r1(t) is the failure rate of the first inter-arrival time X1.
Because of the non-negativity of X1 and the fact that X1 is not identically 0, we obtain
E[X1] = μ1 ≠ 0 and E[Xn] = α^(n−1)·μ1.
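As an illustration (not part of the original text), the following Python sketch simulates the inter-arrival times of a quasi-renewal process whose first inter-arrival time is exponentially distributed; for α > 1 the expected error-free times grow geometrically, mimicking reliability growth:

import random

def quasi_renewal_times(n, alpha, mean_x1, seed=1):
    # X_i is distributed as alpha**(i-1) * X_1, with X_1 exponential.
    rng = random.Random(seed)
    return [(alpha ** i) * rng.expovariate(1.0 / mean_x1) for i in range(n)]

for i, x in enumerate(quasi_renewal_times(5, alpha=1.3, mean_x1=10.0), start=1):
    print(f"X{i} = {x:.2f}")   # E[X_i] = 1.3**(i-1) * 10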
Proposition-1:
The shape parameters of Xn are the same for n = 1, 2, 3, … for a quasi-renewal process if X1 follows the gamma, Weibull, or lognormal distribution. This means that after “renewal”, the shape parameter of the inter-arrival time will not change. In software reliability, the assumption that the software debugging process does not change the error-free distribution type seems reasonable. Thus, the error-free times of software during the debugging phase modeled by a quasi-renewal process will have the same shape parameters. In this sense, a quasi-renewal process is suitable to model software reliability growth. It is worthwhile to note that
lim (n → ∞) E[Xn] = lim (n → ∞) α^(n−1)·μ1 = ∞ for α > 1.
Therefore, if the inter-arrival time represents the error-free time of a software system, then the average error-free time approaches infinity as the debugging process continues for a long time.
Proposition-2:
The first inter-arrival distribution of a quasi-renewal process uniquely determines its renewal
function. If the inter-arrival time represents the error-free time (time to first failure), a quasi-renewal
process can be used to model reliability growth for both software and hardware.
Suppose that all faults of the software have the same chance of being detected. If the inter-arrival time of a quasi-renewal process represents the error-free time of a software system, then the expected number of software faults in the time interval [0, t] can be defined by the renewal function, m(t), with parameter α. Denote by N̄(t) the number of remaining software faults at time t; it follows that
N̄(t) = m(Tc) − m(t),
where m(Tc) is the number of faults that will eventually be detected through a software lifecycle Tc.
1. Coutinho Model –
Coutinho adapted the Duane growth model to represent the software testing process. Coutinho plotted the cumulative number of deficiencies discovered and the number of correction actions made versus the cumulative testing weeks on log-log paper. Let N(t) denote the cumulative number of failures and let t be the total testing time. The failure rate, λ(t), of the model can be expressed as
λ(t) = N(t)/t = β0·t^(−β1),
where β0 and β1 are the model parameters. The least squares method can be used to estimate the parameters of this model.
2. Wall and Ferguson Model –
Wall and Ferguson proposed a model similar to the Weibull growth model:
m(t) = m0·[b(t)]^β,
where m0 and β are the unknown parameters. The function b(t) can be obtained as the number of test cases or total testing time. Similarly, the failure rate function at time t is given by
λ(t) = m′(t) = m0·β·b′(t)·[b(t)]^(β−1).
Wall and Ferguson tested this model using several sets of software failure data and observed that the failure data correlate well with the model.
Assumptions:
The assumptions in this model include the following:
1. The program contains N initial faults, where N is an unknown but fixed constant.
2. Each fault in the program is independent and equally likely to cause a failure during a test.
3. Time intervals between occurrences of failure are independent of each other.
4. Whenever a failure occurs, a corresponding fault is removed with certainty.
5. The fault that causes a failure is assumed to be instantaneously removed, and no new faults
are inserted during the removal of the detected fault.
6. The software failure rate during a failure interval is constant and is proportional to the
number of faults remaining in the program.
The program failure rate at the ith failure interval is given by
λ(ti) = φ·[N − (i − 1)], i = 1, 2, …, N
where
φ = a proportional constant, the contribution any one fault makes to the overall program failure rate
N = the number of initial faults in the program.
The initial failure intensity is therefore φ·N, and after the first failure, the failure intensity decreases to φ·(N − 1), and so on. The probability density function (pdf) of ti is
f(ti) = φ·[N − (i − 1)]·e^(−φ·[N − (i − 1)]·ti).
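A small Python sketch of the failure rate formula above (the values of N and φ are illustrative, not estimates from real data):

def failure_rate(i, n_faults, phi):
    # Failure rate during the i-th failure interval: phi * (N - (i - 1)).
    return phi * (n_faults - (i - 1))

N, phi = 100, 0.02
for i in (1, 2, 3, 100):
    print(f"interval {i}: lambda = {failure_rate(i, N, phi):.2f}")
# The rate starts at phi*N = 2.00 and drops by phi after each fault is removed.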
1. All faults in a program are mutually independent from the failure detection point of view.
2. The number of failures detected at any time is proportional to the current number of faults in the program. This means that the probability of a fault actually occurring, i.e., being detected, is constant.
3. The isolated faults are removed prior to future test occasions.
4. Each time a software failure occurs, the software error which caused it is immediately
removed, and no new errors are introduced.
The cumulative number of detected faults is described by the mean value function m(t), which satisfies the differential equation
dm(t)/dt = b·[a − m(t)] … (1)
where a is the expected total number of faults that exist in the software before testing and b is the failure detection rate or the failure intensity of a fault.
Theorem:
The mean value function solution of the differential equation (1) is given by
m(t) = a·(1 − e^(−b·t)).
For Type-I data (cumulative numbers of failures y1, …, yn observed at times t1, …, tn), the estimates â and b̂ of parameters a and b of the Goel-Okumoto model using the MLE method can be obtained by solving the following equations simultaneously:
â = yn/(1 − e^(−b̂·tn))
yn·tn·e^(−b̂·tn)/(1 − e^(−b̂·tn)) = Σ(i=1..n) (yi − y(i−1))·(ti·e^(−b̂·ti) − t(i−1)·e^(−b̂·t(i−1)))/(e^(−b̂·t(i−1)) − e^(−b̂·ti))
Similarly, for Type-II data (observed failure times s1 ≤ s2 ≤ … ≤ sn), the estimates of a and b using the MLE method can be obtained by solving the following equations:
â = n/(1 − e^(−b̂·sn))
n/b̂ = Σ(i=1..n) si + â·sn·e^(−b̂·sn)
Let â and b̂ be the MLE of parameters a and b, respectively. We can then obtain the MLE of the mean value function (MVF) and the reliability function as follows:
m̂(t) = â·(1 − e^(−b̂·t))
R̂(x | t) = e^(−[m̂(t + x) − m̂(t)])
It is of interest to determine the variability of the number of failures at time t, N(t). One can approximately obtain the confidence intervals for N(t) based on the Poisson distribution as
m̂(t) ± z(α/2)·√(m̂(t)).
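Once â and b̂ are known, the fitted quantities above are straightforward to compute. A minimal Python sketch (the parameter values are illustrative, not estimates from real data):

import math

def mvf(t, a, b):
    # Goel-Okumoto mean value function: m(t) = a * (1 - exp(-b*t)).
    return a * (1.0 - math.exp(-b * t))

def reliability(x, t, a, b):
    # R(x | t) = exp(-(m(t+x) - m(t))): probability of no failure in (t, t+x].
    return math.exp(-(mvf(t + x, a, b) - mvf(t, a, b)))

def poisson_ci(t, a, b, z=1.96):
    # Approximate 95% confidence interval for N(t): m(t) +/- z*sqrt(m(t)).
    m = mvf(t, a, b)
    return m - z * math.sqrt(m), m + z * math.sqrt(m)

a_hat, b_hat = 142.3, 0.05
print(mvf(20.0, a_hat, b_hat))              # expected faults detected by t = 20
print(reliability(5.0, 20.0, a_hat, b_hat)) # chance of surviving 5 more units
print(poisson_ci(20.0, a_hat, b_hat))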
Mills’ Error Seeding Model
Mills’ error seeding model proposed an error seeding method to estimate the number of errors in a program by introducing seeded errors into the program. From the debugging data, which consist of inherent errors and induced errors, the unknown number of inherent errors can be estimated. If both inherent errors and induced errors are equally likely to be detected, then the probability of k induced errors among the r removed errors follows a hypergeometric distribution, given by
P(k) = C(n1, k)·C(N, r − k) / C(N + n1, r), k = 0, 1, …, r
(here C(n, k) denotes the binomial coefficient “n choose k”)
where
N = total number of inherent errors
n1 = total number of induced errors
r = total number of errors removed during debugging
k = total number of induced errors in r removed errors
r – k = total number of inherent errors in r removed errors
The maximum likelihood estimate of the number of inherent errors is then
N̂ = n1·(r − k)/k.
Drawbacks:
1. It is expensive to conduct the testing of the software, and at the same time it increases the testing effort.
2. This method was also criticized for its inability to determine the type, location, and difficulty level of the induced errors such that they would be detected with the same likelihood as the inherent errors.
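A short Python sketch of the two formulas above, with illustrative numbers (20 seeded errors, 25 errors removed in total, 10 of them seeded):

from math import comb

def prob_k_seeded(N, n1, r, k):
    # Hypergeometric probability of k seeded errors among r removed,
    # given N inherent and n1 seeded errors in the program.
    return comb(n1, k) * comb(N, r - k) / comb(N + n1, r)

def estimate_inherent(n1, r, k):
    # Mills-style estimate of inherent errors: N_hat = n1*(r - k)/k.
    return n1 * (r - k) / k

print(estimate_inherent(n1=20, r=25, k=10))    # -> 30.0 inherent errors
print(prob_k_seeded(N=30, n1=20, r=25, k=10))  # P(k = 10) if N were 30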
Another realistic method for estimating the residual errors in a program is based on two independent groups of programmers testing the program for errors using independent sets of test cases. Suppose that out of a total number of N initial errors, the first programmer detects n1 errors (and does not remove them at all) and the second independently detects r errors from the same program.
Assume that k common errors are found by both programmers. If all errors have an equal chance of being detected, then the fraction the first programmer detects (k) of a randomly selected subset of errors (e.g., the r errors found by the second programmer) should equal the fraction the first programmer detects (n1) of the total number of initial errors N. In other words,
k/r = n1/N, so that N̂ = n1·r/k.
The probability of exactly N initial errors with k common errors in r detected errors by the second programmer can be obtained using a hypergeometric distribution as follows:
P(N; k) = C(n1, k)·C(N − n1, r − k) / C(N, r).
Forward error recovery aims to identify the error and, based on this knowledge, correct the system
state containing the error. Exception handling in high-level languages, such as Ada and PL/1,
provides a system structure that supports forward recovery. Backward error recovery corrects the
system state by restoring the system to a state which occurred prior to the manifestation of the fault.
The recovery block scheme provides such a system structure. Another fault-tolerant software
technique commonly used is error masking. The NVP scheme uses several independently developed
versions of an algorithm. A final voting system is applied to the results of these N-versions and a
correct result is generated.
A fundamental way of improving the reliability of software systems depends on the principle of
design diversity where different versions of the functions are implemented. In order to prevent
software failure caused by unpredicted conditions, different programs (alternative programs) are
developed separately, preferably based on different programming logic, algorithm, computer
language, etc. This diversity is normally applied in the form of recovery blocks or N-version programming.
Fault-tolerant software assures system reliability by using protective redundancy at the software
level. There are two basic techniques for obtaining fault-tolerant software: the recovery block (RB) scheme and N-version programming (NVP). Both schemes are based on software redundancy, assuming that the events of coincidental software failures are rare.
A recovery block consists of a primary module, alternate modules, and an acceptance test T, and can be written as:
ensure T
by P1
else by P2
…
else by Pn
else error
The alternate modules are identified by the keywords “else by”. When all alternate modules are exhausted, the recovery block itself is considered to have failed and the final keywords “else error” declare the fact. In other words, when all modules execute and none produce acceptable outputs, the system fails. A reliability optimization model has been studied by Pham (1989b) to determine the optimal number of modules in a recovery block scheme that minimizes the total system cost given the reliability of the individual modules.
where
ei = the probability of failure for version Pi
t1i = the probability that acceptance test i judges an incorrect result as correct
t2i = the probability that acceptance test i judges a correct result as incorrect.
The first term of the above equation corresponds to the case when all versions fail the acceptance test. The second term corresponds to the probability that acceptance test i judges an incorrect result as correct at the ith trial of the n versions.
seq
  par
    P1 (version 1)
    P2 (version 2)
    …
    Pn (version n)
  decision V
Assume that a correct result is expected when there are at least two correct results.
The first term of this equation is the probability that all versions fail. The second term is the
probability that only one version is correct. The third term, d, is the probability that there are
at least two correct results but the decision algorithm fails to deliver the correct result.
It is worthwhile to note that the goal of the NVP approach is to ensure that multiple versions
will be unlikely to fail on the same inputs. With each version independently developed by a
different programming team, design approach, etc., the goal is that the versions will be
different enough in order that they will not fail too often on the same inputs. However,
multiversion programming is still a controversial topic.
The main difference between the recovery block scheme and the N-version programming is that the
modules are executed sequentially in the former. The recovery block generally is not applicable to
critical systems where real-time response is of great concern.
Software maintenance is a broad activity that includes tasks to:
Correct faults.
Improve the design.
Implement enhancements.
Interface with other systems.
Accommodate programs so that different hardware, software, system features, and
telecommunications facilities can be used.
Migrate legacy software.
Retire software.
Four types of software maintenance are generally distinguished:
1. Corrective maintenance:
Corrective maintenance of a software product may be essential either to rectify some bugs
observed while the system is in use, or to enhance the performance of the system.
2. Adaptive maintenance:
This includes modifications and updates when the customers need the product to run on new platforms or new operating systems, or when they need the product to interface with new hardware or software.
3. Perfective maintenance:
A software product needs maintenance to support the new features that the users want or to
change different types of functionalities of the system according to the customer demands.
4. Preventive maintenance:
This type of maintenance includes modifications and updates to prevent future problems in the software. It aims to address problems which are not significant at the moment but may cause serious issues in the future.
Reverse Engineering –
Reverse engineering is the process of extracting knowledge or design information from anything man-made and reproducing it based on the extracted information. It is also called back engineering.
Uses of Software Reverse Engineering –
Software reverse engineering is used in software design; it enables the developer or programmer to add new features to existing software, with or without knowing the source code.
Reverse engineering is also useful in software testing; it helps testers study virus and other malware code.
Software Requirements
Software Engineering | Requirements Engineering Process
Requirements engineering is the process of defining, documenting and maintaining requirements. It is the process of gathering and defining the services provided by the system. The requirements engineering process consists of the following main activities:
Requirements elicitation
Requirements specification
Requirements verification and validation
Requirements management
Requirements Elicitation:
It is related to the various ways used to gain knowledge about the project domain and requirements.
The various sources of domain knowledge include customers, business manuals, existing software of the same type, standards and other stakeholders of the project.
The techniques used for requirements elicitation include interviews, brainstorming, task analysis,
Delphi technique, prototyping, etc. Some of these are discussed here. Elicitation does not produce
formal models of the requirements understood. Instead, it widens the knowledge domain of the
analyst and thus helps in providing input to the next stage.
Requirements specification:
This activity is used to produce formal software requirement models. All the requirements including
the functional as well as the non-functional requirements and the constraints are specified by these
models in totality. During specification, more knowledge about the problem may be required which
can again trigger the elicitation process.
The models used at this stage include ER diagrams, data flow diagrams (DFDs), function decomposition diagrams (FDDs), data dictionaries, etc.
Requirements verification and validation:
Verification ensures that the software implements each function correctly, while validation ensures that the specified requirements reflect what the customer actually needs. If the requirements are not validated, errors in the requirement definitions will propagate to the successive stages, resulting in a lot of modification and rework.
The main steps for this process include:
The requirements should be consistent with all the other requirements, i.e., no two requirements should conflict with each other.
The requirements should be complete in every sense.
The requirements should be practically achievable.
Reviews, buddy checks, making test cases, etc. are some of the methods used for this.
Requirements management:
Requirements management is the process of analyzing, documenting, tracking, prioritizing and agreeing on requirements, and controlling communication with the relevant stakeholders. This stage takes care of the changing nature of requirements. It should be ensured that the SRS is as modifiable as possible, so as to incorporate changes in requirements specified by the end users at later stages too. Being able to modify the software as per requirements in a systematic and controlled manner is an extremely important part of the requirements engineering process.
Software requirements are generally classified into three categories:
Functional requirements
Non-functional requirements
Domain requirements
Functional Requirements: These are the requirements that the end user specifically demands as
basic facilities that the system should offer. All these functionalities need to be necessarily
incorporated into the system as a part of the contract. These are represented or stated in the form of
input to be given to the system, the operation performed and the output expected. They are basically
the requirements stated by the user which one can see directly in the final product, unlike the non-
functional requirements.
For example, in a hospital management system, a doctor should be able to retrieve the information of
his patients. Each high-level functional requirement may involve several interactions or dialogues
between the system and the outside world. In order to accurately describe the functional
requirements, all scenarios must be enumerated.
There are many ways of expressing functional requirements, e.g., natural language, a structured or formatted language with no rigorous syntax, and a formal specification language with proper syntax.
Non-functional requirements: These are basically the quality constraints that the system must
satisfy according to the project contract. The priority or extent to which these factors are
implemented varies from one project to other. They are also called non-behavioral requirements.
They basically deal with issues like:
Portability
Security
Maintainability
Reliability
Scalability
Performance
Reusability
Flexibility
Non-functional requirements can also be classified as constraints:
Interface constraints
Performance constraints: response time, security, storage space, etc.
Operating constraints
Life cycle constraints: maintainability, portability, etc.
Economic constraints
The process of specifying non-functional requirements requires the knowledge of the functionality of
the system, as well as the knowledge of the context within which the system will operate.
Domain requirements: Domain requirements are the requirements which are characteristic of a
particular category or domain of projects. The basic functions that a system of a specific domain
must necessarily exhibit come under this category. For instance, in an academic software that
maintains records of a school or college, the functionality of being able to access the list of faculty
and list of students of each grade is a domain requirement. These requirements are therefore
identified from that domain model and are not user specific.
Why SRS?
In order to fully understand one’s project, it is very important to come up with an SRS listing out the requirements, how they are going to be met, and how the project will be completed. It helps the team save time, as everyone is able to comprehend how to go about the project. Doing this also enables the team to find out about the limitations and risks early on.
Project Plan: MeetUrMate
1. Introduction
This document lays out a project plan for the development of the “MeetUrMate” open source repository system by Anurag Mishra.
The intended readers of this document are current and future developers working on
“MeetUrMate” and the sponsors of the project. The plan will include, but is not restricted to,
a summary of the system functionality, the scope of the project from the perspective of the
“MeetUrMate” team (me and my mentors), scheduling and delivery estimates, project risks
and how those risks will be mitigated, the process by which I will develop the project, and
metrics and measurements that will be recorded throughout the project.
2. Overview
In today’s world, owing to the heavy workload on employees, they have a huge amount of stress in their lives. Even with the presence of so many gadgets around them, they are not able to relieve their stress. I aim to develop an application that would enable them to share things they like and meet people who have the same passions as theirs. For example, if someone wants to share their art, they can share it through the platform; if someone wants to sing a song, they can record it and share it. They can also share videos (with some funny commentary in the background), share mysteries which other people can solve, or post any question. Through my platform, I’ll enable them to meet people who share common interests and passions, chat with them and have some fun.
2.1 Customers
Everyone. Anyone can use this application ranging from a child to an old-age person.
2.2 Functionality
Users should be able to register through their already existing accounts.
They should be able to share snaps/videos/songs.
People should be able to like and comment on any post. One person can follow another person who shares common interests and likings, which would enable them to find mates apart from their usual friend circle.
Each user can have his/her own profile picture and status.
People can post mysteries, and other people can solve them.
Users will get points for the popularity of their posts/the number of mysteries they solve.
Users can add their own funny commentary on any video.
Users can post any questions regarding their interests, and people can answer.
2.3 Platform
It will be launched both as a Web-based application and Mobile app for Android.
2.4 Development Responsibility
I, Anurag Mishra, would be developing the software, and I am responsible for the creation of the database and all the other related work.
3. Requirements
Users should be able to register through their already existing accounts.
They should be able to share snaps/videos/songs.
People should be able to like and comment on any post.
One person can follow another person who shares common interests and likings, which would enable them to find mates apart from their usual friend circle.
Each user can have his/her own profile picture and status.
People can post mysteries, and other people can solve them.
Users will get points for the popularity of their posts/the number of mysteries they solve.
4. Deliverables
Feature specification
Product design
Test plan
Development document
Source code
5. Risk Management
1) People are already using Facebook to find friends. So, what would be the real cause that would motivate them to join my application?
Even though most of the users would already be using Facebook, our platform would still offer them many things that are not there on Facebook. For example:
1. They don’t meet people who share common interests and passions as much. Our application would enable them to meet people (apart from usual friends) who share common interests and passions on a more frequent basis.
2. Users of Facebook cannot share songs on‐the‐go which they have sung, whereas on our app they can do that on‐the‐go.
3. People can post mysteries/cases and other people can solve them. Moreover, people will get points in case they solve the mysteries, or on the basis of the popularity of their posts.
4. More importantly, people need not register for my application; instead, they can log in using their already existing Google/Facebook accounts.
Thus, I think that there is a considerable amount of difference between
Facebook/Instagram/Twitter and my application and it would attract many people.
6. Milestones
Milestone | Description | Release Date | Release Iteration
M1 | (Front‐end development) | — | —
M2 | Database for my application (Back‐end) | October 17, 2015 | R1
M3 | Integrating views and designs (Integrating front‐end and back‐end) | November 12, 2015 | R1
M5 | Issue tracker, user reviews, web design integration | December 1, 2015 | R2
7. Technical Process
The following are the languages I would use to develop my application within the stipulated time period:
Software Engineering | Characteristics of a good SRS
1. Correctness:
User review is used to ensure the correctness of requirements stated in the SRS. SRS is said to be
correct if it covers all the requirements that are actually expected from the system.
2. Completeness:
Completeness of SRS indicates every sense of completion including the numbering of all the pages,
resolving the to be determined parts to as much extent as possible as well as covering all the
functional and non‐functional requirements properly.
3. Consistency:
Requirements in SRS are said to be consistent if there are no conflicts between any set of
requirements. Examples of conflict include differences in terminologies used at separate places,
logical conflicts like time period of report generation, etc.
4. Unambiguousness:
An SRS is said to be unambiguous if all the requirements stated have only one interpretation. Some of the ways to prevent ambiguity include the use of modelling techniques like ER diagrams, proper reviews and buddy checks, etc.
5. Ranking for importance and stability:
There should be a criterion to classify the requirements as less or more important, or more specifically as desirable or essential. An identifier mark can be used with every requirement to indicate its rank or stability.
6. Modifiability:
SRS should be made as modifiable as possible and should be capable of easily accepting changes to
the system to some extent. Modifications should be properly indexed and cross‐referenced.
7. Verifiability:
An SRS is verifiable if there exists a specific technique to quantifiably measure the extent to which
every requirement is met by the system. For example, a requirement stating that the system must
be user‐friendly is not verifiable and listing such requirements should be avoided.
8. Traceability:
One should be able to trace a requirement to a design component and then to a code segment in
the program. Similarly, one should be able to trace a requirement to the corresponding test cases.
9. Testability:
An SRS should be written in such a way that it is easy to generate test cases and test plans from the document.
10. Design Independence:
There should be an option to choose from multiple design alternatives for the final system. More specifically, the SRS should not include any implementation details.
11. Understandable by the customer:
An end user may be an expert in his/her specific domain but might not be an expert in computer
science. Hence, the use of formal notations and symbols should be avoided to as much extent as
possible. The language should be kept easy and clear.
12. Right level of abstraction:
If the SRS is written for the requirements phase, the details should be explained explicitly. Whereas,
for a feasibility study, fewer details can be used. Hence, the level of abstraction varies according to
the purpose of the SRS.
There are a number of requirements elicitation methods. A few of them are listed below –
1. Interviews
2. Brainstorming Sessions
3. Facilitated Application Specification Technique (FAST)
4. Quality Function Deployment (QFD)
5. Use Case Approach
The success of an elicitation technique used depends on the maturity of the analyst, developers, users
and the customer involved.
1. Interviews:
The objective of conducting an interview is to understand the customer’s expectations from the software. It is impossible to interview every stakeholder; hence representatives from groups are selected based on their expertise and credibility. Interviews may be open-ended or structured:
1. In open-ended interviews there is no pre‐set agenda. Context-free questions may be asked to understand the problem.
2. In structured interviews, an agenda of fairly open questions is prepared. Sometimes a proper questionnaire is designed for the interview.
2. Brainstorming Sessions:
It is a group technique
It is intended to generate lots of new ideas hence providing a platform to share views
A highly trained facilitator is required to handle group bias and group conflicts.
Every idea is documented so that everyone can see it.
Finally a document is prepared which consists of the list of requirements and their priority if
possible.
3. Facilitated Application Specification Technique (FAST):
Its objective is to bridge the expectation gap – the difference between what the developers think they are supposed to build and what customers think they are going to get.
A team-oriented approach is developed for requirements gathering.
Each attendee is asked to make a list of objects that are:
1. Part of the environment that surrounds the system
2. Produced by the system
3. Used by the system
Each participant prepares his/her list, different lists are then combined, redundant entries are
eliminated, team is divided into smaller sub-teams to develop mini-specifications and finally a draft
of specifications is written down using all the inputs from the meeting.
4. Quality Function Deployment:
In this technique customer satisfaction is of prime concern; hence it emphasizes the requirements which are valuable to the customer.
Three types of requirements are identified –
Normal requirements – In this, the objectives and goals of the proposed software are discussed with the customer. Example – normal requirements for a result management system may be entry of marks, calculation of results, etc.
Expected requirements – These requirements are so obvious that the customer need not explicitly state them. Example – protection from unauthorised access.
Exciting requirements – These include features that are beyond the customer’s expectations and prove to be very satisfying when present. Example – when unauthorised access is detected, the system should back up and shut down all processes.
1. Identify all the stakeholders, e.g., users, developers, customers, etc.
2. List out all requirements from the customer.
3. A value indicating the degree of importance is assigned to each requirement.
4. In the end, the final list of requirements is categorised as –
o It is possible to achieve
o It should be deferred and the reason for it
o It is impossible to achieve and should be dropped off
5. Use Case Approach:
This technique combines text and pictures to provide a better understanding of the requirements. Use cases describe the ‘what’ of a system, not the ‘how’; hence they only give a functional view of the system.
The components of use case design include three major things – actors, use cases, and the use case diagram.
1. Actor – An actor is an external agent that lies outside the system but interacts with it in some way. An actor may be a person, machine, etc. It is represented as a stick figure. Actors can be primary actors or secondary actors.
o Primary actors – require assistance from the system to achieve a goal.
o Secondary actors – actors from which the system needs assistance.
2. Use cases – They describe the sequence of interactions between actors and the system. They capture who (actors) does what (interaction) with the system. A complete set of use cases specifies all possible ways to use the system.
3. Use case diagram – A use case diagram graphically represents what happens when an actor interacts with a system. It captures the functional aspect of the system.
o A stick figure is used to represent an actor.
o An oval is used to represent a use case.
o A line is used to represent a relationship between an actor and a use case.
For more information on use case diagram, refer to – Designing Use Cases for a Project
Software Engineering | Challenges in eliciting requirements
Prerequisite – Requirements Elicitation
Eliciting requirements is the first step of Requirement Engineering process. It helps the analyst to
gain knowledge about the problem domain which in turn is used to produce a formal specification of
the software. There are a number of issues and challenges encountered during this process. Some of
them are as follows:
1. Understanding large and complex system requirements is difficult –
The word ‘large’ represents two aspects:
o (i) Large constraints, in terms of security, etc., due to a large number of users.
o (ii) A large number of functions to be implemented.
Complex system requirements are those which are unclear and difficult to implement.
2. Undefined system boundaries –
There might be no defined set of implementation requirements. The customer may go on to include
several unrelated and unnecessary functions besides the important ones, resulting in an extremely
large implementation cost which may exceed the decided budget.
3. Customers/stakeholders are not clear about their needs –
Sometimes the customers themselves may be unsure about the exhaustive list of functionalities they wish to see in the software. This might happen when they have a very basic idea about their needs but haven’t planned much about the implementation part.
4. Conflicting requirements –
There is a possibility that two different stakeholders of the project express demands which contradict each other’s implementation. A single stakeholder might also sometimes express two incompatible requirements.
5. Changing requirements is another issue –
In case of successive interviews or reviews from the customer, there is a possibility that the
customer expresses a change in the initial set of specified requirements. While it is easy to
accommodate some of the requirements, it is often difficult to deal with such changing
requirements.
6. Partitioning the system suitably to reduce complexity –
The projects can sometimes be broken down into small modules or functionalities which are then
handled by separate teams. Often, more complex and large projects require more partitioning. It
needs to be ensured that the partitions are non‐overlapping and independent of each other.
7. Validating and Tracing requirements –
Cross‐checking the listed requirements before starting the implementation part is very important.
Also, there should be forward as well as backward traceability. For example, all the entity names should be the same everywhere, i.e., there shouldn’t be a case where ‘STUDENT’ and ‘STUDENTS’ are used at separate places to refer to the same entity.
8. Identifying critical requirements –
Identifying the set of requirements which have to be implemented at any cost is very important. The
requirements should be prioritized so that crucial ones can be implemented first with the highest
priority.
9. Resolving the “to be determined” part of the requirements –
The TBD set of requirements include those requirements which are yet to be resolved in the future.
The number of such requirements should be kept as low as possible.
10. Proper documentation, proper meeting time and budget constraints –
Ensuring a proper documentation is an inherent challenge, especially in case of changing
requirements. The time and budget constraints too need to be handled carefully and systematically.
Software Testing and Debugging:
Software Engineering | Seven Principles of software testing
Software testing is a process of executing a program with the aim of finding errors. To make our software perform well, it should be error-free. If testing is done successfully, it will remove many of the errors from the software.
Testing shows the presence of defects: The goal of software testing is to make the software fail. Software testing reduces the presence of defects, but it can only demonstrate that defects are present; it cannot prove their absence. Testing can show that defects exist, but it cannot prove that the software is defect-free. Even multiple rounds of testing can never ensure that software is 100% bug-free. Testing can reduce the number of defects, but it cannot remove them all.
Exhaustive testing is not possible: Testing the functionality of a software product with all possible inputs (valid or invalid) and pre-conditions is known as exhaustive testing. Exhaustive testing is impossible: the software can never be tested with every possible test case. We can run only some test cases and assume that the software is correct and will produce the correct output for every input. Testing the software against every possible test case would take excessive cost and effort, which is impractical.
Early testing: To find defects in the software, test activities should be started early. A defect detected in the early phases of the SDLC is far less expensive to fix. For better performance of the software, testing should start at the initial phase, i.e., at the requirement analysis phase.
Defect clustering: In a project, a small number of modules can contain most of the defects. The Pareto Principle applied to software testing states that 80% of software defects come from 20% of modules.
Pesticide paradox: Repeating the same test cases again and again will not find new bugs. So
it is necessary to review the test cases and add or update test cases to find new bugs.
Testing is context dependent: The testing approach depends on the context of the software being developed. Different types of software need different types of testing. For example, the testing of an e-commerce site is different from the testing of an Android application.
Absence of errors fallacy: If a software product is 99% bug-free but does not follow the user requirements, it is unusable. It is not only necessary that the software be largely bug-free; it is also mandatory that it fulfill all the customer requirements.
Software Engineering | Testing Guidelines
There are certain testing guidelines that should be followed while testing the software:
The development team should avoid testing the software: Testing should always be performed by the testing team. The developers should never test the software themselves: after spending several hours building it, they may unconsciously become too protective of it, which can prevent them from seeing flaws in the system. The testers should have a destructive approach towards the product. Developers can perform unit testing and integration testing, but overall software testing should be done by the testing team.
Software can never be 100% bug-free: Testing can never prove the software to be 100% bug-free. In other words, there is no way to prove that the software is free of errors, even after running a large number of test cases.
Start as early as possible: Testing should always start in parallel with requirement analysis. This is crucial in order to avoid the problem of defect migration. It is important to determine the test objects and scope as early as possible.
Prioritize sections: If there are certain critical sections, then it should be ensured that these
sections are tested with the highest priority and as early as possible.
The time available is limited: Testing time for software is limited. It must be kept in mind
that the time available for testing is not unlimited and that an effective test plan is very
crucial before starting the process of testing. There should be some criteria to decide when to
terminate the process of testing. This criterion needs to be decided beforehand. For instance,
when the system is left with an acceptable level of risk or according to timelines or budget
constraints.
Testing must be done with unexpected and negative inputs: Testing should be done with
correct data and test cases as well as with flawed test cases to make sure the system is leak
proof. Test cases must be well documented to ensure future reuse for testing at later stages.
This means that the test cases must be enlisted with proper definitions and descriptions of
inputs passed and respective outputs expected. Testing should be done for functional as well
as the non-functional requirements of the software product.
Inspecting test results properly: Quantitative assessment of tests and their results must be
done. The documentation should be referred to properly while validating the results of the
test cases to ensure proper testing. Testing must be supported by automated tools and
techniques as much as possible. Besides ensuring that the system does what all it is supposed
to do, testers also need to ensure that the system does not perform operations which it isn’t
supposed to do.
Validating assumptions: Test cases should never be developed on the basis of unvalidated assumptions or hypotheses; they must always be validated properly. For instance, assuming that the software product is free from bugs while designing test cases may result in extremely weak test cases.
Software Engineering | Types of Software Testing
Introduction:‐
Testing is a process of executing a program with the aim of finding errors. To make our software perform well, it should be error-free. If testing is done successfully, it will remove many of the errors from the software.
Types of Testing:‐
1. Unit Testing
It focuses on the smallest unit of software design. In this, we test an individual unit or a group of interrelated units. It is often done by the programmer, using sample inputs and observing the corresponding outputs.
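A minimal sketch of a unit test, assuming Python’s built-in unittest module and a trivial add() function invented for the example:

import unittest

def add(x, y):
    # Unit under test.
    return x + y

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()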
2. Integration Testing
The objective is to take unit-tested components and build a program structure that has been dictated by the design. Integration testing is testing in which a group of components is combined to produce output.
Integration testing is of four types: (i) Top-down (ii) Bottom-up (iii) Sandwich (iv) Big-Bang
Note: white box testing, by contrast, is used for verification; the focus is on the internal mechanism, i.e., on how the output is achieved.
3. Regression Testing
Every time a new module is added, it leads to changes in the program. This type of testing makes sure that the whole component works properly even after new components are added to the complete program.
4. Smoke Testing
This test is done to make sure that the software under testing is ready or stable for further testing.
It is called a smoke test because the initial testing pass checks whether the software “caught fire or smoked” when initially switched on.
5. Alpha Testing
This is a type of validation testing. It is a type of acceptance testing which is done before the product
is released to customers. It is typically done by QA people.
6. Beta Testing
The beta test is conducted at one or more customer sites by the end-user of the software. This version
is released for the limited number of users for testing in real time environment
7. System Testing
In system testing, the software is tested to ensure that it works correctly on different operating systems. It is covered under the black box testing technique: we just focus on the required inputs and outputs without focusing on internal working.
System testing includes security testing, recovery testing, stress testing and performance testing, and covers functional as well as non-functional testing.
8. Stress Testing
In stress testing, we subject the system to unfavorable conditions and check how it performs under those conditions.
9. Performance Testing
It is designed to test the run-time performance of software within the context of an integrated system. It is used to test the speed and effectiveness of a program.
Black box testing focuses on the required inputs and outputs of the software without considering its internal workings. Some commonly used black box test case design techniques are:
1. Syntax Driven Testing – This type of testing is applied to systems that can be syntactically represented by some language – for example, compilers, or languages that can be represented by a context-free grammar. In this, the test cases are generated so that each grammar rule is used at least once.
2. Equivalence partitioning – It is often seen that many types of inputs work similarly, so instead of testing all of them separately we can group them together and test only one input of each group. The idea is to partition the input domain of the system into a number of equivalence classes such that each member of a class works in a similar way, i.e., if a test case in one class results in some error, the other members of the class would result in the same error.
1. Identification of equivalence classes – Partition any input domain into at least two sets: valid values and invalid values. For example, if the valid range is 0 to 100, then select one valid input like 49 and one invalid input like 104.
2. Generating test cases –
(i) Assign a unique identification number to each valid and invalid class of input.
(ii) Write test cases covering all valid and invalid classes, ensuring that no two invalid inputs mask each other.
To calculate the square root of a number, the equivalence classes will be:
(a) Valid inputs:
o A whole number which is a perfect square – the output will be an integer.
o A whole number which is not a perfect square – the output will be a decimal number.
o Positive decimals.
(b) Invalid inputs:
o Negative numbers (integer or decimal).
o Characters other than numbers, like “a”, “!”, “;”, etc.
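A sketch of how one representative input per equivalence class might be exercised, assuming a hypothetical safe_sqrt() function that returns None for invalid input:

import math

def safe_sqrt(value):
    # Hypothetical unit under test: rejects non-numeric or negative input.
    if isinstance(value, bool) or not isinstance(value, (int, float)) or value < 0:
        return None
    return math.sqrt(value)

# One representative per equivalence class listed above.
cases = [
    ("perfect square",     49,   7.0),
    ("non-perfect square", 50,   math.sqrt(50)),
    ("positive decimal",   2.25, 1.5),
    ("negative number",    -4,   None),
    ("non-numeric input",  "a",  None),
]
for name, inp, expected in cases:
    assert safe_sqrt(inp) == expected, name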
3. Boundary value analysis – Boundaries are very good places for errors to occur. Hence, if test cases are designed for the boundary values of the input domain, the efficiency of testing improves and the probability of finding errors also increases. For example, if the valid range is 10 to 100, then test 10 and 100 as well, apart from other valid and invalid inputs.
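For the 10-to-100 range above, a boundary-value sketch might test just below, at, and just above each boundary (a common convention; the range checker is invented for the example):

def boundary_values(lo, hi):
    # Inputs around each boundary of the valid [lo, hi] range.
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def is_valid(x, lo=10, hi=100):
    # Unit under test: simple range check.
    return lo <= x <= hi

for x in boundary_values(10, 100):
    print(x, is_valid(x))  # 9 False, 10 True, 11 True, 99 True, 100 True, 101 False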
4. Cause effect Graphing – This technique establishes a relationship between logical inputs, called causes, and the corresponding actions, called effects. The causes and effects are represented using Boolean graphs. The following steps are followed:
1. Identify inputs (causes) and outputs (effect).
2. Develop cause effect graph.
3. Transform the graph into decision table.
4. Convert decision table rules to test cases.
The graph can then be converted into a decision table, in which each column corresponds to a rule that becomes a test case for testing. With two causes there will be 4 test cases.
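A sketch of the last two steps with a made-up rule (the effect fires only when both causes hold); enumerating every combination of the two causes yields the four rules, i.e., the four test cases:

from itertools import product

def effect(c1, c2):
    # Hypothetical rule taken from a decision table.
    return c1 and c2

# Each column of the decision table (one combination of causes) is a test case.
for c1, c2 in product([True, False], repeat=2):
    print(f"C1={c1!s:5} C2={c2!s:5} -> effect={effect(c1, c2)}")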
5. Requirement based testing – It includes validating the requirements given in SRS of software
system.
6. Compatibility testing – The test case results depend not only on the product but also on the infrastructure delivering its functionality. When the infrastructure parameters are changed, the software is still expected to work properly. Some parameters that generally affect the compatibility of software are:
1. Processor (Pentium 3,Pentium 4) and number of processors.
2. Architecture and characteristic of machine (32 bit or 64 bit).
3. Back‐end components such as database servers.
4. Operating System (Windows, Linux, etc).
White box testing techniques analyze the internal structure of the software: the data structures used,
the internal design, the code structure and the working of the software, rather than just the
functionality as in black box testing. It is also called glass box testing, clear box testing or structural
testing.
Input: Requirements, functional specifications, design documents, source code.
Processing: Performing risk analysis to guide the entire process.
Proper test planning: Designing test cases so as to cover the entire code. Execute and rinse-repeat
until error-free software is reached; the results are also communicated.
Output: Preparing the final report of the entire testing process.
Testing techniques:
Statement coverage: In this technique, the aim is to traverse all statements at least once. Hence,
each line of code is tested. In the case of a flowchart, every node must be traversed at least once. Since
all lines of code are covered, it helps in pointing out faulty code (a small sketch contrasting statement
and branch coverage follows the next technique).
Branch coverage: In this technique, test cases are designed so that each branch from every decision
point is traversed at least once. In a flowchart, all edges must be traversed at least once.
In the flowchart example of the original figure, 4 test cases are required so that all branches of all
decisions, i.e., all edges of the flowchart, are covered.
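A minimal sketch contrasting the two techniques (the classify function and inputs are illustrative assumptions): one test can execute every statement of the function below, yet branch coverage needs a second test that takes the FALSE edge of the decision.

def classify(x):
    # One decision point with two branches (the FALSE edge skips the body).
    label = "non-negative"
    if x < 0:
        label = "negative"
    return label

# Statement coverage: x = -1 alone executes every statement.
assert classify(-1) == "negative"

# Branch coverage additionally requires the FALSE branch of the decision.
assert classify(1) == "non-negative"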
Condition coverage: In this technique, all individual conditions must be covered, as shown in the
following example:
1. READ X, Y
2. IF(X == 0 || Y == 0)
3. PRINT ‘0’
In this example, there are 2 conditions: X == 0 and Y == 0. Test cases are designed so that each
condition takes both TRUE and FALSE as its value. One possible set would be:
o #TC1 – X = 0, Y = 55
o #TC2 – X = 5, Y = 0
Multiple condition coverage: In this technique, all possible combinations of the outcomes of the
conditions are tested at least once. Let us consider the same example:
1. READ X, Y
2. IF(X == 0 || Y == 0)
3. PRINT ‘0’
o #TC1: X = 0, Y = 0
o #TC2: X = 0, Y = 5
o #TC3: X = 55, Y = 0
o #TC4: X = 55, Y = 5
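The four combinations above can be written as runnable checks; in this hedged sketch the PRINT behavior is wrapped in an assumed helper function:

def zero_check(x, y):
    # Assumed helper mirroring the pseudocode: returns '0' when X == 0 or Y == 0.
    return "0" if (x == 0 or y == 0) else ""

assert zero_check(0, 0) == "0"     # TC1: both conditions TRUE
assert zero_check(0, 5) == "0"     # TC2: first TRUE, second FALSE
assert zero_check(55, 0) == "0"    # TC3: first FALSE, second TRUE
assert zero_check(55, 5) == ""     # TC4: both conditions FALSE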
Basis path testing: In this technique, a control flow graph is made from the code or flowchart, and
then the cyclomatic complexity is calculated. The cyclomatic complexity defines the number of
independent paths, so that a minimal number of test cases can be designed, one for each independent
path.
Steps:
1. Make the corresponding control flow graph
2. Calculate the cyclomatic complexity
3. Find the independent paths
4. Design test cases corresponding to each independent path
Flow graph notation: It is a directed graph consisting of nodes and edges. Each node
represents a sequence of statements or a decision point. A predicate node is one that
represents a decision point containing a condition, after which the graph splits. Regions are
bounded by nodes and edges.
The cyclomatic complexity V(G) can be calculated in three equivalent ways:
V(G) = P + 1, where P is the number of predicate nodes in the flow graph
V(G) = E – N + 2, where E is the number of edges and N is the total number of nodes
V(G) = the number of non-overlapping regions in the graph
Example (independent paths through the flow graph of the original figure):
o #P1: 1 – 2 – 4 – 7 – 8
o #P2: 1 – 2 – 3 – 5 – 7 – 8
o #P3: 1 – 2 – 3 – 6 – 7 – 8
o #P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
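As a hedged check, the flow graph below is reconstructed from the four listed paths (the original figure is not reproduced, so this reconstruction is an inference); both counting formulas then give V(G) = 4, matching the four independent paths:

# Edges inferred from paths #P1..#P4 above (an inference, not the original figure).
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (3, 6), (4, 7),
         (5, 7), (6, 7), (7, 1), (7, 8)]
nodes = {n for e in edges for n in e}

E, N = len(edges), len(nodes)      # E = 10 edges, N = 8 nodes
P = 3                              # predicate nodes: 2, 3 and 7
print("E - N + 2 =", E - N + 2)    # prints 4
print("P + 1     =", P + 1)        # prints 4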
Loop testing: Loops are widely used and are fundamental to many algorithms; hence, their testing is
very important. Errors often occur at the beginnings and ends of loops.
1. Simple loops: For simple loops of size n, test cases are designed that:
Skip the loop entirely
Make only one pass through the loop
Make 2 passes
Make m passes, where m < n
Make n-1 and n+1 passes
2. Nested loops: For nested loops, all the loops are set to their minimum count and we start
from the innermost loop. Simple loop tests are conducted for the innermost loop, and this is
worked outwards till all the loops have been tested.
3. Concatenated loops: Independent loops, one after another. Simple loop tests are applied to
each. If they are not independent, treat them like nested loops.
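A brief sketch of the simple-loop cases for an assumed loop of size n = 5 (the summing function is a hypothetical example):

def total(values):
    # Function containing the simple loop under test.
    s = 0
    for v in values:
        s += v
    return s

n = 5  # assumed loop size for this illustration
# Pass counts from the list above: 0, 1, 2, m < n, n-1, n and n+1 iterations.
for passes in (0, 1, 2, 3, n - 1, n, n + 1):
    assert total([1] * passes) == passes, f"loop failed at {passes} passes"
print("simple loop tests pass")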
Advantages:
1. White box testing is very thorough, as the entire code and structure are tested.
2. It results in optimization of the code, removing errors, and helps in removing extra lines of code.
3. It can start at an earlier stage, as it does not require any interface, unlike black box testing.
4. It is easy to automate.
Disadvantages:
1. The main disadvantage is that it is very expensive.
2. Redesigning or rewriting code requires the test cases to be written again.
3. Testers are required to have in-depth knowledge of the code and programming language, as opposed
to black box testing.
4. Missing functionality cannot be detected, as only the code that exists is tested.
5. It is very complex and at times not realistic.
The debugging process typically involves:
Problem identification and report preparation.
Assigning the report to a software engineer to verify that the defect is genuine.
Defect analysis using modeling, documentation, and finding and testing candidate flaws.
Defect resolution by making the required changes to the system.
Validation of the corrections.
Debugging Strategies:
1. Study the system for a longer duration in order to understand it. This helps the debugger
construct different representations of the system being debugged, depending on the need. The system
is also studied actively to find recent changes made to the software.
2. Backward analysis of the problem, which involves tracing the program backward from the location
of the failure message in order to identify the region of faulty code. A detailed study of the region is
conducted to find the cause of the defect.
3. Forward analysis of the program, which involves tracing the program forward using breakpoints or
print statements at different points in the program and studying the results. The region where wrong
outputs are obtained is the region to focus on to find the defect (see the short sketch after this list).
4. Using past experience of debugging software with problems similar in nature. The success of this
approach depends on the expertise of the debugger.
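A minimal sketch of forward analysis with print statements (the buggy averaging function is a hypothetical example): the probes narrow the defect to the region between the last correct output and the first wrong one.

def average(values):
    # Hypothetical bug: divides by len(values) + 1 instead of len(values).
    s = sum(values)
    print("checkpoint 1: sum =", s)    # probe 1: prints 12 (correct)
    n = len(values) + 1                # <-- the defect is here
    print("checkpoint 2: count =", n)  # probe 2: prints 4 (wrong)
    return s / n

average([2, 4, 6])
# The faulty region lies between the two probes, so the search is narrowed.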
Debugging Tools:
A debugging tool is a computer program that is used to test and debug other programs. A lot of
public-domain software like gdb and dbx is available for debugging; these offer console-based
command-line interfaces. Examples of automated debugging tools include code-based tracers,
profilers, interpreters, etc.
Some of the widely used debuggers are:
Radare2
WinDbg
Valgrind
Difference Between Debugging and Testing:
Debugging is different from testing. Testing focuses on finding bugs, errors, etc., whereas debugging
starts after a bug has been identified in the software. Testing is used to ensure that the program is
correct and does what it is supposed to do with a certain minimum success rate. Testing can be manual
or automated. There are several different types of testing, like unit testing, integration testing, alpha
and beta testing, etc.
Debugging requires a lot of knowledge, skill and expertise. It can be supported by some available
automated tools but is more of a manual process, as every bug is different and requires a different
technique, unlike a pre-defined testing mechanism.
History: Selenium was developed by Jason Huggins in 2004 at ThoughtWorks. While working on an
internal web application at ThoughtWorks, he noticed that instead of testing his application manually,
he could automate the testing. He developed a JavaScript program to test his web application,
allowing him to rerun tests automatically. He called his program “JavaScriptTestRunner”. After some
time this tool was open-sourced and renamed Selenium Core.
Selenium Remote Control was developed by Paul Hammant. Testers using Selenium Core had to
install the whole application under test and the web server on their local computers because of
restrictions imposed by the same-origin policy. To overcome this restriction, Paul Hammant developed
a server that acts as an HTTP proxy to trick the web browser into thinking that Selenium Core and the
web application being tested come from the same domain.
Selenium IDE was developed by Shinya Kasatani of Japan. It was implemented as a Firefox
add-on/plugin, and Selenium IDE can now be used on other web browsers as well. He gave Selenium
IDE to the Selenium project in 2006.
Selenium Grid was developed by Philippe Hanrigou in 2008. It is a server that allows tests to use
web browser instances running on remote machines. It provides the ability to run tests on remote web
browsers, which helps divide the testing load across multiple machines and saves enormous time. It
allows executing tests in parallel across different platforms and operating systems. Grid provided, as
open source, a capability similar to the private Google cloud for Selenium RC. Pat Lightbody had
already made a private cloud system named “HostedQA” and sold it to Gomez, Inc.
Selenium WebDriver was developed by Simon Stewart in 2006. WebDriver automates and controls
the actions initiated by the web browser. It does not rely on JavaScript for automation; it controls the
browser directly by communicating with it. It was the first cross-platform testing framework that
could control the browser from the OS level.
In 2009, after a meeting, the whole Selenium team decided to merge the two projects, Selenium RC
and WebDriver, and call the result Selenium 2.0.
Pros:
It is an open-source tool.
Provides a base for extensions.
It provides multi-browser support.
No programming language experience is required while using Selenium IDE.
The user can set breakpoints and debug.
It provides record and playback functions.
Cons:
Selenium RC: RC stands for Remote Control. It allows programmers to code in different
programming languages like C#, Java, Perl, PHP, Python, Ruby, Scala and Groovy. The original
figure shows how the Remote Control server works.
Pros:
Cons:
Selenium WebDriver: Selenium WebDriver automates and controls the actions initiated by the web
browser. It does not rely on JavaScript for automation; it controls the browser directly by
communicating with it. The original figure shows how WebDriver works as an interface between
drivers and bindings.
Pros:
Cons:
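As a hedged illustration of driving a browser directly (this assumes the Python Selenium bindings, Selenium 4+, and a locally available ChromeDriver; the URL and locator are example choices):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()              # communicates with the browser directly
try:
    driver.get("https://example.com")
    heading = driver.find_element(By.TAG_NAME, "h1")
    print(heading.text)                  # e.g. "Example Domain"
finally:
    driver.quit()                        # always release the browser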
Selenium Grid: Basically, it is a server that allows tests to use web browser instances running on
remote machines. It provides the ability to run tests on remote web browsers, which helps divide the
testing load across multiple machines and saves enormous time. It allows executing tests in parallel
across different platforms and operating systems.
Selenium Grid is a network of a hub and nodes. Each node registers with the hub with a certain
configuration, so the hub is aware of the browsers available on each node. When a request for a
specific browser (with a desired-capabilities object) comes to the hub and the hub finds a match for
the requested web browser, it redirects the call to that particular grid node; a session is then
established bi-directionally and execution starts, as the sketch below illustrates.
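A minimal sketch of requesting a remote browser session from a grid hub (assumes the Python Selenium bindings; the hub address is the conventional local default and an assumption here):

from selenium import webdriver

options = webdriver.ChromeOptions()      # describes the desired browser
driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",
    options=options,                     # the hub matches this to a node
)
try:
    driver.get("https://example.com")
    print(driver.title)
finally:
    driver.quit()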
In big bang integration testing, it is difficult to localize an error, as the error may potentially belong
to any of the modules being integrated. So, debugging errors reported during big bang integration
testing is very expensive.
Advantages:
It is convenient for small systems.
Disadvantages:
There will be quite a lot of delay, because you have to wait for all the modules to be integrated.
High-risk critical modules are not isolated and tested on priority, since all modules are tested at once.
Advantages:
In bottom‐up testing, no stubs are required.
A principal advantage of this integration testing is that several disjoint subsystems can be tested
simultaneously.
Disadvantages:
Driver modules must be produced.
Testing becomes complex when the system is made up of a large number of small subsystems.
Advantages:
Modules are debugged separately.
Few or no drivers are needed.
It is more stable and accurate at the aggregate level.
Disadvantages:
Needs many stubs.
Modules at lower levels are tested inadequately.
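Since the lists above turn on the notions of stubs and drivers, here is a hedged sketch of what each placeholder looks like (the module names are hypothetical):

def tax_rate_stub(income):
    # Stub: canned answer standing in for an unfinished lower-level module
    # (the extra code top-down testing must produce).
    return 0.1

def compute_net(income, tax_rate_fn):
    # Upper-level module under test, wired to the stub above.
    return income - income * tax_rate_fn(income)

def driver_for_compute_net():
    # Driver: throwaway caller that exercises a module directly
    # (the extra code bottom-up testing must produce).
    for income, expected in [(0, 0.0), (1000, 900.0)]:
        assert compute_net(income, tax_rate_stub) == expected
    print("driver tests pass")

driver_for_compute_net()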
Advantages:
The mixed approach is useful for very large projects having several sub-projects.
The sandwich approach overcomes the shortcomings of both the top-down and bottom-up approaches.
Disadvantages:
Mixed integration testing is very costly, because one part follows the top-down approach while
another part follows the bottom-up approach.
It cannot be used for smaller systems with high interdependence between the modules.