New Manual Notes
Software testing is a process of executing a program or application with the intent of finding software defects (bugs). It can also be stated as the process of validating and verifying that a software program, application or product meets the business and technical requirements that guided its design and development, works as expected, and can be implemented with the same characteristics.
Software Quality:
Quality software is reasonably bug- or defect-free, delivered on time and within budget, and meets the requirements and expectations of the customer.
What to Test?
First, test what’s important. Focus on the core functionality—the parts that are critical or popular
—before looking at the ‘nice to have’ features. Concentrate on the application’s capabilities in
common usage situations before going on to unlikely situations. For example, if the application retrieves data and performance is important, test reasonable queries with a normal load on the server before going on to unlikely ones at peak usage times. It’s worth saying again: focus on
what’s important. Good business requirements will tell you what’s important.
What is Waterfall model? What are the advantages, disadvantages, and when to use Waterfall
model?
The Waterfall Model is the first Process Model to be introduced. It is also referred to as a linear
sequential life cycle model. It is very simple to understand and use. In a waterfall model,
each phase must be completed fully before the next phase can begin. At the end of each phase,
a review takes place to determine if the project is on the right path and whether or not to
continue or discard the project. In waterfall model phases do not overlap.
Diagram of Waterfall-model:
What is Incremental model? What are the advantages, disadvantages, and when to use
Incremental model?
In the incremental model, the whole requirement is divided into various builds. Multiple development cycles take place here, making the life cycle a “multi-waterfall” cycle. Cycles are divided up into smaller, more easily managed modules. Each module passes through the requirements, design, implementation and testing phases. A working version of the software is produced during the first module, so you have working software early on in the software life cycle. Each subsequent release of the module adds functionality to the previous release. The process continues until the complete system is achieved.
What is V-model? What are the advantages, disadvantages, and when to use V-model?
V- Model means Verification and Validation model. Just like the waterfall model, the V-Shaped life
cycle is a sequential path of execution of processes. Each phase must be completed before the
next phase begins. Testing of the product is planned in parallel with a corresponding phase of
development.
Diagram of V-model:
Requirements:
Requirement documents like the BRS and SRS begin the life cycle, just as in the waterfall model. But in this model, before development is started, a system test plan is created. The test plan focuses on meeting the functionality specified during requirements gathering.
The high-level design (HLD):
This phase focuses on system architecture and design. It provides an overview of the solution, platform, system, product and service/process. An integration test plan is also created in this phase, in order to test the ability of the pieces of the software system to work together.
The low-level design (LLD):
This phase is where the actual software components are designed. It defines the actual logic for each and every component of the system. The class diagram, with all the methods and the relations between classes, comes under the LLD. Component tests are also created in this phase.
The implementation:
This phase is, again, where all coding takes place. Once coding is complete, the path of execution continues up the right side of the V, where the test plans developed earlier are now put to use.
Coding:
This is at the bottom of the V-Shape model. Module design is converted into code by developers.
Advantages of V-model:
Simple and easy to use.
Testing activities like planning and test design happen well before coding. This saves a lot of time, hence a higher chance of success than the waterfall model.
Proactive defect tracking – that is defects are found at early stage.
Avoids the downward flow of the defects.
Works well for small projects where requirements are easily understood.
Disadvantages of V-model:
Very rigid and least flexible.
Software is developed during the implementation phase, so no early prototypes of the
software are produced.
If any changes happen midway, then the test documents along with the requirement documents have to be updated.
When to use the V-model:
The V-shaped model should be used for small to medium sized projects where requirements
are clearly defined and fixed.
The V-Shaped model should be chosen when ample technical resources are available with
needed technical expertise.
High confidence of customer is required for choosing the V-Shaped model approach. Since,
no prototypes are produced, there is a very high risk involved in meeting customer
expectations.
The spiral model is similar to the incremental model, with more emphasis placed on
risk analysis.
Psychology of testing
Comparison of the mindset of the tester and developer:
Testing and reviewing an application require a different mindset from analyzing and developing it. By this we mean that when we are building or developing an application, we are working positively to solve problems during the development process and to make the product meet the user specification. However, while testing or reviewing a product we are looking for the defects or failures in the product. Thus building the software requires a different mindset from testing the software.
The balance between self-testing and independent testing:
The comparison made on the mindset of the tester and the developer in the above article is just
to compare the two different perspectives. It does not mean that the tester cannot be the
programmer, or that the programmer cannot be the tester, although they often are separate
roles.
This degree of independence avoids author bias and is often more effective at finding defects
and failures.
There are several levels of independence in software testing which is listed here from the lowest
level of independence to the highest:
i. Tests by the person who wrote the item.
ii. Tests by another person within the same team, like another programmer.
iii. Tests by the person from some different group such as an independent test team.
iv. Tests by a person from a different organization or company, such as outsourced testing or
certification by an external body.
Clear and courteous communication and feedback on defects between tester and developer:
We all make mistakes, and we sometimes get annoyed, upset or depressed when someone points them out. So when, as testers, we run a test that is good from our viewpoint because it found defects and failures in the software, we need to be very careful about how we react to and report those defects and failures to the programmers. We are pleased because we found a good bug, but how will the requirement analyst, the designer, the developer, the project manager and the customer react?
The people who build the application may react defensively and take this reported defect as
personal criticism.
The project manager may be annoyed with everyone for holding up the project.
The customer may lose confidence in the product because he can see defects.
Independent testing:
There are several levels of independence which is listed here from the lowest level of
independence to the highest:
i. Tests by the person who wrote the item.
ii. Tests by another person within the same team, like another programmer.
iii. Tests by a person from a different group, such as an independent test team.
iv. Tests by a person from a different organization or company, such as outsourced testing or
certification by an external body.
Component testing:
Component testing is also known as module or program testing. It finds defects in the modules and verifies the functioning of the software.
Component testing is done by the tester.
Component testing may be done in isolation from the rest of the system, depending on the development life cycle model chosen for that particular application.
Integration testing:
Integration testing tests the integration or interfaces between components, the interactions with different parts of the system such as the operating system, file system and hardware, and the interfaces between systems.
Integration testing is done by a specific integration tester or test team.
Big Bang integration testing:
In Big Bang integration testing all components or modules are integrated
simultaneously, after which everything is tested as a whole.
Big Bang testing has the advantage that everything is finished before integration
testing starts.
The major disadvantage is that in general it is time consuming and difficult to trace the
cause of failures because of this late integration.
Incremental testing:
At the other extreme, all programs are integrated one by one, and a test is carried out after each step.
Top down:
Testing takes place from top to bottom, following the control flow or architectural structure (e.g.
starting from the GUI or main menu). Components or systems are substituted by stubs.
Bottom up:
Testing takes place from the bottom of the control flow upwards. Components or systems are
substituted by drivers.
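To make the idea of stubs and drivers concrete, here is a small illustrative sketch in Python. The payment gateway and tax calculation names are hypothetical, invented only for this example; a stub stands in for a component that is not built yet, while a driver calls the component under test directly.

```python
# Hypothetical example: top-down integration uses a stub for a missing component,
# bottom-up integration uses a driver that calls the component under test.

class PaymentGatewayStub:
    """Stub: stands in for the real payment gateway, which is not built yet."""
    def charge(self, amount):
        # Always succeed so that the higher-level order logic can be tested.
        return {"status": "approved", "amount": amount}

def place_order(amount, gateway):
    """Higher-level component under test in a top-down approach."""
    result = gateway.charge(amount)
    return result["status"] == "approved"

def calculate_tax(amount, rate=0.18):
    """Low-level component under test in a bottom-up approach."""
    return round(amount * rate, 2)

def driver_for_tax_module():
    """Driver: calls the low-level tax module directly."""
    assert calculate_tax(100) == 18.0   # expected value is an invented assumption

# Top-down check using the stub
assert place_order(50, PaymentGatewayStub()) is True
# Bottom-up check using the driver
driver_for_tax_module()
```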
Functional incremental:
Integration and testing takes place on the basis of the functions and functionalities, as
documented in the functional specification.
System testing
In system testing the behavior of whole system/product is tested as defined by the
scope of the development project or product.
It may include tests based on risks and/or requirement specifications, business process,
use cases, or other high level descriptions of system behavior, interactions with the
operating systems, and system resources.
System testing is most often the final test to verify that the system to be delivered
meets the specification and its purpose.
System testing is carried out by specialist testers or independent testers.
System testing should investigate both the functional and non-functional requirements of
the system.
Acceptance testing
After system testing, when all or most defects have been corrected, the system will be delivered to
the user or customer for acceptance testing.
Acceptance testing is basically done by the user or customer although other
stakeholders may be involved as well.
The goal of acceptance testing is to establish confidence in the system.
Acceptance testing is most often focused on a validation type testing.
Acceptance testing may occur at more than just a single level, for example:
The types of acceptance testing are:
The User Acceptance test: focuses mainly on the functionality, thereby validating the fitness-for-use of the system by the business user. The user acceptance test is performed by the users and application managers.
The Operational Acceptance test: also known as the production acceptance test, it validates whether the system meets the requirements for operation. In most organizations the operational acceptance test is performed by the system administrator before the system is released. The operational acceptance test may include testing of
backup/restore, disaster recovery, maintenance tasks and periodic check of security
vulnerabilities.
Contract Acceptance testing: It is performed against the contract’s acceptance
criteria for producing custom developed software. Acceptance should be formally defined
when the contract is agreed.
Compliance acceptance testing: also known as regulation acceptance testing, it is performed against the regulations which must be adhered to, such as governmental, legal or safety regulations.
Beta testing
It is also known as field testing. It takes place at the customer’s site. The system is sent to users who install it and use it under real-world working conditions.
A beta test is the second phase of software testing in which a sampling of the intended
audience tries the product out. The goal of beta testing is to place your application in the
hands of real users outside of your own engineering team to discover any flaws or issues
from the user’s perspective that you would not want to have in your final, released
version of the application.
You have the opportunity to get your application into the hands of users prior to releasing it
to the general public.
Users can install, test your application, and send feedback to you during this beta testing
period.
Your beta testers can discover issues with your application that you may have not noticed,
such as confusing application flow, and even crashes.
Using the feedback you get from these users, you can fix problems before it is released to
the general public.
The more issues you fix that solve real user problems, the higher the quality of
your application when you release it to the general public.
Having a higher-quality application when you release to the general public will increase
customer satisfaction.
These users, who are early adopters of your application, will generate excitement about
your application.
Functional testing
In functional testing basically the testing of the functions of component or system is
done. It refers to activities that verify a specific action or function of the code. Functional tests tend to answer questions like “can the user do this?” or “does this particular feature work?”. This is typically described in a requirements specification or in a
functional specification.
The techniques used for functional testing are often specification-based. Testing
functionality can be done from two perspectives:
Requirement-based testing:
In this type of testing the requirements are prioritized depending on the risk criteria and
accordingly the tests are prioritized. This will ensure that the most important and most critical
tests are included in the testing effort.
Business-process-based testing:
In this type of testing the scenarios involved in the day-to-day business use of the system are
described. It uses knowledge of the business processes. For example, a personnel and payroll system may have a business process along the lines of: someone joins the company, the employee is paid on a regular basis, and the employee finally leaves the company.
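As a rough sketch of what a business-process-based test can look like (the PayrollSystem class and its methods are hypothetical, not taken from any real system), the test below follows the join, pay and leave scenario end to end rather than testing each function in isolation:

```python
# Hypothetical sketch of a business-process-based test for a personnel and
# payroll system: join the company -> get paid monthly -> leave the company.

class PayrollSystem:
    def __init__(self):
        self.employees = {}

    def join(self, emp_id, monthly_salary):
        self.employees[emp_id] = {"salary": monthly_salary, "active": True}

    def run_monthly_payroll(self):
        return {e: d["salary"] for e, d in self.employees.items() if d["active"]}

    def leave(self, emp_id):
        self.employees[emp_id]["active"] = False

def test_employee_lifecycle():
    system = PayrollSystem()
    system.join("E001", 3000)
    assert system.run_monthly_payroll() == {"E001": 3000}  # employee is paid
    system.leave("E001")
    assert system.run_monthly_payroll() == {}              # no pay after leaving

test_employee_lifecycle()
```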
Non-functional testing
In non-functional testing the quality characteristics of the component or system are tested. Non-functional refers to aspects of the software that may not be related to a specific function or user action, such as scalability or security (e.g. how many people can log in at once?). Non-functional testing is performed at all levels, just like functional testing.
Non-functional testing includes:
Functionality testing
Reliability testing
Usability testing
Efficiency testing
Maintainability testing
Portability testing
Baseline testing
Compliance testing
Documentation testing
Endurance testing
Load testing
Performance testing
Compatibility testing
Security testing
Scalability testing
Volume testing
Stress testing
Recovery testing
Internationalization testing and Localization testing
Functionality testing:
Functionality testing is performed to verify that a software application performs and functions
correctly according to design specifications. During functionality testing we check the core
application functions, text input, menu functions and installation and setup on localized machines,
etc.
Reliability testing:
Reliability Testing is about exercising an application so that failures are discovered and removed
before the system is deployed. The purpose of reliability testing is to determine product
reliability, and to determine whether the software meets the customer’s reliability requirements.
Usability testing:
In usability testing the tester basically tests the ease with which the user interfaces can be used.
It tests whether the application or the product built is user-friendly or not.
Usability testing includes the following five components:
Learnability:
How easy is it for users to accomplish basic tasks the first time they encounter the design?
Efficiency:
How fast can experienced users accomplish tasks?
Memorability:
When users return to the design after a period of not using it, do they remember enough to use it effectively the next time, or do they have to start over again learning everything?
Errors:
How many errors do users make, how severe are these errors and how easily can they recover
from the errors?
Satisfaction:
How much does the user like using the system?
Efficiency testing:
Efficiency testing tests the amount of code and the testing resources required by a program to perform a particular function. Software test efficiency is the number of test cases executed divided by a unit of time (generally per hour).
Maintainability testing:
It basically measures how easy it is to maintain the system, that is, how easy it is to analyze, change and test the application or product.
Portability testing:
It refers to the process of testing the ease with which a computer software component or application can be moved from one environment to another, e.g. moving an application from Windows 2000 to Windows XP. This is usually measured in terms of the maximum amount of effort permitted. Results are measured in terms of the time required to move the software and complete the accompanying documentation updates.
Baseline testing:
It refers to the validation of documents and specifications on which test cases would be designed.
The requirement specification validation is baseline testing.
Compliance testing:
It is related with the IT standards followed by the company and it is the testing done to find
the deviations from the company prescribed standards.
Documentation testing:
As per the IEEE, this covers documentation describing plans for, or results of, the testing of a system or component. Types include the test case specification, test incident report, test log, test plan, test procedure and test report. The testing of all the above-mentioned documents is known as documentation testing.
Endurance testing:
Endurance testing involves testing a system with a significant load extended over a
significant period of time, to discover how the system behaves under sustained use. For
example, in software testing, a system may behave exactly as expected when tested for 1
hour but when the same system is tested for 3 hours, problems such as memory leaks cause
the system to fail or behave randomly.
Load testing:
A load test is usually conducted to understand the behavior of the application under a
specific expected load. Load testing is performed to determine a system’s behavior under
both normal and at peak conditions. It helps to identify the maximum operating capacity of
an application as well as any bottlenecks and determine which element is causing
degradation. E.g. if the number of users is increased, how much CPU and memory will be consumed, and what are the network and bandwidth response times?
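A minimal sketch of the idea, assuming a placeholder send_request() instead of a real HTTP call (a real load test would normally use a dedicated load-testing tool rather than a handful of Python threads): a fixed number of concurrent “users” is simulated and the response times are recorded.

```python
# Minimal load test sketch: fire a fixed, expected number of concurrent "users"
# at the system and record response times. send_request() is a placeholder.
import time
from concurrent.futures import ThreadPoolExecutor

def send_request():
    start = time.time()
    time.sleep(0.05)          # placeholder for a real call to the system under test
    return time.time() - start

EXPECTED_CONCURRENT_USERS = 20   # the "specific expected load"

with ThreadPoolExecutor(max_workers=EXPECTED_CONCURRENT_USERS) as pool:
    timings = list(pool.map(lambda _: send_request(), range(200)))

print("requests:", len(timings))
print("average response time: %.3fs" % (sum(timings) / len(timings)))
print("worst response time:   %.3fs" % max(timings))
```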
Performance testing:
Performance testing is testing that is performed, to determine how fast some aspect of a
system performs under a particular workload. It can serve different purposes like it can
demonstrate that the system meets performance criteria. It can compare two systems to find
which performs better. Or it can measure what part of the system or workload causes the
system to perform badly.
Compatibility testing:
Compatibility testing is basically the testing of the application or the product built with the
computing environment. It tests whether the application or the software product built is
compatible with the hardware, operating system, database or other system software or not.
Security testing:
Security testing basically checks whether the application or the product is secure or not: can anyone come along tomorrow and hack the system or log in to the application without any authorization? It is a process to determine that an information system protects data and maintains functionality as intended.
Scalability testing:
It is the testing of a software application for measuring its capability to scale up in terms of
any of its non-functional capability like load supported, the number of transactions, the data
volume etc.
Volume testing:
Volume testing refers to testing a software application or the product with a certain amount
of data. E.g., if we want to volume test our application with a specific database size, we need
to expand our database to that size and then test the application’s performance on it.
Stress testing:
It involves testing beyond normal operational capacity, often to a breaking point, in order to
observe the results. It is a form of testing that is used to determine the stability of a given
system. It puts greater emphasis on robustness, availability and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances. The goal of such tests may be to ensure the software does not crash in conditions of insufficient computational resources (such as memory or disk space).
Recovery testing:
Recovery testing is done in order to check how fast and how well the application can recover after it has gone through any type of crash, hardware failure, etc. Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is performed properly. For example, when an application is receiving data over a network, unplug the connecting cable. After some time, plug the cable back in and analyze the application’s ability to continue receiving data from the point at which the network connection was lost. Or restart the system while a browser has a definite number of sessions open and check whether the browser is able to recover all of them or not.
Internationalization testing and Localization testing:
Internationalization is a process of designing a software application so that it can be adapted
to various languages and regions without any changes. Whereas Localization is a process of
adapting internationalized software for a specific region or language by adding local specific
components and translating text.
Confirmation testing or re-testing:
When a test fails because of a defect, that defect is reported, and a new version of the software is expected that has the defect fixed. In this case we need to execute the test again to confirm whether the defect actually got fixed or not. This is known as confirmation testing, and also as re-testing. It is important to ensure that the test is executed in exactly the same way it was the first time, using the same inputs, data and environment.
Hence, when a change is made in order to fix a defect, confirmation testing or re-testing is helpful.
Regression testing
During confirmation testing the defect got fixed and that part of the application started working as intended. But there is a possibility that the fix may have introduced or uncovered a different defect elsewhere in the software. The way to detect these ‘unexpected side-effects’ of fixes is to do regression testing. The purpose of regression testing is to verify that modifications in the software or the environment have not caused any unintended adverse side effects and that the system still meets its requirements. Regression testing is mostly automated, because the same tests are carried out again and again after every fix, and it would be very tedious to do this manually.
Regression tests are executed whenever the software changes, either as a result of fixes or of new or changed functionality.
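Because the same checks are repeated after every fix or change, regression suites lend themselves to automation. A minimal sketch in pytest style follows; the discount() function and its business rule are hypothetical, invented only for this example.

```python
# Hypothetical regression suite (pytest style): the same tests are re-run after
# every fix or change to catch unintended side effects elsewhere in the code.

def discount(price, customer_type):
    """Function under test (hypothetical business rule)."""
    if customer_type == "gold":
        return round(price * 0.90, 2)
    return price

def test_gold_customer_gets_ten_percent_off():
    assert discount(100.0, "gold") == 90.0

def test_regular_customer_pays_full_price():
    assert discount(100.0, "regular") == 100.0

# Re-running this whole file (e.g. with: pytest regression_suite.py) after each
# change is what turns these checks into a regression suite.
```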
Maintenance Testing
Once a system is deployed it is often in service for years, or even decades. During this time the system and its operational environment are frequently corrected, changed or extended. Testing that is carried out during this phase is called maintenance testing. It usually consists of two parts:
First, testing the changes that have been made because of corrections to the system, or because the system has been extended or additional features have been added to it.
Second, regression tests to prove that the rest of the system has not been affected by the maintenance work.
1. Static Techniques
2. Dynamic Techniques
Below is the tree structure of the testing techniques:
Static Testing
Static testing is the testing of software work products, done manually or with a set of tools, but without executing the work products.
It starts early in the life cycle, and so it is done during the verification process.
It does not need a computer, as the testing of a program is done without executing the program.
Examples: reviews, walkthroughs, inspections, etc.
Informal reviews
Informal reviews are applied many times during the early stages of the life cycle of a document. A two-person team can conduct an informal review. In later stages these reviews often involve more people and a meeting. The goal is to help the author and to improve the quality of the document. The most important thing to keep in mind about informal reviews is that they are not documented.
Formal review
Formal reviews follow a formal process. It is well structured and regulated.
A formal review process consists of six main steps:
Planning
Kick-off
Preparation
Review meeting
Rework
Follow-up
1. Planning
The documents should not reveal a large number of major defects.
The documents to be reviewed should be with line numbers.
The documents should be cleaned up by running any automated checks that apply.
The author should feel confident about the quality of the document so that he can join
the review team with that document.
Types of review
The main review types that come under the static testing are mentioned below:
Walkthrough:
It is not a formal process
It is led by the authors
The author guides the participants through the document according to his or her thought process, to achieve a common understanding and to gather feedback.
It is useful for people who are not from the software discipline and who are not used to, or cannot easily understand, the software development process.
It is especially useful for higher-level documents like the requirement specification, etc.
The goals of a walkthrough:
To present the documents both within and outside the software discipline in order to gather
the information regarding the topic under documentation.
To explain or do the knowledge transfer and evaluate the contents of the document
To achieve a common understanding and to gather feedback.
To examine and discuss the validity of the proposed solutions
Technical review:
It is a less formal review
It is led by a trained moderator but can also be led by a technical expert
It is often performed as a peer review without management participation
Defects are found by the experts (such as architects, designers, key users) who focus on the
content of the document.
In practice, technical reviews vary from quite informal to very formal
The goals of the technical review are:
To ensure that, at an early stage, the technical concepts are used correctly
To assess the value of technical concepts and alternatives in the product
To have consistency in the use and representation of technical concepts
To inform participants about the technical content of the document
Inspection:
It is the most formal review type
It is led by the trained moderators
During inspection the documents are prepared and checked thoroughly by the reviewers
before the meeting
It involves peers to examine the product
A separate preparation is carried out during which the product is examined and the defects
are found
The defects found are documented in a logging list or issue log
A formal follow-up is carried out by the moderator applying exit criteria
The goals of inspection are:
It helps the author to improve the quality of the document under inspection
It removes defects efficiently and as early as possible
It improves product quality
It creates a common understanding by exchanging information
Participants learn from the defects found and prevent the occurrence of similar defects
Test design
Basically test design is the act of creating and writing test suites for testing software.
Test analysis and identifying test conditions give us a generic idea for testing which covers quite a large range of possibilities. But when we come to make a test case we need to be very specific; in fact we now need exact and detailed specific inputs. However, just having some values to input to the system is not a test: if you don’t know what the system is supposed to do with the inputs, you will not be able to tell whether your test has passed or failed.
Test cases can be documented as described in the IEEE 829 Standard for Test
Documentation.
Once a given input value has been chosen, the tester needs to determine what the expected result of entering that input would be and document it as part of the test case. Expected results include information displayed on a screen in response to an input. If we don’t decide on the expected results before we run a test, we may notice a problem only when something is wildly wrong, and smaller discrepancies can slip past unnoticed.
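A minimal sketch of a documented test case with its expected result, using Python’s unittest (the monthly_interest() function and its values are hypothetical): the expected value is decided before execution, otherwise the assertion could not be written and pass/fail could not be judged objectively.

```python
import unittest

def monthly_interest(balance, annual_rate):
    """Hypothetical function under test."""
    return round(balance * annual_rate / 12, 2)

class MonthlyInterestTest(unittest.TestCase):
    def test_typical_balance(self):
        # Input chosen by the tester...
        actual = monthly_interest(1200.00, 0.06)
        # ...and the expected result decided *before* running the test.
        self.assertEqual(actual, 6.00)

if __name__ == "__main__":
    unittest.main()
```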
The requirements for a given function or feature have changed. Some of the fields now have
different ranges that can be entered. Which tests were looking at those boundaries? They
now need to be changed. How many tests will actually be affected by this change in the
requirements? These questions can be answered easily if the requirements can easily be
traced to the tests.
A set of tests that has run OK in the past has now started creating serious problems. What
functionality do these tests actually exercise? Traceability between the tests and the
requirement being tested enables the functions or features affected to be identified more
easily.
Before delivering a new release, we want to know whether or not we have tested all of the specified requirements in the requirements specification. We have the list of the tests that have passed – was every requirement tested?
Test implementation
The document that describes the steps to be taken in running a set of tests and specifies the executable order of the tests is called a test procedure in IEEE 829, and is also known as a test script. When the test procedure specification has been prepared and is then put into practice, this is called test implementation. ‘Test script’ is also used to describe the instructions to a test execution tool. An automation script is written in a programming language that the tool can understand. (This is an automated test procedure.)
The tests that are intended to be run manually, rather than using a test execution tool, can be called manual test scripts. The test procedures, or test scripts, are then formed into a test execution schedule that specifies which procedures are to be run first – a kind of superscript.
Writing the test procedure is another opportunity to prioritize the tests, to ensure that the best testing is done in the time available. A good rule of thumb is ‘find the scary stuff first’. However, the definition of what is ‘scary’ depends on the business, system or project, and on the risks of the project.
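A small sketch of this idea, with hypothetical procedure names: the test procedures are collected into an execution schedule and run in priority order, scariest first.

```python
# Hypothetical sketch: test procedures (scripts) collected into a test execution
# schedule - a "superscript" that runs them in priority order ("scary stuff first").

def proc_transfer_exceeds_daily_limit():
    print("running: transfer exceeds daily limit")

def proc_login_with_valid_credentials():
    print("running: login with valid credentials")

def proc_print_monthly_statement():
    print("running: print monthly statement")

# (procedure, risk priority) - lower number = scarier, run first
schedule = [
    (proc_print_monthly_statement, 5),
    (proc_transfer_exceeds_daily_limit, 1),
    (proc_login_with_valid_credentials, 2),
]

for procedure, priority in sorted(schedule, key=lambda item: item[1]):
    procedure()
```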
Static technique
Dynamic technique
Suppose we test only the valid boundary values 1 and 99 and nothing in between. If
both tests pass, this seems to indicate that all the values in between should also work.
However, suppose that one page prints correctly, but 99 pages do not.
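A sketch of boundary value tests for such a 1 to 99 field, assuming a hypothetical is_valid_page_count() check: the classic boundary values are the edges of the valid range plus the values just outside it.

```python
import pytest

def is_valid_page_count(pages):
    """Hypothetical validation rule: the number of pages to print must be 1-99."""
    return 1 <= pages <= 99

@pytest.mark.parametrize("pages, expected", [
    (0, False),    # just below the lower boundary
    (1, True),     # lower boundary
    (2, True),     # just above the lower boundary
    (98, True),    # just below the upper boundary
    (99, True),    # upper boundary
    (100, False),  # just above the upper boundary
])
def test_page_count_boundaries(pages, expected):
    assert is_valid_page_count(pages) == expected
```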
The first task is to identify a suitable function or subsystem which reacts according to a
combination of inputs or events. The system should not contain too many inputs otherwise the
number of combinations will become unmanageable. It is better to deal with large numbers of
conditions by dividing them into subsets and dealing with the subsets one at a time. Once you
have identified the aspects that need to be combined, then you put them into a table listing all
the combinations of true and false for each of the aspects.
Let us consider an example of a loan application, where you can enter the amount of the monthly
repayment or the number of years you want to take to pay it back (the term of the loan). If you
enter both, the system will make a compromise between the two if they conflict. The two
conditions are the loan amount and the term, so we put them in a table (see Table 4.2).
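Since Table 4.2 is not reproduced in these notes, the sketch below lays out the four true/false combinations of the two conditions as a decision table and checks a hypothetical process_loan() routine against the expected action for each rule.

```python
# Decision table for the loan example: two conditions (repayment entered?,
# term entered?) give four rules. process_loan() is a hypothetical routine.

def process_loan(repayment_entered, term_entered):
    if repayment_entered and term_entered:
        return "process based on both (compromise if they conflict)"
    if repayment_entered:
        return "process based on repayment amount"
    if term_entered:
        return "process based on term of loan"
    return "error: nothing entered"

decision_table = [
    # (repayment entered, term entered, expected action)
    (True,  True,  "process based on both (compromise if they conflict)"),
    (True,  False, "process based on repayment amount"),
    (False, True,  "process based on term of loan"),
    (False, False, "error: nothing entered"),
]

for repayment, term, expected in decision_table:
    assert process_loan(repayment, term) == expected
```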
For use case testing, we would have a test for the main success scenario and one test for each extension. In this example, we may give extension 4b a higher priority than 4a from a security
point of view.
System requirements can also be specified as a set of use cases. This approach can make it
easier to involve the users in the requirements gathering and definition process.
As its name implies, exploratory testing is about exploring, finding out about the software,
what it does, what it doesn’t do, what works and what doesn’t work. The tester is constantly
making decisions about what to test next and where to spend the (limited) time. This is an
approach that is most useful when there are no or poor specifications and when time is
severely limited.
Exploratory testing is a hands-on approach in which testers are involved in minimum
planning and maximum test execution.
The planning involves the creation of a test charter, a short declaration of the scope of a
short (1 to 2 hour) time-boxed test effort, the objectives and possible approaches to be
used.
The test design and test execution activities are performed in parallel typically without
formally documenting the test conditions, test cases or test scripts. This does not mean that
other, more formal testing techniques will not be used. For example, the tester may decide
to use boundary value analysis, but will think through and test the most important boundary
values without necessarily writing them down. Some notes will be written during the
exploratory-testing session, so that a report can be produced afterwards.
Test logging is undertaken as test execution is performed, documenting the key aspects of
what is tested, any defects found and any thoughts about possible further testing.
It can also serve to complement other, more formal testing, helping to establish greater
confidence in the software. In this way, exploratory testing can be used as a check on the
formal test process by helping to ensure that the most serious defects have been found.
There is a danger in using a coverage measure: 100% coverage does not mean 100% tested. Coverage techniques measure only one dimension of a multi-dimensional concept. Two different test cases may achieve exactly the same coverage, but the input data of one may find an error that the input data of the other doesn’t.
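A small made-up illustration of this point: both tests below execute exactly the same statements of average(), so they achieve identical coverage, yet only the second one exposes the defect.

```python
# Both tests execute the same single statement of average() - identical coverage
# either way - but only the second test's input data exposes the defect.

def average(values):
    return sum(values) / len(values)        # defect: crashes for an empty list

def test_same_coverage_no_failure():
    assert average([2, 4]) == 3             # covers the statement, passes

def test_same_coverage_finds_defect():
    average([])                             # covers the same statement, raises ZeroDivisionError

# Intended to be run with a test runner such as pytest, which will report the
# second test as a failure even though coverage is the same for both.
```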
Benefit of code coverage measurement:
It helps in creating additional test cases to increase coverage
It helps in finding areas of a program not exercised by a set of test cases
It helps in determining a quantitative measure of code coverage, which indirectly measures
the quality of the application or product.
Drawback of code coverage measurement:
One drawback of code coverage measurement is that it measures coverage of what has been
written, i.e. the code itself; it cannot say anything about the software that has not been
written.
If a specified function has not been implemented, or a function was omitted from the specification, then structure-based techniques cannot say anything about it: they only look at a structure which is already there.
Types of coverage
There are many types of test coverage. Test coverage can be used in any level of the testing.
Test coverage can be measured based on a number of different structural elements in a system
or component. Coverage can be measured at component testing level, integration-testing level or
at system- or acceptance-testing levels. For example, at system or acceptance level, the
coverage items may be requirements, menu options, screens, or typical business transactions. At
integration level, we could measure coverage of interfaces or specific interactions that have been
tested.
First, writing a test plan guides our thinking. It forces us to confront the challenges that await us and focus our thinking on important topics.
Using a template for writing test plans helps us remember the important challenges. You can use the IEEE 829 test plan template shown in this chapter, use someone else’s template, or create your own template over time.
Second, the test planning process and the plan itself serve as the means of communication with other
members of the project team, testers, peers, managers and other stakeholders. This communication
allows the test plan to influence the project team and the project team to influence the test plan,
especially in the areas of organization-wide testing policies and motivations; test scope, objectives and
critical areas to test; project and product risks, resource considerations and constraints; and the
testability of the item under test. We can complete this communication by circulating one or two test plan
drafts and through review meetings. Such a draft will include many notes such as the following:
Third, the test plan helps us to manage change. During the early phases of the project, as we gather more information, we revise our plans. At times it is better to write multiple test plans. For example, when we manage both the integration and system test levels, those two test execution periods occur at different points in time and have different objectives. For some systems projects, a hardware test plan and a software test plan will address different techniques and tools as well as different audiences. However, these test plans can overlap; hence a master test plan that addresses the common elements of both test plans can reduce the amount of redundant documentation.
What is in scope and what is out of scope for this testing effort?
What are the test objectives?
What are the important project and product risks? (details on risks will discuss in Section 5.5).
What constraints affect testing (e.g., budget limitations, hard deadlines, etc.)?
What is most critical for this product and project?
Which aspects of the product are more (or less) testable?
What should be the overall test execution schedule and how should we decide the order in which
to run specific tests? (Product and planning risks, discussed later in this chapter, will influence the
answers to these questions.)
How should the testing work be split into various levels (e.g., component, integration, system and acceptance)?
If that decision has already been made, you need to decide how best to fit the testing work in the level you are responsible for with the testing work done in the other test levels.
During the analysis and design of tests, you’ll want to reduce gaps and overlap between levels
and, during test execution, you’ll want to coordinate between the levels. Such details dealing with
inter-level coordination are often addressed in the master test plan.
In addition to integrating and coordinating between test levels, you should also plan to integrate
and coordinate all the testing work to be done with the rest of the project. For example, what items
must be acquired for the testing?
When will the programmers complete work on the system under test?
What operations support is required for the test environment?
What kind of information must be delivered to the maintenance team at the end of testing?
How many resources are required to carry out the work?
Now, think about what would be true about the project when the project was ready to start executing
tests. What would be true about the project when the project was ready to declare test execution done?
At what point can you safely start a particular test level or phase, test suite or test target? When can you
finish it? The factors to consider in such decisions are often called ‘entry criteria’ and ‘exit criteria.’ For
such criteria, typical factors are:
Acquisition and supply: the availability of staff, tools, systems and other materials required.
Test items: the state that the items to be tested must be in to start and to finish testing.
Defects: the number known to be present, the arrival rate, the number predicted to remain, and
the number resolved.
Tests: the number run, passed, failed, blocked, skipped, and so forth.
Coverage: the portions of the test basis, the software code or both that have been tested and
which have not.
Quality: the status of the important quality characteristics for the system.
Money: the cost of finding the next defect in the current level of testing compared to the cost of
finding it in the next level of testing (or in production).
Risk: the undesirable outcomes that could result from shipping too early (such as latent defects
or untested areas) – or too late (such as loss of market share).
When writing exit criteria, we try to remember that a successful project is a balance of quality, budget,
schedule and feature considerations. This is even more important when applying exit criteria at the end of
the project.
Suppose that you have identified performance as a major area of risk for your product. So,
performance testing is an activity in the test execution phase. You now estimate the tasks involved
with running a performance test, how long those tasks will take and how many times you will need to
run the performance tests.
Now, those tests need to be developed by someone. So, performance test development entails
activities in test analysis, design and implementation. You now estimate the tasks involved in
developing a performance test, such as writing test scripts and creating test data. Typically,
performance tests need to be run in a special test environment that is designed to look like the
production or field environment.
You now estimate tasks involved in acquiring and configuring such a test environment, such as
getting the right hardware, software and tools and setting up hardware, software and tools.
Not everyone knows how to use performance-testing tools or to design performance tests. So,
performance-testing training or staffing is an activity in the test planning phase. Depending on the
approach you intend to take, you now estimate the time required to identify and hire a performance
test professional or to train one or more people in your organization to do the job.
Finally, in many cases a detailed test plan is written for performance testing, due to its
differences from other test types. So, performance-testing planning is an activity in the test planning
phase. You now estimate the time required to draft, review and finalize a performance test plan.
Even the best estimate must be negotiated with management. Negotiating sessions exhibit amazing
variety, depending on the people involved. However, there are some classic negotiating positions. It’s not
unusual for the test leader or manager to try to sell the management team on the value added by the
testing or to alert management to the potential problems that would result from not testing enough. It’s
not unusual for management to look for smart ways to accelerate the schedule or to press for equivalent
coverage in less time or with fewer resources. In between these positions, you and your colleagues can
reach compromise, if the parties are willing. Our experience has been that successful negotiations about
estimates are those where the focus is less on winning and losing and more about figuring out how best to
balance competing pressures in the realms of quality, schedule, budget and features.
The test strategies or approaches you pick will have a major influence on the testing effort. In this section,
let’s look at factors related to the product, the process and the results of testing.
Among product factors, the presence of sufficient project documentation is important so that the testers can figure out what the system is, how it is supposed to work and what correct behavior looks like. This will help us do our job more efficiently.
Rather than see the choice of strategies, particularly the preventive or reactive strategies, as an either/or
situation, we’ll let you in on the worst-kept secret of testing (and many other disciplines): There is no one
best way. We suggest that you adopt whatever test approaches make the most sense in your particular
situation, and feel free to borrow and blend.
How do you know which strategies to pick or blend for the best chance of success? There are
many factors to consider, but let us highlight a few of the most important:
Risks: Risk management is very important during testing, so consider the risks and the level of
risk. For a well-established application that is evolving slowly, regression is an important risk, so
regression-averse strategies make sense. For a new application, a risk analysis may reveal different
risks if you pick a risk-based analytical strategy.
Skills: Consider which skills your testers possess and which they lack, because strategies must not only be chosen, they must also be executed. A standards-compliant strategy is a smart choice when you lack the time and skills in your team to create your own approach.
Objectives: Testing must satisfy the needs and requirements of stakeholders to be successful. If
the objective is to find as many defects as possible with a minimal amount of up-front time and effort
invested – for example, at a typical independent test lab – then a dynamic strategy makes sense.
Regulations: Sometimes you must satisfy not only stakeholders, but also regulators. In this
case, you may need to plan a methodical test strategy that satisfies these regulators that you have
met all their requirements.
Product: Some products, like weapons systems and contract-development software, tend to have well-specified requirements. This leads to synergy with a requirements-based analytical strategy.
Business: Business considerations and business continuity are often important. If you can use a
legacy system as a model for a new system, you can use a model-based strategy.
You must choose testing strategies with an eye towards the factors mentioned earlier, the schedule,
budget, and feature constraints of the project and the realities of the organization and its politics.
We mentioned above that a good team can sometimes triumph over a situation where materials, process
and delaying factors are ranged against its success. However, talented execution of an unwise strategy is
the equivalent of going very fast down a highway in the wrong direction. Therefore, you must make smart
choices in terms of testing strategies.
Give the test team and the test manager feedback on how the testing work is going, allowing
opportunities to guide and improve the testing and the project.
Provide the project team with visibility about the test results.
Measure the status of the testing, test coverage and test items against the exit criteria to
determine whether the test work is done.
Gather data for use in estimating future test efforts.
For small projects, the test leader or a delegated person can gather test progress monitoring information manually using documents, spreadsheets and simple databases. But when working with large teams, distributed projects and long-term test efforts, we find that efficient and consistent data collection requires the use of automated tools.
One way to keep the records of test progress information is by using the IEEE 829 test log template. While
much of the information related to logging events can be usefully captured in a document, we prefer to
capture the test-by-test information in spreadsheets (see Figure 5.1).
Let us take an example as shown in Figure 5.1: columns A and B show the test ID and the test case or test suite name. The state of the test case is shown in column C (‘Warn’ indicates a test that resulted in a minor failure).
Column D shows the tested configuration, where the codes A, B and C correspond to test environments
described in detail in the test plan. Columns E and F show the defect (or bug) ID number (from the
defect-tracking database) and the risk priority number of the defect (ranging from 1, the worst, to 25, the
least risky). Column G shows the initials of the tester who ran the test. Columns H through L capture data
for each test related to dates, effort and duration (in hours). We have metrics for planned and actual
effort and dates completed which would allow us to summarize progress against the planned schedule and
budget. This spreadsheet can also be summarized in terms of the percentage of tests which have been run
and the percentage of tests which have passed and failed.
Figure 5.1 might show a snapshot of test progress during the test execution Period. During the analysis,
design and implementation of the tests, such a worksheet would show the state of the tests in terms of
their state of development.
In addition to test case status, it is also common to monitor test progress during the test execution period
by looking at the number of defects found and fixed. Figure 5.2 shows a graph that plots the total number
of defects opened and closed over the course of the test execution so far. It also shows the planned test
period end date and the planned number of defects that will be found. Ideally, as the project approaches
the planned end date, the total number of defects opened will settle in at the predicted number and the
total number of defects closed will converge with the total number opened. These two outcomes tell us
that we have found enough defects to feel comfortable that we’re done testing, that we have no reason to
think many more defects are lurking in the product, and that all known defects have been resolved.
Charts such as Figure 5.2 can also be used to show failure rates or defect density. When reliability is a key concern, we might be more concerned with the frequency with which failures are observed (the failure rate) than with how many defects are causing the failures (the defect density).
Organizations that are looking to produce ultra-reliable software may plot the number of unresolved defects normalized by the size of the product, either in thousands of source lines of code (KSLOC), function points (FP) or some other metric of code size. Once the number of unresolved defects falls below some predefined threshold – for example, three per million lines of code – then the product may be deemed to have met the defect density exit criterion.
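An illustrative calculation with invented numbers, just to show how such an exit criterion might be evaluated:

```python
# Illustrative calculation of a defect density exit criterion (numbers are made up).
unresolved_defects = 2
size_ksloc = 850                      # product size: 850 thousand source lines of code

defects_per_ksloc = unresolved_defects / size_ksloc
defects_per_million_loc = defects_per_ksloc * 1000

threshold_per_million_loc = 3         # the example threshold from the text
meets_exit_criterion = defects_per_million_loc <= threshold_per_million_loc

print("unresolved defects per million LOC: %.2f" % defects_per_million_loc)
print("exit criterion met:", meets_exit_criterion)
```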
That is why it is said, test progress monitoring techniques vary considerably depending on the preferences
of the testers and stakeholders, the needs and goals of the project, regulatory requirements, time and
money constraints and other factors.
In addition to the kinds of information shown in the IEEE 829 Test Log Template, Figures 5.1 and Figure
5.2, other common metrics for test progress monitoring include:
Test control
Projects do not always unfold as planned. When the actual product differs from what was planned, risks become occurrences, stakeholder needs evolve and the world around us changes; hence it becomes necessary to bring the project back under control.
Test control is about guiding and corrective actions that try to achieve the best possible outcome for the project. The specific guiding actions depend on what we are trying to control. Let us take a few hypothetical examples:
A portion of the software under test will be delivered late but market conditions dictate that we
cannot change the release date. At this point of time test control might involve re-prioritizing the
tests so that we start testing against what is available now.
For cost reasons, performance testing is normally run on weekday evenings during off-hours in
the production environment. Due to unexpected high demand for your products, the company has
temporarily adopted an evening shift that keeps the production environment in use 18 hours a day,
five days a week. In this context test control might involve rescheduling the performance tests for the
weekend.
Configuration management is a topic that is very complex. So, advanced planning is very important to
make this work. During the project planning stage – and perhaps as part of your own test plan – make
sure that configuration management procedures and tools are selected. As the project proceeds, the
configuration process and mechanisms must be implemented, and the key interfaces to the rest of the
development process should be documented.
The level of risk is determined by the likelihood of an adverse event happening and by the impact of its possible negative consequences.
For example, most people are expected to catch a cold in the course of their lives, usually more than once, but the healthy individual suffers no serious consequences. Therefore, the overall level of risk associated with colds is low for this person. On the other hand, the risk of a cold for an elderly person with breathing difficulties would be high, so in his case the overall level of risk associated with a cold is high.
1. Product risk (factors relating to what is produced by the work, i.e. the thing we are testing).
2. Project risk (factors relating to the way the work is carried out, i.e. the test project)
If the software omits some key function that the customers specified, the users required or the stakeholders were promised.
If the software is unreliable and frequently fails to work.
If the software fails in ways that cause financial or other damage to a user or to the company that user works for.
If the software has problems related to a particular quality characteristic, which might not be functionality, but rather security, reliability, usability, maintainability or performance.
Two quick tips about product risk analysis:
First, remember to consider both the likelihood of the risk occurring and the impact of the risk. While you may feel proud of finding lots of defects, testing is also about building confidence in key functions: we need to test the things that probably won’t break but would be very bad if they did.
Second, early risk analyses are often educated guesses. At key project milestones it’s important to ensure that you revisit and follow up on the risk analysis.
Risks such as the late delivery of the test items to the test team or availability issues with the test environment.
There are also indirect risks such as excessive delays in repairing defects found in testing or
problems with getting professional system administration support for the test environment.
For any risk, project risk or product risk we have four typical actions that we can take:
Mitigate: Take steps in advance to reduce the possibility and impact of the risk.
Contingency: Have a plan in place to reduce the impact of the risk should it become an outcome.
Transfer: Convince some other member of the team or another project stakeholder to reduce the probability of the risk or to accept its impact.
Ignore: Ignore the risk, which is usually a good option only when there is little that can be done or when the possibility and impact of the risk are low for the project.
Risk analysis
There are many techniques for analyzing risks. They are:
One technique for risk analysis is a close reading of the requirements specification, design
specifications, user documentation and other items.
Another technique is brainstorming with many of the project stakeholders.
Another is a sequence of one-on-one or small-group sessions with the business and technology
experts in the company.
Some people use all these techniques when they can. To us, a team-based approach that
involves the key stakeholders and experts is preferable to a purely document-based approach, as
team approaches draw on the knowledge, wisdom and insight of the entire team to determine what
to test and how much.
The scales used to rate possibility and impact vary. Some people rate them high, medium and low. Some
use a 1-10 scale. The problem with a 1-10 scale is that it’s often difficult to tell a 2 from a 3 or a 7 from
an 8, unless the differences between each rating are clearly defined. A five-point scale (very high, high,
medium, low and very low) tends to work well.
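One common convention, stated here as an assumption rather than something defined in these notes, is to multiply the two five-point ratings into a single risk priority number; the exact mapping, and whether a high number means high or low risk, varies between organizations.

```python
# Hypothetical sketch: combining five-point likelihood and impact ratings into a
# single risk priority number (here 1 = lowest risk, 25 = highest).
SCALE = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

def risk_priority(likelihood, impact):
    return SCALE[likelihood] * SCALE[impact]

risks = {
    "incorrect interest calculation": ("medium", "very high"),
    "misaligned report footer":       ("high", "very low"),
}

# List the risks, highest priority first, to guide how much testing each gets.
for name, (likelihood, impact) in sorted(
        risks.items(), key=lambda item: risk_priority(*item[1]), reverse=True):
    print(f"{risk_priority(likelihood, impact):>2}  {name}")
```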
Let us also discuss some risks which occur usually along with some options for managing them:
Logistics or product quality problems that block tests: These can be mitigated by careful planning, good defect triage and management, and robust test design.
Test items that won’t install in the test environment: These can be mitigated through
smoke (or acceptance) testing prior to starting test phases or as part of a nightly build or continuous
integration. Having a defined uninstall process is a good contingency plan.
Excessive change to the product that invalidates test results or requires updates to test
cases, expected results and environments: These can be mitigated through good change-control
processes, robust test design and lightweight test documentation. When severe incidents occur,
transference of the risk by escalation to management is often in order.
Insufficient or unrealistic test environments that yield misleading results: One option is
to transfer the risks to management by explaining the limits on test results obtained in limited
environments. Mitigation – sometimes complete alleviation – can be achieved by outsourcing tests
such as performance tests that are particularly sensitive to proper test environments.
Let us also go through some additional risks and perhaps ways to manage them:
The defect detection percentage, which compares field defects with test defects, is an important metric
of the effectiveness of the test process.
Here is an example of a DDP formula that would apply for calculating DDP for the last level of testing prior
to release to the field:
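A commonly used form of this formula (the exact counting window is an assumption here, not taken from these notes) is:
DDP = defects found by the last level of testing / (defects found by the last level of testing + defects found in the field) x 100%
For example, if system testing found 90 defects and users later reported 10 more from the field, DDP = 90 / (90 + 10) = 90%.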
It is not just defects that give rise to failures. Failures can also be caused by other factors:
Environmental conditions such as a radiation burst, a strong magnetic field, an electronic field or
pollution could cause faults in hardware or firmware. Those faults might prevent or change the execution of
software.
Failures may also arise because of human error in interacting with the software, perhaps a wrong input value
being entered or an output being misinterpreted.
Finally, failures may also be caused by someone deliberately trying to cause a failure in the system.
Bug/Defect life cycle states:
New: When a defect is logged and posted for the first time, its state is given as ‘New’.
Assigned: After the tester has posted the bug, the tester’s lead approves that the bug is genuine and
assigns the bug to the corresponding developer and the developer team. Its state is given as ‘Assigned’.
Open: At this state the developer has started analyzing and working on the defect fix.
Fixed: When developer makes necessary code changes and verifies the changes then he/she can make bug
status as ‘Fixed’ and the bug is passed to testing team.
Pending retest: After fixing the defect the developer gives that particular code back to the tester for retesting.
Here the testing is pending on the tester’s end, hence its status is ‘Pending retest’.
Retest: At this stage the tester retests the changed code which the developer has given to him to
check whether the defect got fixed or not.
Verified: The tester tests the bug again after it got fixed by the developer. If the bug is not present in the
software, he approves that the bug is fixed and changes the status to “verified”.
Reopen: If the bug still exists even after the bug is fixed by the developer, the tester changes the status to
“reopened”. The bug goes through the life cycle once again.
Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the
software, he changes the status of the bug to “closed”. This state means that the bug is fixed, tested and
approved.
Duplicate: If the bug is reported twice, or two bugs describe the same issue, then one bug’s
status is changed to “duplicate“.
Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is
changed to “rejected”.
Deferred: A bug changed to the deferred state is expected to be fixed in the next releases. There are many
reasons for changing a bug to this state: the priority of the bug may be low, there may be a lack of time before
the release, or the bug may not have a major effect on the software.
Not a bug: The state is given as “Not a bug” if there is no change in the functionality of the application. For
example: if a customer asks for some change in the look and feel of the application, like a change of colour of
some text, then it is not a bug but just a change in the looks of the application.
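The states above form a small workflow. As an illustration only (the exact set of allowed transitions differs between bug-tracking tools and is an assumption here), the life cycle can be sketched in Python as a transition table:
# Sketch of the defect life cycle as a transition table.
# The allowed transitions are an illustrative assumption based on the states described above.
ALLOWED = {
    "New": {"Assigned", "Rejected", "Duplicate", "Deferred", "Not a bug"},
    "Assigned": {"Open"},
    "Open": {"Fixed", "Rejected", "Deferred", "Not a bug"},
    "Fixed": {"Pending retest"},
    "Pending retest": {"Retest"},
    "Retest": {"Verified", "Reopen"},
    "Reopen": {"Assigned"},
    "Verified": {"Closed"},
}
def change_status(current: str, new: str) -> str:
    # Reject illegal moves, e.g. jumping straight from "New" to "Closed".
    if new not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {new}")
    return new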
Severity and Priority
1) Severity:
It is the extent to which the defect can affect the software. In other words it defines the impact that a given
defect has on the system. For example: if an application or web page crashes when a remote link is clicked,
clicking the remote link by a user is rare, but the impact of the application crashing is severe. So the
severity is high but the priority is low.
Severity can be of following types:
Critical: The defect that results in the termination of the complete system or one or more component of the
system and causes extensive corruption of the data. The failed function is unusable and there is no acceptable
alternative method to achieve the required results then the severity will be stated as critical.
Major: The defect that results in the termination of the complete system or one or more component of the
system and causes extensive corruption of the data. The failed function is unusable but there exists an
acceptable alternative method to achieve the required results then the severity will be stated as major.
Moderate: The defect that does not result in the termination, but causes the system to produce incorrect,
incomplete or inconsistent results then the severity will be stated as moderate.
Minor: The defect that does not result in the termination and does not damage the usability of the system and
the desired results can be easily obtained by working around the defects then the severity is stated as minor.
Cosmetic: If the defect is related to the enhancement of the system, where the changes concern the
look and feel of the application, then the severity is stated as cosmetic.
2) Priority:
Priority defines the order in which we should resolve a defect. Should we fix it now, or can it wait? This priority
status is set by the tester to the developer mentioning the time frame to fix the defect. If high priority is
mentioned then the developer has to fix it at the earliest. The priority status is set based on the customer
requirements. For example: If the company name is misspelled in the home page of the website, then the
priority is high and severity is low to fix it.
Priority can be of following types:
Low: The defect is an irritant which should be repaired, but repair can be deferred until after more serious
defects have been fixed.
Medium: The defect should be resolved in the normal course of development activities. It can wait until a new
build or version is created.
High: The defect must be resolved as soon as possible because the defect is affecting the application or the
product severely. The system cannot be used until the repair has been done.
A few very important scenarios related to severity and priority that are asked about during interviews:
High Priority & High Severity: An error which occurs in the basic functionality of the application and will not
allow the user to use the system. (E.g. in a site maintaining student details, if saving a record fails, then this is a
high priority and high severity bug.)
High Priority & Low Severity:
Spelling mistakes that happen on the cover page, heading or title of an application.
High Severity & Low Priority:
An error which occurs on the functionality of the application (for which there is no workaround) and will not
allow the user to use the system but on click of link which is rarely used by the end user.
Low Priority and Low Severity:
Any cosmetic or spelling issues that are within a paragraph or in the report (not on the cover page, heading or title).
There are some important and critical factors related to this major activity. Let us have a bird’s eye view
of those factors first.
a. Test Cases are prone to regular revision and update:
We live in a continuously changing world, and software is not immune to change either. The same holds good
for requirements, and this directly impacts the test cases. Whenever requirements are altered, TCs
need to be updated. Yet, it is not only changes in requirements that may cause revision and update of
TCs.
During the execution of TCs, many new ideas arise, and many sub-conditions of a single TC cause
updates and even additions of TCs. Moreover, during regression testing several fixes and/or ripple effects
demand revised or new TCs.
b. Test Cases are prone to distribution among the testers who will execute these:
Of course, it is rarely the case that a single tester executes all the TCs. Normally there are several
testers who test different modules of a single application. So the TCs are divided among them according
to the areas of the application under test that they own. Some TCs related to integration of the application may be
executed by multiple testers, while some may be executed only by a single tester.
The clearest area of any application where this behavior can definitely be observed is the
interoperability between different modules of same or even different applications. Simply speaking,
wherever the different modules or applications are interdependent, the same behavior is reflected in the
TCs.
Website Cookie Testing, Test cases for testing web application cookies
What is Cookie?
A cookie is a small piece of information stored in a text file on the user’s hard drive by the web server. This
information is later used by the web browser to retrieve information from that machine. Generally a cookie
contains personalized user data or information that is used to communicate between different web pages.
Why Cookies are used?
Cookies are nothing but a user’s identity and are used to track where the user navigated throughout the
web site pages. The communication between the web browser and web server is stateless.
For example, if you access http://www.example.com/1.html, the web browser will simply
query the example.com web server for the page 1.html. The next time you request
http://www.example.com/2.html, a new request is sent to the example.com web server for the
2.html page, and the web server doesn’t know anything about to whom the previous page 1.html was served.
What if you want the previous history of this user’s communication with the web server? You need to
maintain the user state and the interaction between web browser and web server somewhere. This is where
the cookie comes into the picture. Cookies serve the purpose of maintaining the user’s interactions with the web
server.
How cookies work?
The HTTP protocol used to exchange information files on the web is stateless: it does not keep any
record of previously accessed web pages. Cookies are the mechanism layered on top of HTTP that lets the
web browser and web server keep some history of their previous interactions and so maintain the user’s
state across requests.
Whenever a user visits a site or page that is using cookies, the web server, or a small piece of script inside
that HTML page (generally a call to some scripting language such as JavaScript, PHP or Perl), writes a small
text file on the user’s machine called a cookie.
Here is the general form of the Set-Cookie HTTP response header that is used to write a cookie:
Set-Cookie: NAME=VALUE; expires=DATE; path=PATH; domain=DOMAIN_NAME;
When the user visits the same page or domain at a later time, this cookie is read from disk and used to identify
the repeat visit of the same user on that domain. The expiration time is set while writing the cookie; this
time is decided by the application that is going to use the cookie.
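For cookie testing it is often handy to list exactly which cookies a page sets. A minimal Python sketch (assuming the third-party requests package is installed; the URL is just a placeholder for the application under test):
# List the cookies returned by a server via Set-Cookie headers.
# "requests" is a third-party package; the URL is a placeholder.
import requests
response = requests.get("http://www.example.com/1.html")
for cookie in response.cookies:
    print(cookie.name, cookie.value, cookie.domain, cookie.path, cookie.expires)
    # cookie.expires is None for a session cookie and a timestamp for a persistent cookie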
There are two main types of cookies:
1) Session cookies:
This cookie is active only as long as the browser that created it is open. When we close the browser, this
session cookie gets deleted. Sometimes a session timeout of, say, 20 minutes can be set to expire the cookie.
2) Persistent cookies:
The cookies that are written permanently on the user’s machine and last for months or years.
Where cookies are stored?
When any web page application writes a cookie, it gets saved in a text file on the user’s hard disk drive. The path
where the cookies get stored depends on the browser. Different browsers store cookies in different
paths. E.g. Internet Explorer stores cookies under the path “C:\Documents and Settings\Default User\
Cookies”
Here “Default User” is replaced by the user you are currently logged in as, e.g. “Administrator”, or a
user name like “ranga” etc.
The cookie path can be easily found by navigating through the browser options. In Mozilla Firefox
you can even see the cookies in the browser options themselves. Open the Firefox browser, click on Tools-
>Options->Privacy and then the “Show cookies” button.
Applications where cookies can be used:
2) Personalized sites: When a user visits certain pages they are asked which pages they don’t want to visit or
display. The user’s options get stored in a cookie, and as long as the user is online those pages are not shown to him.
3) User tracking:
To track the number of unique visitors online at a particular time.
4) Marketing:
Some companies use cookies to display advertisements on user machines. Cookies control these
advertisements. When and which advertisement should be shown? What is the interest of the user?
Which keywords he searches on the site? All these things can be maintained using cookies.
5) User sessions:
Cookies can track user sessions to particular domain using user ID and password.
Drawbacks of cookies:
1) Even though writing cookies is a great way to maintain user interaction, if the user has set browser options to
warn before writing any cookie, or has disabled cookies completely, then a site that depends on cookies will not
work properly and cannot perform some operations, resulting in a loss of site traffic.
2) Too many Cookies:
If you are writing too many cookies on every page navigation and the user has turned on the option to warn
before writing a cookie, this could turn the user away from your site.
Web Testing
Let’s have first web testing checklist.
1) Functionality Testing
2) Usability testing
3) Interface testing
4) Compatibility testing
5) Performance testing
6) Security testing
1) Functionality Testing:
Test for all the links in web pages, database connections, forms used in the web pages for submitting
or getting information from the user, and cookie testing.
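As a small example of link checking, here is a Python sketch (an illustration only, assuming the third-party requests and beautifulsoup4 packages; the start URL is a placeholder) that fetches every anchor on a page and reports any link returning an error status:
# Report broken links on a single page.
# Assumes the third-party packages "requests" and "beautifulsoup4"; the URL is a placeholder.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
start_url = "http://www.example.com/"
page = requests.get(start_url, timeout=10)
soup = BeautifulSoup(page.text, "html.parser")
for anchor in soup.find_all("a", href=True):
    link = urljoin(start_url, anchor["href"])  # resolve relative links
    status = requests.head(link, allow_redirects=True, timeout=10).status_code
    if status >= 400:
        print("BROKEN", status, link)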
Forms are an integral part of any web site. Forms are used to get information from users and to keep
interacting with them. So what should be checked on these forms?
First check all the validations on each field.
Check for the default values of fields.
Check wrong inputs to the fields in the forms.
Check the options to create, delete, view or modify forms, if any.
Database testing:
Data consistency is very important in a web application. Check for data integrity and errors while you
edit, delete or modify the forms or do any DB-related functionality.
Check whether all the database queries are executing correctly, and data is retrieved and updated
correctly. Further database testing could cover load on the DB; we will address this in web load/
performance testing below.
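As a small illustration of such a check (a sketch only: the table and column names are assumptions, and sqlite3 stands in for whatever database the application actually uses), after editing a record through the UI you can confirm that the change really reached the database:
# Verify that an edit made through the UI is reflected in the database.
# Table/column names and the expected value are assumptions; sqlite3 stands in for the real DB.
import sqlite3
conn = sqlite3.connect("app.db")
cur = conn.cursor()
cur.execute("SELECT email FROM users WHERE user_id = ?", (123,))
row = cur.fetchone()
assert row is not None, "record missing after the UI edit"
assert row[0] == "new.address@example.com", f"stale value in DB: {row[0]}"
conn.close()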
2) Usability Testing:
Test for navigation:
Navigation means how the user surfs the web pages and uses different controls like buttons and boxes, and how
the user uses the links on the pages to surf to different pages.
Usability testing includes:
The web site should be easy to use. Instructions should be provided clearly. Check whether the provided
instructions are correct, i.e. whether they satisfy their purpose.
Main menu should be provided on each page. It should be consistent.
Content checking:
Content should be logical and easy to understand. Check for spelling errors. The use of dark, annoying colors
should be avoided in the site theme. You can follow commonly accepted standards for web page
and content building, such as those mentioned above about annoying colors, fonts, frames etc.
Content should be meaningful. All the anchor text links should be working properly. Images should be
placed properly with proper sizes.
These are some basic standards that should be followed in web development. Your task is to validate all of
them during UI testing.
Other user information for user help:
Like a search option, sitemap, help files etc. The sitemap should be present with all the links in the web site, with a
proper tree view of navigation. Check all the links on the sitemap.
A “search in the site” option will help users find the content pages they are looking for easily and quickly.
These are all optional items and if present should be validated.
3) Interface Testing:
4) Compatibility Testing:
Compatibility of your web site is a very important testing aspect. The compatibility tests to be executed are:
Browser compatibility
Operating system compatibility
Mobile browsing
Printing options
Browser compatibility:
In my web-testing career I have experienced this as the most influential part of web site testing.
Some applications are very dependent on browsers. Different browsers have different configurations
and settings that your web page should be compatible with. Your web site coding should be cross-browser
and cross-platform compatible. If you are using JavaScript or AJAX calls for UI functionality, security
checks or validations, then put more stress on browser compatibility testing of your web
application.
Test the web application on different browsers like Internet Explorer, Firefox, Netscape Navigator, AOL,
Safari and Opera, with different versions.
OS compatibility:
Some functionality in your web application may not be compatible with all operating systems. New
technologies used in web development, like graphic designs and interface calls to different APIs, may not
be available on all operating systems.
Test your web application on different operating systems like Windows, Unix, Mac, Linux and Solaris, with
different OS flavors.
Mobile browsing:
Mobile browsing is becoming more and more common, so test your web pages on mobile
browsers as well; compatibility issues may appear there.
Printing options:
If you are giving page-printing options, then make sure fonts, page alignment and page graphics are getting
printed properly. Pages should fit the paper size or the size mentioned in the printing option.
5) Performance testing:
A web application should sustain heavy load. Web performance testing should include:
Web Load Testing
Web Stress Testing
Test application performance at different internet connection speeds.
In web load testing, test whether many users can access or request the same page. Can the system sustain
peak load times? The site should handle many simultaneous user requests, large input data from users,
simultaneous connections to the DB, heavy load on specific pages etc.
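Dedicated tools (JMeter, LoadRunner etc.) are normally used for load testing, but the basic idea can be sketched in a few lines of Python (an illustration only; the URL and user count are placeholders, and requests is a third-party package):
# Fire many simultaneous requests at one page and time the responses.
# URL and user count are placeholders; "requests" is a third-party package.
import time
from concurrent.futures import ThreadPoolExecutor
import requests
URL, USERS = "http://www.example.com/", 50
def one_user(_):
    start = time.time()
    status = requests.get(URL, timeout=30).status_code
    return status, time.time() - start
with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(one_user, range(USERS)))
errors = [status for status, _ in results if status != 200]
print(f"{len(errors)} failed responses, slowest response {max(t for _, t in results):.2f}s")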
Stress testing:
Generally stress means stretching the system beyond its specified limits. Web stress testing is
performed to break the site by applying stress, and to check how the system reacts to the stress and how it
recovers from crashes.
Stress is generally applied to input fields, login and sign-up areas.
In web performance testing, web site functionality on different operating systems and different hardware
platforms is also checked for software and hardware memory leakage errors.
6) Security Testing:
Test by pasting an internal URL directly into the browser address bar without logging in. Internal pages should not
open.
If you are logged in using a username and password and browsing internal pages, then try changing URL
options directly. I.e. if you are checking some publisher site statistics with publisher site ID = 123, try
directly changing the URL’s site ID parameter to a different site ID which is not related to the logged-in user.
Access should be denied for this user to view other users’ stats (a minimal sketch of these two checks follows
after this list).
Try some invalid inputs in input fields like login username, password and other input text boxes. Check the
system’s reaction to all invalid inputs.
Web directories or files should not be accessible directly unless a download option is given.
Test the CAPTCHA against automated script logins.
Test whether SSL is used for security measures. If it is used, a proper message should be displayed when the user
switches from non-secure http:// pages to secure https:// pages and vice versa.
All transactions, error messages and security breach attempts should get logged in log files somewhere on the
web server.
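A minimal Python sketch of the first two checks above (URLs, IDs and credentials are placeholders; requests is a third-party package) simply asserts that the server refuses or redirects such requests:
# Sketch of two basic security checks; all URLs, IDs and credentials are placeholders.
import requests
# 1) Internal page requested without logging in: expect a redirect to login or 401/403.
r = requests.get("http://www.example.com/admin/stats", allow_redirects=False)
assert r.status_code in (301, 302, 401, 403), f"internal page served without login: {r.status_code}"
# 2) Log in as the owner of site ID 123, then tamper with the ID in the URL.
session = requests.Session()
session.post("http://www.example.com/login", data={"user": "tester", "password": "secret"})
r = session.get("http://www.example.com/stats?site_id=999")  # a site ID that does not belong to this user
assert r.status_code in (401, 403), "able to view another user's stats"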
What is the difference between client-server testing and web based testing and what are things that
we need to test in such applications?
Client-server testing is usually done for 2-tier applications (usually developed for a LAN).
Here we will be having a front-end and a back-end.
The application launched on the front-end will have forms and reports which will be monitoring and
manipulating data.
E.g. applications developed in VB, VC++, Core Java, C, C++, D2K, PowerBuilder etc.
The back-end for these applications could be MS Access, SQL Server, Oracle, Sybase, MySQL, Quadbase.
How will you report this bug effectively?
Steps To Reproduce:
Mark them in blue colour: these are the previous week’s deliverables which should get released as soon as
possible this week.
Project:
Work update:
Scheduled date:
Reason for extending:
2) New tasks:
List all of next week’s new tasks here. You can use black colour for this.
Project:
Scheduled Task:
Date of release:
C) Defect status:
Active defects:
List all active defects here with Reporter, Module, Severity, Priority and Assigned To.
Closed Defects:
List all closed defects with Reporter, Module, Severity, Priority and Assigned To.
Test cases:
List the total number of test cases written, test cases passed, test cases failed and test cases still to be executed.
This template should give you an overall idea of the status report. Don’t ignore the status report. Even
if your managers are not forcing you to write these reports, they are very important for your work
assessment in the future.
Anyone can write a bug report. But not everyone can write an effective bug report. You should be able to
distinguish between an average bug report and a good bug report. How do you distinguish a good bug report
from a bad one? It’s simple: apply the following characteristics and techniques to report a bug.
1) Having clearly specified bug number:
Always assign a unique number to each bug report. This will help to identify the bug record. If you are
using any automated bug-reporting tool then this unique number will be generated automatically each
time you report the bug. Note the number and brief description of each bug you reported.
2) Reproducible:
If your bug is not reproducible it will never get fixed. You should clearly mention the steps to reproduce
the bug. Do not assume or skip any reproducing step. A bug described with step-by-step instructions is easy to
reproduce and fix.
3) Be Specific:
Do not write an essay about the problem. Be specific and to the point. Try to summarize the problem in
minimum words yet in an effective way. Do not combine multiple problems even if they seem to be similar.
Write a separate report for each problem.
How to Report a Bug?
Operating system:
Mention all operating systems where you found the bug: operating systems like Windows, Linux, Unix,
SunOS, Mac OS. Mention the different OS versions also if applicable, like Windows NT, Windows 2000,
Windows XP etc.
Priority:
When bug should be fixed? Priority is generally set from P1 to P5. P1 as “fix the bug with highest
priority” and P5 as ” Fix when time permits”.
Severity:
How severe the impact of the bug is on the application (Critical, Major, Moderate, Minor or Cosmetic, as
described above).
Status:
When you are logging the bug in any bug tracking system, then by default the bug status is ‘New’.
Later on the bug goes through various stages like Fixed, Verified, Reopen, Won’t Fix etc.
Assign To:
If you know which developer is responsible for the particular module in which the bug occurred, then you
can specify the email address of that developer. Otherwise keep it blank; this will assign the bug to the module
owner, or the manager will assign the bug to a developer. Possibly add the manager’s email address to the CC list.
URL:
The page URL on which the bug occurred.
Summary:
A brief summary of the bug, mostly in 60 or fewer words. Make sure your summary reflects what
the problem is and where it is.
Description:
A detailed description of the bug, including the steps to reproduce, the expected result and the actual result.
The type of the bug can also be mentioned, for example:
1) Coding error
2) Design error
3) New suggestion
4) Documentation issue
5) Hardware problem
Some bonus tips to write a good bug report:
1) Report the problem immediately:
If you find any bug while testing, do not wait to write a detailed bug report later. Instead, write the bug
report immediately. This will ensure a good and reproducible bug report. If you decide to write the bug
report later on, then chances are high that you will miss important steps in your report.
2) Reproduce the bug three times before writing the bug report:
Your bug should be reproducible. Make sure your steps are robust enough to reproduce the bug without
any ambiguity. If your bug is not reproducible every time, you can still file a bug mentioning the periodic
nature of the bug.
3) Test the same bug occurrence in other similar modules:
Sometimes developers use the same code for different similar modules. So chances are high that a bug in one
module can occur in other similar modules as well. You can even try to find a more severe version of the
bug you found.
4) Write a good bug summary:
The bug summary will help developers quickly analyze the nature of the bug. A poor quality report will
unnecessarily increase development and testing time. Communicate well through your bug report
summary. Keep in mind that the bug summary is used as a reference to search for the bug in the bug inventory.
5) Read bug report before hitting Submit button:
Read all sentences, wording, steps used in bug report. See if any sentence is creating ambiguity that
can lead to misinterpretation. Misleading words or sentences should be avoided in order to have a clear
bug report.
6) Do not use Abusive language:
It’s nice that you did good work and found a bug, but do not use this credit to criticize the developer or
attack any individual.
CMMI provides:
A place to start
The benefit of a community’s prior experiences
A common language and a shared vision
A framework for prioritizing actions
A way to define what improvement means for your organization
In CMMI models with a staged representation, there are five maturity levels designated by the numbers 1 through 5
as shown below:
Initial
Managed
Defined
Quantitatively Managed
Optimizing
Maturity levels consist of a predefined set of process areas. The maturity levels are measured by the
achievement of the specific and generic goals that apply to each predefined set of process areas. The following
sections describe the characteristics of each maturity level in detail.
Maturity Level 1 – Initial:
Company has no standard process for software development. Nor does it have a project-tracking system that
enables developers to predict costs or finish dates with any accuracy.
In detail we can describe it as given below:
At maturity level 1, processes are usually ad hoc and chaotic.
The organization usually does not provide a stable environment. Success in these organizations
depends on the competence and heroics of the people in the organization and not on the use of
proven processes.
Maturity level 1 organizations often produce products and services that work, but the company has no
standard process for software development, nor a project-tracking system that enables developers to predict
costs or finish dates with any accuracy.
Maturity level 1 organizations are characterized by a tendency to overcommit, abandon processes in
times of crisis, and not be able to repeat their past successes.
Maturity Level 2 – Managed:
Company has installed basic software management processes and controls. But there is no consistency or
coordination among different groups.
In detail we can describe it as given below:
At maturity level 2, an organization has achieved all the specific and generic goals of the maturity level 2
process areas. In other words, the projects of the organization have ensured that requirements are managed
and that processes are planned, performed, measured, and controlled.
The process discipline reflected by maturity level 2 helps to ensure that existing practices are retained during
times of stress. When these practices are in place, projects are performed and managed according to their
documented plans.
At maturity level 2, requirements, processes, work products, and services are managed. The status of the work
products and the delivery of services are visible to management at defined points.
Commitments are established among relevant stakeholders and are revised as needed. Work products are
reviewed with stakeholders and are controlled.
The work products and services satisfy their specified requirements, standards, and objectives.
Maturity Level 3 – Defined:
Company has pulled together a standard set of processes and controls for the entire organization so that
developers can move between projects more easily and customers can begin to get consistency from different
groups.
In detail we can describe it as given below:
At maturity level 3, an organization has achieved all the specific and generic goals.
At maturity level 3, processes are well characterized and understood, and are described in standards,
procedures, tools, and methods.
A critical distinction between maturity level 2 and maturity level 3 is the scope of standards, process
descriptions, and procedures. At maturity level 2, the standards, process descriptions, and procedures may be
quite different in each specific instance of the process (for example, on a particular project). At maturity level 3,
the standards, process descriptions, and procedures for a project are tailored from the organization’s set of
standard processes to suit a particular project or organizational unit.
The organization’s set of standard processes includes the processes addressed at maturity level 2 and maturity
level 3. As a result, the processes that are performed across the organization are consistent except for the
differences allowed by the tailoring guidelines.
Another critical distinction is that at maturity level 3, processes are typically described in more detail and more
rigorously than at maturity level 2.
At maturity level 3, processes are managed more proactively using an understanding of the interrelationships of
the process activities and detailed measures of the process, its work products, and its services.
Maturity Level 4 – Quantitatively Managed:
In addition to implementing standard processes, company has installed systems to measure the quality of
those processes across all projects.
In detail we can describe it as given below:
At maturity level 4, an organization has achieved all the specific goals of the process areas assigned to
maturity levels 2, 3, and 4 and the generic goals assigned to maturity levels 2 and 3.
At maturity level 4, sub-processes are selected that significantly contribute to overall process performance.
These selected sub-processes are controlled using statistical and other quantitative techniques.
Quantitative objectives for quality and process performance are established and used as criteria in managing
processes. Quantitative objectives are based on the needs of the customer, end users, organization, and
process implementers. Quality and process performance are understood in statistical terms and are managed
throughout the life of the processes.
For these processes, detailed measures of process performance are collected and statistically analyzed.
Special causes of process variation are identified and, where appropriate, the sources of special causes are
corrected to prevent future occurrences.
Quality and process performance measures are incorporated into the organization’s measurement repository to
support fact-based decision making in the future.
A critical distinction between maturity level 3 and maturity level 4 is the predictability of process performance. At
maturity level 4, the performance of processes is controlled using statistical and other quantitative techniques,
and is quantitatively predictable. At maturity level 3, processes are only qualitatively predictable.
Maturity Level 5 – Optimizing:
Company has accomplished all of the above and can now begin to see patterns in performance over time, so it
can tweak its processes in order to improve productivity and reduce defects in software development across the
entire organization.
In detail we can describe it as given below:
At maturity level 5, an organization has achieved all the specific goals of the process areas assigned to
maturity levels 2, 3, 4, and 5 and the generic goals assigned to maturity levels 2 and 3.
Processes are continually improved based on a quantitative understanding of the common causes of variation
inherent in processes.
Maturity level 5 focuses on continually improving process performance through both incremental and innovative
technological improvements.
Quantitative process-improvement objectives for the organization are established, continually revised to reflect
changing business objectives, and used as criteria in managing process improvement.
The effects of deployed process improvements are measured and evaluated against the quantitative process-
improvement objectives. Both the defined processes and the organization’s set of standard processes are
targets of measurable improvement activities.
Optimizing processes that are agile and innovative depends on the participation of an empowered workforce
aligned with the business values and objectives of the organization.
The organization’s ability to rapidly respond to changes and opportunities is enhanced by finding ways to
accelerate and share learning. Improvement of the processes is inherently part of everybody’s role, resulting in
a cycle of continual improvement.
A critical distinction between maturity level 4 and maturity level 5 is the type of process variation addressed. At
maturity level 4, processes are concerned with addressing special causes of process variation and providing
statistical predictability of the results. Though processes may produce predictable results, the results may be
insufficient to achieve the established objectives. At maturity level 5, processes are concerned with addressing
common causes of process variation and changing the process (that is, shifting the mean of the process
performance) to improve process performance (while maintaining statistical predictability) to achieve the
established quantitative process-improvement objectives.
Smoke Testing Vs Sanity Testing - Key Differences
Smoke testing is performed to ascertain that the critical functionalities of the program are working fine;
sanity testing is done to check that new functionality works / bugs have been fixed.
Smoke testing is performed by the developers or testers; sanity testing is usually performed by testers.
Smoke testing is a subset of regression testing; sanity testing is a subset of acceptance testing.
Smoke testing is like a general health check-up; sanity testing is like a specialized health check-up.
Software Testing Life Cycle STLC
Each of these stages has definite entry and exit criteria, activities and deliverables associated with it.
In an ideal world you will not enter the next stage until the exit criteria for the previous stage are met. But
practically this is not always possible. So, for this tutorial, we will focus on the activities and deliverables for the
different stages in STLC. Let’s look into them in detail.
Requirement Analysis
During this phase, the test team studies the requirements from a testing point of view to identify the testable
requirements. The QA team may interact with various stakeholders (client, business analyst, technical leads,
system architects etc.) to understand the requirements in detail. Requirements could be either functional
(defining what the software must do) or non-functional (defining system performance, security, availability).
Automation feasibility for the given testing project is also analyzed in this stage.
Activities
Identify types of tests to be performed.
Gather details about testing priorities and focus.
Prepare Requirement Traceability Matrix (RTM).
Identify test environment details where testing is supposed to
be carried out.
Automation feasibility analysis (if required).
Deliverables
RTM
Automation feasibility report. (if applicable)
Test Planning
This phase is also called the Test Strategy phase. Typically, in this stage, a senior QA manager will determine
effort and cost estimates for the project and will prepare and finalize the Test Plan.
Activities
Preparation of test plan/strategy document for various types of testing
Test tool selection
Test effort estimation
Resource planning and determining roles and responsibilities.
Training requirement
Deliverables
Test plan /strategy document.
Effort estimation document.
Test Case Development
This phase involves the creation, verification and rework of test cases and test scripts. Test data is
identified/created and is reviewed and then reworked as well.
Activities
Create test cases, automation scripts (if applicable)
Review and baseline test cases and scripts
Create test data (If Test Environment is available)
Deliverables
Test cases/scripts
Test data
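To illustrate the kind of test script that might come out of this phase, here is a minimal sketch (pytest is an assumption, as the notes do not prescribe a tool, and login() is a placeholder for the function or API actually under test):
# Minimal automated test-case sketch using pytest (tool choice is an assumption).
# login() is a placeholder standing in for the application code under test.
import pytest
def login(username, password):
    # placeholder implementation so the sketch runs on its own
    return username == "admin" and password == "secret123"
@pytest.mark.parametrize("username,password,expected", [
    ("admin", "secret123", True),   # valid credentials
    ("admin", "wrong", False),      # invalid password
    ("", "secret123", False),       # missing username
])
def test_login(username, password, expected):
    assert login(username, password) is expected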
STLC stages with their entry criteria, activities, exit criteria and deliverables:

Requirement Analysis
Entry Criteria: Requirements document available (both functional and non-functional); acceptance criteria defined; application architectural document available.
Activities: Analyse business functionality to know the business modules and module-specific functionalities; identify all transactions in the modules; identify all the user profiles; gather user interface/authentication and geographic spread requirements; identify types of tests to be performed; gather details about testing priorities and focus; prepare the Requirement Traceability Matrix (RTM); identify test environment details where testing is supposed to be carried out; automation feasibility analysis (if required).
Exit Criteria: Signed-off RTM; test automation feasibility report signed off by the client.
Deliverables: RTM; automation feasibility report (if applicable).

Test Planning
Entry Criteria: Requirements documents available; Requirement Traceability Matrix; test automation feasibility document.
Activities: Analyze various testing approaches; finalize the best-suited approach; preparation of the test plan/strategy document for various types of testing; test tool selection; test effort estimation; resource planning and determining roles and responsibilities.
Exit Criteria: Approved test plan/strategy document; effort estimation document signed off.
Deliverables: Test plan/strategy document; effort estimation document.

Test Case Development
Entry Criteria: Requirements documents; RTM and test plan; automation analysis report.
Activities: Create test cases and automation scripts (where applicable); review and baseline test cases and scripts; create test data.
Exit Criteria: Reviewed and signed test cases/scripts; reviewed and signed test data.
Deliverables: Test cases/scripts; test data.

Test Environment Setup
Entry Criteria: System design and architecture documents are available; environment set-up plan is available.
Activities: Understand the required architecture and environment set-up; prepare the hardware and software requirement list; finalize connectivity requirements; prepare the environment setup checklist; set up the test environment and test data; perform a smoke test on the build; accept/reject the build depending on the smoke test result.
Exit Criteria: Environment setup is working as per the plan and checklist; test data setup is complete; smoke test is successful.
Deliverables: Environment ready with test data set up; smoke test results.

Test Execution
Entry Criteria: Baselined RTM, test plan and test cases/scripts are available; test environment is ready; test data set-up is done; unit/integration test report for the build to be tested is available.
Activities: Execute tests as per plan; document test results and log defects for failed cases; update test plans/test cases, if necessary; map defects to test cases in the RTM; retest the defect fixes; regression testing of the application; track the defects to closure.
Exit Criteria: All planned tests are executed; defects are logged and tracked to closure.
Deliverables: Completed RTM with execution status; test cases updated with results; defect reports.

Test Cycle Closure
Entry Criteria: Testing has been completed; test results are available; defect logs are available.
Activities: Evaluate cycle completion criteria based on time, test coverage, cost, software quality and critical business objectives; prepare test metrics based on the above parameters; document the learning out of the project; prepare the test closure report; qualitative and quantitative reporting of the quality of the work product to the customer; test result analysis to find out the defect distribution by type and severity.
Exit Criteria: Test closure report signed off by the client.
Deliverables: Test closure report; test metrics.