Revision Testing

Uploaded by Noritie Yampakat

Fundamentals of Testing

What is Testing

- The process of executing a program or application with the intent of finding software bugs.
- The process of validating and verifying that a software program, application or product meets the business and technical requirements that guided its design and development, works as expected and can be implemented with the same characteristics.
- Validation determines whether the system complies with its requirements, performs the functions for which it is intended and meets the organization's goals and user needs.
- Verification makes sure the product is designed to deliver all functionality to the customer; it is done at the start of the development process.

Software Development Life Cycle

- The process used to develop and deliver high-quality software.
- Necessary because it:
  - Enhances the quality of the software
  - Reduces the rate of vulnerabilities
  - Meets and exceeds the customer's expectations

Stages of the SDLC

1. Planning

2. Requirements Gathering
   o What is needed and what is not needed
   o Interviews, workshops and surveys

3. Analysis or System Analysis
   o Analyse each achievable requirement
   o Documented as an SRS or FRS
   o Predicted risks are planned for
   o User requirements are analysed to ensure they can be met
   o Helps enhance the way the software should behave
   o The system is divided into smaller parts so requirements can be prioritized and taken up for development
   o Effectively manageable for all resources to work at all stages

4. System Design
   o The SRS is converted into a system design plan known as the "Design Specification"
   o Technical architects and developers develop a logical plan of the system, which is then reviewed by stakeholders
   o Feedback and suggestions are collected
   o Most challenging part: ensuring the design has tight security and little or no exposure to vulnerabilities
   o Design flaws must be corrected at high priority; failing to do so carries a higher rate of project failure and cost overruns

5. Development
   o Start coding and developing the design
   o Develop unit tests for each module, peer-review other modules' unit tests, deploy builds to the intended environment and execute the unit tests

6. Testing
   o Quality checks take place
   o The developed software is assessed to ensure all the specified requirements are met
   o The focus is on finding defects
   o During test case execution, all defects found are reported in a test management tool; the decision to treat a defect as valid or invalid depends on the developers
   o Invalid – rejected and closed
   o Valid – the code is fixed
   o Defects found go through the Defect Life Cycle

7. Deployment
   o Users can use the software
   o Deploying a build to production can be a complicated process
   o If the project is an existing application, a technology migration is carried out, which can be an extensive procedure
   o Ensure the application continues to function while deployment is in progress

8. Operation and Maintenance
   o Any issues users face are considered post-production issues, addressed and resolved by an internal team known as the maintenance team
   o This stage addresses minor change requests and code fixes, deployed in short intervals
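The unit tests mentioned in stage 5 (Development) can be as small as the sketch below. `apply_discount` is a hypothetical module under test, invented for illustration; it is not part of the original notes:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical module under test: reduce price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(argv=["discount-tests"], exit=False)
```

Each test exercises one behaviour of the module, including the error path, which is what the peer review in stage 5 would check for.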

SDLC Entry and Exit Criteria

1. Planning
Entry: Requirements collected from customers and stakeholders
Exit: Acceptance of the project and planning

2. System Analysis and Requirements
Entry: Requirements and change requests
Exit: SRS document

3. Design
Entry: SRS document
Exit: Design Specification document

4. Development
Entry: SRS and DS documents
Exit: Build, unit tests and set-up

5. Testing
Entry: SRS document and deployed build
Exit: Test cases, system testing, defects and their closure

6. Deployment
Entry: No high-priority defects
Exit: Build deployed to production

7. Operation and Maintenance
Entry: Users use the software and report issues they find; change requests
Exit: Issues rectified and code fixes deployed to production

Why Testing is Necessary?

a. Required to point out defects and errors made during the development process
   - A programmer may make mistakes during the implementation of software
   - There can be many reasons: lack of experience, lack of knowledge of the programming language, insufficient experience in the domain, incorrect implementation of an algorithm due to complex logic, or simply human error

b. Makes sure the customer finds the organization reliable and their satisfaction with the application is maintained
   - If not, they may switch to a competitor organization
   - Proper testing prevents monetary losses

c. Ensures the quality of the product
   - Delivering a good-quality product on time builds customers' confidence in the team and organization

d. Provides facilities to the customer
   - Delivering a high-quality product or software requires a lower maintenance cost and results in more accurate, consistent and reliable results
   - A high-quality product typically has fewer defects and requires less maintenance effort, which reduces costs

e. Required for effective performance of a software application or product

f. Ensures the application does not result in failures
   - Proper testing ensures bugs and issues are detected early
   - If found late, they are very expensive to fix, since fixing might require redesign, re-implementation and retesting

g. Required to stay in business
   - Users are not inclined to use software that has bugs
   - They may not adopt a software product if they are not happy with the stability of the application
   - Poor-quality software may result in a lack of product adoption and losses which the business may not recover from

Summary
- To prevent software from failure in operation
General Testing Principles

1. Testing shows the presence of defects, not their absence
   - Testing can show that defects are present
   - It cannot prove that there are no defects
   - It reduces the probability of undiscovered defects remaining in the software
   - Even if no defects are found, testing is not a proof of correctness

2. Exhaustive testing is impossible
   - Testing everything is not feasible except for trivial cases
   - Risk analysis, test techniques and priorities should be used to focus the test effort rather than attempting to test exhaustively

3. Early testing saves time and money
   - To find defects early, both static and dynamic testing should start as early as possible
   - Referred to as "shift left"
   - Helps reduce or eliminate costly changes

4. Defects cluster together
   - A small number of modules contain most of the defects discovered during pre-release testing, or are responsible for most operational failures
   - Predicted defect clusters and actual observed defect clusters in test or operation are an important input to the risk analysis used to focus the test effort

5. Beware of the pesticide paradox
   - Tests repeated over and over again no longer find any new defects
   - To detect new defects, existing tests and test data may need changing, and new tests may need to be written

6. Testing is context dependent
   - Testing is done differently in different contexts

7. Absence of errors is a fallacy
   - It is a fallacy to expect that just finding and fixing a large number of defects will ensure the success of a system

Fundamental Test Activities

1. Test Planning and Control
   - Involves producing a document that describes the overall approach and test objectives
   - Involves reviewing the test basis, identifying test conditions based on analysis of test items, writing test cases and designing the test environment
   - Completion or exit criteria must be specified to know when testing is complete
   - Purpose:
     - Determine the scope and risks and identify the objectives of testing
     - Determine the required test resources, such as people and test environments
     - Schedule test analysis and design tasks, test implementation, execution and evaluation
   - Comparing actual progress against the plan and reporting status, including deviations from the plan
   - Involves taking any action necessary to meet the mission and objectives of the project

2. Test Analysis and Test Design
   - Purpose:
     - Review the test basis
       The test basis is the information on which test cases are based: requirements, design specifications, product risk analysis, architecture and interfaces
     - Identify test conditions
     - Design the tests
     - Design the test environment set-up and identify the required infrastructure and tools
     - Compare actual results with expected results
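Principle 2 above (exhaustive testing is impossible) can be made concrete with a quick calculation; the form fields and value counts below are invented for illustration:

```python
# Hypothetical input form: each field mapped to the number of values it accepts.
fields = {
    "country": 195,       # one of 195 countries
    "age": 121,           # 0..120
    "newsletter": 2,      # yes / no
    "payment_method": 5,  # five payment options
}

# Every combination of inputs would need its own test case.
combinations = 1
for values in fields.values():
    combinations *= values

print(combinations)  # 235950 combinations for just four simple fields
```

At one test per second that is roughly three days of non-stop execution for four trivial fields, and a single free-text field pushes the number beyond any practical reach; hence risk analysis and priorities must focus the effort instead.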
3. Test Implementation and Execution
   - Involves actually running the specified tests on a computer system, either manually or using an automated test tool
   - Purpose:
     - Develop and prioritize test cases using techniques and create test data
     - Create test suites from the test cases for efficient test execution; a test suite is a collection of test cases used to test a software program
     - Re-execute tests that previously failed in order to confirm a fix
     - Log the outcome of test execution; the test log records the status of each test case
     - Compare actual results with expected results

4. Evaluating Exit Criteria and Reporting
   - The process of defining when to stop testing; depends on coverage of code, functionality or risk, and also on business risk, cost and time, and varies from project to project
   - Purpose:
     - Assess if more tests are needed or if the exit criteria specified should be changed
     - Write a test summary report for stakeholders
   - Exit criteria come into the picture when:
     - The maximum number of test cases has been executed with a certain pass percentage
     - The bug rate falls below a certain level
     - Deadlines are reached

5. Test Closure Activities
   - Done when the software is ready to be delivered
   - Purpose:
     - Check which planned deliverables have been delivered and ensure all incident reports have been resolved
     - Finalize and archive testware, such as scripts and the test environment, for later reuse
     - Hand over testware to the maintenance organization, which will give support to the software
     - Evaluate how the testing went and learn lessons for future releases and projects
   - Testing can also be closed for other reasons:
     - The project is cancelled
     - Some target is achieved
     - A maintenance release or update is done

Psychology of Testing

Differences between Tester and Developer
a. Developers' view
   - Testing is a process to prove that the software works correctly
b. QA's view
   - Testing is a process to prove that the software does not work
c. Manager's view
   - Testing is a process to detect defects and minimize the risk associated with residual defects

1. Tester-developer relationship
   - Communication is the key
   - Must be constructive, not destructive
   - Developers inform testers of any changes
   - Testers report problems clearly

2. Test independence
   - More effective
   - Authors should not test their own work
   - Assumptions are carried into testing:
     - People see what they see
     - Emotional attachment to the product
     - We're human

3. Levels of independence
   - Test cases are designed:
     - by the person who wrote the software under test (very low)
     - by another person (low)
     - by people from another department (medium)
     - by people from another organisation (high)
     - not chosen by a person (very high)
4. Testers' Characteristics
   - Good communicators
   - Multitalented
   - Happy when finding faults

5. Developers' Characteristics
   - Specialized
   - Creative
   - Trained

Code of Ethics

- Sets forth the values, principles and standards that guide testers to perform tasks appropriately and helps them use information in an ethical and appropriate manner

1. ISTQB Code of Ethics for Test Professionals
o Involvement in software testing enables individuals to learn confidential and privileged information
o Necessary to ensure such information is not put to inappropriate use

2. Code of Ethics (Certified Software Tester)
   - Public
     - Shall act consistently with the public interest
   - Client and Employer
     - Shall act in a manner that is in the best interests of their client and employer, consistent with the public interest
   - Product
     - Shall ensure that the deliverables they provide meet the highest professional standard possible
   - Judgement
     - Shall maintain integrity and independence in professional judgement
   - Management
     - Test managers and leaders shall subscribe to and promote an ethical approach to the management of software testing
   - Profession
     - Shall advance the integrity and reputation of the profession, consistent with the public interest
   - Colleagues
     - Shall be fair to and supportive of colleagues, and promote cooperation with software developers
   - Self
     - Shall participate in lifelong learning regarding the practice of the profession and promote an ethical approach to its practice
Black Box Testing

Static Testing
o Without executing the program
o Verification activities
o Prevents defects
o Assessment of code and documentation

Dynamic Testing
o Executes the program
o Validation activities
o Finding and fixing defects
o Observing system behaviour by providing input values

Terms in Dynamic Testing

1. Test Design Specification
   - A document specifying the test conditions for a test item
   - Details the test approach and identifies the associated high-level test cases

2. Test Condition
   - An item or event, such as a function, transaction, feature, quality attribute or structural element of a component or system, that could be verified by one or more test cases

3. Test Case Specification
   - A document specifying the set of test cases for a test item

4. Test Procedure Specification
   - A document specifying the sequence of actions for the execution of a test
   - Also known as a test script or manual test script

5. Test Execution Schedule
   - A scheme for the execution of test procedures; the test procedures are included in the test execution schedule in their context and in the order in which they are to be executed

6. Test Script
   - Commonly used to refer to a test procedure specification, especially an automated one

Test Design Techniques

- Specification-Based Techniques (BBT)
- Structure-Based Techniques (WBT)
- Experience-Based Techniques

Black Box
- The internal structure, design and implementation of the item tested is not known to the tester
- Acceptance Testing and System Testing
- Test basis: SRS

White Box
- The internal structure, design and implementation of the item tested is known to the tester
- Unit Testing and Integration Testing
- Test basis: Detailed Design

Experience-Based Techniques
- Use the knowledge and experience of people to derive test cases
- Knowledge about likely defects and their distribution

Black Box Design Techniques

a. Equivalence Partitioning
   - Inputs to the software or system are divided into groups that are expected to exhibit similar behaviour, so they are likely to be processed in the same way
   - Cover all valid and invalid partitions
   - Applicable at all levels of testing
   - Used to achieve input and output coverage goals

b. Boundary Value Analysis
   - Complements equivalence partitioning
   - Tests the boundary values: minimum and maximum

c. Decision Table
   - Applicable when system requirements or specifications contain logical conditions and complex business rules that the system is to implement
   - Check the specification, define the input conditions and actions of the system, and state each as either "True" or "False"
   - One test case per column
   - Steps:
     1. Identify all conditions and rules
     2. Construct the complete decision table
     3. Refine the decision table
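Equivalence partitioning and boundary value analysis, as described above, can be sketched in code. The "age must be 18 to 65" eligibility rule and the `is_eligible` function below are invented for illustration:

```python
def is_eligible(age: int) -> bool:
    """Hypothetical system under test: accepts ages 18 to 65 inclusive."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition.
partitions = {
    "invalid_below": (10, False),  # any age < 18
    "valid":         (40, True),   # any age in 18..65
    "invalid_above": (70, False),  # any age > 65
}
for name, (age, expected) in partitions.items():
    assert is_eligible(age) == expected, name

# Boundary value analysis: the minimum and maximum plus their neighbours.
boundaries = {17: False, 18: True, 65: True, 66: False}
for age, expected in boundaries.items():
    assert is_eligible(age) == expected, age

print("3 partition tests and 4 boundary tests passed")
```

Note how the two techniques complement each other: partitioning covers one value per group, while boundary analysis targets the edges where off-by-one defects typically hide.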
d. State Transition Testing
   - Allows the tester to view the software in terms of states, transitions between states, the inputs or events that trigger state changes, and the actions which may result from those transitions
   - States are combined with possible events, showing valid and invalid transitions
   - Used within the embedded software industry and in technical automation in general
   - Suitable for modelling business objects that have specific states or for testing screen-dialogue flows

e. Use Case Testing
   - Use cases describe interactions between actors that produce a result of value to a system user or customer
   - Described at an abstract level or as system-level scenarios
   - Designing test cases from use cases may be combined with other specification-based test techniques
   - Advantages:
     - Uncovers defects in process flows during real-world use of the system
     - Useful for designing acceptance tests with user participation
     - Uncovers integration defects caused by the interaction and interference of different components
   - Use case scenario
     - An instance of a use case
     - A use case execution in which a specific user executes the use case in a specific way
   - Deriving test cases from use cases
     - Identify the use case scenarios
       1. Use a simple matrix that can be implemented in a spreadsheet, database or test management tool
       2. Number each scenario and define the combination of basic and alternative flows that lead to it
       3. Many scenarios are possible for one use case
       4. Not all are documented; use an iterative process
       5. Not all may be tested
     - For each scenario, identify one or more test cases
       1. The parameters of any test case are:
          - Conditions
          - Input (data values)
          - Expected result
          - Actual results
     - For each test case, identify the conditions that cause it to execute
       1. Use a matrix with a column for the conditions and, for each, state:
          - Valid
          - Invalid
          - Not applicable
     - Complete the test cases by adding data values
       1. Design real input data values that make those conditions valid or invalid and cause the scenarios to happen
       2. Look at the use case constructs and branches

Summary Highlighted

- Equivalence Partitioning
  - Representative inputs or outputs
  - Valid data inputs
  - Invalid data inputs
- Boundary Value Analysis
  - Area of validity
  - Boundaries of validity
- State Transition Testing
  - Complex states and state transitions
- Decision Table Testing
  - Conditions and actions
- Use Case Testing
  - Scenarios of system use
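State transition testing, as described above, can be sketched with a small state machine. The account-lockout states and events below are invented for illustration; the tests exercise each valid transition once, plus one deliberately invalid transition:

```python
# Hypothetical state machine: a user account that locks after a failed login.
TRANSITIONS = {
    ("logged_out", "login_ok"):     "logged_in",
    ("logged_out", "login_failed"): "locked",
    ("logged_in",  "logout"):       "logged_out",
    ("locked",     "reset"):        "logged_out",
}

def next_state(state, event):
    """Return the new state, or raise on an invalid transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event} in state {state}")

# Valid-transition tests: walk each defined edge once.
assert next_state("logged_out", "login_ok") == "logged_in"
assert next_state("logged_in", "logout") == "logged_out"
assert next_state("logged_out", "login_failed") == "locked"
assert next_state("locked", "reset") == "logged_out"

# Invalid-transition test: logging out while locked must be rejected.
try:
    next_state("locked", "logout")
    assert False, "expected the invalid transition to be rejected"
except ValueError:
    pass

print("all valid and invalid transition tests passed")
```

Covering every valid transition once gives "0-switch" coverage; invalid transitions like the last test are exactly where this technique tends to find defects.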
White Box Testing and Experience-Based Testing

Structure-Based Techniques
- Derive test cases from the software code or design, based on an identified structure of the software or system
- Create more test cases to increase test coverage
- Examples: control flow testing, basis path testing and elementary comparison testing
- Component level – the structure of the code itself
- Integration level – the structure of the call tree
- System level – the structure of menus, business processes or web pages

Coverage
- The extent to which a structure has been exercised by a test suite
- Measures the thoroughness of testing
- If coverage is not 100%, more tests may be designed to test the items that were missed, which can increase coverage
- Classification of coverage:
  o Structure-based (formal) – branch coverage, condition coverage and statement coverage
- An extension of the coverage concept beyond what formal techniques guarantee:
- Informal coverage
  o Requirement coverage, functional coverage and entry-exit coverage
  o EPC, BVAC and DTTC

Control Flow Testing
- How many test cases are needed for:
  - Decision Coverage
  - Statement Coverage
  - Condition Coverage

Basis Path Testing / Cyclomatic Complexity
= E - N + 2
= number of regions + 1
= number of decision nodes + 1

Experience-Based Testing

- An intuitive testing approach based on experience of similar software or technology
- Used together with formal techniques
- Makes it possible to detect defects that are hard to find with formal techniques
- Finds different types of defects
- Defects found are not consistent; they depend on the experience of the tester

- Error Guessing
  o Design test cases using experience of similar software
  o Record all defects from previous stages and design test cases to attack those defects
  o Possible to design test cases from likely defects, common sense, experience and the reasons for software failures
  o Used to reinforce formal techniques

- Checklist-Based
  o General checklist
    - A list of test items to be performed
  o Functional (BB) checklist
    - Has abstraction layers and grouping, from the top functions of the system structure down to the bottom units or components
  o System component checklist
    - Has different levels of grouping, from high-level subsystems or modules down to statements or data
  o Checklist vs test cases
    - Checklists reflect experience and know-how not required by documentation-based test cases
    - Can be used for test basis reviews

- Exploratory Testing Approach
  o Design, execute, record and learn within a given period of time, using a test charter that includes test objectives and items
  o Execute test items first rather than designing test cases up front, and build the test design with the knowledge learned from executing the test items; useful when there is no specification, when there is a lack of time to test, to make up for formal test design, or to gain confidence that all fatal defects have been detected

- Scripted Testing Approach
- Classification Tree Method
- Factors for selecting a test design technique:
  - Type of system
  - Type of risk
  - Regulatory standards
  - Test objective
  - Use case modules
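The three cyclomatic complexity formulas above give the same number. For the small made-up function below, counting decision nodes is the quickest check: three simple decisions give CC = 3 + 1 = 4, so a basis path test set needs four tests:

```python
def shipping_cost(weight, express):
    """Hypothetical code under test with three simple decision nodes."""
    if weight <= 0:          # decision 1
        raise ValueError("weight must be positive")
    if weight > 20:          # decision 2
        cost = 50
    else:
        cost = 20
    if express:              # decision 3
        cost += 15
    return cost

# Cyclomatic complexity = number of decision nodes + 1 = 3 + 1 = 4
# (equivalently E - N + 2 on the control flow graph, or regions + 1),
# so basis path testing needs four tests, one per independent path:
assert shipping_cost(5, False) == 20    # light parcel, standard
assert shipping_cost(25, False) == 50   # heavy parcel, standard
assert shipping_cost(5, True) == 35     # light parcel, express
try:
    shipping_cost(0, False)             # invalid weight path
    assert False
except ValueError:
    pass

print("4 basis path tests passed for a function with CC = 4")
```

The four tests together also achieve full decision coverage, which basis path testing guarantees by construction.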
Test Management

1. Test Organization
a. Independent Testing
   - Test tasks can be carried out by people in a specific test role or by people in a different role
   - Independent testers improve the effectiveness of defect finding
   - Why?
     o Safety-critical projects
     o Large projects
     o Higher/multiple test levels
   - Advantages
     o Able to see other and different possibilities for defects
     o Able to test assumptions made by developers
   - Disadvantages
     o Higher effort for communication
     o Testing can be seen as a bottleneck
     o Developers can lose responsibility
   - Levels of independence (low to high)
     o The developer
     o Independent testers attached to the development team
     o An independent permanent test team, a centre of excellence within the organization
     o Independent testers or a test team provided by the operational business unit
     o Specialist testers such as usability testers, security testers or performance testers
     o An outsourced test team or testers, such as contractors or another organization

b. Independence and Test Organizations
   - Programming
   - Component Testing (other developers, dedicated module testers)
   - Integration Testing (dedicated developers, testers, test team, test lab)
   - System Testing (test team, test lab)
   - Acceptance Testing (test lab)

c. Role of the Test Manager
   - Coordinate the test strategy and plan with project managers and others
   - Write or review the test strategy for the project and the test policy for the organization
   - Contribute the testing perspective to other project activities

d. Testers' Roles
   - Review and contribute to test plans
   - Create test specifications
   - Review tests developed by others

e. Soft Skills for Testers
   - Team-mindedness, political and diplomatic skills
   - A confident attitude
   - Accuracy and creativity

2. Test Planning and Estimation
a. Test Planning
   - All projects require plans and strategies to define how testing will be conducted
   - Levels:
     1. Test Policy – defines how the organization will conduct testing
     2. Master Test Plan – defines how the project will conduct testing (test objects, test tasks, acceptance criteria etc.)
     3. Functional Test Plan
     4. System Integration Test Plan
     5. UAT Test Plan
     – each level test plan defines how that level of testing will be conducted

b. Test Planning Activities
   - Determine scope and risks
   - Identify the objectives of testing
   - Define the overall approach to testing
   - Schedule test activities
   - Assign resources to activities
c. Entry Criteria (define when to start testing)
   - Deliverables (test objects, items), lab (test cases, data, tools) and teams (developers, test) are ready
   - Complete or partially testable code is available
   - Requirements are defined and approved
   - Test cases are developed and ready

d. Exit Criteria (define when to stop testing)
   - Deadlines met or budget depleted
   - All test cases executed
   - Identified defects corrected
   - No high-priority bug left open

e. Test Estimation
   - Expert estimation: via the people in charge of the tasks or external experts
   - Analogy estimation: based on metrics of previous or similar projects, or on typical values
   - Factors that influence test effort:
     - Characteristics of the product (quality, size, requirements)
     - Characteristics of development (stability, tools, skills)
     - Outcome of testing (defects, amount of rework)

f. Test Strategy
   - A high-level description of the test levels to be performed and the testing within those levels for an organization or programme
   - How to test (objectives, scope, schedule, test environment, risk analysis)
   - Test strategy selection depends on:
     - Whether the strategy is short or long term
     - Organization type and size
     - Project requirements (safety and security)
     - Product development model

g. Test Approach
   - The implementation of the test strategy for a specific project
   - Covers test design techniques, test types, and entry and exit criteria
   - Types of test approach:
     o Analytical: risk- and requirements-based
     o Model-based: mathematical models
     o Methodical: checklists
     o Process- or standard-compliant: IEEE 829
     o Dynamic and heuristic: exploratory testing
     o Consultative: directed approach
     o Regression-averse: automated regression testing

3. Progress Monitoring and Control
   - Test monitoring
     - The process of evaluating and providing feedback on the testing phase currently in progress
   - Test control
     - The activity of guiding and taking corrective action, based on metrics or other information, to improve efficiency and quality
   - Activities
     o Providing feedback to the team and other required stakeholders about the progress of the testing effort
     o Broadcasting test results to associated members
     o Finding and tracking test metrics
     o Planning and estimation, and deciding the future course of action based on the metrics calculated

a. Test Metrics
   - Measurable characteristics of a test case, test run or test iteration, with information about the scale used for measuring
     o Defect-based metrics (number of detected defects)
     o Test-case-based metrics (number of planned test cases)
     o Test-object-based metrics (code coverage)
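The test-case-based and defect-based metrics above are simple counts and ratios; a monitoring sketch might compute them as follows (the run results and defect IDs are invented for illustration):

```python
# Invented test-run results: each entry is (test case id, outcome).
results = [
    ("TC-01", "pass"), ("TC-02", "pass"), ("TC-03", "fail"),
    ("TC-04", "pass"), ("TC-05", "blocked"), ("TC-06", "fail"),
]
defects = ["DEF-101", "DEF-102"]  # defects raised by the failed cases

# Test-case-based metrics: planned vs executed (blocked cases not executed).
executed = [r for r in results if r[1] in ("pass", "fail")]
pass_rate = 100 * sum(1 for r in executed if r[1] == "pass") / len(executed)

print(f"planned: {len(results)}, executed: {len(executed)}")
print(f"pass rate: {pass_rate:.1f}%, open defects: {len(defects)}")
```

Tracked over time, these numbers feed the exit criteria described earlier, such as "maximum test cases executed with a certain pass percentage" and "bug rate falls below a certain level".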
b. Test Reporting
   - Effectively communicating test findings to other project stakeholders
   - Created either at key milestones or at the end of a test level
   - Describes the results of a given level or phase of testing
   - Metrics collected at the end of a test level should support evaluation and decision-making on:
     o The adequacy of the test objectives for that test level
     o The adequacy of the selected test strategy
     o The effectiveness of the tests in achieving the defined objectives

c. Test Control
   - Guiding and corrective actions to try to achieve the best possible outcome for the project

4. Configuration Management
   - The process of:
     - Identifying and defining the items in a system
     - Controlling changes to those items throughout their life cycle
     - Recording and reporting the status of items and change requests
     - Verifying the completeness and correctness of items
   - CM and Testing
     - All elements of testware are clearly identified and held under version control
     - Changes are traceable and can be related to each other and to development items
     - Traceability throughout the test process and product life cycle is ensured
     - All documents and software have a specific allocation within the test documentation

5. Risk and Testing
   - A risk is something that has not happened yet and may never happen
   - A potential problem with negative consequences

a. Types of Risk
   - Project risk (organizational factors, technical issues and supplier issues)
     - Risks regarding the achievement of project objectives
   - Product risk
     - Risks regarding the test object and its usage

b. Risk-Based Testing
   - Uses risk to prioritize and emphasize the appropriate tests during test execution
   - Tests the functionality with the highest impact and probability of failure first
   - Measures how well we find and remove defects in critical areas
   - Uses risk analysis to:
     - Identify proactive opportunities to remove or prevent defects through non-testing activities
     - Select which test activities to perform

6. Incident Management
   - Events encountered during testing that require review
   - The process of logging, recording and resolving incidents as soon as possible to restore the business process or service back to normal
   - Terms in incident management
     - Severity – the potential impact of the incident determines its severity: major, minor, fatal or critical (for immediate resolution)
     - Priority – set according to severity and influence on the working status of the system: high, medium or low, or very high/urgent
     - Incident status – the current state of incident handling: new, in progress, resolved or closed
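Risk-based prioritization as described above is often reduced to risk = impact × probability; the feature names and 1-to-5 ratings below are invented for illustration:

```python
# Risk-based test prioritization sketch: score each feature, test highest first.
features = [
    {"name": "payment",       "impact": 5, "probability": 4},
    {"name": "report export", "impact": 2, "probability": 2},
    {"name": "login",         "impact": 5, "probability": 2},
    {"name": "profile photo", "impact": 1, "probability": 3},
]

for f in features:
    f["risk"] = f["impact"] * f["probability"]

# Test the highest-risk functionality first.
for f in sorted(features, key=lambda f: f["risk"], reverse=True):
    print(f"{f['name']:<14} risk={f['risk']}")
```

When time runs out, the tests dropped are the ones at the bottom of this list, which is exactly the trade-off risk-based testing makes explicit.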
Testing throughout the Software Life Cycle

- The processes or methodologies selected for the development of a project depend on the project's aims and goals
- Many models exist to achieve different required objectives

1. V-Model
   - A linear model in which each development stage has a corresponding testing activity
   - Every phase must complete execution before the next phase begins
   - An expensive and time-consuming model
   - Requirements cannot be changed
   - Use cases – projects where failures and downtime are unacceptable
   - Advantages:
     - Easy and simple to use
     - Many testing activities
     - Suitable for small projects
   - Disadvantages:
     - Not suitable for large and composite projects
     - Requirements are not constant
   - Verification
     o The process of determining whether the software meets the specified requirements
     o Evaluates intermediate products
     o Checks whether the software is constructed according to the requirement and design specifications
     o Describes whether the outputs are as per the inputs
     o Plans, requirements, specifications and code are evaluated in verification
   - Validation
     o The process of checking that the software meets the requirements and expectations of the customer
     o Evaluates the final product
     o Checks that the specification is correct and satisfies the business need
     o Explains whether the output is accepted by the user or not
     o Completed after verification
     o The actual product is tested during validation
   - Test levels are coordinated with and verify each stage:
     1. Component Testing – focused on the functional verification of each module
     2. Integration Testing – focused on interfaces and transactions
     3. System Testing – focused on the whole product
     4. Acceptance Testing – focused on the user's expectations

2. Iterative Model
   - Combines elements of the waterfall model applied in an iterative fashion
   - The first increment is generally the core product
   - Each increment build is submitted to the customer, who may suggest modifications
   - The next increment implements the customer's suggestions and adds further requirements from the previous increment
   - The process is repeated until the product is completed
   - Advantages
     - Flexible, because development cost is low and an initial product is delivered faster
     - Easier to test and debug in a smaller iteration
     - Working software is generated quickly in the software life cycle
   - Disadvantages
     - The cost of the final product may exceed the cost initially estimated
     - The model requires very clear and complete planning
     - Planning and design are required before the whole system is broken into smaller increments
   - Testing activities
     - Performed as soon as a function is delivered
     - Integration tests and system tests are executed in each iteration and require strong configuration management
     - The customer is more involved in acceptance testing

3. Waterfall, Spiral, Rational Unified Process, Agile, Scrum, Extreme Programming, Kanban