Refactoring and Test Strategy (PDF - Io)

The document discusses various concepts related to refactoring code including reasons for refactoring, guidelines for refactoring, code smells, types of testing, and integration testing strategies. It provides details on unit testing, integration testing, system testing, usability testing, acceptance testing, and more.

Uploaded by mca.2022

Refactoring is a change made to the internal structure of software to make it easier to

understand and cheaper to modify without changing its observable behaviour.


Reasons for refactoring: 1. Improves the design of the software 2. Makes the software easier to understand 3. Helps find bugs in the software 4. Helps make the program faster

Signs that code needs refactoring: 1. Code is duplicated 2. A loop is too long or too deeply nested 3. A class doesn't do very much 4. A parameter list has too many parameters 5. Changes within a class tend to be compartmentalized 6. Changes require parallel modifications to multiple classes 7. Inheritance hierarchies have to be modified in parallel 8. Case statements have to be modified in parallel 9. Related data items that are used together are not organized into classes 10. A routine uses more features of another class than of its own class 11. A primitive data type is overloaded 12. A class interface does not provide a consistent level of abstraction 13. A chain of routines passes tramp data 14. A middleman object isn't doing anything 15. One class is overly intimate with another 16. A routine has a poor name 17. Data members are public 18. A subclass uses only a small percentage of its parent's routines 19. Comments are used to explain difficult code 20. Global variables are used 21. A routine uses setup code before a routine call or takedown code after a routine call 22. A program contains code that seems like it might be needed someday

When not to refactor: 1. Don't use refactoring as a cover for code-and-fix 2. Avoid refactoring instead of rewriting
When to refactor (the rule of three): 1. Refactor when you add a function 2. Refactor when you need to fix a bug 3. Refactor as you do a code review
Refactoring improves design by: 1. Countering design decay 2. Keeping the program from "putting on weight" 3. Eliminating duplicate code

Refactoring guidelines: 1. Save the code you start with 2. Keep refactorings small 3. Do refactorings one at a time 4. Make a list of steps you intend to take 5. Make a parking lot 6. Make frequent checkpoints 7. Use your compiler warnings 8. Retest 9. Add test cases 10. Review the changes 11. Adjust your approach depending on the risk level of the refactoring

Refactoring Strategies: 1.Refactor when you add a routine 2.Refactor when you add a class
3.Refactor when you fix a defect 4.Target error-prone modules 5.Target high-complexity modules
6.In a maintenance environment, improve the parts you touch 7.Define an interface between
clean code and ugly code, and then move code across the interface

Code smells: 1. Indicators of trouble in the code 2. The most common is duplicated code 3. Same expression in two methods of the same class: apply Extract Method 4. Same expression in two sibling classes: apply Extract Method and Pull Up Field, or Extract Class
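The duplicated-code remedy above can be sketched with a hypothetical example; the class and method names are invented here purely for illustration:

```python
# Before: the same totaling expression appears in two methods of the
# same class -- a duplicated-code smell.
class Invoice:
    def __init__(self, items):
        self.items = items  # list of (name, price, qty) tuples

    def print_summary(self):
        total = sum(price * qty for _, price, qty in self.items)
        print(f"Total: {total}")

    def print_receipt(self):
        total = sum(price * qty for _, price, qty in self.items)  # duplicate
        for name, price, qty in self.items:
            print(f"{name}: {price} x {qty}")
        print(f"Total: {total}")


# After Extract Method: the duplicated expression lives in one routine,
# so a change to the pricing rule is made in exactly one place.
class InvoiceRefactored:
    def __init__(self, items):
        self.items = items

    def total(self):
        return sum(price * qty for _, price, qty in self.items)

    def print_summary(self):
        print(f"Total: {self.total()}")

    def print_receipt(self):
        for name, price, qty in self.items:
            print(f"{name}: {price} x {qty}")
        print(f"Total: {self.total()}")
```

The observable behaviour is unchanged; only the internal structure improves, which is the definition of refactoring given above.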
Types of testing: 1. Unit or module tests: a single program/component/module
2. Integration tests: interfaces between system parts
3. External function tests: external system specifications
4. Regression tests: a subset of previously run tests
5. System tests: verify/validate the system against its initial objectives
6. Acceptance tests: the user's requirements
7. Installation tests: installability & operability

Unit testing: 1. White-box-oriented testing 2. Can be conducted in parallel 3. Module interfaces are tested for proper information flow 4. Local data structures are examined to ensure that integrity is maintained 5. Boundary conditions are tested 6. Basis path testing should be used 7. All error-handling paths should be tested 8. In white-box testing, the tester may be biased by previous experience in developing the module (cut/copy/paste), and coverage may be limited 9. Highest error yield of all testing techniques 10. May find up to 70% of the errors
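A minimal sketch of such a unit test, assuming a hypothetical `clamp` routine as the module under test; it exercises boundary conditions (items 5) and an error-handling path (item 7):

```python
import unittest


def clamp(value, low, high):
    """Hypothetical module under test: restrict value to [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))


class ClampTest(unittest.TestCase):
    # Boundary conditions: exactly at, just below, and just above each limit.
    def test_boundaries(self):
        self.assertEqual(clamp(0, 0, 10), 0)    # at lower bound
        self.assertEqual(clamp(10, 0, 10), 10)  # at upper bound
        self.assertEqual(clamp(-1, 0, 10), 0)   # just below lower bound
        self.assertEqual(clamp(11, 0, 10), 10)  # just above upper bound

    # Error-handling path: an invalid range must raise.
    def test_error_path(self):
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)


if __name__ == "__main__":
    unittest.main(exit=False)
```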

Integration Testing: 1.Systematic technique for constructing the program structure


2.While at the same time conducting tests to uncover errors associated with interfacing
3.Objective: from the unit-tested components, build a program structure that is dictated
by the design

Three types: 1.Big-Bang Approach 2.Top-down integration testing 3.Bottom-up integration testing

Big bang approach: 1. Non-incremental integration 2. All components are combined and tested at one time 3. Results in chaos! 4. Errors cannot easily be traced to specific modules
5. Difficult to correct 6. Results in endless repetition of the testing process

Top down:
1. The main control module is used as a test driver, and stubs are substituted for components directly
subordinate to it
2. Subordinate stubs are replaced one at a time with real components (following
the depth-first or breadth-first approach)
3. Tests are conducted as each component is integrated

4. On completion of each set of tests another stub is replaced with a real component.
5. Regression testing may be used to ensure that new errors have not been introduced
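The stub-replacement idea can be sketched as follows; all component names are hypothetical, and injecting the subordinate as a parameter is just one possible way to swap a stub for the real component:

```python
# Top-down integration sketch: the main control module is tested first,
# with a stub standing in for a not-yet-integrated subordinate component.

def payment_service_stub(order_total):
    # Stub: returns a canned answer instead of calling the real service.
    return {"status": "approved", "charged": order_total}


def real_payment_service(order_total):
    # Real component, integrated later to replace the stub.
    if order_total <= 0:
        return {"status": "rejected", "charged": 0}
    return {"status": "approved", "charged": order_total}


def checkout(order_total, payment_service):
    # Main control module under test; the subordinate is passed in,
    # so stubs can be replaced with real components one at a time.
    result = payment_service(order_total)
    return result["status"] == "approved"


# First round: test the control module against the stub.
assert checkout(100, payment_service_stub)

# Later round: replace the stub with the real component and retest;
# re-running the earlier cases is the regression testing of step 5.
assert checkout(100, real_payment_service)
assert not checkout(0, real_payment_service)
```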

Bottom-up integration testing:

1. Low level components are combined in clusters that perform a specific software
function (builds)

2. A driver (control program) is written to coordinate test case input and output.
3. The cluster is tested
4. Drivers are removed and clusters are combined moving upward in the program
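A minimal sketch of a bottom-up cluster with a throwaway driver; the components and data are invented for illustration:

```python
# Bottom-up integration sketch: two low-level components are combined
# into a cluster (build), and a driver coordinates test input and output.

def parse_record(line):
    # Low-level component 1: split a "name, amount" record.
    name, amount = line.split(",")
    return name.strip(), float(amount)


def apply_tax(amount, rate=0.1):
    # Low-level component 2: compute the amount with tax applied.
    return round(amount * (1 + rate), 2)


def cluster_driver(lines):
    # Driver (control program): feeds test cases through the cluster and
    # collects outputs. It is removed once the cluster is combined upward.
    results = []
    for line in lines:
        name, amount = parse_record(line)
        results.append((name, apply_tax(amount)))
    return results


out = cluster_driver(["widget, 10.00", "gear, 20.00"])
assert out == [("widget", 11.0), ("gear", 22.0)]
```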
Top-down v/s bottom-up features:
Top-down: 1. The control program is tested first 2. Modules are integrated one at a time 3. Major emphasis is on interface testing
Bottom-up: 1. Allows early testing aimed at proving the feasibility and practicality of particular modules 2. Modules can be integrated in various clusters as desired 3. Major emphasis is on module functionality and performance

Progressive v/s regression testing: 1. Most test cases begin as progressive test cases and eventually become regression test cases 2. Regression tests last for the life of the product
3. Regression testing is not a separate kind of testing; it is the re-execution of some or all tests developed for a specific testing activity 4. Regression testing may be performed for each activity: unit test, usability test, function test, system test, etc.
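One way to picture a progressive test becoming a regression test, using a hypothetical routine and invented cases:

```python
# A test written while developing new behaviour (progressive) is recorded
# and re-executed on every later change (regression).

def slugify(title):
    # Routine under test: lowercase, trim, spaces become hyphens.
    return title.strip().lower().replace(" ", "-")


REGRESSION_SUITE = [
    ("Hello World", "hello-world"),  # began as a progressive test case
    ("  Padded  ", "padded"),        # added when a trimming bug was fixed
]


def run_regression_suite():
    # Re-execution of all recorded cases; kept for the life of the product.
    for raw, expected in REGRESSION_SUITE:
        assert slugify(raw) == expected, (raw, expected)
    return len(REGRESSION_SUITE)


assert run_regression_suite() == 2
```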

Smoke testing: 1. Software components already translated into code are integrated into a build
2. A series of tests designed to expose errors that would keep the build from performing its functions is created 3. The build is integrated with the other builds and the entire product is smoke tested daily (either top-down or bottom-up integration may be used) 4. Benefits of smoke testing for complex, time-critical engineering projects:
5. Integration risk is minimized: daily runs 6. The quality of the end product is improved: early detection
7. Error diagnosis and correction are simplified: focus on new increments 8. Progress is easier to assess: integrated & demonstrated
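A daily smoke test might look like the following sketch, assuming a hypothetical build function; the point is a shallow pass over the critical path, not exhaustive coverage:

```python
# Smoke test sketch: verify the day's build can perform its basic
# function at all, so integration errors surface within a day.

def build_app():
    # Stand-in for assembling the current build (names invented).
    return {"db": {}, "version": "nightly"}


def smoke_test(app):
    # Shallow but broad: exercise the critical path end to end.
    app["db"]["user:1"] = "alice"           # can the build store data?
    assert app["db"]["user:1"] == "alice"   # can it read the data back?
    assert app["version"]                   # did the build stamp itself?
    return "PASS"


assert smoke_test(build_app()) == "PASS"
```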
Function testing: 1. Ensures that each function or performance characteristic conforms to its specification 2. Deviations (deficiencies) must be negotiated with the customer to establish a means for resolving the errors 3. A configuration review or audit is used to ensure that all elements of the software configuration have been properly developed, cataloged, and documented to support the software during its maintenance phase
Usability testing: requires a real user to interact with the end product.
Characteristics that can be tested include:
1. Accessibility: Can the user enter, navigate and exit with relative ease?
2. Responsiveness: Can users do what they want, when they want, in a way that's clear?
3. Efficiency: Can users do what they want in a minimum number of steps and amount of time?
4. Comprehensibility: Do users understand the product structure, its help system and the documentation?
System testing: 1. The process of attempting to demonstrate that a program or system does not meet its original requirements and objectives as stated in the requirements specification
2. System testing is performed by a special test organization that has a link with end users
3. Uncovers problems that have gone undiscovered throughout the entire development process 4. System testing finds those cases in which the system does not work, regardless of its specifications 5. Some areas of attention: operational errors and intentional misuse
System testing categories:
Load/Stress testing: To identify the peak load conditions at which the system fails
Volume testing: To determine the level of continuous heavy load at which the system will
fail.
Configuration testing: To find those planned legal hardware and /or software
configurations on which the system will not operate correctly.
Compatibility testing: To expose those areas where system has improper
compatibility.
Security testing: To find ways to break the security provisions of the system.
Performance testing: To determine actual performance of the system against the
performance objectives under peak and normal conditions.
Installability testing: To identify the ways in which the system installation procedures lead to incorrect results.
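A crude sketch of a performance test against a stated objective; the objective here (10,000 lookups within one second) and the routine are invented for illustration:

```python
import time

# Performance testing sketch: measure actual behaviour against a
# hypothetical performance objective under a normal-load workload.

def lookup(table, key):
    # Routine whose performance is being measured.
    return table.get(key)


table = {i: i * i for i in range(10_000)}

start = time.perf_counter()
for i in range(10_000):
    lookup(table, i)
elapsed = time.perf_counter() - start

# The performance objective: 10,000 lookups in under one second.
assert elapsed < 1.0
```

A real performance test would repeat the measurement, test peak as well as normal conditions, and report the numbers rather than only pass/fail.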
Acceptance testing: 1. Making sure the software works correctly for the intended user in his or her normal work environment 2. Performed by the customer in production mode
3. Alpha test: a version of the complete software is tested by the customer under the supervision of the developer, at the developer's site
4. Beta test: a version of the complete software is tested by the customer at his or her own site, without the developer being present
Alpha and beta tests are regression tests used for products; both are conducted for a pre-specified period.
Why Beta testing: 1.Expert consulting: Appealing to experts
2.Magazine review: Garner favor 3.Customer relationship building: Preferential treatment,
marketing gimmick 4.Polishing the user interface design based on customer usage patterns: Fine
tuning 5.Compatibility testing: Wider variety 6.General quality assurance: More users, more
defects
Effectiveness of beta testing: 1.Most beta testers do not report either defects or comments
2.Supervised user testing can produce more focused feedback than beta testing 3.Beta
testing is most useful for compatibility testing and as a marketing gimmick
