Refactoring and Test Strategy (PDF - Io)
When not to refactor:
1. Don't use refactoring as a cover for code-and-fix
2. Don't refactor when the code really needs to be rewritten
When to refactor (Rule of Three):
1. Refactor when you add a function
2. Refactor when you need to fix a bug
3. Refactor as you do a code review
Refactoring improves the design by:
1. Counteracting design decay
2. Keeping the code from "putting on weight"
3. Eliminating duplicate code
Refactoring guidelines:
1. Save the code you start with
2. Keep refactorings small
3. Do refactorings one at a time
4. Make a list of steps you intend to take
5. Make a parking lot
6. Make frequent checkpoints
7. Use your compiler warnings
8. Retest
9. Add test cases
10. Review the changes
11. Adjust your approach depending on the risk level of the refactoring
Refactoring strategies:
1. Refactor when you add a routine
2. Refactor when you add a class
3. Refactor when you fix a defect
4. Target error-prone modules
5. Target high-complexity modules
6. In a maintenance environment, improve the parts you touch
7. Define an interface between clean code and ugly code, then move code across the interface
Code smells (indicators of trouble), the classic one being duplicated code:
1. Same expression in two methods of the same class: apply Extract Method (see the sketch after this list)
2. Same expression in two sibling classes: apply Extract Method and Pull Up Field, or Extract Class
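A minimal sketch of Extract Method in Python (the Invoice class and its amount formatting are hypothetical): the expression once duplicated in two methods of the same class is pulled into a single helper that both methods call.

    class Invoice:
        def __init__(self, amount: float):
            self.amount = amount

        def _format_amount(self) -> str:
            # The once-duplicated expression now lives in one extracted method.
            return f"${self.amount:,.2f}"

        def print_owing(self) -> None:
            print(f"Amount owing: {self._format_amount()}")

        def print_summary(self) -> None:
            print(f"Invoice total: {self._format_amount()}")

    Invoice(1234.5).print_owing()   # Amount owing: $1,234.50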
Types of testing:
1. Unit or module tests: a single program/component/module (see the sketch after this list)
2. Integration tests: the interfaces between system parts
3. External function tests: the external system specifications
4. Regression tests: a subset of previously run tests
5. System tests: verify/validate the system against its initial objectives
6. Acceptance tests: the user's requirements
7. Installation tests: installability and operability
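As an illustration of the first level, a minimal unit-test sketch using Python's unittest (the add function is a hypothetical module under test):

    import unittest

    def add(a: int, b: int) -> int:
        # Hypothetical single module under test.
        return a + b

    class AddTest(unittest.TestCase):
        def test_adds_two_numbers(self):
            self.assertEqual(add(2, 3), 5)

    if __name__ == "__main__":
        unittest.main()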
Three integration-testing approaches: 1. Big-bang 2. Top-down integration testing 3. Bottom-up integration testing
Big-bang approach:
1. Non-incremental integration
2. All components are combined and tested at one time
3. Results in chaos!
4. Produces sets of errors that cannot be traced back to individual modules
5. Errors are difficult to correct
6. Results in endless repetition of the testing process
Top-down:
1. The main control module is used as a test driver, and stubs substitute for the components directly subordinate to it (see the stub sketch after this list)
2. Subordinate stubs are replaced one at a time with real components (following a depth-first or breadth-first approach)
3. Tests are conducted as each component is integrated
4. On completion of each set of tests, another stub is replaced with a real component
5. Regression testing may be used to ensure that new errors have not been introduced
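A minimal top-down sketch (all names hypothetical): the control module is tested first against a stub, then the stub is swapped for the real subordinate component and the same test is re-run as a regression check.

    def tax_service_stub(amount: float) -> float:
        # Canned answer standing in for the not-yet-integrated component.
        return 0.0

    def tax_service_real(amount: float) -> float:
        # Real subordinate component that later replaces the stub.
        return round(amount * 0.08, 2)

    def checkout_total(amount: float, tax_service=tax_service_stub) -> float:
        # Main control module under test; the subordinate is injected.
        return amount + tax_service(amount)

    assert checkout_total(100.0) == 100.0                    # tested with the stub
    assert checkout_total(100.0, tax_service_real) == 108.0  # stub replaced, retested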
Bottom-up:
1. Low-level components are combined into clusters (builds) that perform a specific software function
2. A driver (control program) is written to coordinate test-case input and output (see the driver sketch after this list)
3. The cluster is tested
4. Drivers are removed and clusters are combined, moving upward in the program
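A minimal bottom-up sketch (all names hypothetical): two low-level components form a cluster, and a throwaway driver coordinates test input and output until a higher-level module takes over the calls.

    def parse_price(text: str) -> float:
        # Low-level component 1.
        return float(text.strip("$"))

    def apply_discount(price: float, pct: float) -> float:
        # Low-level component 2.
        return round(price * (1 - pct), 2)

    def driver() -> None:
        # Throwaway test driver coordinating input and output for the cluster.
        price = parse_price("$20.00")
        assert apply_discount(price, 0.25) == 15.0
        print("cluster OK")

    driver()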
Top-Down vs. Bottom-Up
Features:
Top-down: 1. The control program is tested first 2. Modules are integrated one at a time 3. Major emphasis is on interface testing
Bottom-up: 1. Allows early testing aimed at proving the feasibility and practicality of particular modules 2. Modules can be integrated in various clusters as desired 3. Major emphasis is on module functionality and performance
Progressive vs. regression testing:
1. Most test cases begin as progressive test cases and eventually become regression test cases
2. Regression tests remain for the life of the product
3. Regression testing is not a separate kind of testing; it is the re-execution of some or all tests developed for a specific testing activity
4. Regression testing may be performed for each activity: unit test, usability test, function test, system test, etc. (see the sketch after this list)
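A minimal sketch of the progressive-to-regression life cycle (the split_csv function and the defect number are hypothetical): the test is written while fixing a defect, then re-executed on every subsequent run for the life of the product.

    import unittest

    def split_csv(line: str) -> list[str]:
        # Hypothetical function once fixed for a trailing-comma defect.
        return [field.strip() for field in line.split(",")]

    class RegressionSuite(unittest.TestCase):
        def test_defect_123_trailing_comma(self):
            # Began as a progressive test for the fix; now a permanent regression test.
            self.assertEqual(split_csv("a, b,"), ["a", "b", ""])

    if __name__ == "__main__":
        unittest.main()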
Smoke testing:
1. Software components already translated into code are integrated into a build
2. A series of tests is created, designed to expose errors that would keep the build from performing its function
3. The build is integrated with the other builds, and the entire product is smoke tested daily; either top-down or bottom-up integration may be used (see the sketch after this list)
Benefits of smoke testing for complex, time-critical engineering projects:
1. Integration risk is minimized: the tests run daily
2. The quality of the end product is improved: errors are detected early
3. Error diagnosis and correction are simplified: the focus stays on the new increments
4. Progress is easier to assess: the product is continually integrated and demonstrated
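A minimal smoke-test sketch (the App class is a hypothetical stand-in for the day's build): exercise only the critical path and fail fast, so a broken daily build is caught immediately.

    import sys

    class App:
        # Hypothetical stand-in for the daily integrated build.
        def start(self) -> None:
            self.up = True
        def health_check(self) -> str:
            return "ok" if self.up else "down"
        def stop(self) -> None:
            self.up = False

    def smoke_test(app: App) -> None:
        try:
            app.start()
            assert app.health_check() == "ok"  # critical-path check only
            app.stop()
        except Exception as exc:
            print(f"SMOKE FAILED: {exc}")
            sys.exit(1)  # block the daily build
        print("smoke passed")

    smoke_test(App())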
Function testing:
1. Ensures that each function or performance characteristic conforms to its specification
2. Deviations (deficiencies) must be negotiated with the customer to establish a means of resolving the errors
3. A configuration review or audit is used to ensure that all elements of the software configuration have been properly developed, cataloged, and documented, so the software can be supported during its maintenance phase
Usability testing: requires a real user to interact with the end product.
Characteristics that can be tested include:
1. Accessibility: can the user enter, navigate, and exit with relative ease?
2. Responsiveness: can users do what they want, when they want, in a way that is clear?
3. Efficiency: can users do what they want in the minimum number of steps and the minimum amount of time?
4. Comprehensibility: do users understand the product structure, its help system, and the documentation?
System testing:
1. The process of attempting to demonstrate that a program or system does not meet its original requirements and objectives as stated in the requirements specification
2. Performed by a special test organization that has a link with end users
3. Uncovers problems that have remained undiscovered throughout the entire development process
4. Seeks the cases in which the system does not work, regardless of its specifications
5. Some areas of attention: operational errors and intentional misuse
System testing categories:
Load/stress testing: to identify the peak-load conditions at which the system fails
Volume testing: to determine the level of continuous heavy load at which the system will fail
Configuration testing: to find the planned, legal hardware and/or software configurations on which the system will not operate correctly
Compatibility testing: to expose the areas where the system has improper compatibility
Security testing: to find ways to break the security provisions of the system
Performance testing: to determine the actual performance of the system against its performance objectives under peak and normal conditions (see the sketch after this list)
Installability testing: to identify the ways in which the system installation procedures lead to incorrect results
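A minimal performance-test sketch (the objective, request count, and handler are all hypothetical): measure the actual elapsed time for a simulated burst of requests against a stated performance objective.

    import time

    OBJECTIVE_SECONDS = 0.5   # assumed performance objective
    REQUESTS = 10_000         # simulated peak-condition load

    def handle_request(n: int) -> int:
        # Stand-in for the operation under test.
        return n * n

    start = time.perf_counter()
    for i in range(REQUESTS):
        handle_request(i)
    elapsed = time.perf_counter() - start

    print(f"{REQUESTS} requests in {elapsed:.3f}s")
    assert elapsed < OBJECTIVE_SECONDS, "performance objective missed"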
Acceptance testing:
1. Making sure the software works correctly for the intended user in his or her normal work environment
2. Performed by the customer in production mode
3. Alpha test: a version of the complete software is tested by the customer under the supervision of the developer, at the developer's site
4. Beta test: a version of the complete software is tested by the customer at his or her own site, without the developer being present
Alpha and beta tests are run as regression tests for products and are conducted for a pre-specified period.
Why beta testing:
1. Expert consulting: appealing to experts
2. Magazine reviews: garnering favor
3. Customer relationship building: preferential treatment, a marketing gimmick
4. Polishing the user-interface design based on customer usage patterns: fine tuning
5. Compatibility testing: a wider variety of environments
6. General quality assurance: more users surface more defects
Effectiveness of beta testing:
1. Most beta testers do not report defects or even comments
2. Supervised user testing can produce more focused feedback than beta testing
3. Beta testing is most useful for compatibility testing and as a marketing gimmick